Cronbach's alpha (Cronbach's α), also known as tau-equivalent reliability or coefficient alpha (coefficient α), is a reliability coefficient and a measure of the internal consistency of tests and measures. [1] [2] [3] It was named after the American psychologist Lee Cronbach.
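The coefficient is computed from the item variances and the variance of respondents' total scores: α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ) for a k-item test. A minimal sketch in Python (NumPy assumed; the toy data are hypothetical):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items yield an alpha of 1 (hypothetical toy data):
identical = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
alpha = cronbach_alpha(identical)
```

With fully redundant items the total-score variance absorbs all item variance, so the ratio drives α to its maximum of 1; uncorrelated items pull it toward 0 (or below).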
Note further that Cronbach's alpha is necessarily higher for tests measuring narrower constructs and lower when broader, more generic constructs are measured. This phenomenon, along with a number of other reasons, argues against using objective cut-off values for internal consistency measures. [4]
The average Cronbach's alpha is 0.71. The instrument is available in 19 languages and has been used worldwide. Support materials for interpretation and training are available in each language.
The most common internal consistency measure is Cronbach's alpha, which is usually interpreted as the mean of all possible split-half coefficients. [9] Cronbach's alpha is a generalization of an earlier form of estimating internal consistency, Kuder–Richardson Formula 20. [9]
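The split-half reading can be checked numerically: for an even number of items, alpha equals the mean of the Flanagan–Rulon split-half coefficients taken over all possible half-splits. A sketch in Python (NumPy assumed; the four-item data are simulated for illustration, not from any real instrument):

```python
import numpy as np
from itertools import combinations

def cronbach_alpha(scores):
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

def split_half(h1, h2):
    # Flanagan-Rulon coefficient: 2 * (1 - (var(h1) + var(h2)) / var(h1 + h2))
    return 2 * (1 - (h1.var(ddof=1) + h2.var(ddof=1)) / (h1 + h2).var(ddof=1))

rng = np.random.default_rng(42)
# Hypothetical 4-item test: a common factor plus item-specific noise
scores = rng.normal(size=(60, 1)) + 0.9 * rng.normal(size=(60, 4))

items = range(scores.shape[1])
# Fixing item 0 in the first half enumerates the 3 distinct half-splits
splits = [h for h in combinations(items, 2) if 0 in h]
coeffs = [split_half(scores[:, list(h)].sum(axis=1),
                     scores[:, [i for i in items if i not in h]].sum(axis=1))
          for h in splits]
mean_split_half = np.mean(coeffs)
# The mean of all split-half coefficients equals Cronbach's alpha
assert abs(mean_split_half - cronbach_alpha(scores)) < 1e-12
```

The identity is algebraic, so it holds exactly (up to floating-point rounding) for any data, not just for this simulated set.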
It is a special case of Cronbach's α, computed for dichotomous scores. [2] [3] It is often claimed that a high KR-20 coefficient (e.g., > 0.90) indicates a homogeneous test. However, like Cronbach's α, homogeneity (that is, unidimensionality) is actually an assumption, not a conclusion, of reliability coefficients.
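That special-case relationship is easy to verify numerically: KR-20 replaces each item variance with p·(1−p), which is exactly the population variance of a 0/1 item. A minimal sketch in Python (NumPy assumed; the 0/1 response data are simulated, and both statistics use population variances so the identity is exact):

```python
import numpy as np

def kr20(scores):
    """KR-20 for an (n, k) matrix of dichotomous (0/1) item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    p = scores.mean(axis=0)                      # proportion scoring 1 per item
    total_var = scores.sum(axis=1).var(ddof=0)   # variance of total scores
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=0).sum()
                          / scores.sum(axis=1).var(ddof=0))

rng = np.random.default_rng(7)
responses = (rng.random((30, 8)) < 0.6).astype(int)   # simulated 0/1 answers
# On dichotomous data, KR-20 and Cronbach's alpha coincide
assert abs(kr20(responses) - cronbach_alpha(responses)) < 1e-12
```

On polytomous (multi-point) items p·(1−p) is no longer the item variance, which is why Cronbach's α is the general form and KR-20 the dichotomous special case.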
The internal consistency of the BDI-IA was good, with a Cronbach's alpha coefficient of around 0.85, meaning that the items on the inventory are highly correlated with each other. [10] However, this version retained some flaws; the BDI-IA addressed only six of the nine DSM-III criteria for depression. This and other criticisms were ...
Cronbach's alpha was used to obtain reliability measures. Across one group of nine studies, alpha values were 0.71–0.89, reflecting good internal consistency; the test adequately measures depressive symptoms. [1] In another group of 16 test–retest reliability studies, values of 0.38–0.87 were reported. [1]
Krippendorff's alpha [16] [17] is a versatile statistic that assesses the agreement achieved among observers who categorize, evaluate, or measure a given set of objects in terms of the values of a variable. It generalizes several specialized agreement coefficients by accepting any number of observers, being applicable to nominal, ordinal ...
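For nominal data, the statistic reduces to α = 1 − D_o/D_e, the ratio of observed to expected disagreement computed from a coincidence matrix of within-unit rating pairs. A minimal sketch in Python (pure standard library; the rating data are hypothetical, and units coded by fewer than two observers are dropped as unpairable):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    `ratings` is a list of units, each a list of the codes assigned to that
    unit by its observers (missing ratings are simply omitted).
    """
    units = [u for u in ratings if len(u) >= 2]   # only pairable units count
    n = sum(len(u) for u in units)                # total pairable values
    coincidences = Counter()                      # ordered within-unit pairs
    totals = Counter()                            # marginal count per code
    for u in units:
        for a, b in permutations(u, 2):
            coincidences[(a, b)] += 1 / (len(u) - 1)
        totals.update(u)
    observed = sum(w for (a, b), w in coincidences.items() if a != b) / n
    expected = sum(totals[a] * totals[b]
                   for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1 - observed / expected

# Perfect agreement between observers gives alpha = 1
perfect = krippendorff_alpha_nominal([['a', 'a'], ['b', 'b'], ['a', 'a']])
```

Systematic disagreement drives the coefficient below zero, since observed disagreement then exceeds what chance alone would produce.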