When.com Web Search

Search results

  2. Cohen's kappa - Wikipedia

    en.wikipedia.org/wiki/Cohen's_kappa

    Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. [1] It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement ...
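The chance correction described in the snippet is easy to compute directly. Below is a minimal sketch (the `cohens_kappa` helper and the yes/no labels are invented for illustration, not taken from the page above): the observed agreement p_o is compared against the agreement p_e expected from each rater's marginal label frequencies, giving κ = (p_o − p_e) / (1 − p_e).

```python
# Minimal sketch of Cohen's kappa for two raters; labels are made up.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return kappa = (p_o - p_e) / (1 - p_e) for two equal-length label lists."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # → 0.5
```

With these labels the raters agree on 6 of 8 items (p_o = 0.75) while each rater's 50/50 marginals predict p_e = 0.5 by chance alone, so κ = 0.5 rather than the 0.75 a raw percent-agreement figure would suggest.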

  3. Intra-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Intra-rater_reliability

In statistics, intra-rater reliability is the degree of agreement among repeated administrations of a diagnostic test performed by a single rater. [1][2] Intra-rater reliability and inter-rater reliability are aspects of test validity.

  4. Fleiss' kappa - Wikipedia

    en.wikipedia.org/wiki/Fleiss'_kappa

    Fleiss' kappa is a generalisation of Scott's pi statistic, [2] a statistical measure of inter-rater reliability. [3] It is also related to Cohen's kappa statistic and Youden's J statistic which may be more appropriate in certain instances. [4]
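The multi-rater generalisation can be sketched from its usual formulation (the `fleiss_kappa` helper and the small count matrix are invented for illustration): `counts[i][j]` holds the number of raters assigning subject i to category j, per-subject agreement P_i is averaged into P̄, and chance agreement P_e comes from the overall category proportions.

```python
# Hedged sketch of Fleiss' kappa for a fixed number of raters per subject.
def fleiss_kappa(counts):
    """counts[i][j]: raters assigning subject i to category j.
    Every row must sum to the same number of raters k."""
    n = len(counts)
    k = sum(counts[0])
    # Per-subject agreement P_i, then its mean P-bar across subjects.
    p_i = [(sum(c * c for c in row) - k) / (k * (k - 1)) for row in counts]
    p_bar = sum(p_i) / n
    # Chance agreement P_e from the overall category proportions p_j.
    total = n * k
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# 3 subjects, 3 raters, 2 categories (invented data).
counts = [[3, 0], [2, 1], [0, 3]]
print(round(fleiss_kappa(counts), 2))  # → 0.55
```

Note that with k = 2 raters this does not reduce to Cohen's kappa but to Scott's pi, since the chance term pools both raters' marginals, which is exactly the relationship the snippet describes.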

  5. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions. [4] This includes intra-rater reliability. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to ...

  6. Psychological statistics - Wikipedia

    en.wikipedia.org/wiki/Psychological_statistics

(A) Inter-rater reliability: inter-rater reliability is an estimate of agreement between independent raters. This is most useful for subjective responses. Cohen's kappa, Krippendorff's alpha, intraclass correlation coefficients, correlation coefficients, Kendall's concordance coefficient, etc. are useful statistical tools. (B) Test-Retest ...

  7. Intraclass correlation - Wikipedia

    en.wikipedia.org/wiki/Intraclass_correlation

Single measures: even though more than one measure is taken in the experiment, reliability applies to a context where a single measure by a single rater will be used. Average measures: reliability applies to a context where the measures of k raters will be averaged for each subject. Consistency or absolute agreement:

  8. Berg Balance Scale - Wikipedia

    en.wikipedia.org/wiki/Berg_Balance_Scale

    The BBS has been shown to have excellent inter-rater (ICC = 0.98) and intra-rater relative reliability (ICC = 0.97), with an absolute reliability varying between 2.8/56 and 6.6/56, with poorer reliability near the middle of the scale, [6] and is internally consistent (0.96). [2]

  9. Expanded Disability Status Scale - Wikipedia

    en.wikipedia.org/wiki/Expanded_Disability_Status...

Nonetheless, it has many criticisms, [5] including the fact that it has only moderate intra-rater reliability (reported kappa values ranged from 0.32 to 0.76 for the EDSS and from 0.23 to 0.58 for the individual FSs), offers poor assessment of upper limb and cognitive function, and lacks linearity between score difference and clinical severity. Other ...