Search results

  1. Inter-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Inter-rater_reliability

    In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
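
    The article defines inter-rater reliability as the degree of agreement among independent observers rating the same items. As a minimal, illustrative sketch (not taken from the article), the simplest such measure is raw percent agreement between two raters over the same items; the function name and rating lists below are invented example data:

    ```python
    # Minimal sketch: raw percent agreement between two raters.
    # The function name and rating lists are invented example data.

    def percent_agreement(ratings_a, ratings_b):
        """Share of items on which two raters assigned the same label."""
        if len(ratings_a) != len(ratings_b):
            raise ValueError("Both raters must rate the same items.")
        matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
        return matches / len(ratings_a)

    rater_1 = ["yes", "yes", "no", "no", "yes", "no"]
    rater_2 = ["yes", "no", "no", "no", "yes", "yes"]
    print(percent_agreement(rater_1, rater_2))  # 4 of 6 items match -> 0.67
    ```

    Percent agreement makes no correction for agreement that would occur by chance, which is the gap the kappa statistics in the results below are designed to close.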

  2. Intra-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Intra-rater_reliability

    In statistics, intra-rater reliability is the degree of agreement among repeated administrations of a diagnostic test performed by a single rater.[1][2] Intra-rater reliability and inter-rater reliability are aspects of test validity.

  3. Cohen's kappa - Wikipedia

    en.wikipedia.org/wiki/Cohen's_kappa

    Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items.[1] It is generally thought to be a more robust measure than a simple percent agreement calculation, as κ takes into account the possibility of the agreement ...
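
    The κ in the snippet corrects observed agreement for chance via the standard formula κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected if each rater labelled items independently according to their own label frequencies. A minimal sketch of that formula; the rating lists are invented example data:

    ```python
    from collections import Counter

    # Minimal sketch of Cohen's kappa for two raters and categorical labels:
    #   kappa = (p_o - p_e) / (1 - p_e)
    # p_e is computed from each rater's own label frequencies.  Example data is invented.

    def cohens_kappa(ratings_a, ratings_b):
        n = len(ratings_a)
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed agreement
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        labels = set(ratings_a) | set(ratings_b)
        # Chance agreement: both raters independently pick the same label.
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
        return (p_o - p_e) / (1 - p_e)

    rater_1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
    rater_2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
    print(cohens_kappa(rater_1, rater_2))  # 0.75 observed vs 0.5 chance -> kappa = 0.5
    ```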

  4. Fleiss' kappa - Wikipedia

    en.wikipedia.org/wiki/Fleiss'_kappa

    Fleiss' kappa is a generalisation of Scott's pi statistic,[2] a statistical measure of inter-rater reliability.[3] It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances.[4]
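
    Fleiss' kappa applies the same chance-corrected idea when each subject is rated by the same number of raters (who need not be the same individuals throughout). A minimal sketch, assuming the usual input of an n-subjects by k-categories matrix of rating counts; the small matrix below is invented example data:

    ```python
    # Minimal sketch of Fleiss' kappa.
    # counts[i][j] = number of raters who assigned subject i to category j;
    # every row must sum to the same number of raters r.  Example data is invented.

    def fleiss_kappa(counts):
        n_subjects = len(counts)
        n_categories = len(counts[0])
        r = sum(counts[0])  # raters per subject

        # Per-subject agreement and overall category proportions.
        p_i = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts]
        p_j = [sum(row[j] for row in counts) / (n_subjects * r) for j in range(n_categories)]

        p_bar = sum(p_i) / n_subjects   # mean observed agreement
        p_e = sum(p * p for p in p_j)   # agreement expected by chance
        return (p_bar - p_e) / (1 - p_e)

    # 4 subjects, 3 categories, 5 raters per subject.
    ratings = [
        [5, 0, 0],
        [2, 3, 0],
        [0, 0, 5],
        [1, 3, 1],
    ]
    print(fleiss_kappa(ratings))  # about 0.51
    ```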

  5. Concordance correlation coefficient - Wikipedia

    en.wikipedia.org/wiki/Concordance_correlation...

    The concordance correlation coefficient is nearly identical to some of the measures called intra-class correlations. Comparisons of the concordance correlation coefficient with an "ordinary" intraclass correlation on different data sets found only small differences between the two correlations, in one case appearing only at the third decimal place.[2]
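
    For continuous measurements, the concordance correlation coefficient combines precision (the Pearson correlation between the two sets of measurements) with accuracy (a penalty for differences in mean and variance). A minimal sketch of the usual sample form, ccc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); the two measurement lists are invented example data:

    ```python
    # Minimal sketch of the sample concordance correlation coefficient:
    #   ccc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    # Population-style (1/n) variances are used.  Example data is invented.

    def concordance_cc(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        var_x = sum((v - mean_x) ** 2 for v in x) / n
        var_y = sum((v - mean_y) ** 2 for v in y) / n
        cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
        return 2 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

    method_a = [2.0, 3.1, 4.2, 5.0, 6.3]
    method_b = [2.2, 3.0, 4.5, 4.8, 6.6]
    print(concordance_cc(method_a, method_b))
    ```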

  6. Intraclass correlation - Wikipedia

    en.wikipedia.org/wiki/Intraclass_correlation

    Single measures: even though more than one measure is taken in the experiment, reliability is applied to a context where a single measure of a single rater will be performed.
    Average measures: the reliability is applied to a context where measures of k raters will be averaged for each subject.
    Consistency or absolute agreement:
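
    The distinctions in this snippet (single vs. average measures, consistency vs. absolute agreement) correspond to different ICC formulas built from the same two-way ANOVA mean squares. A minimal sketch of three common Shrout-Fleiss variants, assuming a complete subjects-by-raters score matrix; the example scores are illustrative only:

    ```python
    # Minimal sketch of intraclass correlations from a complete subjects x raters matrix,
    # using the Shrout-Fleiss mean-square formulations.  Example data is illustrative.

    def icc_variants(x):
        n, k = len(x), len(x[0])            # subjects (rows), raters (columns)
        grand = sum(sum(row) for row in x) / (n * k)
        row_means = [sum(row) / k for row in x]
        col_means = [sum(x[i][j] for i in range(n)) / n for j in range(k)]

        ss_total = sum((x[i][j] - grand) ** 2 for i in range(n) for j in range(k))
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
        ms_r = ss_rows / (n - 1)
        ms_c = ss_cols / (k - 1)
        ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

        return {
            # ICC(3,1): single measure, consistency
            "single_consistency": (ms_r - ms_e) / (ms_r + (k - 1) * ms_e),
            # ICC(2,1): single measure, absolute agreement (rater effects count as error)
            "single_agreement": (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n),
            # ICC(3,k): average of the k raters' measures, consistency
            "average_consistency": (ms_r - ms_e) / ms_r,
        }

    scores = [
        [9, 2, 5, 8],
        [6, 1, 3, 2],
        [8, 4, 6, 8],
        [7, 1, 2, 6],
        [10, 5, 6, 9],
        [6, 2, 4, 7],
    ]
    print(icc_variants(scores))
    ```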

  7. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    This includes intra-rater reliability. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to be ruled out. When dealing with forms, it may be termed parallel-forms reliability.[6]
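
    In practice, parallel-forms (and, more broadly, inter-method) reliability is commonly estimated as the correlation between the scores the two forms or methods assign to the same people. A minimal sketch using a plain Pearson correlation; the function name and score lists are invented for illustration:

    ```python
    import math

    # Minimal sketch: parallel-forms reliability estimated as the Pearson correlation
    # between scores from two forms of the same test.  Score lists are invented.

    def pearson_r(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    form_a = [12, 15, 9, 20, 17, 11]
    form_b = [13, 14, 10, 19, 18, 12]
    print(pearson_r(form_a, form_b))
    ```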

  8. Scott's Pi - Wikipedia

    en.wikipedia.org/wiki/Scott's_Pi

    Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi.
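
    Scott's pi has the same chance-corrected form as Cohen's kappa, pi = (Pr(a) - Pr(e)) / (1 - Pr(e)), but estimates the expected agreement Pr(e) from the pooled label proportions of both annotators rather than from each annotator's own distribution. A minimal sketch; the annotation lists are invented example data:

    ```python
    from collections import Counter

    # Minimal sketch of Scott's pi for two annotators and nominal categories.
    # Expected agreement uses the *pooled* label proportions of both annotators,
    # which is what distinguishes pi from Cohen's kappa.  Example labels are invented.

    def scotts_pi(ann_a, ann_b):
        n = len(ann_a)
        pr_a = sum(a == b for a, b in zip(ann_a, ann_b)) / n   # observed agreement
        pooled = Counter(ann_a) + Counter(ann_b)               # label counts over both annotators
        pr_e = sum((count / (2 * n)) ** 2 for count in pooled.values())
        return (pr_a - pr_e) / (1 - pr_e)

    annotator_1 = ["person", "org", "org", "person", "place", "org"]
    annotator_2 = ["person", "org", "person", "person", "place", "place"]
    print(scotts_pi(annotator_1, annotator_2))
    ```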