When.com Web Search

Search results

  1. Cohen's kappa - Wikipedia

    en.wikipedia.org/wiki/Cohen's_kappa

    Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. [1] It is generally thought to be a more robust measure than a simple percent-agreement calculation, as κ takes into account the possibility of the agreement occurring by chance.
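
    For concreteness, a minimal sketch of the two-rater calculation, using the standard formula κ = (p_o − p_e) / (1 − p_e); the labels and data below are hypothetical.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Two-rater Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
        n = len(rater_a)
        # Observed agreement: fraction of items both raters labelled identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical labels from two raters on six items -> kappa = 0.33.
    print(cohens_kappa(["yes", "yes", "no", "no", "yes", "no"],
                       ["yes", "no", "no", "no", "yes", "yes"]))
    ```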

  2. Analytical quality control - Wikipedia

    en.wikipedia.org/wiki/Analytical_quality_control

    Because of the complex inter-relationship between analytical method, sample concentration, limits of detection, and method precision, the management of analytical quality control is undertaken using a statistical approach to determine whether the results obtained lie within an acceptable statistical envelope.
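
    The article doesn't pin down a particular envelope; a common convention, assumed here, is Shewhart-style control limits at the mean ± k standard deviations of historical control results (k = 3 below).

    ```python
    import statistics

    def control_limits(history, k=3):
        """Mean +/- k*SD envelope from historical QC results (Shewhart-style)."""
        mean, sd = statistics.mean(history), statistics.stdev(history)
        return mean - k * sd, mean + k * sd

    # Hypothetical QC history for one analyte; is a new result inside the envelope?
    low, high = control_limits([5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9])
    print("in control" if low <= 5.6 <= high else "out of control")
    ```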

  3. Round-robin test - Wikipedia

    en.wikipedia.org/wiki/Round-robin_test

    In experimental methodology, a round-robin test is an interlaboratory test (measurement, analysis, or experiment) performed independently several times. [1] This can involve multiple independent scientists performing the test using the same method on different equipment, or using a variety of methods and equipment.

  4. Laboratory quality control - Wikipedia

    en.wikipedia.org/wiki/Laboratory_quality_control

    Quality control (QC) is a measure of precision, or how well the measurement system reproduces the same result over time and under varying operating conditions. Laboratory quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after equipment calibration, and ...

  5. Coefficient of variation - Wikipedia

    en.wikipedia.org/wiki/Coefficient_of_variation

    CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values for multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect.
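
    A sketch of the intra-assay case under the naive convention the snippet questions: CV = SD / mean for each sample's replicates, then a simple average across samples; whether a pooled alternative is preferable is left to the cited literature.

    ```python
    import statistics

    def cv(replicates):
        """Coefficient of variation of one sample's replicate measurements."""
        return statistics.stdev(replicates) / statistics.mean(replicates)

    # Hypothetical triplicate measurements for three samples in one assay run.
    samples = [[10.1, 9.8, 10.3], [25.0, 24.4, 25.6], [50.2, 49.1, 51.0]]
    # Naive intra-assay summary: simple mean of the per-sample CVs.
    print(statistics.mean(cv(s) for s in samples))
    ```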

  6. Inter-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Inter-rater_reliability

    In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
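
    The simplest such measure is raw percent agreement, which Cohen's kappa (item 1) then corrects for chance; a minimal sketch with hypothetical codes:

    ```python
    def percent_agreement(rater_a, rater_b):
        """Fraction of items two raters assess identically (no chance correction)."""
        return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

    # Hypothetical codes assigned by two observers to five items -> 0.8.
    print(percent_agreement([1, 2, 2, 3, 1], [1, 2, 3, 3, 1]))
    ```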

  7. Intraclass correlation - Wikipedia

    en.wikipedia.org/wiki/Intraclass_correlation

    An important aspect of this problem is that there is both inter-observer and intra-observer variability. Inter-observer variability refers to systematic differences among the observers; for example, one physician may consistently score patients at a higher risk level than other physicians. Intra-observer variability refers to deviations of a particular observer's score on a particular patient that are not part of a systematic difference.
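
    The snippet doesn't name a specific ICC variant; as an assumed illustration, here is ICC(1,1), estimated from one-way ANOVA mean squares, which approaches 1 as between-subject variance dominates observer noise.

    ```python
    import statistics

    def icc_1_1(scores):
        """ICC(1,1): scores[i] is the list of k ratings for subject i."""
        n, k = len(scores), len(scores[0])
        grand = statistics.mean(x for row in scores for x in row)
        means = [statistics.mean(row) for row in scores]
        # Between-subject and within-subject mean squares from one-way ANOVA.
        msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
        msw = sum((x - m) ** 2 for row, m in zip(scores, means) for x in row) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    # Hypothetical risk scores: 4 patients, each rated by 3 physicians -> ~0.94.
    print(icc_1_1([[7, 8, 7], [5, 5, 6], [9, 9, 8], [3, 4, 3]]))
    ```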

  8. Repeatability - Wikipedia

    en.wikipedia.org/wiki/Repeatability

    The repeatability coefficient is a precision measure that represents the value below which the absolute difference between two repeated test results may be expected to lie with a probability of 95%. The standard deviation under repeatability conditions is part of precision and accuracy.
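
    Assuming normally distributed within-subject error, that 95% bound works out to 1.96·√2 ≈ 2.77 times the within-subject standard deviation; a sketch estimating it from hypothetical test-retest differences:

    ```python
    import math
    import statistics

    def repeatability_coefficient(differences):
        """RC = 1.96 * sqrt(2) * s_w, with the within-subject SD s_w estimated
        from paired test-retest differences (Var(diff) = 2 * s_w**2)."""
        s_w = statistics.stdev(differences) / math.sqrt(2)
        return 1.96 * math.sqrt(2) * s_w  # equals 1.96 * SD of the differences

    # Hypothetical differences between first and second measurements per subject.
    print(repeatability_coefficient([0.3, -0.1, 0.2, -0.4, 0.0, 0.1, -0.2, 0.3]))
    ```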