When.com Web Search

Search results

  2. Repeatability - Wikipedia

    en.wikipedia.org/wiki/Repeatability

    If the correlation between separate administrations of the test is high (e.g. 0.7 or higher, as in this table of Cronbach's alpha values for internal consistency [6]), then it has good test–retest reliability. The repeatability coefficient is a precision measure which represents the value below which the absolute difference between two repeated test results ...
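The repeatability coefficient mentioned in this snippet is commonly estimated as 1.96 · √2 · Sw (≈ 2.77 · Sw), where Sw is the within-subject standard deviation. A minimal Python sketch under that assumption; the function name and the paired-measurement input format are illustrative choices, not from the article:

```python
import math

def repeatability_coefficient(pairs):
    """Repeatability coefficient from paired repeat measurements.

    Computed as 1.96 * sqrt(2) * Sw, where Sw is the within-subject
    standard deviation estimated from the squared paired differences.
    """
    # Within-subject variance from paired repeats: mean(d^2) / 2
    sq_diffs = [(a - b) ** 2 for a, b in pairs]
    sw = math.sqrt(sum(sq_diffs) / (2 * len(pairs)))
    return 1.96 * math.sqrt(2) * sw
```

Two repeated results on the same subject are then expected to differ by less than this value for about 95% of pairs.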

  3. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    This half-test reliability estimate is then stepped up to the full test length using the Spearman–Brown prediction formula. There are several ways of splitting a test to estimate reliability. For example, a 40-item vocabulary test could be split into two subtests, the first made up of items 1 through 20 and the second made up of items 21 ...

  4. Computer-based test interpretation in psychological assessment

    en.wikipedia.org/wiki/Computer-Based_Test...

    Although CBTI programs perform well on test-retest reliability, there have been major concerns and criticisms regarding the programs' ability to assess inter-rater and internal-consistency reliability. Research has shown that the validity of CBTI programs has not been confirmed, owing to the varying reports of individual programs. CBTI programs ...

  5. Test–retest - Wikipedia

    en.wikipedia.org/wiki/Test–retest

    Test–retest or retest may refer to: Test–retest reliability; Monitoring (medicine) by performing frequent tests; Doping retest, of an old sports doping sample using improved technology, to allow retrospective disqualification

  6. Intra-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Intra-rater_reliability

    In statistics, intra-rater reliability is the degree of agreement among repeated administrations of a diagnostic test performed by a single rater. [1][2] Intra-rater reliability and inter-rater reliability are aspects of test validity.

  7. Kuder–Richardson formulas - Wikipedia

    en.wikipedia.org/wiki/Kuder–Richardson_formulas

    The name of this formula stems from the fact that it is the twentieth formula discussed in Kuder and Richardson's seminal paper on test reliability. [1] It is a special case of Cronbach's α, computed for dichotomous scores. [2][3] It is often claimed that a high KR-20 coefficient (e.g., > 0.90) indicates a homogeneous test. However, like ...
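KR-20 for dichotomous (0/1) item scores is (k/(k−1)) · (1 − Σpⱼqⱼ/σ²), where pⱼ is the proportion answering item j correctly, qⱼ = 1 − pⱼ, and σ² is the variance of total scores. A minimal sketch; the function name and input layout are illustrative:

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores.

    responses: one list of 0/1 item scores per examinee.
    """
    n = len(responses)     # number of examinees
    k = len(responses[0])  # number of items
    totals = [sum(person) for person in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(person[j] for person in responses) / n  # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```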

  8. Spearman–Brown prediction formula - Wikipedia

    en.wikipedia.org/wiki/Spearman–Brown_prediction...

    Predicted reliability, \(\rho^{*}_{xx'}\), is estimated as
    \[\rho^{*}_{xx'} = \frac{n\,\rho_{xx'}}{1 + (n-1)\,\rho_{xx'}}\]
    where n is the number of "tests" combined (see below) and \(\rho_{xx'}\) is the reliability of the current "test". The formula predicts the reliability of a new test composed by replicating the current test n times (or, equivalently, creating a test with n parallel forms of the current exam).
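The Spearman–Brown prediction formula above translates directly into code (the function name is an illustrative choice):

```python
def spearman_brown(rho, n):
    """Predicted reliability of a test lengthened n-fold.

    rho: reliability of the current test.
    n:   number of parallel "tests" combined (n-fold lengthening).
    """
    return n * rho / (1 + (n - 1) * rho)
```

For example, doubling a test whose reliability is 0.5 predicts a reliability of 2·0.5/(1 + 0.5) ≈ 0.667, and n = 1 leaves the reliability unchanged.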

  9. Wechsler Individual Achievement Test - Wikipedia

    en.wikipedia.org/wiki/Wechsler_Individual...

    The test takes 45–90 minutes to administer depending on the age of the participant. The mean score for the WIAT-II is 100 with a standard deviation of 15, and the scores on the test may range from 40 to 160. 68% of participants in the UK standardisation sample obtained scores of 85–115 and 95% obtained scores of 70–130.
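The 68% and 95% figures quoted above are the familiar proportions of a normal distribution within one and two standard deviations of the mean, which can be checked with Python's standard library:

```python
from statistics import NormalDist

# WIAT-II standard scores: mean 100, SD 15 (per the snippet above)
wiat = NormalDist(mu=100, sigma=15)

within_1sd = wiat.cdf(115) - wiat.cdf(85)   # fraction scoring 85-115, ~0.683
within_2sd = wiat.cdf(130) - wiat.cdf(70)   # fraction scoring 70-130, ~0.954
```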