Search results

  1. Kolmogorov–Smirnov test - Wikipedia

    en.wikipedia.org/wiki/Kolmogorov–Smirnov_test

    Illustration of the Kolmogorov–Smirnov statistic. The red line is a model CDF, the blue line is an empirical CDF, and the black arrow is the K–S statistic. The Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous, see Section 2.2), one-dimensional probability distributions that can be used to test whether a sample came from a ...
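
    A minimal SciPy sketch of the two forms described above (the sample and the reference distribution are invented for illustration):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.0, scale=1.0, size=200)   # hypothetical data

    # One-sample K-S test: is the sample drawn from a standard normal?
    # The statistic is the largest gap between empirical and model CDFs.
    print(stats.kstest(sample, "norm"))

    # Two-sample form: are two samples drawn from the same distribution?
    other = rng.normal(loc=0.2, scale=1.0, size=200)
    print(stats.ks_2samp(sample, other))
    ```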

  2. Cramér–von Mises criterion - Wikipedia

    en.wikipedia.org/wiki/Cramér–von_Mises_criterion

    In statistics, the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function compared to a given empirical distribution function, or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as.
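
    The snippet is truncated right at the definition; on the Wikipedia page it continues, for an empirical distribution function F_n and a hypothesized CDF F*:

    ```latex
    \omega^2 = \int_{-\infty}^{\infty} \bigl[ F_n(x) - F^*(x) \bigr]^2 \, dF^*(x)
    ```

    with the one-sample test statistic taken as n\omega^2. SciPy exposes this as stats.cramervonmises; a short sketch with invented data:

    ```python
    from numpy.random import default_rng
    from scipy import stats

    sample = default_rng(0).normal(size=150)   # hypothetical data
    # One-sample Cramér-von Mises test against a standard normal CDF.
    print(stats.cramervonmises(sample, "norm"))
    ```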

  3. Lilliefors test - Wikipedia

    en.wikipedia.org/wiki/Lilliefors_test

    The Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. It is used to test the null hypothesis that data come from a normally distributed population, when the null hypothesis does not specify which normal distribution; i.e., it does not specify the expected value and variance of the distribution. [1]
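
    Assuming statsmodels is available, its implementation estimates the mean and variance from the data and uses the Lilliefors-corrected null distribution; a sketch with invented data:

    ```python
    from numpy.random import default_rng
    from statsmodels.stats.diagnostic import lilliefors

    x = default_rng(0).normal(loc=5.0, scale=2.0, size=100)  # hypothetical data
    stat, pvalue = lilliefors(x, dist="norm")
    print(stat, pvalue)
    ```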

  4. Kuiper's test - Wikipedia

    en.wikipedia.org/wiki/Kuiper's_test

    Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K–S test, as it is often called). As with the K–S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared.
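
    SciPy has no built-in Kuiper test, so below is a minimal NumPy sketch of the statistic V = D+ + D− against a model CDF (a hypothetical helper; it omits the p-value, which requires Kuiper's asymptotic distribution):

    ```python
    import numpy as np
    from scipy import stats

    def kuiper_statistic(sample, cdf):
        """Kuiper's V = D+ + D- for a sample against a model CDF."""
        x = np.sort(np.asarray(sample))
        n = len(x)
        u = cdf(x)
        d_plus = np.max(np.arange(1, n + 1) / n - u)   # most positive gap
        d_minus = np.max(u - np.arange(0, n) / n)      # most negative gap
        return d_plus + d_minus

    v = kuiper_statistic(np.random.default_rng(1).uniform(size=300),
                         stats.uniform.cdf)
    print(v)
    ```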

  5. Empirical distribution function - Wikipedia

    en.wikipedia.org/wiki/Empirical_distribution...

    The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to ...
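
    A direct NumPy sketch of the estimator, F_n(t) = (number of sample points <= t) / n:

    ```python
    import numpy as np

    def ecdf(sample):
        """Return the empirical CDF of a sample as a callable step function."""
        x = np.sort(np.asarray(sample))
        n = len(x)
        return lambda t: np.searchsorted(x, t, side="right") / n

    F = ecdf([3.0, 1.0, 2.0, 2.0])
    print(F(2.0))   # 0.75: three of the four points are <= 2.0
    ```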

  6. Benford's law - Wikipedia

    en.wikipedia.org/wiki/Benford's_law

    The Kolmogorov–Smirnov test and the Kuiper test are more powerful when the sample size is small, particularly when Stephens's corrective factor is used. [54] These tests may be unduly conservative when applied to discrete distributions. Values for the Benford test have been generated by Morrow. [55]
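
    As one simple, hypothetical illustration of testing first-digit conformity, the sketch below tallies leading digits and runs a plain chi-square goodness-of-fit test against the Benford probabilities log10(1 + 1/d); it is a stand-in for, not an implementation of, the K–S/Kuiper variants with Stephens's correction mentioned above:

    ```python
    import numpy as np
    from scipy import stats

    def first_digits(values):
        """Leading decimal digit of each positive value."""
        v = np.abs(np.asarray(values, dtype=float))
        v = v[v > 0]
        d = (v / 10 ** np.floor(np.log10(v))).astype(int)
        return np.clip(d, 1, 9)   # guard against float rounding at the edges

    digits = np.arange(1, 10)
    benford = np.log10(1 + 1 / digits)          # P(first digit = d)

    data = np.random.default_rng(2).lognormal(sigma=2.0, size=1000)  # invented
    observed = np.bincount(first_digits(data), minlength=10)[1:]
    print(stats.chisquare(observed, f_exp=benford * observed.sum()))
    ```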

  7. Normality test - Wikipedia

    en.wikipedia.org/wiki/Normality_test

    A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (the number of sample standard deviations that a sample point lies above or below the sample mean), and compares it to the 68–95–99.7 rule: if one has a 3σ event (properly, a 3s event) and substantially fewer than 300 samples, or a 4s event and substantially fewer than 15,000 ...
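
    The rule of thumb is easy to compute directly (invented data; ddof=1 gives the sample standard deviation s):

    ```python
    import numpy as np

    x = np.random.default_rng(3).normal(size=250)   # hypothetical sample
    s = x.std(ddof=1)
    t_max = (x.max() - x.mean()) / s   # extreme deviations in units of s
    t_min = (x.min() - x.mean()) / s
    print(t_max, t_min)

    # By the 68-95-99.7 rule, a 3s event is roughly 1-in-370 under normality,
    # so one appearing in far fewer than ~300 samples suggests heavy tails.
    ```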

  8. Nonparametric statistics - Wikipedia

    en.wikipedia.org/wiki/Nonparametric_statistics

    Kolmogorov–Smirnov test: tests whether a sample is drawn from a given distribution, or whether two samples are drawn from the same distribution. Kruskal–Wallis one-way analysis of variance by ranks: tests whether > 2 independent samples are drawn from the same distribution.
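
    Both entries map to SciPy one-liners; a minimal Kruskal–Wallis example with made-up groups (the K–S calls are sketched under result 1 above):

    ```python
    from scipy import stats

    # Kruskal-Wallis H-test: are >2 independent samples drawn from the
    # same distribution? A rank-based analogue of one-way ANOVA.
    a = [6.2, 5.9, 6.1, 6.8]
    b = [5.1, 5.4, 5.0, 5.6]
    c = [6.0, 6.3, 5.8]
    print(stats.kruskal(a, b, c))
    ```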