When.com Web Search

Search results

  1. Kolmogorov–Smirnov test - Wikipedia

    en.wikipedia.org/wiki/Kolmogorov–Smirnov_test

    Illustration of the Kolmogorov–Smirnov statistic: the red line is a model CDF, the blue line is an empirical CDF, and the black arrow is the K–S statistic. The Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous, see Section 2.2), one-dimensional probability distributions that can be used to test whether a sample came from a ... (A sketch of computing the statistic appears in the examples after this list.)

  2. Cramér–von Mises criterion - Wikipedia

    en.wikipedia.org/wiki/Cramér–von_Mises_criterion

    In statistics, the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function compared to a given empirical distribution function, or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as ... (A sketch of the one-sample statistic appears in the examples after this list.)

  3. Normality test - Wikipedia

    en.wikipedia.org/wiki/Normality_test

    A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (number of sample standard deviations that a sample is above or below the sample mean), and compares it to the 68–95–99.7 rule: if one has a 3σ event (properly, a 3s event) and substantially fewer than 300 samples, or a 4s event and substantially fewer than 15,000 ... (A sketch of this check appears in the examples after this list.)

  4. Lilliefors test - Wikipedia

    en.wikipedia.org/wiki/Lilliefors_test

    The Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. It is used to test the null hypothesis that data come from a normally distributed population, when the null hypothesis does not specify which normal distribution; i.e., it does not specify the expected value and variance of the distribution. [1] (A sketch appears in the examples after this list.)

  5. Benford's law - Wikipedia

    en.wikipedia.org/wiki/Benford's_law

    Although the chi-squared test has been used to test for compliance with Benford's law, it has low statistical power when used with small samples. The Kolmogorov–Smirnov test and the Kuiper test are more powerful when the sample size is small, particularly when Stephens's corrective factor is used. [54] (A chi-squared sketch appears in the examples after this list.)

  6. Empirical distribution function - Wikipedia

    en.wikipedia.org/wiki/Empirical_distribution...

    The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to ... (A sketch of computing an ECDF appears in the examples after this list.)

  7. Kuiper's test - Wikipedia

    en.wikipedia.org/wiki/Kuiper's_test

    Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K–S test as it is often called). As with the K–S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared. (A sketch appears in the examples after this list.)

  8. Goodness of fit - Wikipedia

    en.wikipedia.org/wiki/Goodness_of_fit

    The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test ...
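
Example sketches

The sketches below illustrate the statistics described in the results above. They are minimal illustrations under stated assumptions (synthetic data, a standard-normal model CDF, particular optional libraries), not reference implementations.

The Kolmogorov–Smirnov result describes the K–S statistic as the largest gap between an empirical CDF and a model CDF. A minimal sketch, assuming a standard-normal model and synthetic data, with SciPy's built-in test used only as a cross-check:

    # One-sample Kolmogorov–Smirnov statistic: D = sup |F_n(x) - F(x)|.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.sort(rng.normal(size=200))       # synthetic sample, drawn from the model itself
    n = x.size

    F = stats.norm.cdf(x)                   # model CDF evaluated at the order statistics
    d_plus = np.max(np.arange(1, n + 1) / n - F)    # ECDF exceeding the model
    d_minus = np.max(F - np.arange(0, n) / n)       # model exceeding the ECDF
    D = max(d_plus, d_minus)                # the K–S statistic

    # SciPy's kstest should report the same statistic (plus a p-value).
    print(D, stats.kstest(x, "norm").statistic)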
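
The Cramér–von Mises result leaves its definition truncated; the usual one-sample computing formula is T = 1/(12n) + Σ_i ((2i−1)/(2n) − F(x_(i)))². A minimal sketch of that formula, again assuming a standard-normal model CDF; the cross-check assumes a reasonably recent SciPy that includes cramervonmises:

    # One-sample Cramér–von Mises statistic T = n * omega^2.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = np.sort(rng.normal(size=200))
    n = x.size

    F = stats.norm.cdf(x)                   # model CDF at the order statistics
    i = np.arange(1, n + 1)
    T = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - F) ** 2)

    # Available in recent SciPy versions; its statistic should match T.
    print(T, stats.cramervonmises(x, "norm").statistic)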
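
The normality-test result describes a back-of-the-envelope check: how many sample standard deviations the sample extremes lie from the sample mean, judged against the 68–95–99.7 rule. A minimal sketch on synthetic data:

    # Distance of the sample extremes from the sample mean, in units of s.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=100)

    mean = x.mean()
    s = x.std(ddof=1)                       # sample standard deviation, hence "3s" rather than "3σ"
    z_max = (x.max() - mean) / s
    z_min = (mean - x.min()) / s

    print(f"n={x.size}: max lies {z_max:.2f}s above the mean, min lies {z_min:.2f}s below")
    # Under normality a |z| > 3 extreme occurs only about 3 times in 1000 draws,
    # so seeing one in a much smaller sample casts doubt on normality.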
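
The Lilliefors result is the K–S idea with the normal's mean and variance estimated from the sample itself. A minimal sketch of the statistic; the p-value line assumes the optional statsmodels package, since standard K–S tables do not apply once the parameters are fitted:

    # Lilliefors-type statistic: K–S distance to a normal fitted to the data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = np.sort(rng.normal(loc=5.0, scale=2.0, size=150))
    n = x.size

    F = stats.norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))   # fitted, not pre-specified, normal
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    print("statistic:", max(d_plus, d_minus))

    try:
        # Assumes statsmodels is installed; it supplies the Lilliefors null distribution.
        from statsmodels.stats.diagnostic import lilliefors
        print("statsmodels (statistic, p-value):", lilliefors(x, dist="norm"))
    except ImportError:
        pass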
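
The Benford's-law result mentions the chi-squared test for compliance. A minimal sketch comparing observed leading-digit counts with the Benford proportions log10(1 + 1/d), on a synthetic sample spanning several orders of magnitude (as the snippet notes, the test has low power for small samples):

    # Chi-squared test of first digits against Benford's law.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    values = np.exp(rng.uniform(0, 10, size=1000))      # positive values over ~4 decades
    first_digits = np.array([int(f"{v:e}"[0]) for v in values])

    digits = np.arange(1, 10)
    observed = np.array([(first_digits == d).sum() for d in digits])
    expected = np.log10(1 + 1 / digits) * observed.sum()   # Benford proportions, scaled to counts

    chi2, pvalue = stats.chisquare(observed, f_exp=expected)
    print(chi2, pvalue)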
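
The empirical-distribution-function result describes F_n as an estimate of the underlying CDF built from the sample; F_n(t) is simply the fraction of sample points at or below t. A minimal sketch:

    # Empirical distribution function of a sample.
    import numpy as np

    def ecdf(sample):
        """Return a function t -> fraction of sample values <= t."""
        xs = np.sort(np.asarray(sample))
        n = xs.size
        def F_n(t):
            # side="right" counts how many sorted values are <= t
            return np.searchsorted(xs, t, side="right") / n
        return F_n

    rng = np.random.default_rng(5)
    F_n = ecdf(rng.normal(size=500))
    print(F_n(0.0))                          # near 0.5 for a standard-normal sample
    print(F_n(np.array([-1.0, 0.0, 1.0])))   # works elementwise on arrays too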
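
The Kuiper's-test result describes the same D+ and D− discrepancies as the K–S test; Kuiper's statistic combines them as V = D+ + D− rather than taking their maximum. A minimal one-sample sketch against an assumed standard-normal model:

    # Kuiper's statistic V = D+ + D-, alongside the K–S statistic for comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    x = np.sort(rng.normal(size=200))
    n = x.size

    F = stats.norm.cdf(x)                   # assumed model CDF
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)

    print("Kuiper V:", d_plus + d_minus)
    print("K–S D:", max(d_plus, d_minus))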