When.com Web Search

Search results

  2. Z-test - Wikipedia

    en.wikipedia.org/wiki/Z-test

    Difference between Z-test and t-test: the Z-test is used when the sample size is large (commonly n > 50) or the population variance is known; the t-test is used when the sample size is small (n < 50) and the population variance is unknown. There is no universal cutoff at which a sample is considered large enough to justify use of the plug-in (Z) test.
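
    The "population variance known" case above can be sketched with a one-sample Z statistic. A minimal stdlib-only Python sketch; the function name and the numbers are illustrative assumptions, not from the article:

```python
import math

def one_sample_z(sample_mean, pop_mean, pop_sd, n):
    """One-sample Z statistic and its one-sided upper p-value (sigma known)."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    p = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z) under the null
    return z, p

# Large sample, known population SD: the Z-test applies
z, p = one_sample_z(sample_mean=103.0, pop_mean=100.0, pop_sd=15.0, n=100)
print(z, p)  # z = 2.0
```

    With σ unknown and n small, the same statistic would use the sample SD and a t reference distribution instead.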

  3. Multiple comparisons problem - Wikipedia

    en.wikipedia.org/wiki/Multiple_comparisons_problem

    A normal quantile plot for a simulated set of test statistics that have been standardized to be Z-scores under the null hypothesis. The departure of the upper tail of the distribution from the expected trend along the diagonal is due to the presence of substantially more large test statistic values than would be expected if all null hypotheses were true.
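
    The departure described above can be reproduced numerically. A stdlib-only sketch; the seeds, the 900/100 split, and the shifted mean of 3 are assumptions for illustration:

```python
from statistics import NormalDist

nd = NormalDist()
nulls = nd.samples(900, seed=1)                 # true null hypotheses
alts = NormalDist(mu=3.0).samples(100, seed=2)  # false nulls inflate the upper tail
scores = sorted(nulls + alts)

# Quantiles expected if every null were true (the diagonal of the quantile plot)
expected = [nd.inv_cdf((i + 0.5) / 1000) for i in range(1000)]

# The upper tail departs from the diagonal: the largest observed statistics
# far exceed the largest quantiles expected under the global null
print(scores[-1], expected[-1])
```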

  4. Paired difference test - Wikipedia

    en.wikipedia.org/wiki/Paired_difference_test

    Suppose we are using a Z-test to analyze the data, where the variances of the pre-treatment and post-treatment data σ₁² and σ₂² are known (the situation with a t-test is similar). The unpaired Z-test statistic is Z = (X̄₂ − X̄₁) / √(σ₁²/n + σ₂²/n). The power of the unpaired, one-sided test carried out at level α = 0.05 can be calculated as follows:
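
    That power calculation can be written out directly. A stdlib-only sketch assuming equal group sizes n and a one-sided test that rejects for large Z; the numeric inputs are illustrative:

```python
from statistics import NormalDist

nd = NormalDist()

def unpaired_z_power(delta, sd1, sd2, n, alpha=0.05):
    """Power of the one-sided unpaired Z-test rejecting when Z > z_{1-alpha}.

    delta = mu2 - mu1 is the true difference in means, assumed positive.
    """
    se = ((sd1 ** 2 + sd2 ** 2) / n) ** 0.5  # SE of the difference of means
    z_crit = nd.inv_cdf(1 - alpha)
    return 1 - nd.cdf(z_crit - delta / se)

power = unpaired_z_power(delta=1.0, sd1=2.0, sd2=2.0, n=25)
print(round(power, 3))
```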

  5. Statistical hypothesis test - Wikipedia

    en.wikipedia.org/wiki/Statistical_hypothesis_test

    This ensures that the hypothesis test maintains its specified false positive rate (provided that statistical assumptions are met). [35] The p-value is the probability that a test statistic which is at least as extreme as the one obtained would occur under the null hypothesis. At a significance level of 0.05, a fair coin would be expected to ...
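
    The p-value definition above can be made concrete with an exact binomial test on coin flips. A stdlib-only sketch; the 14-of-20 outcome is an illustrative assumption:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Two-sided exact p-value: total probability of outcomes whose
    probability under H0 is no larger than that of the observed count."""
    pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# 14 heads in 20 flips of a fair coin: p is about 0.115, so at a
# significance level of 0.05 the fair-coin null is not rejected
print(binom_two_sided_p(14, 20))
```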

  6. Asymptotic theory (statistics) - Wikipedia

    en.wikipedia.org/wiki/Asymptotic_theory_(statistics)

    Most statistical problems begin with a dataset of size n. Asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, so that the sample size grows without bound, i.e. n → ∞. Under this assumption, many results can be obtained that are unavailable for samples of finite size.
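
    A quick simulation illustrates one such large-n result, the 1/√n decay of the sample mean's standard deviation. A stdlib-only sketch; the seed, replication count, and Uniform(0,1) population are assumptions for illustration:

```python
import random

random.seed(0)

def sd_of_mean(n, reps=2000):
    """Empirical SD of the mean of n Uniform(0,1) draws."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
    mu = sum(means) / reps
    return (sum((m - mu) ** 2 for m in means) / reps) ** 0.5

# Theory: SD(mean) = sigma / sqrt(n), so quadrupling n should halve it
for n in (10, 40, 160):
    print(n, round(sd_of_mean(n), 4))
```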

  7. Z-fighting - Wikipedia

    en.wikipedia.org/wiki/Z-fighting

    It can also vary as the scene or camera is changed, causing one polygon to "win" the z test, then another, and so on. The overall effect is flickering, noisy rasterization of two polygons which "fight" to color the screen pixels. This problem is usually caused by limited sub-pixel precision, floating point and fixed point round-off errors.
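
    The round-off mechanism is easy to demonstrate with a fixed-point depth buffer. A small Python sketch; the 16-bit depth format and the specific z values are illustrative assumptions:

```python
def quantize_depth(z, bits=16):
    """Map a normalized depth in [0, 1] to a fixed-point depth-buffer value."""
    return int(z * ((1 << bits) - 1))

# Two coplanar polygons at almost identical depths land in the same bucket,
# so the depth test cannot order them stably and the winner can flicker
a, b = 0.500000, 0.500004
print(quantize_depth(a) == quantize_depth(b))          # True at 16 bits
print(quantize_depth(a, 32) == quantize_depth(b, 32))  # False at 32 bits
```

    Increasing depth-buffer precision (or offsetting one polygon) separates the buckets, which is why z-fighting is precision-dependent.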

  9. Bonferroni correction - Wikipedia

    en.wikipedia.org/wiki/Bonferroni_correction

    With respect to FWER control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated. [9] Multiple-testing corrections, including the Bonferroni procedure, increase the probability of Type II errors when null hypotheses are false, i.e., they reduce statistical power.
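
    The procedure itself is a one-liner. A stdlib-only sketch; the p-values are illustrative:

```python
def bonferroni(pvalues, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m; controls FWER at level alpha."""
    m = len(pvalues)
    return [p <= alpha / m for p in pvalues]

# Four tests at family-wise level 0.05: each p-value is compared against 0.0125
print(bonferroni([0.001, 0.012, 0.03, 0.2]))  # [True, True, False, False]
```

    The conservatism noted above shows in the example: 0.03 would be rejected at the unadjusted 0.05 level but survives the corrected threshold.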