Search results

  1. Post hoc analysis - Wikipedia

    en.wikipedia.org/wiki/Post_hoc_analysis

    Tukey's Test (see also: Studentized Range Distribution). However, with the exception of Scheffé's method, these tests should be specified "a priori" despite being called "post-hoc" in conventional usage. For example, a difference between means could be significant with the Holm–Bonferroni method but not with Tukey's test, and vice versa.

  2. Tukey's range test - Wikipedia

    en.wikipedia.org/wiki/Tukey's_range_test

    Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSD (honestly significant difference) test, [1] is a single-step multiple comparison procedure and statistical test.

  3. Family-wise error rate - Wikipedia

    en.wikipedia.org/wiki/Family-wise_error_rate

    The procedures of Bonferroni and Holm control the FWER under any dependence structure of the p-values (or equivalently the individual test statistics). Essentially, this is achieved by accommodating a "worst-case" dependence structure (which is close to independence for most practical purposes).
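
    The dependence-free guarantee described here follows from the union bound; a brief sketch of the argument (notation mine: I_0 indexes the true null hypotheses and m_0 = |I_0| ≤ m):

        \mathrm{FWER} = \Pr\Big(\bigcup_{i \in I_0} \{ p_i \le \alpha/m \}\Big)
                      \le \sum_{i \in I_0} \Pr\big(p_i \le \alpha/m\big)
                      \le m_0 \cdot \frac{\alpha}{m} \le \alpha

    Only the marginal (super-)uniformity of each p-value under its null is used, so the bound holds for any joint dependence structure of the tests.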

  4. Multiple comparisons problem - Wikipedia

    en.wikipedia.org/wiki/Multiple_comparisons_problem

    The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such as Tukey and Scheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place in Tel Aviv. [3]

  5. Holm–Bonferroni method - Wikipedia

    en.wikipedia.org/wiki/Holm–Bonferroni_method

    The Holm–Bonferroni method is "uniformly" more powerful than the classic Bonferroni correction, meaning that it is always at least as powerful. There are other methods for controlling the FWER that are more powerful than Holm–Bonferroni. For instance, in the Hochberg procedure, rejection of H(1), …, H(k) …
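
    As a concrete illustration of the step-down idea (a minimal sketch, not code from the article; the function name and example p-values are made up):

        # Holm step-down at FWER level alpha: compare the k-th smallest p-value
        # with alpha/(m - k + 1) and stop at the first failure.
        def holm(pvalues, alpha=0.05):
            m = len(pvalues)
            order = sorted(range(m), key=lambda i: pvalues[i])
            reject = [False] * m
            for rank, i in enumerate(order):          # rank 0 = smallest p-value
                if pvalues[i] <= alpha / (m - rank):
                    reject[i] = True
                else:
                    break                             # all remaining hypotheses are retained
            return reject

        # Holm rejects the two smallest p-values here; plain Bonferroni (0.05/4 = 0.0125)
        # rejects only the first, illustrating the power gain.
        print(holm([0.010, 0.015, 0.040, 0.200]))     # [True, True, False, False]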

  6. Šidák correction - Wikipedia

    en.wikipedia.org/wiki/Šidák_correction

    For example, for α = 0.05 and m = 10, the Bonferroni-adjusted level is 0.005 and the Šidák-adjusted level is approximately 0.005116. One can also compute confidence intervals matching the test decision using the Šidák correction by computing each confidence interval at the 100 · (1 − α)^(1/m) % level.
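
    A quick check of the quoted numbers (a sketch, not code from the article):

        # Per-test significance levels for alpha = 0.05 over m = 10 tests.
        alpha, m = 0.05, 10
        bonferroni = alpha / m                       # 0.005
        sidak = 1 - (1 - alpha) ** (1 / m)           # ≈ 0.0051162
        ci_level = (1 - alpha) ** (1 / m)            # matching per-interval confidence level
        print(bonferroni, round(sidak, 6), round(100 * ci_level, 2))   # 0.005 0.005116 99.49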

  7. Bonferroni correction - Wikipedia

    en.wikipedia.org/wiki/Bonferroni_correction

    With respect to FWER control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated. [9] Multiple-testing corrections, including the Bonferroni procedure, increase the probability of Type II errors when null hypotheses are false, i.e., they reduce statistical power.
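
    An illustrative simulation of the power loss described here (assumptions mine: 20 one-sample t-tests of which only the simulated one has a real effect; numpy/scipy):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        alpha, m, n, effect, reps = 0.05, 20, 30, 0.5, 2000
        hits_raw = hits_bonf = 0
        for _ in range(reps):
            x = rng.normal(effect, 1.0, size=n)       # data from a false null (true mean 0.5)
            p = stats.ttest_1samp(x, 0.0).pvalue
            hits_raw += p < alpha                     # uncorrected test at level alpha
            hits_bonf += p < alpha / m                # Bonferroni-adjusted level alpha/m
        # The uncorrected rejection rate (power) is noticeably higher than the
        # Bonferroni-corrected one, i.e. the correction trades Type I control
        # for extra Type II errors.
        print(hits_raw / reps, hits_bonf / reps)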

  8. Newman–Keuls method - Wikipedia

    en.wikipedia.org/wiki/Newman–Keuls_method

    To determine if there is a significant difference between two means with equal sample sizes, the Newman–Keuls method uses a formula that is identical to the one used in Tukey's range test, which calculates the q value by taking the difference between two sample means and dividing it by the standard error:
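
    The formula cut off at the end of this snippet is the studentized range statistic; for equal per-group sample size n and within-group mean square error MSE from the ANOVA, it reads (stated here in the usual notation, not copied from the article):

        q = \frac{\bar{x}_A - \bar{x}_B}{SE}, \qquad SE = \sqrt{\frac{\mathrm{MSE}}{n}}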