When.com Web Search

Search results

  1. Statistical distance - Wikipedia

    en.wikipedia.org/wiki/Statistical_distance

    In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, or two probability distributions or samples, or the distance can be between an individual sample point and a population or a wider sample of points.

  2. Total variation distance of probability measures - Wikipedia

    en.wikipedia.org/wiki/Total_variation_distance...

    The total variation distance is half of the L1 distance between the probability functions: on discrete domains, this is the distance between the probability mass functions, [4] δ(P, Q) = (1/2) ∑_x |P(x) − Q(x)|, and when the distributions have standard probability density functions p and q, [5] δ(P, Q) = (1/2) ∫ |p(x) − q(x)| dx.
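
    A minimal sketch of the discrete formula above, assuming two distributions given as arrays over the same finite support (the example values are hypothetical):

    ```python
    import numpy as np

    def total_variation_distance(p, q):
        """Total variation distance between two discrete distributions,
        computed as half the L1 distance between their probability mass
        functions, per the definition quoted above."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        return 0.5 * np.abs(p - q).sum()

    # Example: two distributions on the same 3-point support.
    print(total_variation_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # 0.1
    ```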

  3. Divergence (statistics) - Wikipedia

    en.wikipedia.org/wiki/Divergence_(statistics)

    Its formal use dates at least to Bhattacharyya (1943), entitled "On a measure of divergence between two statistical populations defined by their probability distributions", which defined the Bhattacharyya distance, and Bhattacharyya (1946), entitled "On a Measure of Divergence between Two Multinomial Populations", which defined the ...

  4. Integral probability metric - Wikipedia

    en.wikipedia.org/wiki/Integral_probability_metric

    In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance.
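
    Written out (a standard formulation with the function class left generic; not quoted from the article):

    ```latex
    % Integral probability metric generated by a function class \mathcal{F}:
    d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| \int f \, dP - \int f \, dQ \right|
    % Taking \mathcal{F} to be the 1-Lipschitz functions recovers the
    % Wasserstein-1 distance (Kantorovich-Rubinstein duality).
    ```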

  5. Total variation - Wikipedia

    en.wikipedia.org/wiki/Total_variation

    However, when μ and ν are probability measures, the total variation distance of probability measures can be defined as ‖μ − ν‖, where the norm is the total variation norm of signed measures. Using the property that (μ − ν)(X) = 0, we eventually arrive at the equivalent definition ‖μ − ν‖ = 2 sup{ |μ(A) − ν(A)| : A ∈ Σ }.
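
    A sketch of the step alluded to here, using the Jordan decomposition of the signed measure μ − ν (standard reasoning, not quoted from the article):

    ```latex
    % Write \mu - \nu = (\mu - \nu)^+ - (\mu - \nu)^-. Since (\mu - \nu)(X) = 0,
    % the positive and negative parts carry equal mass, each equal to
    % \sup_{A \in \Sigma} (\mu(A) - \nu(A)), so the total variation norm is
    \|\mu - \nu\| = (\mu - \nu)^+(X) + (\mu - \nu)^-(X)
                  = 2 \sup_{A \in \Sigma} |\mu(A) - \nu(A)|.
    ```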

  6. Bhattacharyya distance - Wikipedia

    en.wikipedia.org/wiki/Bhattacharyya_distance

    In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. [1] It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
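
    A minimal sketch for discrete distributions on a common support, showing the coefficient BC and the distance D_B = -ln(BC) (example values are hypothetical):

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient (overlap in [0, 1]) and distance
        for two discrete distributions on the same support."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        bc = np.sum(np.sqrt(p * q))   # coefficient
        return bc, -np.log(bc)        # coefficient, distance

    bc, d_b = bhattacharyya([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
    print(bc, d_b)
    ```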

  7. Wasserstein metric - Wikipedia

    en.wikipedia.org/wiki/Wasserstein_metric

    In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space. It is named after Leonid Vaseršteĭn.
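
    For one-dimensional empirical distributions, SciPy exposes this as scipy.stats.wasserstein_distance; a minimal sketch (the sample values are made up):

    ```python
    from scipy.stats import wasserstein_distance

    # Wasserstein-1 distance between two 1-D empirical distributions,
    # each given by a list of observed values (uniform weights).
    u = [0.0, 1.0, 3.0]
    v = [5.0, 6.0, 8.0]
    print(wasserstein_distance(u, v))  # 5.0
    ```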

  8. Pinsker's inequality - Wikipedia

    en.wikipedia.org/wiki/Pinsker's_inequality

    where V(p, q) = ∑_x |p(x) − q(x)| is the (non-normalized) variation distance between two probability density functions p and q on the same alphabet. [2] This form of Pinsker's inequality shows that "convergence in divergence" is a stronger notion than "convergence in variation distance".
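
    A small numerical check of the non-normalized form V(p, q) ≤ sqrt(2 D(p‖q)) for discrete distributions (example values are hypothetical):

    ```python
    import numpy as np

    def kl_divergence(p, q):
        """KL divergence D(p || q) in nats for discrete distributions
        (assumes q > 0 wherever p > 0)."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])

    v = np.abs(p - q).sum()                   # non-normalized variation distance
    bound = np.sqrt(2 * kl_divergence(p, q))
    print(v, bound, v <= bound)               # the inequality should hold: True
    ```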