Search results

  1. Statistical distance - Wikipedia

    en.wikipedia.org/wiki/Statistical_distance

    In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, two probability distributions, or two samples; it can also be the distance between an individual sample point and a population or a wider sample of points.

  2. Bhattacharyya distance - Wikipedia

    en.wikipedia.org/wiki/Bhattacharyya_distance

    In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. [1] It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
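
    As a concrete illustration, here is a minimal sketch of the coefficient and the distance for two discrete distributions on the same support, assuming numpy (the example distributions p and q are invented):

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient BC(P, Q) = sum_x sqrt(P(x) * Q(x))
        and distance D_B(P, Q) = -ln(BC), for discrete distributions."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        bc = np.sum(np.sqrt(p * q))  # 1 when the distributions coincide, 0 when disjoint
        return bc, -np.log(bc)

    p = [0.36, 0.48, 0.16]
    q = [0.30, 0.50, 0.20]
    bc, db = bhattacharyya(p, q)
    print(f"coefficient={bc:.4f}  distance={db:.4f}")
    ```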

  3. Jensen–Shannon divergence - Wikipedia

    en.wikipedia.org/wiki/Jensen–Shannon_divergence

    The geometric Jensen–Shannon divergence [7] (or G-Jensen–Shannon divergence) yields a closed-form formula for the divergence between two Gaussian distributions by taking the geometric mean. A more general definition, allowing for the comparison of more than two probability distributions, is $\mathrm{JSD}_{\pi_1,\ldots,\pi_n}(P_1,\ldots,P_n) = H\bigl(\sum_{i=1}^n \pi_i P_i\bigr) - \sum_{i=1}^n \pi_i H(P_i)$, where $\pi_1,\ldots,\pi_n$ are weights for the distributions $P_1,\ldots,P_n$ and $H$ is the Shannon entropy.
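
    A hedged sketch of that general definition, assuming numpy and discrete distributions on a common support (the weights and the distributions P, Q, R are invented for illustration):

    ```python
    import numpy as np

    def entropy(p):
        """Shannon entropy in nats, with 0 * log(0) treated as 0."""
        p = np.asarray(p, dtype=float)
        nz = p > 0
        return -np.sum(p[nz] * np.log(p[nz]))

    def jensen_shannon(dists, weights):
        """Generalized JSD: H(sum_i pi_i * P_i) - sum_i pi_i * H(P_i)."""
        dists = np.asarray(dists, dtype=float)
        weights = np.asarray(weights, dtype=float)
        mixture = weights @ dists  # the weighted mixture distribution
        return entropy(mixture) - weights @ np.array([entropy(d) for d in dists])

    P, Q, R = [0.5, 0.3, 0.2], [0.1, 0.4, 0.5], [1/3, 1/3, 1/3]
    print(jensen_shannon([P, Q, R], [0.5, 0.25, 0.25]))
    ```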

  4. Similarity measure - Wikipedia

    en.wikipedia.org/wiki/Similarity_measure

    Typical measures of similarity for probability distributions are the Bhattacharyya distance and the Hellinger distance. Both provide a quantification of similarity for two probability distributions on the same domain, and they are mathematically closely linked.

  5. Hellinger distance - Wikipedia

    en.wikipedia.org/wiki/Hellinger_distance

    In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions. It is a type of f-divergence. The Hellinger distance is defined in terms of the Hellinger integral, which was introduced by Ernst Hellinger in 1909.
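
    To make the link to the Bhattacharyya coefficient concrete, a minimal numpy sketch for discrete distributions (the example distributions are invented) that checks the identity H(P, Q) = sqrt(1 - BC(P, Q)):

    ```python
    import numpy as np

    def hellinger(p, q):
        """Hellinger distance for discrete distributions:
        H(P, Q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) / 2)

    p = np.array([0.36, 0.48, 0.16])
    q = np.array([0.30, 0.50, 0.20])
    bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient
    print(hellinger(p, q), np.sqrt(1 - bc))   # the two values agree
    ```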

  6. Total variation distance of probability measures - Wikipedia

    en.wikipedia.org/wiki/Total_variation_distance...

    The total variation distance is half of the $L^1$ distance between the probability functions: on discrete domains, this is the distance between the probability mass functions, [4] $\delta(P,Q) = \tfrac{1}{2}\sum_x \lvert P(x) - Q(x)\rvert$, and when the distributions have standard probability density functions p and q, [5] $\delta(P,Q) = \tfrac{1}{2}\int \lvert p(x) - q(x)\rvert\,dx$.
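
    A small sketch of both forms, assuming numpy and scipy (the discrete example and the pair of normal densities are invented):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    # Discrete case: half the L1 distance between the probability mass functions.
    P = np.array([0.2, 0.5, 0.3])
    Q = np.array([0.3, 0.3, 0.4])
    tv_discrete = 0.5 * np.sum(np.abs(P - Q))

    # Continuous case: half the L1 distance between the densities p and q.
    p, q = stats.norm(0, 1).pdf, stats.norm(1, 1).pdf
    tv_continuous, _ = quad(lambda x: 0.5 * abs(p(x) - q(x)), -np.inf, np.inf)

    print(tv_discrete, tv_continuous)
    ```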

  7. Stein's method - Wikipedia

    en.wikipedia.org/wiki/Stein's_method

    Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972, [1] to obtain a bound between the distribution of a sum of an $m$-dependent sequence of random variables and a standard normal distribution in the Kolmogorov (uniform) metric.
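
    Stein's method itself is an analytic technique, but the metric it bounds is easy to evaluate empirically. A sketch with that much weaker ambition, assuming scipy: estimate the Kolmogorov (uniform) distance between a standardized sum of i.i.d. Uniform(0, 1) variables and the standard normal (the sample sizes are arbitrary choices):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, reps = 10, 100_000

    # Standardized sums of n i.i.d. Uniform(0, 1) variables.
    sums = rng.random((reps, n)).sum(axis=1)
    z = (sums - n / 2) / np.sqrt(n / 12)

    # Kolmogorov (uniform) metric: sup_x |F_n(x) - Phi(x)|, estimated
    # from the empirical CDF of the standardized sums.
    res = stats.kstest(z, "norm")
    print(res.statistic)  # small, consistent with a CLT-rate bound
    ```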

  8. Mahalanobis distance - Wikipedia

    en.wikipedia.org/wiki/Mahalanobis_distance

    The Mahalanobis distance is a measure of the distance between a point and a distribution, introduced by P. C. Mahalanobis in 1936. [1] The mathematical details of Mahalanobis distance first appeared in the Journal of The Asiatic Society of Bengal in 1936. [2]
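
    For illustration, a minimal sketch assuming numpy and scipy.spatial.distance.mahalanobis, which takes the inverse covariance matrix (the sample data and the query point are invented):

    ```python
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(1)
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[2.0, 0.5], [0.5, 1.0]], size=500)

    mu = X.mean(axis=0)                          # center of the distribution
    VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

    # d(x) = sqrt((x - mu)^T S^{-1} (x - mu))
    point = np.array([2.0, 1.0])
    print(mahalanobis(point, mu, VI))
    ```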