In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions. It is a type of f-divergence. The Hellinger distance is defined in terms of the Hellinger integral, which was introduced by Ernst Hellinger in 1909.
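As a concrete illustration (a minimal sketch, not drawn from any source cited here), for two discrete distributions on a common finite support the Hellinger distance reduces to $H(P, Q) = \frac{1}{\sqrt{2}} \lVert \sqrt{P} - \sqrt{Q} \rVert_2$; the function name below is illustrative:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance H(P, Q) = (1/sqrt(2)) * ||sqrt(P) - sqrt(Q)||_2
    for discrete probability vectors on the same finite support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2))

print(hellinger([0.1, 0.4, 0.5], [0.3, 0.3, 0.4]))  # ~0.182
print(hellinger([0.1, 0.4, 0.5], [0.1, 0.4, 0.5]))  # 0.0 for identical inputs
```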
Notably, except for total variation distance, all others are special cases of f-divergence, or linear sums of f-divergences. For each f-divergence $D_f$, its generating function is not uniquely defined, but only up to $c \cdot (t - 1)$, where $c$ is any real constant.
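A short sketch (illustrative, not taken from the source) makes the non-uniqueness concrete: for probability vectors, adding $c \cdot (t - 1)$ to a generator contributes $c \cdot (\sum_i p_i - \sum_i q_i) = 0$, so the generator $t \log t$ of the Kullback–Leibler divergence and its shifted variant give the same value. The function name `f_divergence` is an assumption made for this sketch.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_i q_i * f(p_i / q_i), assuming strictly positive q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

kl_gen = lambda t: t * np.log(t)                     # generates KL divergence
c = 7.0
shifted_gen = lambda t: t * np.log(t) + c * (t - 1)  # generator shifted by c*(t - 1)

p, q = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]
print(f_divergence(p, q, kl_gen))       # ~0.0946
print(f_divergence(p, q, shifted_gen))  # same value: the shift sums to zero
```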
The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. [10] Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959 , pp. 6–7, §1.3 Divergence).
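The triangle-inequality failure is easy to check numerically. Below is a minimal sketch (not from the source) using the symmetrized Kullback–Leibler divergence (Jeffreys divergence) on three Bernoulli distributions, where the direct divergence exceeds the sum of the two legs through the midpoint:

```python
import numpy as np

def kl(p, q):
    """KL(P || Q) for discrete probability vectors with positive entries."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def jeffreys(p, q):
    """Symmetrized KL divergence: KL(P || Q) + KL(Q || P)."""
    return kl(p, q) + kl(q, p)

a, b, c = [0.1, 0.9], [0.5, 0.5], [0.9, 0.1]
print(jeffreys(a, c))                    # ~3.52
print(jeffreys(a, b) + jeffreys(b, c))   # ~1.76 < 3.52: triangle inequality fails
```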
The only divergence for probabilities over a finite alphabet that is both an f-divergence and a Bregman divergence is the Kullback–Leibler divergence. [8] The squared Euclidean divergence is a Bregman divergence (corresponding to the function $x^2$) but not an f-divergence.
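As a rough sketch (assuming discrete probability vectors with positive entries; the helper names are illustrative), the general Bregman divergence $B_F(p, q) = F(p) - F(q) - \langle \nabla F(q), p - q \rangle$ recovers the squared Euclidean distance from $F(x) = \sum_i x_i^2$ and the Kullback–Leibler divergence from $F(x) = \sum_i x_i \log x_i$:

```python
import numpy as np

def bregman(p, q, F, grad_F):
    """B_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(F(p) - F(q) - grad_F(q) @ (p - q))

sq, sq_grad   = lambda x: np.sum(x**2), lambda x: 2 * x            # F(x) = sum x_i^2
ent, ent_grad = lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1

p, q = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]
print(bregman(p, q, sq, sq_grad))    # 0.06 = ||p - q||^2 (squared Euclidean)
print(bregman(p, q, ent, ent_grad))  # ~0.0946 = KL(p || q) on probability vectors
```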
Many terms are used to refer to various notions of distance; these are often confusingly similar, and may be used inconsistently between authors and over time, either loosely or with precise technical meaning. In addition to "distance", similar terms include deviance, deviation, discrepancy, discrimination, and divergence, as well as others ...
The Kullback–Leibler divergence of one normal distribution $\mathcal{N}(\mu_1, \sigma_1^2)$ from another $\mathcal{N}(\mu_2, \sigma_2^2)$ is $D_{\mathrm{KL}} = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$. The Hellinger distance between the same distributions satisfies $H^2(P, Q) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2}}\, e^{-\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)}}$. The Fisher ...
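A minimal sketch (not from the source; the closed forms above are coded directly and the function names are illustrative) of both quantities for univariate normals:

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def hellinger2_normal(mu1, s1, mu2, s2):
    """Squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    v = s1**2 + s2**2
    return 1 - np.sqrt(2 * s1 * s2 / v) * np.exp(-(mu1 - mu2)**2 / (4 * v))

print(kl_normal(0.0, 1.0, 1.0, 2.0))          # ~0.443
print(hellinger2_normal(0.0, 1.0, 1.0, 2.0))  # ~0.149
```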
There are several ways to measure the distance between two distributions. The three most common possibilities are the Kullback–Leibler divergence, the total variation distance of probability measures, and the Kolmogorov distance. The strongest of these distances is the Kullback–Leibler divergence and the weakest is the Kolmogorov distance.
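A minimal sketch (illustrative, not from the source) makes the ordering concrete on discrete distributions: the Kolmogorov distance is bounded by the total variation distance, which in turn is bounded via Pinsker's inequality by $\sqrt{D_{\mathrm{KL}}/2}$, so smallness in KL forces smallness in the weaker distances.

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.3, 0.1, 0.6])

kl  = float(np.sum(p * np.log(p / q)))                    # Kullback-Leibler divergence
tv  = 0.5 * float(np.sum(np.abs(p - q)))                  # total variation distance
kol = float(np.max(np.abs(np.cumsum(p) - np.cumsum(q))))  # Kolmogorov (max CDF gap)

print(kol, tv, np.sqrt(kl / 2))       # ~0.1, 0.2, ~0.28
print(kol <= tv <= np.sqrt(kl / 2))   # True: Kolmogorov <= TV <= sqrt(KL/2)
```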