In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, such as two random variables, two probability distributions, or two samples; the distance can also be measured between an individual sample point and a population or a wider sample of points.
In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference or variational distance.
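For two discrete distributions over the same finite support, the total variation distance works out to half the L1 distance between the probability vectors. A minimal sketch (the example vectors are illustrative):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions
    given as probability vectors over the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Two distributions over three outcomes
print(total_variation([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # 0.1
```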
In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. [1] It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
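For discrete distributions p and q on the same support, the Bhattacharyya coefficient is BC(p, q) = Σ_i √(p_i q_i) and the Bhattacharyya distance is D_B = −ln BC. A small sketch under those assumptions:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient and distance for two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    bc = np.sum(np.sqrt(p * q))   # overlap: 1 for identical, 0 for disjoint supports
    return bc, -np.log(bc)        # (coefficient, distance)

bc, d_b = bhattacharyya([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
```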
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space. It is named after Leonid Vaseršteĭn.
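In one dimension, for two equally weighted samples of the same size, the Wasserstein-1 distance between the empirical distributions reduces to the mean absolute difference of the sorted samples. A sketch under that assumption (SciPy's scipy.stats.wasserstein_distance covers the general weighted 1-D case):

```python
import numpy as np

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D samples,
    each treated as a uniform empirical distribution."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return np.mean(np.abs(x - y))  # the optimal coupling pairs order statistics

print(wasserstein_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
```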
In statistics and probability, "divergence" generally refers to any kind of function D(P, Q), where P, Q are probability distributions or other objects under consideration, such that conditions 1 and 2 are satisfied. Condition 3 is required for "divergence" as used in information geometry.
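A common statement of those conditions, following the usual information-geometry definition (the notation below is mine, not quoted from the snippet):

```latex
\begin{enumerate}
  \item Non-negativity: $D(p, q) \ge 0$ for all $p, q$.
  \item Identity of indiscernibles: $D(p, q) = 0$ if and only if $p = q$.
  \item (Information geometry) The second-order expansion around $p$,
        $D(p, p + dp) = \tfrac{1}{2} \sum_{i,j} g_{ij}(p)\, dp_i\, dp_j + O(\lVert dp \rVert^3)$,
        is a positive-definite quadratic form, i.e.\ it defines a Riemannian metric $g_{ij}(p)$.
\end{enumerate}
```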
In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance .
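The general form, with the function class ℱ as the distinguishing ingredient (notation is mine):

```latex
d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
  \bigl| \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \bigr|
```

Taking ℱ to be the 1-Lipschitz functions gives the Wasserstein-1 distance (Kantorovich–Rubinstein duality), while a class of suitably bounded functions recovers the total variation distance up to the normalization convention.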
The Mahalanobis distance is a measure of the distance between a point and a distribution, introduced by P. C. Mahalanobis in 1933. [1] The mathematical details of the Mahalanobis distance first appeared in the Journal of The Asiatic Society of Bengal in 1933. [2]
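The usual formula is d(x) = √((x − μ)ᵀ Σ⁻¹ (x − μ)), with the mean μ and covariance Σ of the distribution. A sketch that estimates both from a data matrix (variable names are illustrative):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of point x from the distribution estimated
    from the rows of `data` (sample mean and sample covariance)."""
    data = np.asarray(data, dtype=float)
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    delta = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(delta @ np.linalg.solve(cov, delta)))
```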
The Jensen–Shannon divergence is bounded by 1 for two probability distributions, given that one uses the base 2 logarithm: [8] 0 ≤ JSD(P ∥ Q) ≤ 1. With this normalization, it is a lower bound on the total variation distance between P and Q: JSD(P ∥ Q) ≤ δ(P, Q).
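A sketch of the base-2 Jensen–Shannon divergence for discrete distributions, using JSD(P ∥ Q) = ½ KL(P ∥ M) + ½ KL(Q ∥ M) with the mixture M = ½(P + Q):

```python
import numpy as np

def jensen_shannon(p, q):
    """Jensen-Shannon divergence with base-2 logs; lies in [0, 1]."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                      # 0 * log(0/x) contributes nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Disjoint distributions attain the upper bound of 1
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

(For reference, scipy.spatial.distance.jensenshannon returns the square root of the divergence, i.e. the Jensen–Shannon distance, with a selectable log base.)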