When.com Web Search

Search results

  2. Anomaly detection - Wikipedia

    en.wikipedia.org/wiki/Anomaly_detection

    In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well-defined notion of normal behavior. [1]

  3. Local outlier factor - Wikipedia

    en.wikipedia.org/wiki/Local_outlier_factor

    In anomaly detection, the local outlier factor (LOF) is an algorithm proposed by Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng and Jörg Sander in 2000 for finding anomalous data points by measuring the local deviation of a given data point with respect to its neighbours.
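A minimal sketch of the LOF construction the snippet describes (k-distance, reachability distance, local reachability density), in plain Python; the point set and choice of k below are illustrative assumptions, not from the article:

```python
import math

def knn(points, i, k):
    # Sorted (distance, index) pairs from points[i] to every other point.
    dists = sorted((math.dist(points[i], points[j]), j)
                   for j in range(len(points)) if j != i)
    # The k nearest neighbours of points[i], plus its k-distance.
    return [j for _, j in dists[:k]], dists[k - 1][0]

def lof(points, k=3):
    neigh = [knn(points, i, k) for i in range(len(points))]

    def lrd(i):
        # Local reachability density: inverse of the mean reachability
        # distance reach(i, j) = max(k-distance(j), d(i, j)).
        nbrs, _ = neigh[i]
        reach = [max(neigh[j][1], math.dist(points[i], points[j]))
                 for j in nbrs]
        return len(reach) / sum(reach)

    # LOF(i) = mean lrd of i's neighbours divided by lrd(i);
    # scores near 1 mean "as dense as the neighbours", >> 1 means outlier.
    return [sum(lrd(j) for j in neigh[i][0]) / (k * lrd(i))
            for i in range(len(points))]

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (8.0, 8.0)]
scores = lof(pts, k=3)  # cluster points score near 1; (8, 8) scores far above
```

Points inside the tight cluster get LOF ≈ 1 because their local density matches that of their neighbours, while the isolated point's score is several times larger.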

  4. Grubbs's test - Wikipedia

    en.wikipedia.org/wiki/Grubbs's_test

    In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950 [1]), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population.
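The maximum normalized residual the snippet names is straightforward to compute; this sketch uses an illustrative sample, and the critical value quoted in the comment is the approximate tabulated two-sided value for this sample size, not something computed by the code:

```python
import statistics

def grubbs_statistic(data):
    # G = max |x_i - mean| / s, the maximum normalized residual.
    mean = statistics.fmean(data)
    s = statistics.stdev(data)  # sample standard deviation
    return max(abs(x - mean) for x in data) / s

sample = [9.8, 10.1, 10.0, 10.2, 9.9, 14.7]  # 14.7 is the suspect value
G = grubbs_statistic(sample)
# Declare an outlier if G exceeds the tabulated critical value for the
# sample size and significance level (roughly 1.887 for N=6, alpha=0.05,
# two-sided); here G is about 2.04, so 14.7 would be flagged.
```

Note the normality assumption: the test is only meaningful if the non-outlying data can reasonably be modelled as normally distributed.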

  5. Outlier - Wikipedia

    en.wikipedia.org/wiki/Outlier

    In various domains such as, but not limited to, statistics, signal processing, finance, econometrics, manufacturing, networking and data mining, the task of anomaly detection may take other approaches. Some of these may be distance-based [19][20] or density-based, such as the Local Outlier Factor (LOF). [21]

  6. CUSUM - Wikipedia

    en.wikipedia.org/wiki/CUSUM

    The low CUSUM value, detecting a negative anomaly: S_lo[n] = max(0, S_lo[n−1] + ω − x_n), where ω is a critical level parameter (tunable, same as the threshold T) that is used to adjust the sensitivity of change detection: a larger ω makes CUSUM less sensitive to the change, and vice versa.
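A minimal two-sided CUSUM sketch, assuming a known target mean; ω (the drift / critical level) and the threshold are the tunable parameters the snippet describes, and the data values are illustrative:

```python
def cusum(xs, target, omega, threshold):
    # Two-sided CUSUM: s_hi accumulates positive shifts from the target,
    # s_lo accumulates negative shifts; omega is the drift that must be
    # exceeded before evidence accumulates.
    s_hi = s_lo = 0.0
    alarms = []
    for n, x in enumerate(xs):
        s_hi = max(0.0, s_hi + (x - target) - omega)
        s_lo = max(0.0, s_lo + (target - x) - omega)
        if s_hi > threshold or s_lo > threshold:
            alarms.append(n)
            s_hi = s_lo = 0.0  # restart accumulation after an alarm
    return alarms

data = [0.1, -0.2, 0.0, 0.1, 2.1, 2.0, 1.9, 2.2, 0.0]
alarms = cusum(data, target=0.0, omega=0.5, threshold=2.0)
# alarms -> [5, 7]: the shift starting at index 4 is flagged once the
# accumulated evidence crosses the threshold.
```

Raising ω here makes the sums grow more slowly, so small sustained shifts go undetected; lowering it makes the detector more sensitive but more prone to false alarms, exactly the trade-off the snippet describes.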

  7. Change detection - Wikipedia

    en.wikipedia.org/wiki/Change_detection

    More generally, change detection also includes the detection of anomalous behavior: anomaly detection. In offline change point detection it is assumed that a sequence of length T is available, and the goal is to identify whether any change point(s) occurred in the series.
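The offline setting the snippet describes can be illustrated with a brute-force single-change-point search under a piecewise-constant-mean model; the cost function (within-segment squared error) and the example series are illustrative assumptions:

```python
def best_change_point(xs):
    # Try every split point t and keep the one minimizing the total
    # within-segment squared error (piecewise-constant-mean model).
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_t, best_cost = None, float("inf")
    for t in range(1, len(xs)):
        cost = sse(xs[:t]) + sse(xs[t:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

series = [0.0, 0.1, -0.1, 0.0, 5.0, 5.1, 4.9, 5.0]
t = best_change_point(series)  # the mean shifts at index 4
```

Unlike the online CUSUM above, this has the whole sequence of length T in hand and searches over candidate change points after the fact.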

  8. k-nearest neighbors algorithm - Wikipedia

    en.wikipedia.org/wiki/K-nearest_neighbors_algorithm

    The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density, the more likely the query point is an outlier. [24]
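The k-NN distance score the snippet describes is a one-liner per point; the point set and k below are illustrative:

```python
import math

def knn_distance_score(points, k):
    # Outlier score of each point = distance to its k-th nearest neighbour.
    # Larger score -> sparser neighbourhood -> more likely an outlier.
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q)
                       for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (10.0, 10.0)]
scores = knn_distance_score(pts, k=2)
# Cluster points score 1.0 (their 2nd neighbour is one unit away);
# the isolated point (10, 10) scores an order of magnitude higher.
```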

  9. Random sample consensus - Wikipedia

    en.wikipedia.org/wiki/Random_sample_consensus

    A simple example is fitting a line in two dimensions to a set of observations. Assuming that this set contains both inliers (points which can approximately be fitted to a line) and outliers (points which cannot), a simple least-squares line fit will generally produce a line that fits the whole data set, inliers and outliers alike, poorly.
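The line-fitting example above can be sketched with a bare-bones RANSAC loop: repeatedly sample a minimal set (two points), propose a line, and keep the candidate with the largest consensus set of inliers. The iteration count, tolerance, and synthetic data are illustrative assumptions:

```python
import random

def ransac_line(points, n_iters=200, tol=0.5, seed=0):
    # Fit y = a*x + b by sampling pairs of points and keeping the
    # candidate line supported by the most inliers.
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical candidates for this simple sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 20.0), (7.0, -5.0)]
model, inliers = ransac_line(pts)
# model recovers (a, b) = (2.0, 1.0); the two outliers are excluded.
```

A plain least-squares fit on the same twelve points would be dragged toward the two outliers; RANSAC instead reports the model agreed on by the ten collinear inliers, which is exactly the robustness property the snippet motivates.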