The relationship between sensitivity, specificity, and similar terms can be understood using a 2×2 contingency table of test outcomes. Consider a group with P positive instances and N negative instances of some condition.
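The four cells of that contingency table can be computed directly from labelled data. A minimal sketch, using hypothetical example labels (the vectors below are illustrative, not from the text):

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, FN, FP, TN) for binary labels, with 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fn, fp, tn

# Hypothetical data: P = 4 positive instances, N = 6 negative instances.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

tp, fn, fp, tn = confusion_counts(y_true, y_pred)
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
```

Here sensitivity comes out to 3/4 and specificity to 4/6, since one positive was missed and two negatives were flagged.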
In a classification task, the precision for a class is the number of true positives (i.e. the number of items correctly labelled as belonging to the positive class) divided by the total number of elements labelled as belonging to the positive class (i.e. the sum of true positives and false positives, which are items incorrectly labelled as belonging to the class).
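In terms of the contingency-table counts, that definition is a one-liner; the counts below are hypothetical:

```python
tp, fp = 3, 2                # true and false positives from some classifier
precision = tp / (tp + fp)   # fraction of items labelled positive that truly are
```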
The specificity of the test is equal to 1 minus the false positive rate. In statistical hypothesis testing, this fraction is given the Greek letter α, and 1 − α is defined as the specificity of the test. Increasing the specificity of the test lowers the probability of type I errors, but may raise the probability of type II errors (false negatives).
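That trade-off can be seen by sweeping a decision threshold over synthetic scores. The Gaussian score distributions below are an assumption made purely for illustration:

```python
import random

random.seed(0)
# Assumed synthetic scores: negatives centred at 0, positives centred at 2.
# A subject is called positive when its score exceeds the threshold.
neg = [random.gauss(0.0, 1.0) for _ in range(1000)]
pos = [random.gauss(2.0, 1.0) for _ in range(1000)]

def error_rates(threshold):
    alpha = sum(s > threshold for s in neg) / len(neg)   # type I (false positive) rate
    beta = sum(s <= threshold for s in pos) / len(pos)   # type II (false negative) rate
    return alpha, beta

a_low, b_low = error_rates(0.5)    # lenient threshold
a_high, b_high = error_rates(1.5)  # strict threshold

# Raising the threshold raises specificity (1 - alpha) but also raises beta.
assert a_high <= a_low and b_high >= b_low
```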
[Figure: example receiver operating characteristic (ROC) curve highlighting the area under the curve (AUC), with the sub-area of low sensitivity and low specificity shown in red and the sub-area of high or sufficient sensitivity and specificity shown in green.]
The log diagnostic odds ratio can also be used to study the trade-off between sensitivity and specificity [5] [6] by expressing it in terms of the logit of the true positive rate (sensitivity) and of the false positive rate (1 − specificity):

log(DOR) = logit(TPR) − logit(FPR) = ln(TPR / (1 − TPR)) − ln(FPR / (1 − FPR))
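A quick numerical check of that identity, using hypothetical rates (TPR = 0.9, FPR = 0.2 are illustrative values, not from the text):

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

tpr = 0.9   # sensitivity (hypothetical)
fpr = 0.2   # 1 - specificity (hypothetical)

# Diagnostic odds ratio: odds of a positive test in the diseased group
# divided by the odds of a positive test in the healthy group.
dor = (tpr / (1 - tpr)) / (fpr / (1 - fpr))

log_dor = logit(tpr) - logit(fpr)
assert abs(math.log(dor) - log_dor) < 1e-12
```

With these values DOR = 9 / 0.25 = 36, so both sides equal ln 36.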
The relationship between sensitivity and specificity, as well as the performance of the classifier, can be visualized and studied using the receiver operating characteristic (ROC) curve. In theory, sensitivity and specificity are independent, in the sense that it is possible to achieve 100% in both.
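An ROC curve is traced by sweeping a decision threshold over classifier scores and recording (FPR, TPR) at each setting. A minimal sketch, with hypothetical scores and labels:

```python
# Hypothetical classifier scores and true labels (1 = positive).
scores = [0.1, 0.3, 0.35, 0.5, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1,   1]

P = sum(labels)            # number of positives
N = len(labels) - P        # number of negatives

def roc_point(threshold):
    """Return (FPR, TPR) when scores >= threshold are called positive."""
    tpr = sum(s >= threshold and l == 1 for s, l in zip(scores, labels)) / P
    fpr = sum(s >= threshold and l == 0 for s, l in zip(scores, labels)) / N
    return fpr, tpr

curve = [roc_point(t) for t in (0.0, 0.4, 0.55, 0.7, 1.0)]
# The curve runs from (1, 1) (call everything positive) to (0, 0)
# (call nothing positive); better classifiers bow toward (0, 1).
```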
For example, the ACR criteria for systemic lupus erythematosus define the diagnosis as the presence of at least 4 out of 11 findings, each of which can be regarded as a target value of a test with its own sensitivity and specificity. In this case, the tests have been evaluated for these target parameters when used in combination.
The positive predictive value (PPV), or precision, is defined as

PPV = TP / (TP + FP),

where a "true positive" is the event that the test makes a positive prediction and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction and the subject has a negative result under the gold standard.
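Unlike sensitivity and specificity, PPV depends on how common the condition is. A sketch of that dependence via Bayes' rule, with hypothetical sensitivity, specificity, and prevalence values:

```python
# Hypothetical test characteristics and population prevalence.
sensitivity = 0.9
specificity = 0.95
prevalence = 0.01    # 1% of subjects actually have the condition

# Bayes' rule: P(condition | positive test).
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
)
# Even a fairly specific test yields a modest PPV at low prevalence,
# because false positives from the large healthy group dominate.
```

With these numbers the PPV is about 15.4%, despite 95% specificity.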