Classification algorithm performance metrics
The confusion matrix is used to evaluate the quality of a classifier and is presented in the table below.
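The four cells of the confusion matrix can be counted directly from paired labels; here is a minimal sketch in plain Python (the example label lists are made up for illustration):

```python
# Count the confusion-matrix cells for a binary classifier (positive class = 1).
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # True Positives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # True Negatives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # False Positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # False Negatives
    return tp, tn, fp, fn

# Hypothetical example: 1 = spam, 0 = legitimate email.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(confusion_counts(y_true, y_pred))  # (2, 2, 1, 1)
```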
A False Positive (FP) is, for example, an anti-spam engine moving a legitimate email to the junk folder.
A False Negative (FN) in medical screening can incorrectly indicate the absence of a disease when the patient is actually positive.
Accuracy is the most basic metric. It indicates the proportion of correctly classified items out of the total number of items.
Keep in mind that accuracy has limitations: it is misleading on imbalanced datasets, where one class has many items and the other classes have few.
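The imbalance problem is easy to demonstrate; in this sketch (with an invented 95/5 class split) a model that always predicts the majority class still scores high accuracy while missing every positive:

```python
# Accuracy = correctly classified items / total items.
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Imbalanced data: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# A degenerate model that always predicts "negative" ...
y_pred = [0] * 100
# ... still reaches 95% accuracy despite detecting no positives at all.
print(accuracy(y_true, y_pred))  # 0.95
```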
Recall shows how many of the actual positive items the model correctly classified as positive.
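Recall can be computed from the confusion-matrix counts as TP / (TP + FN); a minimal sketch (the counts are illustrative):

```python
# Recall = True Positives / all actual positives (TP + FN).
def recall(tp, fn):
    total_positives = tp + fn
    return tp / total_positives if total_positives else 0.0

# Example: 8 positives found, 2 missed.
print(recall(8, 2))  # 0.8
```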
Precision represents the number of True Positives that are really positive compared to the total number of positively predicted values.
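Precision, by contrast, divides by the predicted positives rather than the actual ones: TP / (TP + FP). A sketch with illustrative counts:

```python
# Precision = True Positives / all predicted positives (TP + FP).
def precision(tp, fp):
    predicted_positives = tp + fp
    return tp / predicted_positives if predicted_positives else 0.0

# Example: 8 correct positive predictions, 8 false alarms.
print(precision(8, 8))  # 0.5
```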
The F1 score combines the precision and recall metrics and serves as a compromise between them. The best F1 score equals 1, while the worst one is 0.
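Concretely, F1 is the harmonic mean of precision and recall, 2PR / (P + R), so a low value of either metric drags the score down. A sketch with illustrative values:

```python
# F1 = harmonic mean of precision (p) and recall (r).
def f1_score(p, r):
    if p + r == 0:
        return 0.0
    return 2 * p * r / (p + r)

# High recall cannot fully compensate for mediocre precision:
print(f1_score(0.5, 0.9))   # ~0.643, well below the arithmetic mean of 0.7
print(f1_score(1.0, 1.0))   # 1.0, the best possible score
```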