Confusion Matrix
The performance of a classification model on a given set of test data is evaluated using a confusion matrix. The matrix records the four possible combinations of predicted and actual values. From these four counts, metrics such as recall, precision, specificity, and accuracy can be computed, and sweeping the decision threshold over many such matrices yields the AUC-ROC curve.
The matrix has two dimensions, actual values and predicted values, and each cell holds the number of predictions falling into that combination.
It assesses how well a classification model performs when making predictions on test data and indicates how effective the model is. It also reveals not only that a classification error occurred but the specific type of error, such as a type-I error (false positive) or a type-II error (false negative).
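The ideas above can be sketched in a few lines of Python. This is a minimal illustration with made-up labels, not a reference implementation: it counts the four cells of a binary confusion matrix (TP, FP, FN, TN) and derives the metrics named earlier from them.

```python
def confusion_counts(actual, predicted, positive=1):
    """Count the four cells of a binary confusion matrix."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    return tp, fp, fn, tn

# Hypothetical test-set labels for illustration only.
actual    = [1, 0, 1, 1, 0, 1, 0, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, fn, tn = confusion_counts(actual, predicted)

accuracy    = (tp + tn) / (tp + fp + fn + tn)  # overall fraction correct
precision   = tp / (tp + fp)                   # of predicted positives, how many were right
recall      = tp / (tp + fn)                   # of actual positives, how many were found
specificity = tn / (tn + fp)                   # of actual negatives, how many were found
```

Here a false positive (an FP count) corresponds to a type-I error and a false negative (an FN count) to a type-II error, which is why the matrix pinpoints the kind of mistake, not just its frequency.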