The easiest way to evaluate a classification model is to compare the values we expected against the values the model predicted, and count all the cases in which we were correct or incorrect; that is, to build a confusion matrix.
For anyone who has worked on classification problems in machine learning, the confusion matrix is a fairly familiar concept. It plays a vital role in helping us evaluate classification models and provides clues about how to improve their performance.
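The counting described above can be sketched in a few lines of plain Python. The labels here are made up for illustration; in practice they would come from a test set and a trained model.

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels 0/1 by counting
    every combination of expected vs. predicted value."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical expected labels and model predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(confusion_counts(y_true, y_pred))  # → (3, 1, 1, 3)
```

The four counts are exactly the cells of the 2x2 confusion matrix: true positives, false positives, false negatives, and true negatives.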
Although classification tasks produce discrete results, these models usually carry some degree of uncertainty. Most models express their output as a probability of belonging to each class. Typically, a decision threshold that maps the output probability to a discrete class is set at the prediction step. Most often, this probability threshold is set at 0.5.
However, depending on the use case and how well the model captures the relevant information, this threshold can be adjusted. We can analyze how the model performs at several thresholds to achieve the desired results.
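As a minimal sketch of this mapping step, the function below converts class-1 probabilities into discrete labels at a given threshold; the probability values are invented for illustration.

```python
def predict_at_threshold(probs, threshold=0.5):
    """Map predicted class-1 probabilities to discrete 0/1 labels
    by comparing each probability against the decision threshold."""
    return [1 if p >= threshold else 0 for p in probs]

# Hypothetical model output probabilities
probs = [0.1, 0.4, 0.55, 0.8, 0.35, 0.9]

print(predict_at_threshold(probs))       # default threshold of 0.5
print(predict_at_threshold(probs, 0.7))  # stricter threshold
```

Raising the threshold makes the model more conservative about predicting the positive class, which trades false positives for false negatives; sweeping the threshold and rebuilding the confusion matrix at each value is the usual way to pick the setting that fits the use case.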