
Can I make the following statement about a binary classification, please?

Precision 1: 0.10 Recall 1: 0.83

Statement: "We can expect 90% false alarms (1 - 0.10). But for the remaining 10%, we can be around 83% certain (Recall 1: 0.83) that we caught a label 1."
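For reference, here are the standard definitions I am working from (nothing specific to my setup), with $TP$, $FP$, and $FN$ counted for label 1:

$$\text{Precision}_1 = \frac{TP}{TP + FP} = 0.10, \qquad \text{Recall}_1 = \frac{TP}{TP + FN} = 0.83$$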

Thanks in advance!

[Image: classification report showing per-label precision, recall, F1-score, and support]


1 Answer


I would phrase it like so:

"Of all records that were labelled 1 by the model, 10% were actually 1 (90% incorrect predictions). Of all records that were truly labelled 1 we predicted 83% correctly."

While this is outside the scope of your question: if support refers to the number of records, it would be beneficial to work with a more balanced dataset. The reason your precision for label 1 is so poor is that there are many more "negatives" (0s) than "positives" (1s), which increases the number of false positives and drags down your precision.
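As a rough illustration of that effect, here is a back-of-the-envelope sketch; the dataset size, positive rate, and the 2% false-positive rate are all assumed purely for illustration:

```python
# Assumed figures: 35M records, 0.25% label 1, recall 0.83, and a 2%
# false-positive rate on the negatives (illustrative only).
n = 35_000_000
pos = int(n * 0.0025)   # ~87,500 true 1's
neg = n - pos           # ~34.9M true 0's
tp = 0.83 * pos         # positives actually caught (recall of 0.83)
fp = 0.02 * neg         # false alarms: a tiny rate applied to a huge negative pool
print(tp / (tp + fp))   # ≈ 0.09 -- roughly the precision reported in the question
```

With a positive base rate that small, even a modest false-positive rate on the negatives produces several times more false alarms than true hits, which is exactly what a precision near 0.10 reflects.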

EDIT: This question on Cross Validated provides more explanation.

  • Thank you for your explanation HFulcher! This helps a lot. Poor label? It's 35 million entries with 0.25% label 1. AUROC 0.97. I would call that a success. – FrancoSwiss, commented Feb 22, 2019 at 18:25
  • 1
    $\begingroup$ @FrancoSwiss happy to help! A high ROC score doesn't necessarily mean that your model has succeeded in dealing with one of the labels well. You can see from the F1 scores that the model is heavily biased towards predicting 0 due to the imbalance in the training set. I don't know the constraints of your dataset so this could very well be a success in this context! If you feel that I have answered your question sufficiently please mark it as answered, otherwise I would be happy to elaborate :) $\endgroup$
    – HFulcher
    Commented Feb 22, 2019 at 18:40
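To make HFulcher's point concrete, here is a small synthetic sketch (assumed stand-in data and model, not the actual 35-million-row set): on a heavily imbalanced problem, the same scores can yield a very high ROC AUC and a much lower average precision.

```python
# Synthetic illustration: ~0.25% positives, a plain logistic regression.
# ROC AUC can look excellent while average precision (the
# precision-recall view) stays far lower on the exact same scores.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200_000, weights=[0.9975], flip_y=0,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print(roc_auc_score(y_te, proba))            # typically very high here
print(average_precision_score(y_te, proba))  # typically much lower on the same scores
```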
