Timeline for Precision vs. Recall

Current License: CC BY-SA 4.0

4 events
when toggle format what by license comment
Feb 22, 2019 at 19:52 vote accept FrancoSwiss
Feb 22, 2019 at 18:40 comment added HFulcher @FrancoSwiss happy to help! A high ROC score doesn't necessarily mean that your model has succeeded in dealing with one of the labels well. You can see from the F1 scores that the model is heavily biased towards predicting 0 due to the imbalance in the training set. I don't know the constraints of your dataset so this could very well be a success in this context! If you feel that I have answered your question sufficiently please mark it as answered, otherwise I would be happy to elaborate :)
Feb 22, 2019 at 18:25 comment added FrancoSwiss Thank you for your explanation, HFulcher! This helps a lot. Poor label? It's 35 million entries with 0.25% label 1. AUROC 0.97. I would call that a success.
Feb 22, 2019 at 17:11 history answered HFulcher CC BY-SA 4.0
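The exchange above turns on one point: on a heavily imbalanced dataset (here, 0.25% positives), a high AUROC or accuracy can coexist with a poor minority-class F1. A minimal sketch of that effect, using hypothetical counts (not the asker's actual data) and plain-Python metric formulas:

```python
# Hypothetical toy numbers, NOT the 35M-row dataset from the thread:
# 10,000 samples with 25 positives (0.25%), and a model that finds
# only 5 of them while raising 10 false alarms.

def precision_recall_f1(tp, fp, fn):
    """Standard precision/recall/F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

tp, fp, fn, tn = 5, 10, 20, 9965

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision, recall, f1 = precision_recall_f1(tp, fp, fn)

print(f"accuracy  = {accuracy:.4f}")   # 0.9970 -- looks excellent
print(f"precision = {precision:.3f}")  # 0.333
print(f"recall    = {recall:.3f}")     # 0.200
print(f"f1        = {f1:.3f}")         # 0.250 -- minority class handled poorly
```

This is why HFulcher points to the per-class F1 scores rather than the aggregate score: the majority class dominates accuracy-style metrics, so a model biased toward predicting 0 still looks strong unless you inspect the minority class directly.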