13 events
when toggle format what by license comment
Aug 17, 2023 at 14:37 comment added Murilo It does work for multiclass problems if you one-hot-encode the output, use categorical_crossentropy as the loss function, and use softmax as the activation function of the last layer. But it does not work if you don't provide the output in one-hot-encoded format and try to use sparse_categorical_crossentropy as the loss. In that case, precision, recall, and f1 are always higher than 1.
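The one-hot encoding the comment above refers to can be sketched in plain Python (a minimal illustration, not Keras's own `to_categorical` utility; the function name `one_hot` is a hypothetical helper chosen here):

```python
def one_hot(labels, num_classes):
    """Convert integer class labels to one-hot vectors,
    e.g. label 2 of 3 classes -> [0.0, 0.0, 1.0]."""
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

# Example: three samples, three classes
encoded = one_hot([0, 2, 1], num_classes=3)
# encoded == [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

With targets in this shape, `categorical_crossentropy` and a softmax output layer match the setup the comment describes.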
Nov 8, 2022 at 16:09 comment added Eli Halych Doesn't work well for a 3-class classification problem. Precision is always 0, and the f1 score starts above 1.0 and goes down over time.
Oct 27, 2020 at 14:23 review Suggested edits
Oct 27, 2020 at 17:32
Aug 27, 2020 at 11:40 comment added Zeeshan Ali @Panathinaikos these functions work correctly only for binary classification.
May 8, 2020 at 12:25 comment added rsd96 Recall and precision go higher than 1 for categorical classification.
Mar 29, 2020 at 10:02 comment added Panathinaikos Is there a reason why I get recall values higher than 1?
S Mar 12, 2020 at 16:14 history suggested TQA CC BY-SA 4.0
Format Python code
Mar 12, 2020 at 15:43 review Suggested edits
S Mar 12, 2020 at 16:14
Jan 12, 2020 at 7:07 comment added Rodrigo Ruiz Any idea why this is not working on validation for me? It works fine for training.
Feb 6, 2019 at 15:08 vote accept ZelelB
Feb 6, 2019 at 14:03 comment added Tasos Since Keras calculates those metrics at the end of each batch, you can get different results from the "real" metrics. An alternative is to split your dataset into training and test sets and use the test part to predict the results. Then, since you know the real labels, calculate precision and recall manually.
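The manual calculation the comment above suggests can be sketched in plain Python (a minimal illustration over whole-dataset predictions; in practice you would likely use `sklearn.metrics.precision_score` and `recall_score` on the output of `model.predict`):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for one class from
    full lists of true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 2 true positives, 1 false positive, 1 false negative
p, r = precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
# p == 2/3, r == 2/3
```

Because this runs over the full test set at once, it avoids the batch-wise averaging that makes the epoch-level Keras metrics misleading.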
Feb 6, 2019 at 13:52 comment added ZelelB If they can be misleading, how should one evaluate a Keras model then?
Feb 6, 2019 at 13:35 history answered Tasos CC BY-SA 4.0