23

After fitting the model (which ran for a couple of hours), I wanted to read the accuracy of the trained model with the following code:

train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(nb_epoch)

but I got an error, which seems to be caused by the deprecated methods and arguments I was using:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-233-081ed5e89aa4> in <module>()
      3 train_loss=hist.history['loss']
      4 val_loss=hist.history['val_loss']
----> 5 train_acc=hist.history['acc']
      6 val_acc=hist.history['val_acc']
      7 xc=range(nb_epoch)

KeyError: 'acc'

The code I used to fit the model before trying to read the accuracy is the following:

hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
            verbose=1, validation_data=(X_test, Y_test))


hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, 
            verbose=1, validation_split=0.2)

This produces the following output when run:

Epoch 1/20
237/237 [==============================] - 104s 440ms/step - loss: 6.2802 - val_loss: 2.4209
.....
Epoch 19/20
189/189 [==============================] - 91s 480ms/step - loss: 0.0590 - val_loss: 0.2193
Epoch 20/20
189/189 [==============================] - 85s 451ms/step - loss: 0.0201 - val_loss: 0.2312

I've noticed that I was using deprecated methods and arguments.

So how can I read the accuracy and val_accuracy without having to fit again and wait for another couple of hours? I tried replacing train_acc=hist.history['acc'] with train_acc=hist.history['accuracy'], but it didn't help.

1
  • Save your current model using this, then load it and print the accuracy by specifying the metrics. Commented Jun 26, 2018 at 18:08

4 Answers

33

You probably didn't add "acc" as a metric when compiling the model.

model.compile(optimizer=..., loss=..., metrics=['accuracy',...])

You can get the metrics and loss from any data without training again with:

model.evaluate(X, Y)
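
For example, a minimal sketch of reading the accuracy from the already-trained model without re-fitting, combining the two points above (the 'adam' optimizer and categorical_crossentropy loss are placeholders; use whatever the model was originally compiled with). Re-compiling only swaps the loss/optimizer/metrics; the trained weights are kept:

# re-compile the trained model, this time with an accuracy metric
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
loss, acc = model.evaluate(X_test, Y_test, verbose=0)  # returns [loss, accuracy]
print(acc)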
9
  • 2
    Yeah, so I have to add it now, AND have to wait for another couple of hours after calling fit again? Or is there a solution to get the accuracy without having to fit again? My question was actually how I could get it without re-fitting and waiting again? :-/
    – ZelelB
    Commented Jun 26, 2018 at 16:42
  • 4
    Use model.evaluate(X, Y,...) Commented Jun 26, 2018 at 16:44
  • That gives just the loss, as there weren't any other metrics given. I tried print(model.metrics_names) and got just ['loss'] returned.
    – ZelelB
    Commented Jun 26, 2018 at 16:48
  • 5
    The returned value of model.evaluate does contain loss and metrics. If it doesn't, the model wasn't compiled with metrics. Commented Jun 26, 2018 at 16:52
  • Try using this: scikit-learn.org/stable/modules/generated/… Commented Jun 27, 2018 at 6:25
11
  1. Add metrics=['accuracy'] when you compile the model.

  2. Simply get the accuracy of the last epoch: hist.history.get('acc')[-1].

  3. What I would actually do is use GridSearchCV and then read the best_score_ attribute to print the best metric (see the sketch below).
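
A rough sketch of option 3, assuming plain 2-D feature arrays X_train / Y_train with binary labels and a hypothetical build_model() helper (not from the question). KerasClassifier wraps the builder so GridSearchCV can cross-validate it, and best_score_ is the best mean cross-validated accuracy:

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model():
    # toy architecture, replace with your own
    model = Sequential()
    model.add(Dense(32, activation='relu', input_shape=(X_train.shape[1],)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=build_model, verbose=0)
grid = GridSearchCV(clf, param_grid={'epochs': [10, 20], 'batch_size': [16, 32]}, cv=3)
grid.fit(X_train, Y_train)
print(grid.best_score_)  # best mean cross-validated accuracy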

1
  • 1
    With tensorflow 2.3.0, use hist.history['accuracy'][-1] instead of 'acc'.
    – Sylvain
    Commented Nov 12, 2020 at 14:55
11

Just tried it in tensorflow==2.0.0, with the following result:

Given a training call like:

history = model.fit(train_data, train_labels, epochs=100,
                    validation_data=(test_images, test_labels))

The final accuracy for the above call can be read out as follows:

history.history['accuracy']

Printing the entire dict history.history gives you an overview of all the contained values. You will find that all the values reported in a line such as:

7570/7570 [==============================] - 42s 6ms/sample - loss: 1.1612 - accuracy: 0.5715 - val_loss: 0.5541 - val_accuracy: 0.8300

can be read out from that dict.
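
For instance, a quick (hypothetical) snippet using the history object from the fit call above to list the available keys and print the final-epoch values:

print(history.history.keys())
print('final accuracy:', history.history['accuracy'][-1])
print('final val_accuracy:', history.history['val_accuracy'][-1])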

For the sake of completeness, I created the model as follows:

model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.0001,
                                           beta_1=0.9,
                                           beta_2=0.999,
                                           epsilon=1e-07,
                                           amsgrad=False,
                                           name='Adam'),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
0

There is a way to get the accuracy of the best-performing model: add a callback such as ModelCheckpoint to serialize that model, and read the accuracy from the history at the epoch with the lowest loss:

import numpy as np

best_model_accuracy = history.history['acc'][np.argmin(history.history['loss'])]
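
A minimal sketch of the ModelCheckpoint part (the file name best_model.h5 is a placeholder, and batch_size/nb_epoch are the variables from the question): only the weights of the epoch with the lowest validation loss are written to disk, so that model can be reloaded later without re-fitting.

from keras.callbacks import ModelCheckpoint
from keras.models import load_model

# save the model only when val_loss improves
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss',
                             save_best_only=True, verbose=1)
hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch,
                 verbose=1, validation_split=0.2, callbacks=[checkpoint])
best_model = load_model('best_model.h5')  # reload the best epoch's model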
