
I am attempting to debug a Keras model that I have built. It seems that my gradients are exploding, or that there is a division by zero somewhere. It would be convenient to be able to inspect the various gradients as they back-propagate through the network. Something like the following would be ideal:

model.evaluate(np.array([[1,2]]), np.array([[1]])) # gives the loss
model.evaluate_gradient(np.array([[1,2]]), np.array([[1]]), layer=2) # gives dloss/doutput at layer 2 for the given input
model.evaluate_weight_gradient(np.array([[1,2]]), np.array([[1]]), layer=2) # gives dloss/dweights at layer 2 for the given input

1 Answer


You need to create a symbolic Keras function that takes the model's inputs and targets as inputs and returns the gradients. Here is a working example:

import numpy as np
import keras
from keras import backend as K

# A small model to demonstrate on
model = keras.Sequential()
model.add(keras.layers.Dense(20, input_shape=(10,)))
model.add(keras.layers.Dense(5))
model.compile('adam', 'mse')

dummy_in = np.ones((4, 10))
dummy_out = np.ones((4, 5))
# One training step on dummy data; returns the loss value
dummy_loss = model.train_on_batch(dummy_in, dummy_out)

def get_weight_grad(model, inputs, outputs):
    """ Gets gradient of model for given inputs and outputs for all weights"""
    grads = model.optimizer.get_gradients(model.total_loss, model.trainable_weights)
    symb_inputs = (model._feed_inputs + model._feed_targets + model._feed_sample_weights)
    f = K.function(symb_inputs, grads)
    x, y, sample_weight = model._standardize_user_data(inputs, outputs)
    output_grad = f(x + y + sample_weight)
    return output_grad


def get_layer_output_grad(model, inputs, outputs, layer=-1):
    """ Gets gradient a layer output for given inputs and outputs"""
    grads = model.optimizer.get_gradients(model.total_loss, model.layers[layer].output)
    symb_inputs = (model._feed_inputs + model._feed_targets + model._feed_sample_weights)
    f = K.function(symb_inputs, grads)
    x, y, sample_weight = model._standardize_user_data(inputs, outputs)
    output_grad = f(x + y + sample_weight)
    return output_grad


weight_grads = get_weight_grad(model, dummy_in, dummy_out)
output_grad = get_layer_output_grad(model, dummy_in, dummy_out)
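
For reference, both helpers return a list of NumPy arrays: get_weight_grad gives one array per entry of model.trainable_weights (same order, same shapes), and get_layer_output_grad gives one array per requested tensor, with the batch dimension kept. A quick sanity check on the dummy model above (the exact weight names depend on Keras's auto-naming):

for w, g in zip(model.trainable_weights, weight_grads):
    print(w.name, g.shape)       # e.g. (10, 20) kernel and (20,) bias, then (20, 5) and (5,)
print(len(output_grad), output_grad[0].shape)  # 1 (4, 5): one array per tensor, batch dim kept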

The first function returns all the gradients in the model, and it wouldn't be difficult to extend it to support layer indexing. However, doing so naively is dangerous: any layer without weights is skipped by weight-based indexing, so the layer indices in the model and in the gradient list would no longer match.
The second function returns the gradient at a given layer's output, and there the indexing is the same as in the model, so it is safe to use.
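
If you do want per-layer weight gradients, a safer route than indexing into the flat gradient list is to take the weights from the layer itself. Below is a minimal sketch built on the same Keras 2.2.x internals as the functions above; get_layer_weight_grad is a name invented here for illustration:

def get_layer_weight_grad(model, inputs, outputs, layer=-1):
    """ Gets the gradient of the loss w.r.t. one layer's trainable weights """
    # Taking the weights from the layer itself keeps the indexing aligned
    # with model.layers; a weightless layer simply has no trainable_weights.
    weights = model.layers[layer].trainable_weights
    grads = model.optimizer.get_gradients(model.total_loss, weights)
    symb_inputs = (model._feed_inputs + model._feed_targets + model._feed_sample_weights)
    f = K.function(symb_inputs, grads)
    x, y, sample_weight = model._standardize_user_data(inputs, outputs)
    return f(x + y + sample_weight)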

Note: this works with Keras 2.2.0 and later, but not with earlier versions, as that release included a major refactoring of keras.engine.

  • How do I use the K.function later to get the values with actual inputs? – Commented Mar 7, 2019 at 0:13
  • As @DanielMöller has already commented, this works if I provide a given input, but usually you will want to somehow record and log the gradients (or some function of them, e.g. their norm) during training. How would I do that? (See the callback sketch after this list.) – Alex, Mar 7, 2019 at 21:54
  • Can you explain what format (shape) is returned by either of the functions if the input is a batch of samples? – Commented Mar 18, 2019 at 11:46
  • I know the question is old, but how would you modify this for a network with multiple inputs and outputs? Mine has two inputs and two outputs with two loss functions. When I apply this solution as is, I get an error related to incompatible sizes. – Mastiff, Jul 17, 2019 at 16:00
  • @Mastiff use [dummy_in1, dummy_in2] instead of dummy_in, and do the same for the multiple outputs (see the sketch after this list). – Heaven, Apr 11, 2021 at 17:04
