
OS Platform and Distribution: Linux Ubuntu 16.04; TensorFlow version: 1.4.0

The following code runs properly for me:

import tensorflow as tf
from tensorflow.python.keras.layers import Dense
from tensorflow.python.keras.backend import categorical_crossentropy
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Input

mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
img_size_flat = 28*28
batch_size = 64

def gen(batch_size=32):
    while True:
        batch_data, batch_label = mnist_data.train.next_batch(batch_size)
        yield batch_data, batch_label   


inputs = Input(shape=(img_size_flat,))
x = Dense(128, activation='relu')(inputs)  # fully-connected layer with 128 units and ReLU activation
x = Dense(128, activation='relu')(x)
preds = Dense(10, activation='softmax')(x)  # output layer with 10 units and a softmax activation
model = Model(inputs=inputs, outputs=preds)

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])


model.fit_generator(gen(batch_size), steps_per_epoch=len(mnist_data.train.labels)//batch_size, epochs=2)

But what if I want to write the loss function with my own code, like:

preds_softmax = tf.nn.softmax(preds)
step1 = tf.cast(y_true, tf.float32) * tf.log(preds_softmax)
step2 = -tf.reduce_sum(step1, reduction_indices=[1])
loss = tf.reduce_mean(step2)       # loss
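
For reference, a quick NumPy check of what these three steps compute, using made-up values (two samples, two classes):

import numpy as np

y_true = np.array([[0., 1.], [1., 0.]])      # one-hot labels (made-up)
probs  = np.array([[0.2, 0.8], [0.6, 0.4]])  # softmax outputs (made-up)
step1  = y_true * np.log(probs)              # elementwise y * log(p)
step2  = -np.sum(step1, axis=1)              # per-sample cross-entropy: [0.223, 0.511]
loss   = np.mean(step2)                      # batch mean: ~0.367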

Can I use a customized loss function like this and still train the model with Keras's model.fit_generator?

Would it be something like the following code in TensorFlow?

inputs = tf.placeholder(tf.float32, shape=(None, 784))
x = Dense(128, activation='relu')(inputs) # fully-connected layer with 128 units and ReLU activation
x = Dense(128, activation='relu')(x)
preds = Dense(10, activation='softmax')(x) # output layer with 10 units and a softmax activation

y_true = tf.placeholder(tf.float32, shape=(None, 10))

How can I do this based on the code above (part I)? Thanks for any help!

1 Answer


Just wrap your loss in a function and pass it to model.compile:

def custom_loss(y_true, y_pred):
    # Apply softmax to the model output. (With the softmax output layer above,
    # y_pred is already a probability distribution, so this step could be dropped.)
    preds_softmax = tf.nn.softmax(y_pred)
    # per-sample categorical cross-entropy: -sum(y_true * log(p)) over classes
    step1 = y_true * tf.log(preds_softmax)
    return -tf.reduce_sum(step1, reduction_indices=[1])

model.compile(optimizer='rmsprop',
              loss=custom_loss,
              metrics=['accuracy'])

Also note that:

  • you don't need to cast y_true to float32; Keras does that automatically.
  • you don't need to take the final reduce_mean; Keras takes care of that as well.
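
For completeness, here is a minimal end-to-end sketch that puts the custom loss together with the generator and model from the question. Since the output layer already applies softmax, this variant skips the extra tf.nn.softmax and only adds a small clip to keep log() finite; treat it as an illustration, not a drop-in replacement for the code above.

import tensorflow as tf
from tensorflow.python.keras.layers import Dense, Input
from tensorflow.python.keras.models import Model
from tensorflow.examples.tutorials.mnist import input_data

mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
img_size_flat = 28 * 28
batch_size = 64

def gen(batch_size=32):
    while True:
        yield mnist_data.train.next_batch(batch_size)  # (images, one-hot labels)

inputs = Input(shape=(img_size_flat,))
x = Dense(128, activation='relu')(inputs)
x = Dense(128, activation='relu')(x)
preds = Dense(10, activation='softmax')(x)   # output is already softmaxed
model = Model(inputs=inputs, outputs=preds)

def custom_loss(y_true, y_pred):
    # y_pred is already a probability distribution here, so no extra softmax;
    # clip to avoid log(0).
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
    return -tf.reduce_sum(y_true * tf.log(y_pred), reduction_indices=[1])

model.compile(optimizer='rmsprop',
              loss=custom_loss,
              metrics=['accuracy'])

model.fit_generator(gen(batch_size),
                    steps_per_epoch=len(mnist_data.train.labels) // batch_size,
                    epochs=2)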
