How to wrap a custom TensorFlow loss function in Keras?

This is my third attempt to get a deep learning project off the ground. I'm working with protein sequences. First I tried TFLearn, then raw TensorFlow, and now I'm trying Keras.

The previous two attempts taught me a lot, and gave me some code and concepts that I can re-use. However, there has always been an obstacle: either I've asked questions that the developers couldn't answer (in the case of TFLearn), or I've simply gotten bogged down (TensorFlow object introspection is tedious).

I have written this TensorFlow loss function, and I know it works:

import tensorflow as tf

def l2_angle_distance(pred, tgt):
    with tf.name_scope("L2AngleDistance"):
        # Scaling factor
        count = tgt[...,0,0]
        scale = tf.to_float(tf.count_nonzero(tf.is_finite(count)))
        # Mask NaN in tgt
        tgt = tf.where(tf.is_nan(tgt), pred, tgt)
        # Calculate L1 losses
        losses = tf.losses.cosine_distance(pred, tgt, -1, reduction=tf.losses.Reduction.NONE)
        # Square the losses, then sum, to get L2 scalar loss.
        # Divide the loss result by the scaling factor.
        return tf.reduce_sum(losses * losses) / scale

My target values (tgt) can include NaN, because my protein sequences are passed in a single 4D tensor even though the individual sequences differ in length. Before you ask: the data can't be resampled like an image. So I use NaN in the tgt tensor to mean "no prediction needed here." Before I calculate the L2 cosine loss, I replace every NaN with the matching value from the prediction (pred), so the loss contributed by every NaN position is always zero.
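To make that masking trick concrete, here is a tiny standalone check with toy values (TF 1.x session API; this is just an illustration, not part of my model code):

import numpy as np
import tensorflow as tf

# Two predicted unit vectors; the second target row is "padding" (NaN).
pred = tf.constant([[0.6, 0.8], [1.0, 0.0]])
tgt = tf.constant([[0.0, 1.0], [np.nan, np.nan]])

# Wherever tgt is NaN, copy the prediction, so the cosine distance there is 0.
masked_tgt = tf.where(tf.is_nan(tgt), pred, tgt)

with tf.Session() as sess:
    print(sess.run(masked_tgt))  # [[0. 1.], [1. 0.]]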

Now, how can I re-use this function in Keras? It appears that the Keras Lambda core layer is not a good choice, because a Lambda only takes a single argument, and a loss function needs two arguments.
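For what it's worth, here is the kind of wrapper I'm imagining (untested, and the compile() line is hypothetical). If I understand the docs correctly, Keras calls a custom loss as loss(y_true, y_pred), so I mostly just need to swap my argument order:

def keras_l2_angle_distance(y_true, y_pred):
    # Keras passes (y_true, y_pred); my function above expects (pred, tgt).
    return l2_angle_distance(y_pred, y_true)

# model.compile(optimizer="adam", loss=keras_l2_angle_distance)  # hypothetical model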

Alternatively, can I rewrite this function using the Keras backend? I don't ever expect to need the Theano or CNTK backend, so a backend-agnostic rewrite isn't strictly necessary; I'll use whatever works.
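The closest I've gotten to a "Keras" rewrite still leans on tf.* for everything I can't find in keras.backend (untested sketch, assuming it's acceptable to mix the two when TensorFlow is the backend):

from keras import backend as K
import tensorflow as tf

def l2_angle_distance_keras(y_true, y_pred):
    # Count sequences with a real (non-NaN) first entry, as in the TF version.
    count = y_true[..., 0, 0]
    scale = K.sum(K.cast(tf.is_finite(count), "float32"))
    # Mask NaN in the targets by copying the predictions.
    y_true = tf.where(tf.is_nan(y_true), y_pred, y_true)
    losses = tf.losses.cosine_distance(y_pred, y_true, -1,
                                       reduction=tf.losses.Reduction.NONE)
    # Square, sum, and normalize, as before.
    return K.sum(losses * losses) / scale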

I just looked at the Keras losses.py file to get some clues, imported keras.backend, and had a look around. I also found https://keras.io/backend/. I can't seem to find wrappers for ANY of the TensorFlow calls I happen to use: to_float(), count_nonzero(), is_finite(), where(), is_nan(), cosine_distance(), or reduce_sum().
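In case it matters, this is roughly how I searched (plain introspection, nothing model-specific):

from keras import backend as K

# List backend functions whose names resemble the TensorFlow calls I need.
fragments = ("cosine", "nan", "finite", "nonzero", "where", "float")
print(sorted(name for name in dir(K)
             if any(fragment in name for fragment in fragments)))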

Thanks for your suggestions!