I want to create a neural network with my own loss function. For this purpose, I created this loss function:

import tensorflow as tf
from tensorflow.keras import layers

class my_loss(tf.keras.losses.Loss):

    def __init__(self,e1,e2,**kwargs):
        assert e1 > e2 , "e1 must be greater than e2"
        self.e1 = e1
        self.e2 = e2
        super().__init__(**kwargs)

    def call(self,Y_true,Y_pred):
        d = tf.reduce_mean(tf.abs(Y_true-Y_pred))
        l1 = d**1.5  # Where the error is large, show the loss much more
        l2 = d*1.5   # Where the error is moderate, show the loss slightly more.
        l3 = d
        res = tf.experimental.numpy.select([d >= self.e1,self.e2 < d < self.e1,d <= self.e2], [l1,l2,l3])
        return res

    def get_config(self):
        parent_config = super().get_config()
        return {**parent_config,"e1":self.e1,"e2":self.e2}

model = tf.keras.models.Sequential()
model.add(layers.Dense(50,input_dim=9)) # Length of features is 9
model.add(layers.Dense(50))
model.add(layers.Dense(50))
model.add(layers.Dense(1))

model.compile(
   loss=my_loss(2,0.5),
   optimizer="adam",
   # metrics=["accuracy"]
   )

hist = model.fit(x_train,y_train,epochs=50)

But I get this error when fitting the model:

Output: OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

1 Answer

You're getting that error when you call tf.experimental.numpy.select, right?

It is because, as the error message says, you can't use a tf.Tensor as a Python bool. That is exactly what happens in the chained comparison self.e2 < d < self.e1: Python expands it to (self.e2 < d) and (d < self.e1), and the and operator tries to convert the first tensor to a bool. You have to use proper tf functions for that kind of operation.

In particular, tf.math.logical_and returns the truth value of x AND y element-wise, and tf.math.greater_equal returns the truth value of (x >= y) element-wise.

So, to fix the error, replace that line with this:
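A pure-NumPy analogue (my own sketch, not from the question's TensorFlow code) shows the same failure mode: chained comparison on an array-like value forces a bool() conversion, which is ambiguous for anything with more than one element, while an explicit element-wise & works fine.

```python
import numpy as np

d = np.array([0.3, 1.0, 3.0])

try:
    0.5 < d < 2.0  # Python expands this to (0.5 < d) and (d < 2.0)
except ValueError as exc:
    # and calls bool() on the first comparison's array result,
    # which raises "The truth value of an array ... is ambiguous"
    print(exc)

# Element-wise, the condition must be written explicitly:
mask = (0.5 < d) & (d < 2.0)
print(mask)  # [False  True False]
```

TensorFlow's graph mode raises OperatorNotAllowedInGraphError instead of ValueError, but the underlying cause is the same implicit bool conversion.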

res = tf.experimental.numpy.select([
      tf.greater_equal(d, self.e1), 
      tf.math.logical_and(self.e2 < d, d < self.e1),
      tf.greater_equal(self.e2, d)
      ], [l1,l2,l3]
)
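Since tf.experimental.numpy.select mirrors np.select, the branch logic above can be sanity-checked with plain NumPy. This is my own sketch (the sample values of d and the thresholds e1=2.0, e2=0.5 match the question's my_loss(2, 0.5), but the array of d values is made up for illustration):

```python
import numpy as np

e1, e2 = 2.0, 0.5  # thresholds from my_loss(2, 0.5)
d = np.array([3.0, 1.0, 0.2])  # example mean-absolute-error values

# Same three branches as the loss: amplify large errors,
# scale moderate ones, pass small ones through unchanged.
res = np.select(
    [d >= e1, (e2 < d) & (d < e1), d <= e2],
    [d**1.5, d * 1.5, d],
)
print(res)  # [5.19615242 1.5        0.2       ]
```

Note that in the actual loss, d is a scalar (tf.reduce_mean collapses the batch), so only one branch fires per call; the element-wise form just makes the selection easy to inspect.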
    Ahh... I was also looking at this. The d >= self.e1 and self.e2 >= d comparisons were valid and produce the same thing as tf.greater_equal(d, self.e1) and tf.greater_equal(self.e2, d), so I think it was only complaining about the self.e2 < d < self.e1 part.
    – Andrew Wei
    Commented Sep 2, 2022 at 6:46
