Whether you are using TensorFlow or Theano as the backend is irrelevant to your question. If the term 'tensor' is unfamiliar, it's worth looking up before going further.
Take a look at how Keras's own loss-function tests are implemented here:
def test_metrics():
    y_a = K.variable(np.random.random((6, 7)))
    y_b = K.variable(np.random.random((6, 7)))
    for metric in all_metrics:
        output = metric(y_a, y_b)
        print(metric.__name__)
        assert K.eval(output).shape == (6,)
You can't simply feed a plain float or int into tensor calculations; that's why the inputs are wrapped with K.variable. Note also the use of K.eval to turn the symbolic output back into the numeric result you're looking for.
So try something similar with your function:
from keras import backend as K
import numpy as np

y_a = K.variable(np.random.random((6, 7)))
y_b = K.variable(np.random.random((6, 7)))
output = weighted_loss(y_a, y_b)
result = K.eval(output)
There is also no need to define your custom function inside keras.backend itself -- if you patch the library source, the change is lost the moment you update Keras. Instead, define the loss function in your own code:
def weighted_loss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true) *
                  K.exp(-K.log(1.7) * K.log(1. + K.exp((y_true - 3) / 5))),
                  axis=-1)
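As a quick sanity check, the same expression can be evaluated with plain NumPy (a hypothetical mirror of the backend version, for testing only -- in the actual model you still need the K.* backend ops so the loss is differentiable). As in the Keras test above, the result should have one loss value per sample:

```python
import numpy as np

def weighted_loss_np(y_true, y_pred):
    # NumPy mirror of the backend loss above, for inspection only
    weight = np.exp(-np.log(1.7) * np.log(1. + np.exp((y_true - 3) / 5)))
    return np.mean(np.square(y_pred - y_true) * weight, axis=-1)

y_a = np.random.random((6, 7))
y_b = np.random.random((6, 7))
loss = weighted_loss_np(y_a, y_b)
print(loss.shape)  # one loss value per sample: (6,)
```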
Then when you compile your model, pass it as the loss (compile also needs an optimizer; 'adam' here is just an example):

model.compile(optimizer='adam', loss=weighted_loss)
In case you want to define a more general loss function, where the weighting depends on some input, you'll need to wrap the function. So for example:
def get_weighted_loss(my_input):
    def weighted_loss(y_true, y_pred):
        return K.mean(K.square(y_pred - y_true) *
                      K.exp(-K.log(1.7) * K.log(1. + K.exp((y_true - 3) / my_input))),
                      axis=-1)
    return weighted_loss
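The wrapper is an ordinary Python closure: each call to get_weighted_loss bakes a different my_input into the function it returns. A NumPy sketch (again a hypothetical mirror of the backend version, not the code you'd compile with) makes the effect visible:

```python
import numpy as np

def get_weighted_loss_np(my_input):
    # NumPy mirror of the wrapped backend loss, for illustration only
    def weighted_loss(y_true, y_pred):
        weight = np.exp(-np.log(1.7) *
                        np.log(1. + np.exp((y_true - 3) / my_input)))
        return np.mean(np.square(y_pred - y_true) * weight, axis=-1)
    return weighted_loss

y_a = np.full((2, 4), 5.0)   # toy targets
y_b = np.zeros((2, 4))       # toy predictions

loss_5 = get_weighted_loss_np(5)(y_a, y_b)
loss_1 = get_weighted_loss_np(1)(y_a, y_b)
# Different my_input values change the weighting, hence the loss
print(loss_5, loss_1)
```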
Then when you compile your model, call the wrapper to produce the actual loss function:

model.compile(optimizer='adam', loss=get_weighted_loss(5))