List comprehension in keras custom loss function

I want to write my own custom loss function. The model's output shape is (None, 7, 3), so I want to split the output into 3 lists. But I got the following error:

    OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

I think the list comprehension upper_b_true = [m[0] for m in y_true] is not supported in graph mode, but I don't know how to address this problem.

import tensorflow as tf

class new_loss(tf.keras.losses.Loss):
    def __init__(self, tr1, tr2):
        super(new_loss, self).__init__()
        self.tr1 = tr1
        self.tr2 = tr2

    def call(self, y_true, y_pred):
        #pre-determined value
        tr1 = tf.constant(self.tr1)
        tr2 = tf.constant(self.tr2)
        
        #sep
        upper_b_true = [m[0] for m in y_true]
        y_med_true = [m[1] for m in y_true]
        lower_b_true = [m[2] for m in y_true]
        
        upper_b_pred = [m[0] for m in y_pred]
        y_med_pred = [m[1] for m in y_pred]
        lower_b_pred = [m[2] for m in y_pred]
        
        #MSE part
        err = y_med_true - y_med_pred
        mse_loss = tf.math.reduce_mean(tf.math.square(err))
        
        #Narrow bound
        bound_dif = upper_b_pred - lower_b_pred
        bound_loss = tf.math.reduce_mean(bound_dif)
        
        #Prob metric
        in_upper = y_med_pred <= upper_b_pred
        in_lower = y_med_pred >= lower_b_pred
        prob = tf.logical_and(in_upper,in_lower)
        prob = tf.math.reduce_mean(tf.where(prob,1.0,0.0))
        
        return mse_loss + tf.multiply(tr1, bound_loss) + tf.multiply(tr2, prob)

I tried running it with parts commented out, and I believe the problem is the list comprehension part mentioned above.

Answer

You should use tf.unstack:

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

upper_b_true, y_med_true, lower_b_true = tf.unstack(y_true, axis=-1)
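
For completeness, here is a minimal sketch of the whole call method rewritten with tf.unstack. The three unstacked tensors each have shape (None, 7), and this sketch assumes tr1 and tr2 are plain Python floats, as in your constructor:

import tensorflow as tf

class new_loss(tf.keras.losses.Loss):
    def __init__(self, tr1, tr2):
        super().__init__()
        self.tr1 = tr1
        self.tr2 = tr2

    def call(self, y_true, y_pred):
        # split the last axis: (None, 7, 3) becomes three tensors of shape (None, 7)
        upper_b_true, y_med_true, lower_b_true = tf.unstack(y_true, axis=-1)
        upper_b_pred, y_med_pred, lower_b_pred = tf.unstack(y_pred, axis=-1)

        # MSE part
        mse_loss = tf.math.reduce_mean(tf.math.square(y_med_true - y_med_pred))

        # narrow-bound part
        bound_loss = tf.math.reduce_mean(upper_b_pred - lower_b_pred)

        # prob metric: fraction of median predictions inside the predicted bounds
        in_bounds = tf.logical_and(y_med_pred <= upper_b_pred,
                                   y_med_pred >= lower_b_pred)
        prob = tf.math.reduce_mean(tf.cast(in_bounds, tf.float32))

        return mse_loss + self.tr1 * bound_loss + self.tr2 * prob

You can then pass it to compile as usual, e.g. model.compile(optimizer="adam", loss=new_loss(0.1, 0.1)), where the 0.1 values are just placeholders for your own tr1 and tr2.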