I want to write a custom loss function. My model's output shape is (None, 7, 3), so I want to split the output into 3 lists. But I get the following error:
```
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
I think the line `upper_b_true = [m[0] for m in y_true]` is not supported.
I don’t know how to address this problem.
```python
class new_loss(tf.keras.losses.Loss):
    def __init__(self, tr1, tr2):
        super(new_loss, self).__init__()
        self.tr1 = tr1
        self.tr2 = tr2

    def call(self, y_true, y_pred):
        # pre-determined value
        tr1 = tf.constant(self.tr1)
        tr2 = tf.constant(self.tr2)

        # sep
        upper_b_true = [m[0] for m in y_true]
        y_med_true = [m[1] for m in y_true]
        lower_b_true = [m[2] for m in y_true]

        upper_b_pred = [m[0] for m in y_pred]
        y_med_pred = [m[1] for m in y_pred]
        lower_b_pred = [m[2] for m in y_pred]

        # MSE part
        err = y_med_true - y_med_pred
        mse_loss = tf.math.reduce_mean(tf.math.square(err))

        # Narrow bound
        bound_dif = upper_b_pred - lower_b_pred
        bound_loss = tf.math.reduce_mean(bound_dif)

        # Prob metric
        in_upper = y_med_pred <= upper_b_pred
        in_lower = y_med_pred >= lower_b_pred
        prob = tf.logical_and(in_upper, in_lower)
        prob = tf.math.reduce_mean(tf.where(prob, 1.0, 0.0))

        return mse_loss + tf.multiply(tr1, bound_loss) + tf.multiply(tr2, prob)
```
I tried executing it with parts commented out, and I think the problem is the list comprehension part I mentioned.
Answer
You should use tf.unstack:
Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.
```python
upper_b_true, y_med_true, lower_b_true = tf.unstack(y_true, axis=-1)
```
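Applied to the loss above, the whole `call` body can be rewritten without any Python-level iteration. Here is a minimal sketch, assuming the last axis of the (None, 7, 3) output holds `[upper, median, lower]` in that order and that `tr1`/`tr2` are plain floats (the class name `NewLoss` is just illustrative):

```python
import tensorflow as tf

class NewLoss(tf.keras.losses.Loss):
    def __init__(self, tr1, tr2):
        super().__init__()
        self.tr1 = tr1
        self.tr2 = tr2

    def call(self, y_true, y_pred):
        # tf.unstack splits the size-3 last axis into three (batch, 7) tensors,
        # replacing the unsupported list comprehensions
        upper_b_true, y_med_true, lower_b_true = tf.unstack(y_true, axis=-1)
        upper_b_pred, y_med_pred, lower_b_pred = tf.unstack(y_pred, axis=-1)

        # MSE part
        mse_loss = tf.math.reduce_mean(tf.math.square(y_med_true - y_med_pred))

        # Narrow bound
        bound_loss = tf.math.reduce_mean(upper_b_pred - lower_b_pred)

        # Prob metric: fraction of medians lying inside the predicted bounds
        inside = tf.logical_and(y_med_pred <= upper_b_pred,
                                y_med_pred >= lower_b_pred)
        prob = tf.math.reduce_mean(tf.where(inside, 1.0, 0.0))

        return mse_loss + self.tr1 * bound_loss + self.tr2 * prob
```

Since every operation is now a TensorFlow op on whole tensors, the loss traces cleanly in graph mode and no `OperatorNotAllowedInGraphError` is raised.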