TypeError: Input ‘y’ of ‘Mul’ Op has type float32 that does not match type int64 of argument ‘x’

After running the code below I get the error inside categorical_focal_loss, and I can't work out where the int64 type is coming from.

def categorical_focal_loss(gamma=2., alpha=.25):
    def categorical_focal_loss_fixed(y_true, y_pred):
        y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
        epsilon = K.epsilon()
        y_pred = K.clip(y_pred, epsilon, 1. - epsilon)
        y_pred = tf.cast(y_pred, dtype=tf.float32)
        cross_entropy = -y_true * K.log(y_pred)
        loss = alpha * K.pow(1 - y_pred, gamma) * cross_entropy
        return K.sum(loss, axis=1)
    return categorical_focal_loss_fixed

The model is defined in the code below; categorical focal loss is used as the loss function.

    with strategy.scope():
        ef7 = tf.keras.Sequential()
        ef7.add(enet)
        ef7.add(tf.keras.layers.MaxPooling2D())
        ef7.add(tf.keras.layers.Conv2D(4096,3,padding='same'))
        ef7.add(tf.keras.layers.BatchNormalization())
        ef7.add(tf.keras.layers.ReLU())
        ef7.add(tf.keras.layers.GlobalAveragePooling2D())
        ef7.add(tf.keras.layers.Dropout(0.35))
        ef7.add(tf.keras.layers.Flatten())
    
        ef7.add(tf.keras.layers.Dense(2048,activation='relu'))
        ef7.add(tf.keras.layers.BatchNormalization())
        ef7.add(tf.keras.layers.LeakyReLU())
        ef7.add(tf.keras.layers.Dropout(0.35))
    
        ef7.add(tf.keras.layers.Dense(1024,activation='relu'))
        ef7.add(tf.keras.layers.BatchNormalization())
        ef7.add(tf.keras.layers.LeakyReLU())
        ef7.add(tf.keras.layers.Dropout(0.25))
        ef7.add(tf.keras.layers.Dense(3,activation='softmax'))
        ef7.compile(
                    optimizer=tf.optimizers.Adam(lr=0.0001),
                    loss=categorical_focal_loss(gamma=2., alpha=.25),
                    metrics=['categorical_accuracy',
                            tf.keras.metrics.Recall(),
                            tf.keras.metrics.Precision(),   
                            tf.keras.metrics.AUC(),
                            tfa.metrics.F1Score(num_classes=3, average="macro")
                           ])

Here in the model I used categorical focal loss. When I run this on the train dataset, I don't understand how to convert it into int64.

    h7 = ef7.fit(
        train_dataset,
        steps_per_epoch=train_labels.shape[0] // BATCH_SIZE,
        callbacks=[lr_callback],
        epochs=EPOCHS)

The error I get is shown below:

    Epoch 1/20
    
    Epoch 00001: LearningRateScheduler reducing learning rate to 1e-05.
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-133-d27eee469b2b> in <module>()
          3     steps_per_epoch=train_labels.shape[0] // BATCH_SIZE,
          4     callbacks=[lr_callback],
    ----> 5     epochs=EPOCHS)
    
    9 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        975           except Exception as e:  # pylint:disable=broad-except
        976             if hasattr(e, "ag_error_metadata"):
    --> 977               raise e.ag_error_metadata.to_exception(e)
        978             else:
        979               raise
    
    TypeError: in user code:
    
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
            return step_function(self, iterator)
        <ipython-input-68-de42355e464e>:7 categorical_focal_loss_fixed  *
            cross_entropy = -y_true * K.log(y_pred)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1180 binary_op_wrapper
            raise e
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1164 binary_op_wrapper
            return func(x, y, name=name)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1496 _mul_dispatch
            return multiply(x, y, name=name)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
            return target(*args, **kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:518 multiply
            return gen_math_ops.mul(x, y, name)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:6078 mul
            "Mul", x=x, y=y, name=name)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:558 _apply_op_helper
            inferred_from[input_arg.type_attr]))
    
        TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'.


Answer

The error points to this line of code:

    cross_entropy = -y_true * K.log(y_pred)

and is being thrown from the multiply function in math_ops.py inside the tensorflow package. Digging into that file, I found this summary of the argument requirements:

    Args:
      x: A `Tensor`. Must be one of the following types: `bfloat16`,
        `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`,
        `int16`, `int32`, `int64`, `complex64`, `complex128`.
      y: A `Tensor`. Must have the same type as `x`.
      name: A name for the operation (optional).
    Returns:
      A `Tensor`. Has the same type as `x`.
    Raises:
      InvalidArgumentError: When `x` and `y` have incompatible shapes or types.
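
As a quick standalone illustration of that requirement (a hypothetical snippet, not taken from the question): multiplying an int64 tensor by a float32 tensor fails with the same kind of dtype mismatch, and casting one side first makes it work.

    import tensorflow as tf

    labels = tf.constant([1, 0, 2], dtype=tf.int64)   # int64, like the labels in the question
    probs = tf.constant([0.7, 0.2, 0.1])              # float32 by default, like y_pred

    # tf.multiply(labels, probs)  # fails: the Mul op requires both inputs to share one dtype
    ok = tf.multiply(tf.cast(labels, tf.float32), probs)  # works once both are float32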

Looking back at the error:

    TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'.

This means that -y_true is 'x' and K.log(y_pred) is 'y'. To perform this operation you'll have to cast y_true to float32, or cast K.log(y_pred) to int64, or cast both to any other type, as long as the two match.
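
For example, here is a minimal sketch of the first option, assuming (as the traceback suggests) that y_true arrives from the dataset as int64 one-hot labels; the only change from the question's function is the added cast of y_true to float32 before the multiplication:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def categorical_focal_loss(gamma=2., alpha=.25):
        def categorical_focal_loss_fixed(y_true, y_pred):
            # Added: cast the int64 labels so both operands of the Mul op are float32
            y_true = tf.cast(y_true, dtype=tf.float32)
            y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
            epsilon = K.epsilon()
            y_pred = K.clip(y_pred, epsilon, 1. - epsilon)
            y_pred = tf.cast(y_pred, dtype=tf.float32)  # as in the question (already float32)
            cross_entropy = -y_true * K.log(y_pred)     # float32 * float32 now
            loss = alpha * K.pow(1 - y_pred, gamma) * cross_entropy
            return K.sum(loss, axis=1)
        return categorical_focal_loss_fixed

Alternatively, you could cast the labels to float32 in the input pipeline (for example in a dataset.map step) so that y_true already matches y_pred's dtype by the time it reaches the loss.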
