I have trained a Keras-based autoencoder model with the following input layer:
```python
depth = 1
width = height = 100
input_shape = (height, width, depth)

inputs = Input(shape=input_shape)
# rest of network definition ...
```
Width and height of my training images were 100 pixels in grayscale, thus with a depth of 1. Now I want to load my trained model in another script, load an image there, resize and send it to the Keras model:
```python
size = 100
image = cv2.imread(args.image, cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (size, size), interpolation=cv2.INTER_AREA)
image = image.astype("float32") / 255.0
image = np.expand_dims(image, axis=-1)
# at this point image.shape = (100, 100, 1)
recon = autoencoder.predict(image)
```
However, the call to autoencoder.predict(image) leads to the following error:
```
WARNING:tensorflow:Model was constructed with shape (None, 100, 100, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 100, 100, 1), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (None, 100, 1, 1).
```
I don’t understand this, as the shape of the image when calling predict() is (100, 100, 1), which looks fine to me. Why is Keras complaining about an incompatible input shape of (None, 100, 1, 1)?
Answer
These few lines of code reproduce the error:
```python
import numpy as np
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

X = np.random.uniform(0, 1, (100, 100, 1))

inp = Input((100, 100, 1))
out = Dense(1)(Flatten()(inp))
model = Model(inp, out)

model.predict(X)  # triggers the incompatible-shape warning
```
This happens because your Keras model expects batched data of shape (n_samples, 100, 100, 1): the leading None in the warning is the batch dimension. A single image of shape (100, 100, 1) is missing that axis, so Keras misreads its first axis as the batch axis.
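To see what goes wrong, note that predict() always treats axis 0 of its input as the batch axis. A small NumPy-only illustration (no model needed, just the shapes from the question):

```python
import numpy as np

image = np.zeros((100, 100, 1))  # one image, no batch axis

# Keras reads axis 0 as the batch axis, so this array looks like
# 100 samples of shape (100, 1) rather than one (100, 100, 1) image.
n_samples, *sample_shape = image.shape
print(n_samples)     # 100
print(sample_shape)  # [100, 1]
```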
A simple reshape that adds the batch dimension when you predict a single image does the trick:

```python
model.predict(X.reshape(1, 100, 100, 1))
```
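Equivalently, for the image pipeline in the question, np.expand_dims adds the batch axis without hard-coding the size; note that predict() then returns a batch too, so the single reconstruction is element 0 of the result. A minimal NumPy sketch of the shape fix (the predict() call is shown as a comment since it needs the trained autoencoder):

```python
import numpy as np

# single grayscale image, shape (100, 100, 1), as in the question
image = np.random.rand(100, 100, 1).astype("float32")

# prepend the batch axis: (100, 100, 1) -> (1, 100, 100, 1)
batch = np.expand_dims(image, axis=0)
print(batch.shape)  # (1, 100, 100, 1)

# recon = autoencoder.predict(batch)[0]  # [0] drops the batch axis again
```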