I am using the deeptrack library (which is built on TensorFlow) to train a UNet model for cell counting.
This is the code that defines the UNet model using the deeptrack (dt) library:
model = dt.models.unet(
    (256, 256, 1),
    conv_layers_dimensions=[8, 16, 32],
    base_conv_layers_dimensions=[32, 32],
    loss=dt.losses.weighted_crossentropy((10, 1)),
    output_activation="sigmoid"
)
And this is the summary of the model I trained:
Model: "model_2"

 Layer (type)                    Output Shape          Param #    Connected to
==================================================================================================
 input_3 (InputLayer)            [(None, 256, 256, 1   0          []
                                 )]

 conv2d_22 (Conv2D)              (None, 256, 256, 8)   80         ['input_3[0][0]']

 activation_20 (Activation)      (None, 256, 256, 8)   0          ['conv2d_22[0][0]']

 max_pooling2d_6 (MaxPooling2D)  (None, 128, 128, 8)   0          ['activation_20[0][0]']

 # ... (not relevant for the question)

 conv2d_32 (Conv2D)              (None, 256, 256, 1)   145        ['activation_29[0][0]']

==================================================================================================
Total params: 58,977
Trainable params: 58,977
Non-trainable params: 0
When I try to make a prediction with the trained model on a 256×256 image (I tried both color and grayscale), I get the following error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-78c33765d4d3> in <module>()
    138 model = tf.keras.models.load_model('model7.h5', compile=False)
--> 139 prediction = model.predict([img])
    140
    141 plt.figure(figsize=(15, 5))

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145       except Exception as e:  # pylint:disable=broad-except
   1146         if hasattr(e, "ag_error_metadata"):
-> 1147           raise e.ag_error_metadata.to_exception(e)
   1148         else:
   1149           raise

ValueError: in user code:

    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1801, in predict_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
        raise ValueError(f'Input {input_index} of layer "{layer_name}" is '

    ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 256, 256, 1), found shape=(32, 256, 3)
I don't understand why the error message reports image dimensions of 32×256 when the image is actually 256×256.
How can I overcome this problem?
Answer
The 32 in the error is Keras's default prediction batch size: because your image has no batch dimension, model.predict treats the first axis of the array as the sample axis and feeds those rows through in batches of 32, which is why it reports found shape=(32, 256, 3) instead of your image shape. You need to add a batch dimension to your image, try:
prediction = model.predict(img[None, ])
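Note that the found shape in your traceback also ends in 3, so that run used a color image, while the model was built for a single grayscale channel. Below is a minimal sketch of the whole prediction step, assuming img is a 256×256 NumPy array and model7.h5 is the file from your traceback; the random array and the channel-mean grayscale conversion are just placeholders for however you actually load and convert your image:

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('model7.h5', compile=False)

# Stand-in for your real image; replace with however you load it.
img = np.random.rand(256, 256, 3)

# If the image is color, collapse it to one channel,
# because the model was built with input shape (256, 256, 1).
if img.ndim == 3 and img.shape[-1] == 3:
    img = img.mean(axis=-1)

# (256, 256) -> (256, 256, 1) -> (1, 256, 256, 1): add channel and batch axes.
batch = img[..., None][None, ...]

prediction = model.predict(batch)
print(prediction.shape)  # (1, 256, 256, 1)

Whether you average the channels or load the image as grayscale in the first place is up to you; the important part is that the array passed to predict has shape (1, 256, 256, 1) to match the model's expected input (None, 256, 256, 1).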