How do you add a dimension to the data points without using a Lambda layer?

I am trying to classify the fashion_mnist dataset using a Conv2D layer, and as far as I know it can be done easily with the following code:

import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

train_images = train_images / 255.0
test_images = test_images / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28), batch_size=32),
    # Lambda layer adds the channel dimension per sample: (28, 28) -> (28, 28, 1)
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1)),
    tf.keras.layers.Conv2D(4, kernel_size=3),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              # from_logits=False since the final Dense layer already applies softmax
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=10)

However, I am required not to use a Lambda layer, so the above solution does not satisfy my constraints.

So I am wondering: how can I classify the fashion_mnist dataset without using a Lambda layer?

Update: When I add a dimension using the code below:

train_images = train_images / 255.0
train_images = tf.expand_dims(train_images,axis=0)

test_images = test_images / 255.0
test_images = tf.expand_dims(test_images,axis=0)

and run it against the same model, I get the following error:

ValueError: Data cardinality is ambiguous:
  x sizes: 1
  y sizes: 60000
Make sure all arrays contain the same number of samples.
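
For reference, a quick shape check with the variables defined above shows what the model sees after this preprocessing:

print(train_images.shape)  # (1, 60000, 28, 28) -- a single "sample" as far as Keras can tell
print(train_labels.shape)  # (60000,)           -- but 60000 labels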


Answer

There are a couple of options:

- using expand_dims directly on train_images and test_images and changing the input shape accordingly;
- using a Reshape layer instead of a Lambda layer (a sketch follows the expand_dims example below); or
- removing the layer completely and changing the input shape, e.g. tf.keras.layers.Input(shape=(28, 28, 1), batch_size=32).

It depends on what you want. Here is the expand_dims option along axis=-1:

train_images = train_images / 255.0
train_images = tf.expand_dims(train_images, axis=-1)

test_images = test_images / 255.0
test_images = tf.expand_dims(test_images, axis=-1)

And change the Input layer to tf.keras.layers.Input(shape=(28, 28, 1)).
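
If you would rather leave the preprocessing untouched, here is a minimal sketch of the Reshape option mentioned above: the raw (samples, 28, 28) arrays go in unchanged, and the built-in Reshape layer adds the channel dimension, so no Lambda layer is needed.

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28), batch_size=32),
    # Built-in Reshape adds the channel dimension per sample: (28, 28) -> (28, 28, 1)
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(4, kernel_size=3),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

With this variant, skip the expand_dims preprocessing, since the model itself adds the channel dimension.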

Q1 Answer: Because Conv2D layers require a 3D input per sample (excluding the batch dimension), i.e. something like (rows, cols, channels). Your data had the shape (samples, 28, 28). If your channels came before the rows and columns, you could use expand_dims on axis=1, resulting in (samples, 1, 28, 28) instead of the (samples, 28, 28, 1) you get with axis=-1. In the former case, you would have to set the data_format parameter of the Conv2D layer to 'channels_first'. Using axis=0 results in the shape (1, samples, 28, 28), which is incorrect, because the first dimension should be reserved for the batch dimension.
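
For illustration, a minimal sketch of that channels-first variant, assuming the raw (samples, 28, 28) arrays (note that TensorFlow's CPU kernels generally support only channels_last, so this typically needs a GPU):

# Assumes train_images is still the raw (60000, 28, 28) array
train_images_cf = tf.expand_dims(train_images, axis=1)  # (60000, 1, 28, 28)

model_cf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, 28, 28)),
    # data_format='channels_first' tells Conv2D the channel axis precedes rows/cols
    tf.keras.layers.Conv2D(4, kernel_size=3, data_format='channels_first'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])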

Q2 Answer: I used shape=(28, 28, 1) because the fashion_mnist images are grayscale; that is, they have a single channel, which we have made explicit. RGB images, on the other hand, have 3 channels: red, green, and blue.
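
As a quick illustration of the channel difference, compare the shapes reported by tf.keras.datasets for Fashion-MNIST and an RGB dataset such as CIFAR-10:

import tensorflow as tf

(x_fm, _), _ = tf.keras.datasets.fashion_mnist.load_data()
print(x_fm.shape)   # (60000, 28, 28)    -- grayscale, channel dimension left implicit
(x_c10, _), _ = tf.keras.datasets.cifar10.load_data()
print(x_c10.shape)  # (50000, 32, 32, 3) -- RGB, three explicit channels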
