
How to train a Keras autoencoder with custom dataset?

I am reading this tutorial in order to create my own autoencoder based on Keras. I followed the tutorial step by step; the only difference is that I want to train the model on my own image dataset, so I changed/added the following code:


import os

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# ConvAutoencoder comes from the tutorial's model definition

IMAGES = "/path/to/my/images"
SHAPE = (200, 200)
INIT_LR = 1e-3
EPOCHS = 20
BS = 32

# build the convolutional autoencoder and compile with MSE reconstruction loss
(encoder, decoder, autoencoder) = ConvAutoencoder.build(SHAPE[0], SHAPE[1], 3)
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
autoencoder.compile(loss="mse", optimizer=opt)

# stream the images from disk, rescaled to [0, 1]
image_generator = ImageDataGenerator(rescale=1.0 / 255)
train_gen = image_generator.flow_from_directory(
    os.path.join(IMAGES, "training"),
    class_mode=None, target_size=SHAPE, batch_size=BS,
)
val_gen = image_generator.flow_from_directory(
    os.path.join(IMAGES, "validation"),
    class_mode=None, target_size=SHAPE, batch_size=BS,
)

hist = autoencoder.fit(train_gen, validation_data=val_gen, epochs=EPOCHS, batch_size=BS)

My images are normal .jpg files in RGB format. However, as soon as training starts, the fit() method throws the following exception:

ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'batch_normalization/gamma:0', 'batch_normalization/beta:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'batch_normalization_1/gamma:0', 'batch_normalization_1/beta:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'conv2d_transpose/kernel:0', 'conv2d_transpose/bias:0', 'batch_normalization_2/gamma:0', 'batch_normalization_2/beta:0', 'conv2d_transpose_1/kernel:0', 'conv2d_transpose_1/bias:0', 'batch_normalization_3/gamma:0', 'batch_normalization_3/beta:0', 'conv2d_transpose_2/kernel:0', 'conv2d_transpose_2/bias:0'].

Any ideas what I am missing here?


Answer

Use class_mode="input" in flow_from_directory so that the returned Y is the same as X. With class_mode=None the generator yields only image batches and no targets, so Keras has no loss to differentiate, which is exactly the "No gradients provided for any variable" error.

https://github.com/tensorflow/tensorflow/blob/v2.4.1/tensorflow/python/keras/preprocessing/image.py#L867-L958

class_mode: One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned:
- "categorical" will be 2D one-hot encoded labels,
- "binary" will be 1D binary labels,
- "sparse" will be 1D integer labels,
- "input" will be images identical to input images (mainly used to work with autoencoders),
- if None, no labels are returned (the generator will only yield batches of image data, which is useful to use with model.predict()).
Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly.
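Conceptually, class_mode="input" just pairs each batch with itself as the target, so loss(y_true, y_pred) becomes loss(x, reconstruction). A minimal pure-Python sketch of the difference (the two generator functions below are hypothetical illustrations, not part of Keras):

```python
def batches_without_targets(images, batch_size):
    """Mimics class_mode=None: yields only X, no labels at all."""
    for i in range(0, len(images), batch_size):
        yield images[i:i + batch_size]

def batches_with_input_targets(images, batch_size):
    """Mimics class_mode="input": yields (X, X) pairs, so the model
    has a target to reconstruct and the loss has gradients."""
    for i in range(0, len(images), batch_size):
        batch = images[i:i + batch_size]
        yield batch, batch

images = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]

x_only = next(batches_without_targets(images, 2))   # just X, no Y
x, y = next(batches_with_input_targets(images, 2))  # X and Y
print(y == x)  # True: the target is the input itself
```

With the first generator, fit() receives no y_true at all, which is why the gradient computation fails; with the second, the reconstruction loss is well defined.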

Code should end up like:

image_generator = ImageDataGenerator(rescale=1.0 / 255)
train_gen = image_generator.flow_from_directory(
    os.path.join(IMAGES, "training"),
    class_mode="input", target_size=SHAPE, batch_size=BS,
)
val_gen = image_generator.flow_from_directory(
    os.path.join(IMAGES, "validation"),
    class_mode="input", target_size=SHAPE, batch_size=BS,
)
# the generators already batch the data, so batch_size is not passed to fit()
hist = autoencoder.fit(train_gen, validation_data=val_gen, epochs=EPOCHS)
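Before training, it can be worth sanity-checking the (X, Y) contract that class_mode="input" establishes. The sketch below fakes one batch with NumPy (an assumption for illustration, instead of calling next(train_gen) against a real dataset on disk):

```python
import numpy as np

# one fake batch shaped like flow_from_directory output:
# (batch_size, height, width, channels), values rescaled to [0, 1]
x = np.random.rand(32, 200, 200, 3).astype("float32")
y = x  # what class_mode="input" returns as the target

assert x.shape == y.shape == (32, 200, 200, 3)
assert np.array_equal(x, y)
print("batch OK:", x.shape)
```

Running the same checks on a real batch from next(train_gen) would confirm the generator is wired up correctly before spending time on training.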

User contributions licensed under: CC BY-SA