
How to create a TensorFlow dataset from images generated at runtime?

I am starting a small project based on TensorFlow and cannot figure out how to prepare a dataset from input generated in memory. I have a variable number of sources that generate images, which are then passed to a Python script. Each image arrives as a byte array containing PNG data. I collect the images into a list and want to build a dataset from that list and train a model on it.

# Images and labels collected at runtime, one entry per generated image
global_feeded_images = []
global_feeded_labels = []


def feed_image(self, img):
    # img is a PNG-encoded byte array from one of the sources
    global_feeded_images.append(bytes(img))
    global_feeded_labels.append(0)  # placeholder label

After collecting all images, I want to start model training.

model.fit(image_train_ds, np.array(global_feeded_labels), epochs=10)

As I understand it, TensorFlow can accept a NumPy array or a tensor, but I cannot figure out what I should convert: each image separately, or the whole array at once?
Or, put more briefly: how do I convert the array of images global_feeded_images into the dataset image_train_ds?
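
(For reference, tf.data.Dataset.from_tensor_slices operates on the whole collection at once: the images are stacked into a single array and split along the first axis. Below is a minimal sketch of that idea, assuming all images share the same shape; the 28x28 size and the zero-filled placeholder arrays are illustrative only.)

import numpy as np
import tensorflow as tf

# Placeholder images: one (H, W) array per image, all the same shape (assumed 28x28)
images = np.stack([np.zeros((28, 28), dtype=np.uint8) for _ in range(4)])
labels = np.array([0, 0, 0, 0])

# from_tensor_slices splits along the first axis, yielding one (image, label) pair per element
image_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))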


Answer

The correct solution turned out to be simple: decode each PNG byte array into a NumPy array with PIL, then build the dataset from the collected arrays.

import io

import numpy as np
from PIL import Image

fp = io.BytesIO(bytes(img))  # wrap the PNG byte array in a file-like object
with fp:
    im = Image.open(fp)
    im = im.convert('L')  # convert to grayscale, or set the correct shape for the Flatten layer
    numpydata = np.asarray(im)  # TensorFlow can consume NumPy arrays directly

global_feeded_images.append(numpydata)
global_feeded_labels.append(0)

train_dataset = tf.data.Dataset.from_tensor_slices((global_feeded_images, global_feeded_labels))
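
As a follow-up, here is a minimal sketch of how the resulting dataset might be batched and passed to model.fit. The normalization step, the batch size, the 28x28 input shape, and the two-class output layer are assumptions for illustration, not part of the original setup.

import tensorflow as tf

# Cast the uint8 pixel values to float32 and scale to [0, 1]
train_dataset = train_dataset.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
train_dataset = train_dataset.shuffle(buffer_size=1000).batch(32)

# A tiny example model; the 28x28 input shape and 2 classes are assumed
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# With a tf.data.Dataset the labels are already part of the dataset,
# so they are not passed to fit() separately
model.fit(train_dataset, epochs=10)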