
Keras image generator keep giving different number of labels

I am trying to make a simple fine-tuned ResNet50 model using the Market1501 dataset and Keras.

The dataset contains about 12,000 images and 751 labels that I want to use (0-750). I can't fit all the data in memory in one go, so I have to use an image generator for this.

My base model is like this:

from keras.applications.resnet50 import ResNet50
from keras.initializers import RandomNormal
from keras.layers import Dense, Dropout, Flatten, Input
from keras.models import Model

base_model = ResNet50(weights='imagenet', include_top=False, input_tensor=Input(shape=(224, 224, 3)))
x = base_model.output
x = Flatten(name="flatten")(x)
x = Dropout(0.5)(x)
x = Dense(751, activation='softmax', name='fc8', kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(x)
model = Model(inputs=base_model.input, outputs=x)
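As a quick sanity check (separate from the training code), the width of fc8 can be printed and compared with the number of columns in the one-hot labels the generator produces:

print(model.output_shape)  # should be (None, 751): one output per label 0-750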

And my image generator is like this:

import os

import numpy as np
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical

def image_generator(image_array, batch_size):
    # Define data generator arguments
    datagen_args = dict(rotation_range=20,
                        width_shift_range=0.2,
                        height_shift_range=0.2,
                        shear_range=0.1,
                        zoom_range=0.1,
                        horizontal_flip=True)

    # One data generator; random_transform below gives each image its own
    # random augmentation, which makes the task harder for the network
    datagen = ImageDataGenerator(**datagen_args)
    while True:
        number_of_images = len(image_array)
        indices = np.random.permutation(np.arange(number_of_images))
        num_batches = number_of_images // batch_size
        for bid in range(num_batches):
            # loop once per batch
            images = []
            labels = []
            batch_indices = indices[bid * batch_size: (bid + 1) * batch_size]
            for i in batch_indices:
                img, lbl = image_array[i]
                # Load and preprocess the image (TRAIN is the training image directory)
                img = image.load_img(os.path.join(TRAIN, img), target_size=[224, 224])
                img = image.img_to_array(img)
                #img = np.expand_dims(img, axis=0)
                img = preprocess_input(img)
                img = datagen.random_transform(img)
                images.append(img)
                labels.append(lbl)
            yield np.array(images), to_categorical(labels)

And I use it like this:

batch_size = 64
NUM_EPOCHS = 40
train_gen = image_generator(image_array, batch_size)
num_train_steps = len(image_array)
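The training call itself is along these lines (a sketch: the optimizer here is just a placeholder, and steps_per_epoch is taken as the number of batches per epoch):

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(train_gen,
                    steps_per_epoch=num_train_steps // batch_size,  # batches per epoch
                    epochs=NUM_EPOCHS)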

The issue is that it gives me this error:

Error when checking target: expected fc8 to have shape (751,) but got array with shape (742,)

And the bigger issue is that the second number keeps changing, so I know it is something with the image generator not getting every label into each iteration.
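One way to see this directly is to pull a batch or two from the generator and look at the label shapes (a quick check, separate from training):

_, y1 = next(train_gen)
_, y2 = next(train_gen)
print(y1.shape, y2.shape)  # the second dimension is not always 751 and differs between batches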

EDIT
How the data is generated:
There is an external list with the image and the label, like this:

['0002_451_03.jpg', '0']
img001.jpg, 0
img002.jpg, 0
...
img1500.jpg, 750

This is read in and loaded into an array. The label is the number after the image name.
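Roughly, the loading code looks like this (a sketch; the list file name here is just a placeholder):

image_array = []
with open('train_list.txt') as f:  # hypothetical file name
    for line in f:
        name, label = line.strip().split(',')
        image_array.append([name.strip(), int(label)])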


Answer

Replace

batch_indices = indices[bid * batch_size: (bid + 1) * batch_size]

with

batch_indices = indices[bid * batch_size: min((bid + 1) * batch_size, number_of_images)]