
fit_generator() returns NoneType instead of History object in Mask R-CNN

I would like to save the loss data while training my Mask R-CNN, but I seem to be missing something. The training works, but I'm getting this error:

AttributeError: 'NoneType' object has no attribute 'history'

    history = model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=EPOCH_NUMBER,
            augmentation=augmentation,
            layers='heads', custom_callbacks=custom_callbacks)

    # to_excel() writes the file and returns None, so there is no need
    # to keep its return value.
    pd.DataFrame(history.history).to_excel(LOSS_DIR)

I'm not even sure if this is the right approach, but it seemed easy enough. This code calls the train() function in model.py, shown below. The only change I made is at the very end: it used to just call fit_generator(...), and I changed that to history = fit_generator(...) followed by return history:

    def train(self, train_dataset, val_dataset, learning_rate, epochs, layers,
              augmentation=None, custom_callbacks=None, no_augmentation_sources=None):
        """Train the model.
        train_dataset, val_dataset: Training and validation Dataset objects.
        learning_rate: The learning rate to train with
        epochs: Number of training epochs. Note that previous training epochs
                are considered to be done already, so this actually determines
                the epochs to train in total rather than in this particular
                call.
        layers: Allows selecting which layers to train. It can be:
            - A regular expression to match layer names to train
            - One of these predefined values:
              heads: The RPN, classifier and mask heads of the network
              all: All the layers
              3+: Train Resnet stage 3 and up
              4+: Train Resnet stage 4 and up
              5+: Train Resnet stage 5 and up
        augmentation: Optional. An imgaug (https://github.com/aleju/imgaug)
            augmentation. For example, passing imgaug.augmenters.Fliplr(0.5)
            flips images right/left 50% of the time. You can pass complex
            augmentations as well. For example, the augmentation below applies
            50% of the time, and when it does, it flips images right/left half
            the time and adds a Gaussian blur with a random sigma in range 0 to 5.

                augmentation = imgaug.augmenters.Sometimes(0.5, [
                    imgaug.augmenters.Fliplr(0.5),
                    imgaug.augmenters.GaussianBlur(sigma=(0.0, 5.0))
                ])
        custom_callbacks: Optional. Custom callbacks to be called with the
            Keras fit_generator method. Must be a list of keras.callbacks.
        no_augmentation_sources: Optional. List of sources to exclude from
            augmentation. A source is a string that identifies a dataset and
            is defined in the Dataset class.
        """
        assert self.mode == "training", "Create model in training mode."

        # Pre-defined layer regular expressions
        layer_regex = {
            # all layers but the backbone
            "heads": r"(mrcnn_.*)|(rpn_.*)|(fpn_.*)",
            # From a specific Resnet stage and up
            "3+": r"(res3.*)|(bn3.*)|(res4.*)|(bn4.*)|(res5.*)|(bn5.*)|(mrcnn_.*)|(rpn_.*)|(fpn_.*)",
            "4+": r"(res4.*)|(bn4.*)|(res5.*)|(bn5.*)|(mrcnn_.*)|(rpn_.*)|(fpn_.*)",
            "5+": r"(res5.*)|(bn5.*)|(mrcnn_.*)|(rpn_.*)|(fpn_.*)",
            # All layers
            "all": ".*",
        }
        if layers in layer_regex.keys():
            layers = layer_regex[layers]

        # Data generators
        train_generator = data_generator(train_dataset, self.config, shuffle=True,
                                         augmentation=augmentation,
                                         batch_size=self.config.BATCH_SIZE,
                                         no_augmentation_sources=no_augmentation_sources)
        val_generator = data_generator(val_dataset, self.config, shuffle=True,
                                       batch_size=self.config.BATCH_SIZE)

        # Create log_dir if it does not exist
        if not os.path.exists(self.log_dir):
            os.makedirs(self.log_dir)

        # Callbacks
        callbacks = [
            keras.callbacks.TensorBoard(log_dir=self.log_dir,
                                        histogram_freq=0, write_graph=True, write_images=False),
            keras.callbacks.ModelCheckpoint(self.checkpoint_path,
                                            verbose=0, save_weights_only=True),
        ]

        # Add custom callbacks to the list
        if custom_callbacks:
            callbacks += custom_callbacks

        # Train
    log("nStarting at epoch {}. LR={}n".format(self.epoch, learning_rate))
    log("Checkpoint Path: {}".format(self.checkpoint_path))
    self.set_trainable(layers)
    self.compile(learning_rate, self.config.LEARNING_MOMENTUM)

    # Work-around for Windows: Keras fails on Windows when using
    # multiprocessing workers. See discussion here:
    # https://github.com/matterport/Mask_RCNN/issues/13#issuecomment-353124009
        # Use == for string comparison; `is` checks identity and raises a
        # SyntaxWarning on newer Python versions.
        if os.name == 'nt':
            workers = 1
        else:
            workers = multiprocessing.cpu_count()

        history = self.keras_model.fit_generator(
            train_generator,
            initial_epoch=self.epoch,
            epochs=epochs,
            steps_per_epoch=self.config.STEPS_PER_EPOCH,
            callbacks=callbacks,
            validation_data=val_generator,
            validation_steps=self.config.VALIDATION_STEPS,
            max_queue_size=100,
            workers=0,
            use_multiprocessing=False,
        )
        self.epoch = max(self.epoch, epochs)
        return history

The documentation of fit_generator() says that it returns a History object, but apparently it doesn't? I'm very new to machine learning and to projects like this in general, so I'm sorry if this is a stupid question or if I forgot some crucial information.


Answer

I believe model.fit_generator is deprecated; in TensorFlow 2.2 and higher you can just use model.fit, because it now supports generators.

https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit_generator
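
As a minimal sketch of what that change could look like inside train() (assuming the project is running on tf.keras from TensorFlow 2.2 or newer; all names and arguments are kept exactly as in the code above):

    # Model.fit accepts generators directly in TF >= 2.2 and returns
    # a History object, so the return value can be passed back out.
    history = self.keras_model.fit(
        train_generator,
        initial_epoch=self.epoch,
        epochs=epochs,
        steps_per_epoch=self.config.STEPS_PER_EPOCH,
        callbacks=callbacks,
        validation_data=val_generator,
        validation_steps=self.config.VALIDATION_STEPS,
        max_queue_size=100,
        workers=0,
        use_multiprocessing=False,
    )
    self.epoch = max(self.epoch, epochs)
    return history

The returned History object has a history attribute: a dict mapping metric names (e.g. loss, val_loss) to per-epoch lists, so pd.DataFrame(history.history).to_excel(LOSS_DIR) from the question should then work as intended.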

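Alternatively, if you only need the per-epoch loss values, you can sidestep the return value entirely with Keras's built-in CSVLogger callback, passed through the custom_callbacks parameter that train() already accepts (the file name here is just an example):

    from keras.callbacks import CSVLogger

    # Appends one row of loss/metric values per epoch to a CSV file.
    csv_logger = CSVLogger("training_log.csv", append=True)

    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE,
                epochs=EPOCH_NUMBER,
                augmentation=augmentation,
                layers='heads',
                custom_callbacks=[csv_logger])

This works regardless of what fit_generator returns, because the callback writes the log during training rather than after it.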