Because the images are too large to load into memory all at once, can we generalize the flow_from_directory method to a regression problem (CNN): the input is an image and the output is a pair of floats (x, y)?

The data is in a CSV file that contains the image path and the targets (x and y), where x and y are scaled to the range [-1, 1]. Because there are so many images, I cannot load them all into X_train in Keras the usual way. Thank you so much for your help!
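
(For reference, not part of the original question: the kind of generator described here can usually be built with Keras' flow_from_dataframe using class_mode="raw", which yields the raw float values of the target columns batch by batch. The sketch below assumes the CSV has columns named path, x and y; the file name, column names and image size are guesses.)

import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

BS = 32                          # assumed batch size
df = pd.read_csv("data.csv")     # assumed file name; assumed columns: path, x, y

datagen = ImageDataGenerator()   # add augmentation options here if needed
training_generator = datagen.flow_from_dataframe(
    dataframe=df,
    x_col="path",                # column holding the image file paths
    y_col=["x", "y"],            # float regression targets in [-1, 1]
    class_mode="raw",            # return the y_col values as-is (floats)
    target_size=(224, 224),      # assumed input size
    batch_size=BS)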


Answer

I will write here as the comment section is getting bigger and bigger.

You can try to train the model using tf.GradientTape; for details, please check here. With that, you will have more control over the batches.
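
The loop below assumes that training_generator, BS, rescale_img, model and optimizer already exist (training_generator and BS can come from, e.g., flow_from_dataframe with class_mode="raw"). A minimal setup sketch for the remaining names; the architecture, the 224x224 input size and the Adam optimizer are assumptions, not part of the original answer:

from tensorflow.keras import layers, models, optimizers

def rescale_img(x):
  # scale raw pixel values from [0, 255] to [0, 1]
  return x / 255.0

# any CNN works here; the last layer just has to match loss_fn below
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # one sigmoid unit for the binary example below
])

optimizer = optimizers.Adam(learning_rate=1e-3)
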

import time

import numpy as np
import tensorflow as tf
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.metrics import BinaryAccuracy

EPOCHS_start = 1
EPOCHS_end = 32

nb_train_steps = training_generator.n // BS

# whatever loss you want to use
loss_fn = BinaryCrossentropy(from_logits=False)
train_acc_metric1 = BinaryAccuracy()

for epoch in range(EPOCHS_start, EPOCHS_end):
  print("Start of epoch %d" % (epoch,))
  start_time = time.time()
  loss_total = tf.Variable(0.0)

  # Iterate over the batches of the dataset.
  for step, (x_batch_train, y_batch_train) in enumerate(training_generator):
    y_batch_train = np.asarray(y_batch_train).astype('float32').reshape((-1, 1))

    with tf.GradientTape() as tape:
      x_batch_train_scaled = rescale_img(x_batch_train)
      # you have to create your model (and optimizer) before
      logits = model(x_batch_train_scaled, training=True)
      loss_value = loss_fn(y_batch_train, logits)

    print("loss value: %.4f" % float(loss_value))

    # Compute and apply the gradients outside the tape context.
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))

    loss_total = loss_total + loss_value

    # Update training metric.
    train_acc_metric1.update_state(y_batch_train, logits)

    if step >= nb_train_steps:
      # we need to break the loop by hand because
      # the generator loops indefinitely
      break

  # Display metrics at the end of each epoch.
  train_acc = train_acc_metric1.result()
  print("Training acc over epoch: %.4f - LOSS: %.4f ** Time taken: %.2fs" %
        (float(train_acc),
         float(loss_total.numpy() / nb_train_steps),
         time.time() - start_time))

  train_acc_metric1.reset_states()
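
Since the question is actually about a regression target (x, y) in [-1, 1] rather than a binary label, the loss, metric, output layer and target reshape in the loop above would need to change. One possible adaptation (an assumption, not part of the original answer):

from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import MeanAbsoluteError

loss_fn = MeanSquaredError()              # regression loss instead of BinaryCrossentropy
train_acc_metric1 = MeanAbsoluteError()   # regression metric instead of BinaryAccuracy

# give the model a 2-unit head, e.g. layers.Dense(2, activation="tanh"),
# since the targets lie in [-1, 1], and reshape the targets to two columns:
# y_batch_train = np.asarray(y_batch_train).astype('float32').reshape((-1, 2))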
