I have managed to implement early stopping in my Keras model, but I am not sure how to view the loss of the best epoch.
es = EarlyStopping(monitor='val_out_soft_loss', mode='min',
                   restore_best_weights=True, verbose=2, patience=10)
model.fit(tr_x, tr_y,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          callbacks=[es],
          validation_data=(val_x, val_y))
loss = model.history.history["val_out_soft_loss"][-1]
return model, loss
The way I have defined the loss score means that the returned score comes from the final epoch, not the best epoch.
Example:
from sklearn.model_selection import train_test_split, KFold

losses = []
models = []

for k in range(2):
    kfold = KFold(5, random_state=42 + k, shuffle=True)
    for k_fold, (tr_inds, val_inds) in enumerate(kfold.split(train_y)):
        print("-----------")
        print("-----------")
        model, loss = get_model(64, 100)
        models.append(model)
        print(k_fold, loss)
        losses.append(loss)

print("-------")
print(losses)
print(np.mean(losses))

Epoch 23/100
18536/18536 [==============================] - 7s 362us/step - loss: 0.0116 - out_soft_loss: 0.0112 - out_reg_loss: 0.0393 - val_loss: 0.0131 - val_out_soft_loss: 0.0127 - val_out_reg_loss: 0.0381
Epoch 24/100
18536/18536 [==============================] - 7s 356us/step - loss: 0.0116 - out_soft_loss: 0.0112 - out_reg_loss: 0.0388 - val_loss: 0.0132 - val_out_soft_loss: 0.0127 - val_out_reg_loss: 0.0403
Restoring model weights from the end of the best epoch
Epoch 00024: early stopping
0 0.012735568918287754
So in this example, I would like to see the loss at Epoch 00014 (which is 0.0124).
I also have a separate question: How can I set the decimal places for the val_out_soft_loss score?
Answer
Assign the fit() call in Keras to a variable so you can track the metrics through the epochs.
history = model.fit(tr_x, ...
It returns a History object whose history attribute is a dictionary; access it like this:
loss_hist = history.history['loss']
Then use min() to get the minimum loss, and argmin() to get the best epoch (zero-based).
np.min(loss_hist)
np.argmin(loss_hist)
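Putting this together with the metric you are actually monitoring, a minimal sketch of your get_model function could look like the following. It assumes EarlyStopping is imported from keras.callbacks and that model, tr_x, tr_y, val_x and val_y are built as in your code (omitted here); the final print also shows one ordinary way to control the decimal places from your second question, using Python f-string formatting.

import numpy as np
# assumes: from keras.callbacks import EarlyStopping

def get_model(batch_size, epochs):
    # ... build and compile the model here, as in your original code ...

    es = EarlyStopping(monitor='val_out_soft_loss', mode='min',
                       restore_best_weights=True, verbose=2, patience=10)
    history = model.fit(tr_x, tr_y,
                        batch_size=batch_size,
                        epochs=epochs,
                        verbose=1,
                        callbacks=[es],
                        validation_data=(val_x, val_y))

    val_hist = history.history['val_out_soft_loss']
    best_epoch = np.argmin(val_hist)   # zero-based index of the best epoch
    best_loss = np.min(val_hist)       # validation loss at that epoch

    # Print with a fixed number of decimal places (here 4)
    print(f"Best epoch: {best_epoch + 1}, val_out_soft_loss: {best_loss:.4f}")

    return model, best_loss

With this, the loss appended in your KFold loop is the best-epoch validation loss rather than the value from the last epoch before early stopping triggered.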