Evaluate model on Testing Set after each epoch of training

I’m training a TensorFlow model on an image dataset for a classification task. We usually provide the training set and validation set to the model.fit method, and we can later plot the model’s convergence graphs for training and validation. I want to do the same with the testing set; in other words, I want to get my model’s accuracy and loss on the testing set after each epoch (not the validation set – and I can’t replace the validation set with the testing set, because I need graphs for both of them).

I managed to do that by saving a checkpoint of my model after each epoch using a callback, then later loading each checkpoint into the model and computing its accuracy and loss on the testing set. But I want to know if there is an easier way of doing this, maybe with some other callback or a workaround with the model.fit method.
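For reference, that checkpoint-based workaround looks roughly like this (a sketch only: the checkpoint path and epoch count are illustrative, and a compiled model plus NumPy arrays for the train/validation/test splits are assumed):

```python
import tensorflow as tf

# During training: write one weights file per epoch.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/epoch-{epoch:02d}.weights.h5",
    save_weights_only=True,
    save_freq="epoch",
)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=10, callbacks=[checkpoint_cb])

# Afterwards: reload each checkpoint and score it on the test set.
test_loss, test_accuracy = [], []
for epoch in range(1, 11):
    model.load_weights(f"checkpoints/epoch-{epoch:02d}.weights.h5")
    loss, acc = model.evaluate(x_test, y_test, verbose=0)
    test_loss.append(loss)
    test_accuracy.append(acc)
```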

Answer

You could use a custom Callback, pass it your test data, and do whatever you like with that data at the end of each epoch.

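A minimal sketch, assuming x_test is a NumPy array of images, y_test is a NumPy array of integer class labels, and the model outputs class probabilities (so the loss can be recomputed with sparse categorical crossentropy):

```python
import numpy as np
import tensorflow as tf

class TestSetEvaluator(tf.keras.callbacks.Callback):
    """Compute loss and accuracy on the test set at the end of every epoch."""

    def __init__(self, x_test, y_test):
        super().__init__()
        self.x_test = x_test
        self.y_test = y_test
        self.test_loss = []
        self.test_accuracy = []

    def on_epoch_end(self, epoch, logs=None):
        # Predict class probabilities for the whole test set.
        probs = self.model.predict(self.x_test, verbose=0)
        # Cross-entropy against the integer labels, averaged over samples.
        loss = float(tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(self.y_test, probs)))
        # Accuracy: fraction of samples whose argmax matches the label.
        acc = float(np.mean(np.argmax(probs, axis=-1) == self.y_test))
        self.test_loss.append(loss)
        self.test_accuracy.append(acc)
        print(f"\nEpoch {epoch + 1}: test_loss={loss:.4f}, test_accuracy={acc:.4f}")
```

Pass an instance to model.fit and plot its lists next to the usual history:

```python
evaluator = TestSetEvaluator(x_test, y_test)
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=10,
                    callbacks=[evaluator])

# history.history["loss"] / history.history["val_loss"] give the training and
# validation curves; evaluator.test_loss / evaluator.test_accuracy give the
# matching test-set curve, one point per epoch.
```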
You can also just use model.evaluate in the callback. See also this post.
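For instance, the on_epoch_end method above could be reduced to the following (assuming accuracy is the only metric compiled on the model, so evaluate returns exactly [loss, accuracy]):

```python
    def on_epoch_end(self, epoch, logs=None):
        # Reuse the compiled loss and metrics instead of recomputing by hand.
        loss, acc = self.model.evaluate(self.x_test, self.y_test, verbose=0)
        self.test_loss.append(loss)
        self.test_accuracy.append(acc)
```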
