
How to load models trained on GPU into CPU (system) memory?

I trained a model on a GPU and now I am trying to evaluate it on the CPU (the GPU is busy with a different training run). However, when I try to load the model with `keras.models.load_model()`, I get a `CUDA_ERROR_OUT_OF_MEMORY` error.

(I also tried setting compile=True, with the same result.)

It seems that the model is being loaded onto the GPU, which is already in use by another process. How can I force Keras/TensorFlow to load it into system memory and run it on the CPU?


Answer

Try defining everything, apart from the library imports, inside a `with tf.device('/cpu:0'):` block and test again.
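A minimal sketch of that suggestion. The tiny model and the file name `model.h5` are stand-ins (assumptions), since the original code and model path are not shown; the point is that both the load and the inference happen inside the `tf.device('/cpu:0')` context:

```python
import tensorflow as tf
from tensorflow import keras

# Stand-in for the GPU-trained model (assumption: any saved
# Keras model is loaded the same way).
model = keras.Sequential([keras.layers.Input(shape=(8,)), keras.layers.Dense(4)])
model.save('model.h5')

# Pin every op in this block to the CPU so that loading and
# inference never try to allocate memory on the busy GPU.
with tf.device('/cpu:0'):
    cpu_model = keras.models.load_model('model.h5', compile=False)
    out = cpu_model.predict(tf.zeros((2, 8)))

print(out.shape)  # (2, 4)
```

Passing `compile=False` skips rebuilding the optimizer state, which is not needed for evaluation-only use.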

If that doesn’t work, create a virtual environment, install the regular (non-GPU) TensorFlow package instead of the GPU version, and try again. If you still get an OOM error, then it is coming from the CPU side: the machine does not have enough system memory to load this trained model.
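A quick way to sanity-check such a CPU-only environment is to ask TensorFlow which devices it can see; in a CPU-only install the GPU list should be empty (a sketch using `tf.config.list_physical_devices`, available in TF 2.x):

```python
import tensorflow as tf

# In a CPU-only TensorFlow install, no GPU devices are visible,
# so nothing can accidentally be placed on the GPU.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)  # [] in a CPU-only environment
```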
