
Tag: nvidia

Why does GPU memory increase when recreating and reassigning a JAX numpy array to the same variable name?

When I recreate and reassign a JAX numpy array to the same variable name, the GPU memory usage nearly doubles on the first recreation and then stays stable for subsequent recreations/reassignments. Why does this happen, and is it generally expected behavior for JAX arrays? Fully runnable minimal example: https://colab.research.google.com/drive/1piUvyVylRBKm1xb1WsocsSVXJzvn5bdI?usp=sharing. For posterity in case Colab goes down: Thank you!
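The following is a minimal sketch of the recreate-and-reassign pattern the question describes, not the asker's exact notebook code; the array shape and the use of the device memory_stats() helper (available on GPU backends in recent jaxlib versions) are assumptions.

```python
import jax
import jax.numpy as jnp

def gpu_bytes_in_use():
    # memory_stats() may return None on backends that don't expose it
    stats = jax.local_devices()[0].memory_stats()
    return stats.get("bytes_in_use", 0) if stats else 0

x = jnp.ones((4096, 4096))          # initial allocation
x.block_until_ready()
print("after first creation:", gpu_bytes_in_use())

for i in range(3):
    # Reassigning to the same name: the new buffer is allocated before the
    # old one can be released, so peak usage can briefly hold both arrays.
    x = jnp.ones((4096, 4096))
    x.block_until_ready()
    print(f"after reassignment {i}:", gpu_bytes_in_use())
```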

CUDA Error: out of memory – Python process utilizes all GPU memory

Even after rebooting the machine, more than 95% of GPU memory is used by a python3 process (the system-wide interpreter). Note that the memory stays consumed even when no training scripts are running, and I have never used Keras/TensorFlow in the system environment, only within a venv or in a Docker container. UPDATED: The last activity was the execution of an NN test script with the following
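As a starting point for diagnosing which process is actually holding the memory, here is a hedged sketch that simply wraps nvidia-smi's documented per-process query from Python; it assumes nvidia-smi is on PATH and is not taken from the original question.

```python
import subprocess

def gpu_processes():
    # Query compute processes and their GPU memory usage via nvidia-smi
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    rows = []
    for line in out.strip().splitlines():
        pid, name, mem_mib = [field.strip() for field in line.split(",")]
        rows.append((int(pid), name, int(mem_mib)))
    return rows

for pid, name, mem_mib in gpu_processes():
    print(f"pid={pid} name={name} used={mem_mib} MiB")
# Once the offending pid is known, killing that process releases its GPU memory.
```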
