
Tag: gpu

Get the total and available free GPU memory using PyTorch

I’m using Google Colab’s free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the memory currently occupied by tensors, but how do we determine the total available memory using PyTorch? Answer: In recent versions of PyTorch you can also use torch.cuda.mem_get_info: https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html#torch.cuda.mem_get_info
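
A minimal sketch of how that call might be used; mem_get_info returns a (free, total) tuple in bytes for the queried device, and the printed labels here are just illustrative:

import torch

# mem_get_info reports (free_bytes, total_bytes) for a CUDA device as seen
# by the driver -- not just the memory managed by PyTorch's own allocator.
free_bytes, total_bytes = torch.cuda.mem_get_info()  # defaults to the current device

print(f"Total GPU memory: {total_bytes / 1024**3:.2f} GiB")
print(f"Free GPU memory:  {free_bytes / 1024**3:.2f} GiB")

# For comparison: memory occupied by tensors allocated through PyTorch.
print(f"Allocated by PyTorch: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")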

Training a simple model in TensorFlow: GPU slower than CPU

I have set up a simple linear regression problem in TensorFlow and created simple conda environments for both TensorFlow CPU and GPU, both at version 1.13.1 (using CUDA 10.0 in the backend on an NVIDIA Quadro P600). However, the GPU environment always takes longer than the CPU environment. The code I’m running is below. Here are some …
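
For a model this small, the usual culprit is per-step overhead (kernel launches and host-to-device transfers) outweighing the tiny amount of compute, so the GPU never gets a chance to amortize its cost. A minimal sketch of the kind of comparison involved, written against the modern TensorFlow 2 API rather than the 1.13.1 API from the question:

import time
import tensorflow as tf

def time_linear_regression(device, steps=200):
    """Time a tiny gradient-descent linear regression on the given device."""
    with tf.device(device):
        x = tf.random.normal([1000, 1])
        y = 3.0 * x + 2.0                     # synthetic targets: y = 3x + 2
        w = tf.Variable(0.0)
        b = tf.Variable(0.0)
        opt = tf.keras.optimizers.SGD(learning_rate=0.01)

        @tf.function
        def step():
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean(tf.square(w * x + b - y))
            grads = tape.gradient(loss, [w, b])
            opt.apply_gradients(zip(grads, [w, b]))

        step()                                # warm-up: trace the graph once
        start = time.perf_counter()
        for _ in range(steps):
            step()
        return time.perf_counter() - start

print("CPU:", time_linear_regression("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_linear_regression("/GPU:0"))

On a workload this small, the CPU commonly wins; the GPU only pulls ahead once the per-step tensor work is large enough to dominate the launch overhead.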

PyCUDA blocks and grids for working with big data

I need help determining the size of my blocks and grids. I’m building a Python app to perform metric calculations based on SciPy: Euclidean distance, Manhattan, Pearson, Cosine, among others. The project is PycudaDistances. It seems to work very well with small arrays, but when I perform a more exhaustive test, unfortunately it does not work. I downloaded the MovieLens …
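
With a one-thread-per-element kernel, the standard way to handle inputs larger than a single block is to round the grid size up with ceiling division and guard the kernel body with a bounds check, so the last partial block does not write out of range. A minimal sketch of that pattern (the sq_diff kernel here is illustrative, not code from PycudaDistances):

import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Each thread handles one element; the bounds check lets the grid be
# rounded up past n without touching memory beyond the arrays.
mod = SourceModule("""
__global__ void sq_diff(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        float d = a[i] - b[i];
        out[i] = d * d;
    }
}
""")
sq_diff = mod.get_function("sq_diff")

n = 10_000_000                   # far more elements than fit in one block
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.empty_like(a)

block_size = 256                                  # threads per block (hardware max is typically 1024)
grid_size = (n + block_size - 1) // block_size    # ceil(n / block_size) blocks

sq_diff(drv.In(a), drv.In(b), drv.Out(out), np.int32(n),
        block=(block_size, 1, 1), grid=(grid_size, 1))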
