Is there a way to use torch.nn.DataParallel with CPU?

I’m trying to change some PyTorch code so that it can run on the CPU.

The model was trained with torch.nn.DataParallel(), so when I load the pre-trained weights and try to use the model, I have to wrap it in nn.DataParallel() again, which I am currently doing like this:

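Something along these lines, where MyModel and checkpoint.pth are stand-ins for the actual architecture and checkpoint file:

```python
import torch
import torch.nn as nn

model = MyModel()  # the same architecture the checkpoint was trained with
model = nn.DataParallel(model, device_ids=[0])

# map_location lets a checkpoint saved on a CUDA device load on a CPU-only machine
state_dict = torch.load('checkpoint.pth', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
```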

However, after I switched the torch device to CPU like this:

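That is, something like:

```python
device = torch.device('cpu')
model = model.to(device)
```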

I got this error:

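The exact wording varies by PyTorch version, but it is the device check from DataParallel's forward pass, roughly:

```
RuntimeError: module must have its parameters and buffers on device cuda:0
(device_ids[0]) but found one of them on device: cpu
```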

I’m assuming that it’s still looking for CUDA because that’s what device_ids is set to, but is there a way to make it use the CPU? This post from the PyTorch repo makes me think there is, but it doesn’t explain how.

If not, is there any other way to run a model trained with DataParallel on the CPU?


Answer

torch.nn.DataParallel() implements data parallelism at the module level.

According to the doc:

The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module.

So even though you call .to(torch.device('cpu')), DataParallel still expects to move the data to a GPU.

However, since DataParallel is just a container, you can bypass it and retrieve the original module through its .module attribute:

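A minimal sketch, assuming model is the DataParallel-wrapped network from the question:

```python
model = model.module                   # unwrap DataParallel; .module holds the original network
model = model.to(torch.device('cpu'))  # now it can be moved to the CPU
```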

model now refers to the original module you defined before applying the DataParallel container, so it can run on the CPU.
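From there, inference works as usual on the CPU; for example (the input shape here is hypothetical, use whatever your model expects):

```python
model.eval()
with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
```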
