How to run a PyTorch model in a normal, non-parallel way?

I am going through this script, and there is a code block that takes two options into account, DataParallel and DistributedDataParallel:

if not args.distributed:
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        # For AlexNet/VGG, only the convolutional part is wrapped in
        # DataParallel; the large fully connected classifier stays on one GPU.
        model.features = torch.nn.DataParallel(model.features)
        model.cuda()
    else:
        model = torch.nn.DataParallel(model).cuda()
else:
    model.cuda()
    model = torch.nn.parallel.DistributedDataParallel(model)

What if I don’t want either of these options and want to run the model without even DataParallel? How do I do it?

How do I define my model so that it runs as a plain nn.Module, without parallelizing anything?


Answer

  • DataParallel is a wrapper that parallelizes computation across multiple GPUs on the same machine; see the torch.nn.DataParallel documentation.
  • DistributedDataParallel is also a wrapper; it replicates the model across multiple processes, possibly on multiple machines, with each replica working on its own shard of the data; see the torch.nn.parallel.DistributedDataParallel documentation. A minimal sketch of both follows below.
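
For context, here is a minimal sketch of how each wrapper is applied. The tiny nn.Linear model is just a stand-in for illustration (the script above uses torchvision architectures), and the DistributedDataParallel line is left commented because it only works inside an initialized process group:

import torch
import torch.nn as nn

# A tiny stand-in model; replace with the model from the script above.
model = nn.Linear(10, 2)

if torch.cuda.is_available():
    # Single-machine, multi-GPU: DataParallel splits each input batch
    # across the visible GPUs and gathers the outputs.
    dp_model = nn.DataParallel(model).cuda()

    # Multi-process (possibly multi-machine): DistributedDataParallel
    # requires an initialized process group first, e.g.
    #   torch.distributed.init_process_group(backend='nccl', ...)
    # before wrapping:
    # ddp_model = nn.parallel.DistributedDataParallel(model.cuda())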

If you don’t want either of them, you can simply remove the wrappers and use the model as it is:

if not args.distributed:
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        model.features = model.features  # no-op: DataParallel wrapper removed
        model.cuda()
    else:
        model = model.cuda()
else:
    model.cuda()
    model = model  # no-op: DistributedDataParallel wrapper removed

This keeps the code modification to a minimum. Of course, since parallelization is of no interest to you, you could collapse the whole if statement to something along the lines of:

model = model.cuda()

Note that this code assumes you are running on a GPU.
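
If you also want the option of falling back to the CPU, a common device-agnostic pattern (not part of the original answer, just a sketch with a stand-in model) is:

import torch
import torch.nn as nn

# A tiny stand-in model; replace with the model from the script above.
model = nn.Linear(10, 2)

# Pick the GPU if one is visible, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Inputs must live on the same device as the model:
x = torch.randn(4, 10, device=device)
out = model(x)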

User contributions licensed under: CC BY-SA