Calling cuda() with async results in SyntaxError

I’m trying to run this PyTorch code:

for i, (input, target) in enumerate(train_loader):

    input = input.float().cuda(async=True)
    target = target.cuda(async=True)
    input_var = torch.autograd.Variable(input)
    target_var = torch.autograd.Variable(target)

    output = model(input_var)

But when I run it, I get this error message:

input = input.float().cuda(async=True)
                               ^
SyntaxError: invalid syntax
Process finished with exit code 1

What am I doing wrong? I already have CUDA installed.

Answer

Your code does not work because:

  • async has been a reserved keyword in Python since version 3.7, so it can no longer be used as an argument name; that is why you get the SyntaxError.

  • cuda() no longer has an async argument. Its signature is now:

cuda(device=None, non_blocking=False) → Tensor

Use non_blocking instead: it has the same effect that async previously had.
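
For concreteness, here is a minimal sketch of the loop from your question with async=True replaced by non_blocking=True (assuming train_loader and model are defined as in your code):

for i, (input, target) in enumerate(train_loader):
    # non_blocking=True is the replacement for the old async=True
    input = input.float().cuda(non_blocking=True)
    target = target.cuda(non_blocking=True)
    input_var = torch.autograd.Variable(input)
    target_var = torch.autograd.Variable(target)

    output = model(input_var)

Note that the copy is only truly asynchronous when the source tensor is in pinned memory (for example DataLoader(..., pin_memory=True)); otherwise non_blocking has no effect. Also, on PyTorch 0.4 and later Variable is a no-op, so you can pass the tensors to the model directly.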



As an aside: if you are interested in what async is actually used for in Python, take a look at PEP 492: https://www.python.org/dev/peps/pep-0492/#new-syntax
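
For context, here is a minimal sketch of the coroutine syntax that async is now reserved for (the greet coroutine below is purely illustrative and has nothing to do with the question):

import asyncio

# `async def` declares a coroutine; this is the syntax PEP 492 introduced
async def greet():
    await asyncio.sleep(0.1)
    return "hello"

# asyncio.run() is available in Python 3.7+
print(asyncio.run(greet()))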
