I’m trying to push both my model and my data (images and labels) to the GPU by doing:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Followed by:
count = 0
loss_list = []
iteration_list = []
accuracy_list = []
epochs = 30

for epoch in range(epochs):
    for i, (images, labels) in enumerate(trainloader):
        net = net.to(device)
        images.to(device)
        labels.to(device)

        optimizer.zero_grad()
        outputs = net(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        count += 1

        if count % 50 == 0:
            correct = 0
            total = 0
            for i, (images, labels) in enumerate(testloader):
                images.to(device)
                labels.to(device)
                outputs = net(images)
                predicted = torch.max(outputs.data, 1)[1]
                total += len(labels)
                correct += (predicted == labels).sum()

            accuracy = 100 * correct / float(total)
            loss_list.append(loss.data)
            iteration_list.append(count)
            accuracy_list.append(accuracy)

        if count % 500 == 0:
            print("Iteration: {} Loss: {} Accuracy: {} %".format(count, loss.data, accuracy))
I’m explicitly pushing my model and data to the device; however, I am met with the following error:
RuntimeError                              Traceback (most recent call last)
<ipython-input-341-361b906da73d> in <module>()
     12
     13         optimizer.zero_grad()
---> 14         outputs = net(images)
     15         loss = criterion(outputs, labels)
     16         loss.backward()

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    394                             _pair(0), self.dilation, self.groups)
    395         return F.conv2d(input, weight, bias, self.stride,
--> 396                         self.padding, self.dilation, self.groups)
    397
    398     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
I feel like I’m doing the right thing by pushing both the model and the data to the GPU, but I can’t figure out why it isn’t working. Does anybody know what’s going wrong? Thank you in advance.
Answer
Your weights are on the GPU, but your input tensors are still on the CPU. For tensors, .to(device) and .cuda() are not in-place operations: they return a new tensor, so calling images.to(device) without assigning the result does nothing. Reassign the result, e.g. images = images.to(device) and labels = labels.to(device) (or equivalently images = images.cuda()). Only nn.Module.to() moves parameters in place.
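Here is a minimal sketch of the corrected training step, assuming net, trainloader, optimizer, criterion, and epochs are defined as in the question:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = net.to(device)  # moves the module's parameters; doing this once before the loop is enough

for epoch in range(epochs):
    for images, labels in trainloader:
        # Tensor.to() returns a new tensor, so the result must be reassigned
        images = images.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        outputs = net(images)             # input and weights are now both on the GPU
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

Apply the same reassignment inside the evaluation loop over testloader, otherwise the same RuntimeError will be raised there.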