
RuntimeError: Given groups=1, weight of size [32, 16, 5, 5], expected input[16, 3, 448, 448] to have 16 channels, but got 3 channels instead

I am getting the following error and can’t figure out why. I printed the size of my input tensor right before it gets fed to the CNN:

torch.Size([16, 3, 448, 448])

Here is my error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-116-bfa18f2a99fd> in <module>()
     14     # Forward pass
     15     print(images.shape)
---> 16     outputs = modelll(images.float())
     17     loss = criterion(outputs, labels)
     18 

6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    441                             _pair(0), self.dilation, self.groups)
    442         return F.conv2d(input, weight, bias, self.stride,
--> 443                         self.padding, self.dilation, self.groups)
    444 
    445     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Given groups=1, weight of size [32, 16, 5, 5], expected input[16, 3, 448, 448] to have 16 channels, but got 3 channels instead


I defined a CNN with five convolutional layers and two fully connected layers. I am feeding in batches of 16 and have resized the images to 448×448. The images are in colour, so I assumed an input of torch.Size([16, 3, 448, 448]) would be correct. Do I need to rearrange my tensor to torch.Size([3, 448, 448, 16])? I’m just guessing here, as I’m fairly new to coding. I have looked online but haven’t been able to figure it out. Any help would be greatly appreciated.

#Defining CNN
class ConvolutionalNet(nn.Module):
  def __init__(self, num_classes=182):
    super().__init__()

    self.layer1 = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.layer2 = nn.Sequential(
        nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.layer3 = nn.Sequential(
        nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2),
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.layer4 = nn.Sequential(
        nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.layer5 = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=5, stride=1, padding=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    

    self.fc1 = nn.Linear(10*10*64, 240)
    self.fc2 = nn.Linear(240, 182)
  


  def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(x)
    out = self.layer3(x)
    out = self.layer4(x)
    out = self.layer5(x)
    out = out.reshape(out.size(0), -1)
    out = F.relu(self.fc1((x)))
    out = self.fc3(x)

      
    return out


#Creating instance
num_classes = 182

modelll = ConvolutionalNet(num_classes).to(device)
modelll


criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(modelll.parameters(), lr=3e-4)

#Training loop
modelll.train()   # Set the model into training mode, because certain operators perform differently during training and evaluation (e.g. dropout and batch normalization)
num_epochs = 5
total_epochs = notebook.tqdm(range(num_epochs))


for epoch in total_epochs:
  for i, (images, labels, m) in enumerate(train_loader):  
    # Move tensors to the configured device
    images = images.to(device)
    labels = labels.to(device)
    # Forward pass
    print(images.shape)
    outputs = modelll(images.float()) 
    loss = criterion(outputs, labels)
    
    # Backward and optimize
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    if (i + 1) % 10 == 0:
      total_epochs.set_description(
          'Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
              epoch + 1, num_epochs, i + 1, len(train_loader), loss.item()))


Answer

You aren’t passing each layer’s output on to the next layer; you keep feeding the original input x, so layer2, which expects the 16-channel output of layer1, receives the raw 3-channel image instead. That is exactly what the error message reports. Change your forward method to the following (note that __init__ defines self.fc2, not self.fc3, so the last call should use fc2):

  def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.layer3(out)
    out = self.layer4(out)
    out = self.layer5(out)
    out = out.reshape(out.size(0), -1)
    out = F.relu(self.fc1(out))
    out = self.fc2(out)
    return out
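
As a quick sanity check (a minimal sketch, assuming the ConvolutionalNet definition and the 448×448 inputs from the question; the dummy batch below is purely illustrative), you can trace the feature-map shapes on random data before training. One thing this will surface: with 448×448 inputs, five 2×2 max-pools leave a 14×14 feature map, so fc1 needs in_features of 14*14*64 (12544) rather than 10*10*64.

import torch

# Minimal shape trace: run only the convolutional layers on a dummy batch
# to see the flattened feature size that fc1 must accept.
model = ConvolutionalNet(num_classes=182)
dummy = torch.randn(16, 3, 448, 448)   # same shape as the real batches

with torch.no_grad():
    out = dummy
    for layer in (model.layer1, model.layer2, model.layer3,
                  model.layer4, model.layer5):
        out = layer(out)
        print(out.shape)               # each max-pool halves H and W

    flat = out.reshape(out.size(0), -1)
    print(flat.shape)                  # torch.Size([16, 12544]) -> in_features for fc1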
