
PyTorch – Inferring linear layer in_features

I am building a toy model to take in some images and give me a classification. My model looks like:

conv2d -> pool -> conv2d -> linear -> linear.

My issue is that when we create the model, we have to calculate the first linear layer's in_features from the size of the input image. If we get images of a different size, we have to recalculate in_features for that linear layer. Why do we have to do this? Can't it just be inferred?
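To illustrate the problem, here is a minimal sketch of such a model (channel counts and kernel sizes are hypothetical, not from the question): with a plain nn.Linear, in_features must be computed by hand from the input image size.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    # in_features depends on the input image size and must be
    # recomputed whenever that size changes.
    def __init__(self, in_features):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(16, 32, 3)
        self.fc1 = nn.Linear(in_features, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        x = self.pool(self.conv1(x))
        x = self.conv2(x)
        x = x.flatten(1)  # flatten all dims except batch
        return self.fc2(self.fc1(x))

# For a 32x32 input: conv1 -> 30x30, pool -> 15x15, conv2 -> 13x13,
# so in_features = 32 * 13 * 13 = 5408, recomputed for every new size.
net = Net(32 * 13 * 13)
out = net(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```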


Answer

As of version 1.8, PyTorch has torch.nn.LazyLinear, which infers the input dimension:

A torch.nn.Linear module where in_features is inferred.
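For example, the question's architecture can be written with nn.LazyLinear so that in_features is inferred on the first forward pass (the channel counts and layer sizes here are hypothetical):

```python
import torch
import torch.nn as nn

# conv2d -> pool -> conv2d -> linear -> linear, as in the question.
# nn.LazyLinear(64) stands in for nn.Linear(in_features, 64);
# in_features is materialized automatically on the first forward pass.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3),
    nn.Flatten(),
    nn.LazyLinear(64),
    nn.Linear(64, 10),
)

out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```

Note that the input size still has to be consistent across calls: once the lazy layer has been initialized by the first batch, feeding images of a different size will fail, so LazyLinear removes the hand calculation, not the fixed-size requirement.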

User contributions licensed under: CC BY-SA