How to add two separate layers on top of one layer using PyTorch?

I want to add two separate layers on top of one layer (or a pre-trained model). Is it possible to do this using PyTorch?


Answer

Yes, when defining your model’s forward function, you can specify how the inputs should be passed through the layers.

For example:

def forward(self, X):
    X = self.common_layer(X)
    X = self.activation_fn(X)
    Xa = self.layer_a(X)
    Xb = self.layer_b(X)
    # now combine the outputs of the parallel layers however you please,
    # e.g. concatenate along the feature dimension (dim=1 for batched 2-D input)
    return self.combining_layer(torch.cat([Xa, Xb], dim=1))

where forward is a method of MyNet:

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # define common_layer, activation_fn, layer_a, layer_b, and combining_layer
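
For concreteness, here is a minimal sketch of such a module. The layer sizes (10 input features, a 20-unit shared trunk, two 5-unit parallel heads, and a scalar output) are hypothetical placeholders:

import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.common_layer = nn.Linear(10, 20)   # shared trunk
        self.activation_fn = nn.ReLU()
        self.layer_a = nn.Linear(20, 5)         # parallel head A
        self.layer_b = nn.Linear(20, 5)         # parallel head B
        # maps the concatenated head outputs (5 + 5 = 10 features) to a scalar
        self.combining_layer = nn.Linear(10, 1)

    def forward(self, X):
        X = self.common_layer(X)
        X = self.activation_fn(X)
        Xa = self.layer_a(X)
        Xb = self.layer_b(X)
        return self.combining_layer(torch.cat([Xa, Xb], dim=1))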

Training

The model can be trained just like any other PyTorch model.

Because the combined output depends on both layer_a and layer_b, computing the gradient of the loss propagates back through both layers, so the optimizer updates the parameters of each. The two layers are optimized independently, since each holds its own parameter set.

For example:

model = MyNet()
...
optimizer.zero_grad()                          # clear gradients from the previous step
predictions = model(input_batch)               # forward pass through both branches
loss = my_loss_fn(predictions, ground_truth)
loss.backward()                                # backpropagates through layer_a and layer_b
optimizer.step()                               # updates all parameters
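
As a quick sanity check (using the hypothetical MyNet sketched above), you can confirm that a single backward pass populates gradients in both parallel layers:

import torch
import torch.nn.functional as F

model = MyNet()
X = torch.randn(4, 10)   # dummy batch: 4 samples, 10 features
y = torch.randn(4, 1)    # dummy targets

loss = F.mse_loss(model(X), y)
loss.backward()

# both heads received gradients from the single combined loss
print(model.layer_a.weight.grad is not None)  # True
print(model.layer_b.weight.grad is not None)  # True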