I want to extract features from certain blocks of the TimeSformer model and also remove the last two layers.
import torch
from timesformer.models.vit import TimeSformer

model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                    attention_type='divided_space_time',
                    pretrained_model='/path/to/pretrained/model.pyth')
Printing the model gives the following:
TimeSformer(
  (model): VisionTransformer(
    (dropout): Dropout(p=0.0, inplace=False)
    (patch_embed): PatchEmbed(
      (proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
    )
    (pos_drop): Dropout(p=0.0, inplace=False)
    (time_drop): Dropout(p=0.0, inplace=False)
    (blocks): ModuleList(   #************
      (0): Block(
        (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (temporal_attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_fc): Linear(in_features=768, out_features=768, bias=True)
        (drop_path): Identity()
        (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): Mlp(
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (act): GELU()
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (drop): Dropout(p=0.0, inplace=False)
        )
      )
      (1): Block(
        (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (temporal_attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_fc): Linear(in_features=768, out_features=768, bias=True)
        (drop_path): DropPath()
        (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): Mlp(
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (act): GELU()
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (drop): Dropout(p=0.0, inplace=False)
        )
      )
      . . . . . .
      (11): Block(
        (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (temporal_attn): Attention(
          (qkv): Linear(in_features=768, out_features=2304, bias=True)
          (proj): Linear(in_features=768, out_features=768, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
        )
        (temporal_fc): Linear(in_features=768, out_features=768, bias=True)
        (drop_path): DropPath()
        (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): Mlp(
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (act): GELU()
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (drop): Dropout(p=0.0, inplace=False)
        )
      )
    )
    (norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)   **** I want to remove this layer*****
    (head): Linear(in_features=768, out_features=400, bias=True)   **** I want to remove this layer*****
  )
)
Specifically, I want to extract the outputs of the 4th, 8th and 11th blocks of the model and remove the last two layers. How can I do this? I tried using TimeSformer.blocks[0], but that did not work.
Update:
I have a class, and I need the outputs of the aforementioned TimeSformer blocks to be the output of this class. The input of the class is a 5D tensor. This is the unmodified code I use to extract the outputs of those blocks:
class Model(nn.Module):
    def __init__(self, pretrained=False):
        super(Model, self).__init__()
        self.model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                                 attention_type='divided_space_time',
                                 pretrained_model='/home/user/models/TimeSformer_divST_16x16_448_K400.pyth')

        self.activation = {}
        def get_activation(name):
            def hook(model, input, output):
                self.activation[name] = output.detach()
            return hook

        self.model.model.blocks[4].register_forward_hook(get_activation('block4'))
        self.model.model.blocks[8].register_forward_hook(get_activation('block8'))
        self.model.model.blocks[11].register_forward_hook(get_activation('block11'))

        block4_output = self.activation['block4']
        block8_output = self.activation['block8']
        block11_output = self.activation['block11']

    def forward(self, x, out_consp=False):
        features2, features3, features4 = self.model(x)
Answer
To extract the intermediate output of specific layers, you can register forward hooks on them, as shown in the snippet below:
import torch
from timesformer.models.vit import TimeSformer

model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                    attention_type='divided_space_time',
                    pretrained_model='/path/to/pretrained/model.pyth')

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

model.model.blocks[4].register_forward_hook(get_activation('block4'))
model.model.blocks[8].register_forward_hook(get_activation('block8'))
model.model.blocks[11].register_forward_hook(get_activation('block11'))

# TimeSformer expects a 5D input: (batch, channels, frames, height, width)
x = torch.randn(1, 3, 8, 224, 224)
output = model(x)

block4_output = activation['block4']
block8_output = activation['block8']
block11_output = activation['block11']
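If, as in your update, you need these block outputs to be returned by a wrapper class, one way is to keep the same hooks and read the stored activations inside forward after running the model. The following is a minimal sketch, not a definitive implementation: the class name FeatureExtractor is a placeholder, the pretrained path is a placeholder, and it assumes the 5D input layout (batch, channels, frames, height, width) you describe.

import torch
import torch.nn as nn
from timesformer.models.vit import TimeSformer

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                                 attention_type='divided_space_time',
                                 pretrained_model='/path/to/pretrained/model.pyth')  # placeholder path
        self.activation = {}

        def get_activation(name):
            def hook(module, input, output):
                self.activation[name] = output.detach()
            return hook

        # Register the hooks once; they fire on every forward pass of the wrapped model.
        self.model.model.blocks[4].register_forward_hook(get_activation('block4'))
        self.model.model.blocks[8].register_forward_hook(get_activation('block8'))
        self.model.model.blocks[11].register_forward_hook(get_activation('block11'))

    def forward(self, x):
        # x: 5D tensor (batch, channels, frames, height, width)
        _ = self.model(x)  # run the full model so the hooks populate self.activation
        return (self.activation['block4'],
                self.activation['block8'],
                self.activation['block11'])

With this, features2, features3, features4 = FeatureExtractor()(x) gives you the three block outputs directly from the forward pass, rather than reading self.activation in __init__ before any forward call has happened.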
To remove the last two layers, you can replace them with Identity. Note that in your printout norm and head belong to the inner VisionTransformer, so they are reached via model.model (just like the blocks above):

model.model.norm = torch.nn.Identity()
model.model.head = torch.nn.Identity()
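As a quick sanity check, you can run a dummy clip through the modified model. This is a sketch under the assumption that the backbone returns a 768-dimensional feature per clip once the classification head is gone; the exact output shape depends on the TimeSformer implementation.

# Hypothetical check: with norm and head replaced by Identity, the output
# should be the backbone feature rather than the 400-way class logits.
x = torch.randn(1, 3, 8, 224, 224)  # (batch, channels, frames, height, width)
features = model(x)
print(features.shape)  # expected something like torch.Size([1, 768])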