
Tag: transformer-model

MultiHeadAttention giving very different values between versions (PyTorch/TensorFlow)

I’m trying to recreate a transformer that was written in PyTorch and port it to TensorFlow. Everything was going pretty well until the two versions of MultiHeadAttention started giving extremely different outputs. Both are implementations of multi-headed attention as described in the paper “Attention Is All You Need”, so they should be able to produce the same output. I’m converting
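Both frameworks ultimately compute the same scaled dot-product attention from the paper, so given identical weights and matching input layouts (note that PyTorch’s nn.MultiheadAttention defaults to sequence-first input while Keras’s MultiHeadAttention is batch-first) they should agree. A minimal single-head NumPy sketch of the shared computation, useful as a neutral reference when comparing the two:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
# out has shape (4, 8); every row of w sums to 1
```

Feeding the same Q, K, V (extracted from each framework’s layer) through a reference like this helps isolate whether a mismatch comes from the math or from weight/layout conventions.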

Load a model as DPRQuestionEncoder in HuggingFace

I would like to load BERT’s weights (or those of any other transformer) into a DPRQuestionEncoder architecture, so that I can use HuggingFace’s save_pretrained method and plug the saved model into the RAG architecture for end-to-end fine-tuning. But I got the following error. I am using the latest version of Transformers. Answer As already mentioned in the comments, DPRQuestionEncoder does
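As a rough illustration of the idea (not the actual HuggingFace API): loading plain BERT weights into the DPR wrapper typically comes down to remapping state-dict key prefixes, since DPRQuestionEncoder nests the BERT model under an extra attribute. A sketch with a toy dict, assuming a `question_encoder.bert_model` prefix — inspect the real model’s `state_dict().keys()` before relying on this name:

```python
# Sketch: remap flat BERT state-dict keys onto the nested DPR layout.
# The "question_encoder.bert_model" prefix is an assumption; check the
# target model's state_dict() keys to confirm the actual nesting.
DPR_PREFIX = "question_encoder.bert_model."

def remap_bert_to_dpr(bert_state_dict):
    return {DPR_PREFIX + key: value for key, value in bert_state_dict.items()}

toy_bert = {
    "embeddings.word_embeddings.weight": "tensor-0",
    "encoder.layer.0.attention.self.query.weight": "tensor-1",
}
remapped = remap_bert_to_dpr(toy_bert)
# keys now carry the nested prefix expected by the wrapper
```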

PyTorch’s nn.TransformerEncoder “src_key_padding_mask” not functioning as expected

I’m working with PyTorch’s nn.TransformerEncoder module. My input samples have (as usual) the shape (batch-size, seq-len, emb-dim). All samples in a batch have been zero-padded to the length of the longest sample in that batch, so I want attention over the all-zero (padded) positions to be ignored. The documentation says to add an argument src_key_padding_mask to the forward
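For reference, src_key_padding_mask is expected to have shape (batch, seq_len), with True marking positions the attention should ignore; with zero-padded inputs like these it can be derived from the data itself. A small NumPy sketch (the zero-padding convention is taken from the question):

```python
import numpy as np

# Two sequences of lengths 3 and 2, zero-padded to seq_len 4,
# with emb_dim 2: shape (batch, seq_len, emb_dim).
x = np.array([
    [[0.5, 1.0], [0.2, 0.3], [0.9, 0.1], [0.0, 0.0]],
    [[1.1, 0.4], [0.7, 0.6], [0.0, 0.0], [0.0, 0.0]],
])

# True where every embedding dimension is zero, i.e. a padded position.
# Shape (batch, seq_len), matching what src_key_padding_mask expects.
src_key_padding_mask = (x == 0.0).all(axis=-1)
```

In PyTorch the equivalent is `(x == 0).all(dim=-1)` on the input tensor, passed as `src_key_padding_mask` to the encoder’s forward call.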
