
Tag: attention-model

MultiHeadAttention giving very different values between versions (PyTorch/TensorFlow)

I’m trying to recreate a transformer that was written in PyTorch and port it to TensorFlow. Everything was going pretty well until each version of MultiHeadAttention started giving extremely different outputs. Both layers are implementations of multi-headed attention as described in the paper “Attention Is All You Need”, so they should be able to produce the same output. I’m converting …
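A minimal sketch of how the two layers might be set up side by side, assuming an embedding size of 512 and 8 heads (these numbers are illustrative, not taken from the original post). Two common mismatches are that PyTorch's `embed_dim` is the total model dimension while TensorFlow's `key_dim` is the per-head dimension, and that each framework initializes its projection weights independently, so the outputs will differ unless the weights are explicitly copied across.

```python
import numpy as np
import torch
import tensorflow as tf

embed_dim, num_heads, batch, seq_len = 512, 8, 2, 10
x = np.random.rand(batch, seq_len, embed_dim).astype(np.float32)

# PyTorch: embed_dim is the TOTAL model dimension; batch_first=True makes the
# input layout (batch, seq, embed), matching TensorFlow's convention.
torch_mha = torch.nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
torch_out, _ = torch_mha(torch.from_numpy(x),
                         torch.from_numpy(x),
                         torch.from_numpy(x))

# TensorFlow: key_dim is the PER-HEAD dimension, so pass embed_dim // num_heads.
tf_mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads,
                                            key_dim=embed_dim // num_heads)
tf_out = tf_mha(query=tf.constant(x), value=tf.constant(x), key=tf.constant(x))

print(torch_out.shape)  # torch.Size([2, 10, 512])
print(tf_out.shape)     # (2, 10, 512)
```

Even with matching shapes, the values will only agree if the query/key/value/output projection weights are transferred from one layer to the other before comparing.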
