I am trying to compute the cross-entropy loss manually in PyTorch for an encoder-decoder model. I used the code posted here to compute it: Cross Entropy in PyTorch. I updated that code to discard padded tokens (-100); the final code is below. To verify that it works correctly, I tested it on a text generation task and computed the loss.
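The poster's final code is not reproduced above; the following is a minimal sketch of such a manual computation, assuming logits of shape (N, vocab_size) and integer targets with padding marked as -100 (manual_cross_entropy is an illustrative name):

```python
import torch
import torch.nn.functional as F

def manual_cross_entropy(logits, targets, ignore_index=-100):
    # Mean cross-entropy over non-padded tokens only.
    log_probs = F.log_softmax(logits, dim=-1)
    mask = targets != ignore_index
    safe_targets = targets.clamp(min=0)   # replace -100 so gather() gets a valid index
    nll = -log_probs.gather(1, safe_targets.unsqueeze(1)).squeeze(1)
    return (nll * mask).sum() / mask.sum()  # average over real tokens

# Sanity check against the built-in implementation
logits = torch.randn(6, 10)
targets = torch.tensor([1, 3, -100, 4, -100, 7])
print(torch.allclose(manual_cross_entropy(logits, targets),
                     F.cross_entropy(logits, targets, ignore_index=-100)))  # True
```

Note the division by mask.sum() rather than the batch size: this matches nn.CrossEntropyLoss, which averages only over non-ignored positions.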
PyTorch custom loss function vs. nn.CrossEntropyLoss
After studying autograd, I tried to write a loss function myself. Here is my loss; I compared it with torch.nn.CrossEntropyLoss, and the resulting values were the same. I thought that, since they are different functions, the grad_fn would differ, but that this would not cause any problems. But something happened: after 4 epochs, the loss values turned to NaN with my function (myCEE), which did not happen with nn.CrossEntropyLoss.
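The poster's myCEE is not shown, but the classic cause of this symptom is computing the cross entropy as log(softmax(x)) in two steps: once a probability underflows to zero, torch.log returns -inf and the backward pass produces NaN. nn.CrossEntropyLoss avoids this by applying log_softmax (the log-sum-exp trick) internally. A minimal sketch of the two variants (myCEE_naive and myCEE_stable are illustrative names):

```python
import torch
import torch.nn.functional as F

def myCEE_naive(logits, targets):
    # softmax followed by log: a probability that underflows to 0
    # makes torch.log return -inf, and gradients then become NaN
    probs = torch.softmax(logits, dim=-1)
    return -torch.log(probs[torch.arange(len(targets)), targets]).mean()

def myCEE_stable(logits, targets):
    # log_softmax uses the log-sum-exp trick, so log(0) never occurs
    log_probs = F.log_softmax(logits, dim=-1)
    return -log_probs[torch.arange(len(targets)), targets].mean()

logits = torch.tensor([[100.0, -100.0]])  # extreme logits, as can occur late in training
targets = torch.tensor([1])
print(myCEE_naive(logits, targets))       # inf -> NaN in backward
print(myCEE_stable(logits, targets))      # 200.0
print(F.cross_entropy(logits, targets))   # 200.0, matches the stable version
```

Both versions return identical values on moderate logits, which is why the comparison looked fine at first and only diverged after several epochs.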
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'
After running this code, I get the error above from the categorical focal loss, and I cannot tell where the int64 is coming from. The model description is in the code; categorical focal loss is used as the model's loss function. When I train on the train dataset, I get the error mentioned above, and I do not understand how to convert it to int64 to resolve it.
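The implementation in question is not shown, but this 'Mul' dtype error usually means integer labels are being multiplied with float32 predictions inside the loss; the usual fix is to cast the labels to float32 rather than converting anything to int64. A minimal sketch of a typical Keras-style categorical focal loss with the cast added (the gamma/alpha values and exact formula are illustrative assumptions):

```python
import tensorflow as tf

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    def loss(y_true, y_pred):
        # Labels often arrive as int64; the 'Mul' op in the traceback
        # requires both operands to share a dtype, so cast first.
        y_true = tf.cast(y_true, tf.float32)
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        cross_entropy = -y_true * tf.math.log(y_pred)
        weight = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * cross_entropy, axis=-1)
    return loss

# int64 one-hot labels like these trigger the Mul dtype error without the cast
y_true = tf.constant([[0, 1, 0]], dtype=tf.int64)
y_pred = tf.constant([[0.1, 0.8, 0.1]], dtype=tf.float32)
print(categorical_focal_loss()(y_true, y_pred))
```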