I’m trying to create a custom loss function in which I have to slice the tensors multiple times. One example is listed below: This (and the entire loss function) works fine when I test it manually on self-made tensors y_true and y_pred, but when I use it as the model's loss it gives an error during model fitting (compiling goes fine).
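A minimal sketch of a slicing loss of this kind, assuming a Keras/TensorFlow model whose y_true and y_pred are 2-D tensors; the column split and the 0.5 weighting are made-up illustrations, not the asker's actual function. Errors that only appear at fit() time often come from mixing NumPy operations into the loss instead of staying with graph ops.

```python
import tensorflow as tf

def sliced_mse_loss(y_true, y_pred):
    # Hypothetical split: treat the first 2 columns and the rest separately.
    head_true, tail_true = y_true[:, :2], y_true[:, 2:]
    head_pred, tail_pred = y_pred[:, :2], y_pred[:, 2:]

    # Use tf ops (not NumPy) so the slicing stays inside the graph; NumPy calls
    # on symbolic tensors are a common cause of fit-time-only failures.
    head_loss = tf.reduce_mean(tf.square(head_true - head_pred))
    tail_loss = tf.reduce_mean(tf.square(tail_true - tail_pred))
    return head_loss + 0.5 * tail_loss

# model.compile(optimizer="adam", loss=sliced_mse_loss)
```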
Tag: loss-function
Manually computing cross entropy loss in PyTorch
I am trying to compute cross_entropy loss manually in PyTorch for an encoder-decoder model. I used the code posted here to compute it: Cross Entropy in PyTorch. I updated the code to discard padded tokens (-100). The final code is this: To verify that it works, I tested it on a text generation task, and I computed the loss
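A sketch of a manual cross entropy that skips positions labelled -100, matching what nn.CrossEntropyLoss(ignore_index=-100) computes; the shapes and variable names are assumptions, not the asker's code.

```python
import torch
import torch.nn.functional as F

def manual_cross_entropy(logits, targets, ignore_index=-100):
    """logits: (N, C), targets: (N,) with ignore_index marking padded tokens."""
    log_probs = F.log_softmax(logits, dim=-1)           # numerically stable log-softmax
    mask = targets != ignore_index                      # keep only real tokens
    safe_targets = targets.clone()
    safe_targets[~mask] = 0                             # any valid class; masked out below
    nll = -log_probs.gather(1, safe_targets.unsqueeze(1)).squeeze(1)
    return (nll * mask).sum() / mask.sum()              # mean over non-padded tokens

# Sanity check against the built-in loss:
logits = torch.randn(6, 10)
targets = torch.tensor([1, 3, -100, 7, -100, 2])
print(manual_cross_entropy(logits, targets))
print(F.cross_entropy(logits, targets, ignore_index=-100))
```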
PyTorch custom loss function vs nn.CrossEntropyLoss
After studying autograd, I tried to write a loss function myself. Here is my loss, and I compared it with torch.nn.CrossEntropyLoss; the values were the same. I thought that since these are different functions, the grad_fn would be different, but that it would not cause any problems. But something happened! After 4 epochs, the loss values turned to NaN. In contrast to myCEE, with nn.CrossEntropyLoss
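A common reason a hand-written cross entropy goes to NaN while nn.CrossEntropyLoss stays stable is computing log(softmax(x)) directly: the softmax can underflow to exactly 0 and log(0) blows up. The sketch below contrasts that with the log-softmax formulation; the toy logits are assumptions chosen to trigger the failure.

```python
import torch
import torch.nn.functional as F

def naive_cee(logits, targets):
    # Prone to inf/NaN: softmax can underflow to exactly 0, and log(0) = -inf.
    probs = torch.softmax(logits, dim=-1)
    return -torch.log(probs[torch.arange(len(targets)), targets]).mean()

def stable_cee(logits, targets):
    # log_softmax uses the log-sum-exp trick, which is what
    # nn.CrossEntropyLoss does internally.
    log_probs = F.log_softmax(logits, dim=-1)
    return -log_probs[torch.arange(len(targets)), targets].mean()

logits = torch.tensor([[100.0, -100.0], [50.0, -50.0]])
targets = torch.tensor([1, 0])
print(naive_cee(logits, targets))   # inf, and NaN once it hits the gradients
print(stable_cee(logits, targets))  # finite
print(F.cross_entropy(logits, targets))
```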
Trouble implementing “concurrent” softmax function from paper (PyTorch)
I am trying to implement the so-called ‘concurrent’ softmax function given in the paper “Large-Scale Object Detection in the Wild from Imbalanced Multi-Labels”. Below is the definition of the concurrent softmax: NOTE: I have left the (1 - r_ij) term out for the time being because I don’t think it applies to my problem, given that my training dataset has a
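The definition itself is not reproduced in the excerpt, so the sketch below is only one reading of the formula: it assumes each competing class j is down-weighted in the softmax denominator by (1 - r_ij), where r_ij is a concurrent-rate matrix. Treat the exact form as an assumption rather than the paper's verbatim equation.

```python
import torch

def concurrent_softmax(logits, r):
    """Assumed form: sigma_i = exp(z_i) / (sum_{j!=i} (1 - r[i, j]) * exp(z_j) + exp(z_i)).

    logits: (N, C) scores, r: (C, C) concurrent-rate matrix with values in [0, 1].
    """
    exp_z = torch.exp(logits - logits.max(dim=-1, keepdim=True).values)  # stabilised
    weights = 1.0 - r
    weights.fill_diagonal_(1.0)        # the own-class term keeps weight 1
    denom = exp_z @ weights.T          # denom[n, i] = sum_j (1 - r[i, j]) * exp_z[n, j]
    return exp_z / denom

# Toy check: with r = 0 this reduces to the ordinary softmax.
logits = torch.randn(4, 5)
r = torch.zeros(5, 5)
print(torch.allclose(concurrent_softmax(logits, r), torch.softmax(logits, dim=-1)))
```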
xlearn predictions give a different MSE than the one output by the function
The xlearn predict function gives a different MSE than what you get by looking at the predictions and calculating it yourself. Here is code to reproduce this; you can run it by cloning the xlearn repository and copying the code below into demo/regression/house_price in the repository. If you save it as min_eg.py, run it (after installing xlearn) as python min_eg.py.
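A hedged sketch of the kind of check described: train an FM model on the house-price demo data, predict, then recompute the MSE from the prediction file and compare it with the mse xlearn reports. The file names and the libsvm-style label column are assumptions based on the demo layout, not the asker's exact script.

```python
import numpy as np
import xlearn as xl

# Train on the demo data (file names assumed from demo/regression/house_price).
fm_model = xl.create_fm()
fm_model.setTrain("./house_price_train.txt")
fm_model.setValidate("./house_price_test.txt")
param = {"task": "reg", "lr": 0.2, "lambda": 0.002, "metric": "mse", "epoch": 10}
fm_model.fit(param, "./model.out")

# Predict on the test file and write one prediction per line.
fm_model.setTest("./house_price_test.txt")
fm_model.predict("./model.out", "./output.txt")

# Recompute MSE by hand: labels are the first field of each libsvm-formatted line.
preds = np.loadtxt("./output.txt")
labels = np.array([float(line.split()[0]) for line in open("./house_price_test.txt")])
print("manual MSE:", np.mean((preds - labels) ** 2))
# Compare this with the mse xlearn prints for the test/validation set.
```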
L1/L2 regularization in PyTorch
How do I add L1/L2 regularization in PyTorch without computing it manually? Answer: See the torch.optim documentation. Pass a weight_decay argument to the optimizer for L2 regularization.
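For L2, weight_decay on the optimizer is enough; there is no equivalent switch for L1, so a common approach is to add the L1 penalty to the loss yourself. A short sketch (the model, data, and lambda values are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# L2 regularization: weight_decay applies an L2 penalty through the optimizer update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# L1 regularization: add the penalty to the loss manually.
l1_lambda = 1e-4
criterion = nn.MSELoss()
x, y = torch.randn(32, 10), torch.randn(32, 1)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
optimizer.step()
```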