I don't know how to explain it correctly, so the title might be misleading. What I want to do is move columns from a 3D tensor t1 to another 3D tensor t2 according to a set of indices. There's a dictionary td, and a (k,v) pair in td means that the kth column of t1 will become the vth column of t2.
Tag: pytorch
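The question above is cut off, but the operation it describes maps cleanly onto advanced indexing. A minimal sketch, assuming "column" means the last dimension of the 3D tensor; the shapes and the td mapping below are made up:

```python
import torch

t1 = torch.randn(2, 3, 5)
t2 = torch.zeros_like(t1)
td = {0: 4, 1: 2, 3: 0}  # k-th column of t1 -> v-th column of t2

src = torch.tensor(list(td.keys()))
dst = torch.tensor(list(td.values()))

# Advanced indexing moves all mapped columns in one vectorized assignment.
t2[:, :, dst] = t1[:, :, src]
```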
Problem with casting data to the GPU in PyTorch
I'm trying to build an image classifier, but I'm having a problem casting the data to the GPU. The model is already on CUDA, but I get an error. What's the problem with input.to(args['device'])? Answer UPDATE: According to the OP, an additional data.to(device) before the train loop caused this issue. You are probably getting a string like 0 or
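For reference, the pattern the answer points at looks roughly like this: a minimal sketch, assuming args['device'] holds a proper device string such as 'cuda:0' (a bare '0' would fail), and moving each batch to the device inside the loop rather than the whole dataset up front:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

args = {'device': 'cuda:0' if torch.cuda.is_available() else 'cpu'}

model = torch.nn.Linear(10, 2).to(args['device'])
loader = DataLoader(TensorDataset(torch.randn(8, 10), torch.randint(0, 2, (8,))),
                    batch_size=4)

for inputs, labels in loader:
    inputs = inputs.to(args['device'])   # move each batch inside the loop,
    labels = labels.to(args['device'])   # not the whole dataset beforehand
    outputs = model(inputs)
```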
RuntimeError: Given groups=1, weight of size [32, 16, 5, 5], expected input[16, 3, 448, 448] to have 16 channels, but got 3 channels instead
I am getting the following error and can't figure out why. I printed the input size of my tensor before it gets fed to the CNN: Here is my error message: I defined a CNN with five convolutional layers and two fully connected layers. I am feeding in batches of 16 and have resized the images to (448×448). The
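A hedged reading of the error: the first convolution was declared with in_channels=16 while the images are RGB, so it must take 3 input channels instead. A minimal sketch of the mismatch, with layer sizes made up apart from those in the error message:

```python
import torch
import torch.nn as nn

bad_first_conv = nn.Conv2d(16, 32, kernel_size=5)   # weight [32, 16, 5, 5]
good_first_conv = nn.Conv2d(3, 32, kernel_size=5)   # matches RGB input

x = torch.randn(16, 3, 448, 448)  # batch of 16 RGB images, 448x448
out = good_first_conv(x)          # works; bad_first_conv(x) raises the error
print(out.shape)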
Difference between the calculation of the training loss and validation loss using pytorch
I want to use the following code from this traditional image classification problem for my regression problem. The code can be found here: GeeksforGeeks: Training Neural Networks with Validation using PyTorch. I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can't see why the validation loss
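For context, the accumulation convention the question asks about looks roughly like this: a minimal sketch with a hypothetical model/loader, weighting each batch loss by its size so the result is a per-sample average for both training and validation:

```python
import torch

def epoch_loss(model, loader, loss_fn, train=False):
    total = 0.0
    with torch.set_grad_enabled(train):
        for x, y in loader:
            loss = loss_fn(model(x), y)
            # loss.item() is the batch mean; weight by batch size so the
            # average stays per-sample even with an uneven last batch.
            total += loss.item() * x.size(0)
    return total / len(loader.dataset)
```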
I think I have reconstructed the computational graph, but it still tells me "Trying to backward through the graph a second time". Why?
An image to describe my question. From my point of view, in every iteration the computational graph is constructed at the first arrow, then used and deleted at the second arrow during the backward pass. So why does it tell me: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they
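A minimal sketch of one common way this error arises, assuming part of the graph is built once outside the training loop and then reused:

```python
import torch

w = torch.randn(3, requires_grad=True)

# Pitfall (an assumed cause): part of the graph is built once, outside the loop.
hidden = (w ** 2).sum()
# Calling hidden.backward() in a loop succeeds once, then raises
# "Trying to backward through the graph a second time", because the first
# backward() frees the saved tensors of hidden's graph.

# Fix: rebuild the whole graph inside every iteration
# (or pass retain_graph=True if the reuse is intentional).
for _ in range(3):
    loss = (w ** 2).sum()
    loss.backward()
    w.grad = None  # reset gradients as optimizer.zero_grad() would
```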
correct shape (BS,H,W,C) not working in torchvision.utils.save_image
Let BS be the batch size, H the height, W the width, and C the number of channels, which is 3 in my case. When I save my image in the shape (BS,C,H,W) it works very well, but the image is unreadable since the format is wrong. But when I reshape my image into the right format, which
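A minimal sketch of the usual fix, assuming the data really is laid out as (BS, H, W, C): permute the axes rather than reshape, since reshape keeps the raw memory order and scrambles the pixels:

```python
import torch
from torchvision.utils import save_image

imgs_hwc = torch.rand(4, 64, 64, 3)      # hypothetical (BS, H, W, C) batch

imgs_chw = imgs_hwc.permute(0, 3, 1, 2)  # reorder axes, keep pixel layout
save_image(imgs_chw, 'grid.png')         # save_image expects (BS, C, H, W)

# Wrong: imgs_hwc.reshape(4, 3, 64, 64) has the right shape but reinterprets
# memory, producing the unreadable image described above.
```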
pytorch CUDA out of memory during inference
I think this is a very basic question; my apologies, as I am very new to PyTorch. I am trying to find out whether an image has been manipulated using MantraNet. After running 2-3 inferences I get CUDA out of memory, and even after restarting the kernel I keep getting the same error. The error is given below: RuntimeError:
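A hedged sketch of the usual inference-time mitigations; the tiny model and input below are stand-ins for MantraNet and a real image batch:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Conv2d(3, 8, 3).to(device).eval()
batch = torch.randn(1, 3, 256, 256, device=device)

with torch.no_grad():            # no autograd graph: graphs kept alive by
    output = model(batch)        # repeated inferences are a common OOM cause

result = output.cpu()            # keep only CPU copies of the results
del output                       # drop GPU references first...
torch.cuda.empty_cache()         # ...then release cached blocks to the driver
```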
What are the main reasons why some network parameters might become nan after calling optimizer.step in Pytorch?
I am trying to understand why one or two parameters in my PyTorch neural network occasionally become NaN after calling optimizer.step(). I have already checked the gradients after calling .backward() and just before calling the optimizer: they neither contain NaNs nor are very large. I am doing gradient clipping, but I don't think that can be responsible, since
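A minimal diagnostic sketch (not a fix): enable anomaly detection and check parameters and gradients on both sides of optimizer.step() to localize where the NaNs first appear. report_nans is a hypothetical helper:

```python
import torch

torch.autograd.set_detect_anomaly(True)  # flags the op that produces nan/inf

def report_nans(model, when):
    for name, p in model.named_parameters():
        if torch.isnan(p).any():
            print(f'{when}: parameter {name} contains NaN')
        if p.grad is not None and torch.isnan(p.grad).any():
            print(f'{when}: gradient of {name} contains NaN')

# Usage: call report_nans(model, 'pre-step') and report_nans(model,
# 'post-step') around optimizer.step() each iteration.
```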
manually computing cross entropy loss in pytorch
I am trying to compute the cross-entropy loss manually in PyTorch for an encoder-decoder model. I used the code posted here to compute it: Cross Entropy in PyTorch. I updated the code to discard padded tokens (-100). The final code is this: To verify that it works, I tested it on a text generation task and computed the loss
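A minimal sketch of the manual computation with padded tokens (-100) masked out, checked against F.cross_entropy with ignore_index=-100; the shapes and values below are made up:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(6, 10)                       # (tokens, vocab)
targets = torch.tensor([1, 4, -100, 7, -100, 2])  # -100 marks padding

log_probs = F.log_softmax(logits, dim=-1)
mask = targets != -100
picked = log_probs[mask].gather(1, targets[mask].unsqueeze(1)).squeeze(1)
manual = -picked.mean()                           # mean over non-padded tokens

reference = F.cross_entropy(logits, targets, ignore_index=-100)
assert torch.allclose(manual, reference)
```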
Creating and Using a PyTorch DataLoader
I am trying to create a PyTorch Dataset and DataLoader object using sample data. This is the tab-separated dataset: This is the code to create the Dataset above and the DataLoader object: The code is simply saved with the filename 'demo.py'. The code should successfully execute once the command 'python demo.py' is run at a command prompt. I
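Since the dataset and code are not reproduced in the excerpt, here is a minimal sketch of the usual pattern, assuming a tab-separated file whose last column is an integer label; the filename and column layout are hypothetical:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class TabSeparatedDataset(Dataset):
    def __init__(self, path):
        with open(path) as f:
            rows = [line.strip().split('\t') for line in f if line.strip()]
        self.x = torch.tensor([[float(v) for v in r[:-1]] for r in rows])
        self.y = torch.tensor([int(r[-1]) for r in rows])

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

if __name__ == '__main__':  # so 'python demo.py' runs the demo directly
    ds = TabSeparatedDataset('demo_data.txt')
    for xb, yb in DataLoader(ds, batch_size=3, shuffle=True):
        print(xb.shape, yb)
```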