I’m trying to implement some calculation, but I can’t figure out how to vectorize my code without using loops. Let me explain: I have a matrix M[N,C] of either 0 or 1, another matrix Y[N,1] containing values in [0, C-1] (my classes), and another matrix ds[N,M] which is my dataset. My output matrix is of size grad[M,C] and should be calculated as …
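The formula itself is cut off in the excerpt, so the sketch below only illustrates the usual pattern these shapes suggest: build the 0/1 mask from Y with one_hot, then collapse the per-sample sum into one matrix product. All sizes here are placeholders.

```python
import torch

# Hypothetical sizes; the actual formula is cut off in the question.
N, M_feat, C = 8, 5, 3                 # M_feat stands in for the feature dim "M"
ds = torch.randn(N, M_feat)            # dataset, ds[N, M]
Y = torch.randint(0, C, (N, 1))        # class labels in [0, C-1], Y[N, 1]

# Build the 0/1 mask M[N, C] from Y without a loop.
mask = torch.nn.functional.one_hot(Y.squeeze(1), num_classes=C).float()

# Any sum of the form grad[m, c] = sum_n ds[n, m] * mask[n, c]
# collapses to a single matrix product, no Python loops needed:
grad = ds.T @ mask                     # shape (M, C)
```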
Tag: pytorch
Using PyTorch tensors with scikit-learn
Can I use PyTorch tensors instead of NumPy arrays while working with scikit-learn? I tried some methods from scikit-learn like train_test_split and StandardScaler, and it seems to work just fine, but is there anything I should know when I’m using PyTorch tensors instead of NumPy arrays? According to the scikit-learn FAQ (https://scikit-learn.org/stable/faq.html#how-can-i-load-my-own-datasets-into-a-format-usable-by-scikit-learn), the expected input formats are NumPy arrays or SciPy sparse matrices. Other …
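Since scikit-learn only guarantees support for NumPy arrays, the safe pattern is to convert explicitly at the boundary rather than rely on tensors happening to work. A minimal sketch with made-up data:

```python
import torch
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = torch.randn(100, 4)
y = torch.randint(0, 2, (100,))

# Convert to NumPy explicitly. .detach() is needed if the tensor requires
# grad, .cpu() if it lives on a GPU; .numpy() shares memory with CPU tensors.
X_np = X.detach().cpu().numpy()
y_np = y.detach().cpu().numpy()

X_train, X_test, y_train, y_test = train_test_split(X_np, y_np, test_size=0.2)
X_train = StandardScaler().fit_transform(X_train)

# scikit-learn returns NumPy arrays, so convert back before feeding a model.
X_train_t = torch.from_numpy(X_train).float()
```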
Giving output of one neural network as an input to another in pytorch
I have a pretrained convolutional neural network which produces an output of shape (X, 164), where X is the number of test examples, so the output layer has 164 nodes. I want to take this output and feed it to another network, which is simply a fully connected neural network whose first layer has 64 nodes and whose output layer has 1 node.
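Chaining the two is just calling one module on the other’s output. A minimal sketch, where `pretrained` is only a placeholder for the real CNN and the input shape is assumed:

```python
import torch
import torch.nn as nn

# Placeholder standing in for the pretrained CNN that outputs (X, 164).
pretrained = nn.Sequential(nn.Flatten(), nn.LazyLinear(164))

head = nn.Sequential(          # the second, fully connected network
    nn.Linear(164, 64),        # first layer: 64 nodes
    nn.ReLU(),
    nn.Linear(64, 1),          # output layer: 1 node
)

images = torch.randn(8, 3, 32, 32)   # assumed input shape
with torch.no_grad():                # keep the pretrained CNN frozen if desired
    feats = pretrained(images)       # (8, 164)
out = head(feats)                    # (8, 1)
```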
DataLoader throwing error TypeError: new(): data must be a sequence (got map)
I am trying to implement a bidirectional LSTM on time series data. The main file (Main.py) calls the data loader (data_loader.py) to load the data for the model, but I am unable to resolve the error TypeError: new(): data must be a sequence (got map). The following message is being received in the terminal: … The input data is in JSON format (below …
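The error message points at the usual cause: in Python 3, map() returns a lazy iterator, not a sequence, and the legacy torch.Tensor constructor rejects it. Materializing the map into a list fixes it. A sketch with assumed data:

```python
import torch

row = ["1.0", "2.5", "3.7"]          # e.g. values parsed from the JSON file

# Fails: map() is an iterator in Python 3, not a sequence.
# x = torch.Tensor(map(float, row))  # TypeError: new(): data must be a sequence (got map)

# Works: turn the map into a list first.
x = torch.tensor(list(map(float, row)))
```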
Resize feature vector from neural network
I am trying to approximate two embeddings (textual and visual). For the visual embedding, I am using VGG as the encoder; its output is a 1×1000 embedding. For the textual encoder, I am using a Transformer whose output is shaped 1×712. What I want is to convert both of these vectors to the same dimension …
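The standard way to align two embedding sizes is a learned linear projection head per modality. A minimal sketch; the shared dimension d = 512 is an arbitrary choice:

```python
import torch
import torch.nn as nn

d = 512                             # assumed shared embedding size

proj_visual = nn.Linear(1000, d)    # VGG output:        1 x 1000 -> 1 x d
proj_text = nn.Linear(712, d)       # Transformer output: 1 x 712 -> 1 x d

v = torch.randn(1, 1000)            # visual embedding
t = torch.randn(1, 712)             # textual embedding

v_shared = proj_visual(v)           # (1, d)
t_shared = proj_text(t)             # (1, d), now directly comparable
sim = torch.cosine_similarity(v_shared, t_shared)
```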
pytorch custom loss function nn.CrossEntropyLoss
After studying autograd, I tried to write a loss function myself and compared it with torch.nn.CrossEntropyLoss; the resulting loss values were the same. I thought that because they are different functions their grad_fn would differ, but that it wouldn’t cause any problems. But something happened! After 4 epochs, the loss values turned to NaN. Contrary to myCEE, with nn.CrossEntropyLoss …
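The question’s myCEE isn’t shown in the excerpt, but a very common cause of exactly this NaN-after-a-few-epochs behavior is computing log(softmax(x)) in two steps, which underflows to log(0). A sketch of the stable form:

```python
import torch
import torch.nn.functional as F

def my_cee(logits, target):
    # Unstable version (a frequent source of NaN after a few epochs):
    #   probs = F.softmax(logits, dim=1)
    #   loss = -torch.log(probs[range(len(target)), target]).mean()
    # softmax can underflow to exactly 0, and log(0) = -inf poisons the gradients.

    # Stable version: log_softmax fuses the two steps.
    log_probs = F.log_softmax(logits, dim=1)
    return F.nll_loss(log_probs, target)   # matches nn.CrossEntropyLoss

logits = torch.randn(4, 10, requires_grad=True)
target = torch.randint(0, 10, (4,))
assert torch.allclose(my_cee(logits, target),
                      F.cross_entropy(logits, target))
```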
NumPy + PyTorch Tensor assignment
Let’s assume we have a tensor representing an image of shape (910, 270, 1), which assigns a number (some index) to each pixel, with width=910 and height=270. We also have a NumPy array of size (N, 3) which maps each index to a 3-tuple. I now want to create a new NumPy array of shape (910, 270, 3) which …
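Reading the (N, 3) array as a palette indexed by the per-pixel value, NumPy advanced indexing does the whole lookup in one shot. A sketch with assumed values of N:

```python
import numpy as np
import torch

N = 16                                      # assumed number of distinct indices
img = torch.randint(0, N, (910, 270, 1))    # one index per pixel
palette = np.random.rand(N, 3)              # maps index -> 3-tuple

# Drop the trailing singleton dim, then index the palette with the index map;
# advanced indexing broadcasts over the whole image at once.
rgb = palette[img.numpy()[..., 0]]          # shape (910, 270, 3)
```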
Pooling for 1D tensor
I am looking for a way to reduce the length of a 1D tensor by applying a pooling operation. How can I do it? If I apply MaxPool1d, I get the error max_pool1d() input tensor must have 2 or 3 dimensions but got 1. Here is my code: … Answer Your initialization is fine; you’ve defined the first two parameters of …
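The error is about shape, not initialization: max_pool1d expects (N, C, L) or (C, L), so a plain 1D tensor needs extra dimensions added first and squeezed away afterwards. A minimal sketch with an assumed kernel size:

```python
import torch
import torch.nn.functional as F

x = torch.randn(100)                 # a plain 1D tensor

# Add batch and channel dims -> (1, 1, 100), pool, then drop them again.
pooled = F.max_pool1d(x[None, None, :], kernel_size=4)  # (1, 1, 25)
pooled = pooled.squeeze()            # back to a 1D tensor of length 25
```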
Share the output of one class with another class in Python
I have two DNNs; the first one returns two outputs. I want to use one of these outputs in a second class that represents another DNN, as in the following example: I want to pass the output (x) to the second class to be concatenated with another variable (v). I found a solution to make the variable (x) as a …
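Rather than sharing state between the classes, the simplest pattern is to pass the first network’s output into the second network’s forward and concatenate there. A sketch with assumed layer sizes:

```python
import torch
import torch.nn as nn

class NetA(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 8)

    def forward(self, inp):
        h = self.fc(inp)
        return h, h.relu()            # two outputs; the first is passed on

class NetB(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8 + 4, 1) # expects x concatenated with v

    def forward(self, x, v):
        return self.fc(torch.cat([x, v], dim=1))

net_a, net_b = NetA(), NetB()
inp, v = torch.randn(2, 10), torch.randn(2, 4)
x, _ = net_a(inp)
out = net_b(x, v)                     # (2, 1); gradients flow through both nets
```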
This torch project keeps telling me “Expected 2 or more dimensions (got 1)”
I was trying to make my own neural network using PyTorch, and I do not understand why my code is not working properly. The program keeps giving me this error: Expected 2 or more dimensions (got 1). Can anyone explain what is wrong with my code? Answer The tensor you use as the dataset, Xs, is shaped (n, 2). So when …
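The answer is cut off, but the message typically comes from a loss such as nn.CrossEntropyLoss receiving an unbatched 1D prediction, e.g. because the training loop iterates over the rows of Xs. A sketch of the fix under those assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 2))
loss_fn = nn.CrossEntropyLoss()
Xs = torch.randn(10, 2)               # dataset shaped (n, 2)
ys = torch.randint(0, 2, (10,))

# Iterating row by row yields 1D tensors of shape (2,), which the loss rejects.
x0, y0 = Xs[0], ys[0]
out = model(x0.unsqueeze(0))          # add the batch dim -> shape (1, 2)
loss = loss_fn(out, y0.unsqueeze(0))  # target must be batched too: shape (1,)

# Simpler still: keep the batch dimension and feed the whole dataset at once.
loss_all = loss_fn(model(Xs), ys)
```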