I have a tensor with my LSTM inputs X (in PyTorch) as well as the matching predictions Y_hat.
I want to add Y_hat as columns to the original X.
The problem is that the LSTM uses a sliding window of length seq_length. If the sequence length is 3, X has 6 variables, and Y_hat has 2 variables, I have something like this:
First entry of Tensor:
X1: [1 2 3 4 5 6] [5 4 6 7 8 9] [3 6 8 7 4 8], Y_hat1: [0 1]
X2: [5 4 6 7 8 9] (repeated from X1) [3 6 8 7 4 8] (repeated from X1) [4 8 7 9 8 4], Y_hat2: [1 1]
and so on.
Is there an easy Pythonic command to combine X with Y_hat so I can quickly get:
[3 6 8 7 4 8 0 1] [4 8 7 9 8 4 1 1]
Answer
Having defined X1, X2, Y_hat1, and Y_hat2:
import torch

X1 = torch.tensor([[1, 2, 3, 4, 5, 6],
                   [5, 4, 6, 7, 8, 9],
                   [3, 6, 8, 7, 4, 8]])
Y_hat1 = torch.tensor([0, 1])

X2 = torch.tensor([[5, 4, 6, 7, 8, 9],
                   [3, 6, 8, 7, 4, 8],
                   [4, 8, 7, 9, 8, 4]])
Y_hat2 = torch.tensor([1, 1])
As well as stacks of inputs (X) and targets (Y):
X = torch.stack([X1, X2])
Y = torch.stack([Y_hat1, Y_hat2])
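Here X has shape (2, 3, 6), i.e. two sliding windows of length 3 with 6 features each, and Y has shape (2, 2).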
You can perform the desired operation using concatenation and stacking operators:
>>> X = torch.stack([X1, X2])
>>> Y = torch.stack([Y_hat1, Y_hat2])
>>> torch.stack(tuple(torch.cat((x[-1], y)) for x, y in zip(X, Y)))
Which, expanded, corresponds to:
>>> torch.stack((
...     torch.cat((X1[-1], Y_hat1)),
...     torch.cat((X2[-1], Y_hat2))))
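With the tensors defined above, both forms should return:

tensor([[3, 6, 8, 7, 4, 8, 0, 1],
        [4, 8, 7, 9, 8, 4, 1, 1]])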
In a more vectorized fashion you can do:
>>> torch.hstack((X[:, -1], Y))
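If it helps, here is a small self-contained sketch of the same idea using torch.cat along the feature dimension, which generalizes to any number of windows; the shapes and variable names below are illustrative assumptions, not part of the original question:

import torch

# Illustrative shapes: N sliding windows, seq_length 3, 6 input features, 2 predictions.
N = 4
X = torch.randn(N, 3, 6)       # stacked LSTM input windows
Y_hat = torch.randn(N, 2)      # one prediction row per window

# Take the last timestep of each window and append the prediction columns.
out = torch.cat((X[:, -1], Y_hat), dim=1)
print(out.shape)               # torch.Size([4, 8])

For 2-D tensors, torch.hstack is equivalent to torch.cat(..., dim=1), so either form should give the (num_windows, num_features + num_targets) result asked for.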