
How to avoid two variables referring to the same data? #Pytorch

During initialization, I tried to reduce repetition in my code, so instead of:

output = (torch.zeros(2, 3),
          torch.zeros(2, 3))

I wrote:

z = torch.zeros(2, 3)
output = (z, z)

However, I found that the second method is wrong.

If I assign the data to variables h and c, any change to h is also applied to c:

h, c = output
print(h, c)
h += torch.ones(2, 3)
print(h, c)

Output of the test above:

tensor([[0., 0., 0.],
        [0., 0., 0.]]) tensor([[0., 0., 0.],
        [0., 0., 0.]])
tensor([[1., 1., 1.],
        [1., 1., 1.]]) tensor([[1., 1., 1.],
        [1., 1., 1.]])
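The aliasing behind this result can be checked directly: both entries of the tuple are the same object, which `is` and `data_ptr()` (the address of the underlying storage) confirm. A minimal sketch:

```python
import torch

z = torch.zeros(2, 3)
output = (z, z)

h, c = output
# Both names refer to the exact same tensor object...
print(h is c)                         # True
# ...backed by the same underlying storage.
print(h.data_ptr() == c.data_ptr())   # True
```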

Is there a more elegant way to initialize two independent variables?



I agree that your initial version needs no modification, but if you do want an alternative, consider:

z = torch.zeros(2, 3)
output = (z, z.clone())

The reason the other version (output = (z, z)) doesn’t work, as you’ve correctly discovered, is that no copy is made: both entries of the tuple are just references to the same tensor z.
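With clone(), the second entry is an independent copy, so an in-place change to one tensor no longer affects the other. A quick sketch repeating your test with the clone:

```python
import torch

z = torch.zeros(2, 3)
output = (z, z.clone())   # clone() allocates a separate copy of the data

h, c = output
h += torch.ones(2, 3)     # in-place update touches h only

print(h)  # all ones
print(c)  # still all zeros: c is unaffected
```

Another common pattern is simply to build each tensor separately, e.g. `tuple(torch.zeros(2, 3) for _ in range(2))`, which avoids the aliasing in the first place.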
