What’s the difference between tf.sub and just the minus operation in TensorFlow?

I am trying to use TensorFlow. Here is a very simple piece of code:

import tensorflow as tf

train = tf.placeholder(tf.float32, [1], name="train")
W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1")
loss = tf.pow(tf.sub(train, W1), 2)
step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

Just ignore the optimization part (the last line). The graph takes a floating-point number and trains W1 so as to reduce the squared difference.
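(For reference, a minimal way to run this, continuing the snippet above and assuming the pre-1.0 session API, might be:)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for _ in range(100):
    sess.run(step, feed_dict={train: [5.0]})
print(sess.run(W1))  # W1 converges toward 5.0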

My question is simple. If I use just the minus sign instead of “tf.sub”, as below, what is different? Will it cause a wrong result?

loss = tf.pow(train-W1, 2)

When I replace it, the result looks the same. If they are the same, why do we need the “tf.add/tf.sub” functions at all?

Can the built-in backpropagation calculation only be done with the “tf.*” functions?


Answer

Yes, - and + resolve to tf.sub and tf.add. If you look at the TensorFlow source code you will see that these operators on tf.Variable are overloaded with the tf.* methods.
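You can check this yourself. A minimal sketch (assuming the pre-1.0 API where tf.sub exists; it was later renamed tf.subtract) shows that both spellings produce the same kind of op in the graph:

import tensorflow as tf

a = tf.constant([3.0])
b = tf.constant([1.0])

c1 = tf.sub(a, b)  # explicit function call
c2 = a - b         # overloaded operator

# Both tensors come from the same kind of graph op:
print(c1.op.type)  # "Sub"
print(c2.op.type)  # "Sub"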

As to why both exist, I assume the tf.* ones are there for consistency, so that sub and, say, matmul can be used in the same way, while the operator overloading is for convenience.
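Backpropagation is not affected either: since both spellings build the same op, tf.gradients sees the same graph. A quick sketch (same pre-1.0 API assumption):

import tensorflow as tf

train = tf.placeholder(tf.float32, [1], name="train")
W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1")

g1 = tf.gradients(tf.pow(tf.sub(train, W1), 2), [W1])
g2 = tf.gradients(tf.pow(train - W1, 2), [W1])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Both gradients evaluate to the same value:
    print(sess.run(g1 + g2, feed_dict={train: [5.0]}))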
