What is the difference between ‘SAME’ and ‘VALID’ padding in tf.nn.max_pool of tensorflow?
My understanding is that ‘VALID’ means there is no zero padding outside the edges when we do max pooling.
According to A guide to convolution arithmetic for deep learning, the pooling operator uses no padding, i.e. tensorflow’s ‘VALID’.
But what does ‘SAME’ padding mean for max pooling in tensorflow?
Answer
I’ll give an example to make it clearer:
x: input image of shape [2, 3], 1 channel
valid_pad: max pool with 2×2 kernel, stride 2 and VALID padding.
same_pad: max pool with 2×2 kernel, stride 2 and SAME padding (this is the classic way to go)
The output shapes are:
valid_pad: here, no padding, so the output shape is [1, 1]
same_pad: here, we pad the image to the shape [2, 4] (with -inf, and then apply max pool), so the output shape is [1, 2]
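The shape rules behind these two numbers can be sketched in plain Python (a sketch of TensorFlow's documented output-size formulas, not TensorFlow's own code):

```python
import math

def pool_output_size(in_size, kernel, stride, padding):
    """Spatial output size of a pooling op for one dimension."""
    if padding == 'VALID':
        # Only windows that fit entirely inside the input produce an output.
        return math.ceil((in_size - kernel + 1) / stride)
    elif padding == 'SAME':
        # The input is padded so that every stride position produces an output.
        return math.ceil(in_size / stride)
    raise ValueError(padding)

# The 2x3 input with a 2x2 kernel and stride 2:
print(pool_output_size(2, 2, 2, 'VALID'), pool_output_size(3, 2, 2, 'VALID'))  # 1 1
print(pool_output_size(2, 2, 2, 'SAME'), pool_output_size(3, 2, 2, 'SAME'))    # 1 2
```

With VALID the width shrinks to 1 because the second 2×2 window would hang off the edge; with SAME it is ceil(3/2) = 2.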
import tensorflow as tf

x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])
x = tf.reshape(x, [1, 2, 3, 1])  # [batch, height, width, channels], the shape tf.nn.max_pool expects

valid_pad = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
same_pad = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

valid_pad.get_shape() == [1, 1, 1, 1]  # valid_pad holds [5.]
same_pad.get_shape() == [1, 1, 2, 1]   # same_pad holds [5., 6.]
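To see where [5.] and [5., 6.] come from without running TensorFlow, here is a pure-Python emulation of the two poolings on the same 2×3 input. It assumes the behavior described above: SAME pads on the bottom/right with -inf so padded cells never win the max (this is a sketch for this specific 2×2-kernel, stride-2 case, not a general implementation):

```python
import math

x = [[1., 2., 3.],
     [4., 5., 6.]]

def max_pool_2x2_stride2(img, padding):
    h, w = len(img), len(img[0])
    if padding == 'SAME':
        # Pad to a multiple of the stride with -inf so padding can't affect the max.
        out_h, out_w = math.ceil(h / 2), math.ceil(w / 2)
        ph, pw = out_h * 2, out_w * 2
        img = [row + [float('-inf')] * (pw - w) for row in img]
        img = img + [[float('-inf')] * pw for _ in range(ph - h)]
    else:  # VALID: drop any window that would fall off the edge.
        out_h, out_w = (h - 2) // 2 + 1, (w - 2) // 2 + 1
    return [[max(img[2*i][2*j], img[2*i][2*j+1],
                 img[2*i+1][2*j], img[2*i+1][2*j+1])
             for j in range(out_w)]
            for i in range(out_h)]

print(max_pool_2x2_stride2(x, 'VALID'))  # [[5.0]]
print(max_pool_2x2_stride2(x, 'SAME'))   # [[5.0, 6.0]]
```

VALID sees only the one full window covering columns 0–1 (max = 5); SAME gets a second window covering column 2 plus the -inf padding (max = 6), matching the TensorFlow shapes above.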