This can be accomplished with

```python
cube = (cube[ ::2,  ::2,  ::2] + cube[1::2,  ::2,  ::2]
      + cube[ ::2, 1::2,  ::2] + cube[1::2, 1::2,  ::2]
      + cube[ ::2,  ::2, 1::2] + cube[1::2,  ::2, 1::2]
      + cube[ ::2, 1::2, 1::2] + cube[1::2, 1::2, 1::2])
```
But I’m wondering if there is a function to accomplish this quickly and cleanly. If not, is there a canonical name for this operation?
Answer
This sounds like a classic strided pooling operation — sum pooling with a 2×2×2 window and a stride of 2.
You can do this in many ways, but the most straightforward is probably to use skimage’s block_reduce, like so:
```python
import numpy as np
from skimage.measure import block_reduce

a = np.arange(6 * 8 * 4).reshape(6, 8, 4)
reduced_a = block_reduce(a, block_size=(2, 2, 2), func=np.sum)

# Make sure the dimensions are as expected:
print(reduced_a.shape)  # (3, 4, 2)

# Test a specific value for correct reduction of the array:
assert reduced_a[0, 0, 0] == np.sum(a[:2, :2, :2])

# And more generally, for any index:
i, j, k = 2, 0, 1
assert reduced_a[i, j, k] == np.sum(a[i*2:i*2+2, j*2:j*2+2, k*2:k*2+2])
```
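If you'd rather avoid the skimage dependency, the same 2×2×2 sum pooling can be written as a plain NumPy reshape followed by a reduction. A minimal sketch, assuming every dimension is even:

```python
import numpy as np

a = np.arange(6 * 8 * 4).reshape(6, 8, 4)

# Split each axis into (n_blocks, 2), then sum over the three block axes.
pooled = a.reshape(3, 2, 4, 2, 2, 2).sum(axis=(1, 3, 5))
print(pooled.shape)  # (3, 4, 2)
```

This generalizes to other block sizes by replacing the 2s, as long as each axis length is divisible by its block size.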
Another way is to directly convolve your “cube” with a “ones” kernel and then subsample with the required stride of 2:
```python
from scipy import ndimage

convolved_a = ndimage.convolve(a, np.ones((2, 2, 2)))[::2, ::2, ::2]

# Assert that this is indeed the same:
assert np.all(convolved_a == reduced_a)
```
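For completeness, recent NumPy (1.20+, an assumption about your environment) also provides sliding_window_view, which expresses the same idea as an explicit window view, a stride-2 subsample, and a sum over the window axes:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(6 * 8 * 4).reshape(6, 8, 4)

# View every 2x2x2 window, keep every second window along each axis
# (i.e. stride 2), then sum over the three trailing window axes.
windows = sliding_window_view(a, (2, 2, 2))[::2, ::2, ::2]
pooled = windows.sum(axis=(-3, -2, -1))
print(pooled.shape)  # (3, 4, 2)
```

The view itself allocates no new data; only the final sum materializes an array, which makes this a reasonable sketch for large cubes too.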