
numpy slicing using multiple conditions where one of the conditions searches the neighborhood of an element

The problem is to take a black-and-white image, detect all the places where white borders on black, keep that white, and turn all other white pixels black. I know how to do this with normal for-loops and lists, but I want to do it with numpy, which I am not that familiar with. Here is what I have so far:

>>>from PIL import Image
>>>import numpy as np

>>>a = Image.open('a.png')
>>>a = a.convert('L')
>>>a_np = np.array(a)

>>>a_np

array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)

>>>mask = np.pad(a_np[1:-1,1:-1],1,mode='wrap') != 0
>>>mask
array([[False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       ...,
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False]])
>>> np.where(mask == True)
(array([ 98,  98,  98, ..., 981, 981, 981]), array([393, 394, 395, ..., 684, 685, 686]))
>>> a_np[mask] = 0
>>> a_np
array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)
>>> np.where(a_np == 1)
(array([], dtype=int64), array([], dtype=int64))

Basically, I am trying to create a mask that finds the neighbors of every element in the array and, for the white pixels that do not have a black neighbor, turns them black. But no matter what I try, I either get all black elements or the same array that I started with. Numpy or OpenCV solutions are welcome.
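
For reference, here is a minimal pure-numpy sketch of that neighbor-mask idea, built from shifted slices of a padded copy of the image; the edge_mask helper and the 8-neighbor connectivity are assumptions for illustration, not code from the post:

import numpy as np

def edge_mask(img):
    # True for white pixels that have at least one black neighbor (8-connectivity).
    white = img != 0
    # Replicate the border so pixels outside the image never count as black neighbors.
    padded = np.pad(white, 1, mode='edge')
    has_black_neighbor = np.zeros_like(white)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:padded.shape[0] - 1 + dy,
                             1 + dx:padded.shape[1] - 1 + dx]
            has_black_neighbor |= ~shifted
    return white & has_black_neighbor

# keep only the bordering white pixels, turn every other pixel black
result = np.where(edge_mask(a_np), 255, 0).astype('uint8')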

BEFORE: [input image]

AFTER: [expected output image]


Answer

This can be achieved with binary_erosion from scipy.ndimage.

One can erode the white regions with a 3×3 kernel, then subtract the erosion from the original image so that only the white pixels bordering black remain:

from scipy.ndimage import binary_erosion

kernel = np.ones((3, 3))  # 3x3 structuring element

# Erosion strips the outermost layer of white pixels; subtracting it
# from the original leaves only the white pixels that border black.
edges = a_np - binary_erosion(a_np, kernel) * 255

out = Image.fromarray(edges.astype('uint8'), 'L')

output image:

[contour image]
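
Put together as a self-contained script, a sketch of the full pipeline might look like the following; the input filename 'a.png' comes from the question, and 'edges.png' is an assumed output name:

from PIL import Image
import numpy as np
from scipy.ndimage import binary_erosion

# load the image and convert it to a single-channel grayscale array
a_np = np.array(Image.open('a.png').convert('L'))

# subtract the eroded interior so only the white boundary pixels remain
kernel = np.ones((3, 3))
edges = a_np - binary_erosion(a_np, kernel) * 255

Image.fromarray(edges.astype('uint8'), 'L').save('edges.png')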
