
Is there a way to speed up looping over numpy.where?

Imagine you have a segmentation map, where each object is identified by a unique index, e.g. looking similar to this:

[example segmentation map image]

For each object, I would like to save which pixels it covers, but so far I could only come up with the standard for loop. Unfortunately, for larger images with thousands of individual objects this turns out to be very slow, at least for my real data. Can I somehow speed things up?

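The question's code block did not survive extraction. Based on the description and the variable names referenced in the answer (`segmap`, `object_idxs`), the slow version presumably looked roughly like this sketch:

```python
import numpy as np

# Toy segmentation map: each pixel stores the index of the object it belongs to
segmap = np.random.randint(0, 50, size=(500, 500))
object_idxs = np.unique(segmap)

# Naive approach: one full scan of segmap per object index
pixels_per_object = [np.where(segmap == idx) for idx in object_idxs]
```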


Answer

Calling np.where once per object is algorithmically inefficient: each call scans the entire image, so the total time complexity is O(s n m), where s = object_idxs.size and n, m = segmap.shape. The whole operation can be done in O(n m).

One solution using NumPy is to first select all the object pixel locations, then sort them by their associated object index in segmap, and finally split the sorted array into one group per object. Here is the code:

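The answer's code block was also lost in extraction. The following is a sketch of the approach described above, assuming every pixel of segmap belongs to some object (if there is a background label, mask it out first); the function name objects_to_pixels is purely illustrative:

```python
import numpy as np

def objects_to_pixels(segmap):
    # Flat indices of all pixels, ordered so pixels of the same object are contiguous
    order = np.argsort(segmap, axis=None, kind='stable')
    # Sorted object indices and how many pixels each object covers
    object_idxs, counts = np.unique(segmap, return_counts=True)
    # Split the sorted flat indices into one chunk per object
    chunks = np.split(order, np.cumsum(counts)[:-1])
    # Map each object index to the (rows, cols) arrays of the pixels it covers
    return {idx: np.unravel_index(chunk, segmap.shape)
            for idx, chunk in zip(object_idxs, chunks)}

segmap = np.random.randint(0, 50, size=(500, 500))
pixels_per_object = objects_to_pixels(segmap)
```

With this approach the image is traversed once for the sort and once for the split, rather than once per object, which is where the speed-up comes from.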