I’m trying to apply a function to every pixel of an image efficiently (in my specific case I want to do some colour approximation per pixel, but I don’t think that is relevant to the problem). The thing is that the approaches I’ve found all apply a function to each *component* of a pixel, whereas what I want is to apply a function that receives a whole *pixel*, i.e. its three RGB components (as a tuple or in any other format, as long as my function gets all three components as parameters).
In case you are interested in what I have, this is my non-efficient solution (it works fine, but it is too slow):
```python
def closest_colour(pixel, colours):
    closest_colours = sorted(colours, key=lambda colour: colours_distance(colour, pixel))
    return closest_colours[0]

# reduces the colours of the image based on the results of KMeans
# img is an image opened with cv2.imread()
# colours is an array of colours
def image_color_reduction(img, colours):
    start = time.time()
    print("Color reduction...")
    reduced_img = img.copy()[..., ::-1]
    width = reduced_img.shape[0]
    height = reduced_img.shape[1]
    for x in range(width):
        for y in range(height):
            reduced_img[x, y] = closest_colour(reduced_img[x, y], colours)
    end = time.time()
    print(f"Successfully reduced in {end-start} seconds")
    return reduced_img
```
I’ve followed this post: PIL – apply the same operation to every pixel, which seemed clear and aligned with my issue. I’ve tried all kinds of image formatting, multithreading (with both pool.map and pool.imap), numpy.apply_along_axis, and finally PIL’s .point(), which I thought was the closest to what I was looking for. Indeed, its official documentation says: “The function is called once for each possible pixel value.” I find this really misleading, because after trying it I realized that “pixel value” in this context does not refer to an RGB tuple, but to each of the three RGB components (seriously, in what world?).
I would really appreciate it if somebody could share a bit of their experience and give me some light on this issue. Thank you in advance!!
(UPDATE)
As per your request I add more information on the specific problem I am approaching:
Given
- an image M of size 1022*1080
- an array of colours N with size 1 < |N| < 16
Reduce the colours of M by replacing each pixel’s colour by the most similar one in N (thanks to your answers I know this is defined as Nearest Neighbor Color Quantization)
This is the missing implementation of colours_distance:
```python
def colours_distance(c1, c2):
    (r1, g1, b1) = c1
    (r2, g2, b2) = c2
    return math.sqrt((r1 - r2)**2 + (g1 - g2)**2 + (b1 - b2)**2)
```
And these are the imports needed to run this code:
```python
import cv2
import time
import math
```
The solution shown in my question solves the problem described in slightly less than 40s on average.
Answer
Assuming that your image is an `(M, N, 3)` numpy array, your color table is `(K, 3)`, and you measure color distance as some sane vector norm, you can use scipy’s `cKDTree` (or just `KDTree`) to optimize and vectorize the lookup for you.
First make a tree out of your color table:
```python
colors = ...  # (K, 3) array
color_tree = cKDTree(colors)
```
Now you can query the tree directly to get the output image:
```python
_, output = color_tree.query(img)
```
`output` will be an `(M, N)` index array into `colors`. The important thing is that KD-trees are optimized to perform O(log K) lookups per pixel, not the O(K) or O(K log K) of your current implementation. Since the loops are implemented in C, you’ll get a major boost from that too.
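Putting the pieces together, here is a minimal end-to-end sketch of this approach (assuming `numpy` and `scipy` are installed; the function name mirrors the original `image_color_reduction`, and the tiny 2×2 image and black/white palette are made up for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def image_color_reduction(img, colours):
    """Replace each pixel in img (M, N, 3) with its nearest colour in colours (K, 3)."""
    tree = cKDTree(colours)
    _, indices = tree.query(img)  # (M, N) array of nearest-palette indices
    return colours[indices]       # fancy indexing expands back to (M, N, 3)

# tiny example: 2x2 image, palette of pure black and pure white
img = np.array([[[10, 10, 10], [250, 250, 250]],
                [[200, 200, 200], [5, 5, 5]]], dtype=np.uint8)
palette = np.array([[0, 0, 0], [255, 255, 255]], dtype=np.uint8)

reduced = image_color_reduction(img, palette)
# dark pixels snap to black, bright pixels to white
```

Note that `cKDTree.query` accepts the image array directly because its last axis matches the palette’s dimensionality (3), so no reshaping or per-pixel loop is needed.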