I’m trying to convert an RGB image to a gray-value image of the same size (with values between 0 and 1). The mapping is done by a dictionary called MASK_LUT_IDX, which takes an RGB tuple and returns the corresponding value. The current code is 2x faster than before, but it still takes 1.5 s (according to timeit), which is proving to be an issue.
```python
import numpy as np

def quickConv(numpy_triple):
    return MASK_LUT_IDX[tuple(numpy_triple)]

def ImageSegmenter(masked_img):
    rgb_tuples = np.array(masked_img.getdata(), dtype=tuple)
    class_idxs = np.apply_along_axis(quickConv, 1, rgb_tuples)
    return np.array(class_idxs).reshape(masked_img.size[1], masked_img.size[0])

class_img = ImageSegmenter(masked_img)
```
Is there a better way of doing this conversion? I’ve looked into the palette functionality, but it doesn’t seem to quite fit my needs.
Answer
Thanks to Cris Luengo’s help, here is a sped-up version of the function.
```python
def ImageSegmenter(masked_img):
    masked_img = np.array(masked_img)
    class_idxs = FASTER_MASK_LUT_IDX[masked_img[:, :, 0],
                                     masked_img[:, :, 1],
                                     masked_img[:, :, 2]]
    return class_idxs
```
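To see why this is fast: indexing the lookup table with three integer arrays at once performs every per-pixel lookup in a single vectorized operation, instead of calling a Python function per pixel. A minimal sketch with a toy 2×2 image and a hypothetical two-entry LUT (the names here are illustrative, not the question's actual classes):

```python
import numpy as np

# Toy 3-D lookup table: every (r, g, b) triple maps to a float in [0, 1].
lut = np.zeros((256, 256, 256))
lut[255, 0, 0] = 0.25   # hypothetical "class 1" color
lut[0, 255, 0] = 0.50   # hypothetical "class 2" color

# Toy 2x2 RGB image, shape (H, W, 3), dtype uint8.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)

# One fancy-indexing call replaces the per-pixel Python loop.
gray = lut[img[:, :, 0], img[:, :, 1], img[:, :, 2]]
```

Note that a 256³ float64 table is about 128 MB; switching to `np.float32` halves that if memory matters.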
where FASTER_MASK_LUT_IDX is a 3-D lookup array built by
```python
FASTER_MASK_LUT_IDX = np.zeros((256, 256, 256))
for idx, label in zip(CLASS_IDX, CLASS_LABELS):
    red_idx = RGB_CLASS_MAPPING[label]['R']
    green_idx = RGB_CLASS_MAPPING[label]['G']
    blue_idx = RGB_CLASS_MAPPING[label]['B']
    FASTER_MASK_LUT_IDX[red_idx, green_idx, blue_idx] = idx / NUM_CLASSES
```
RGB_CLASS_MAPPING maps each class label to its RGB value; it was unrolled with enumerate in a list comprehension to create CLASS_IDX and CLASS_LABELS.
```python
CLASS_IDX, CLASS_LABELS = zip(*[(idx, label) for idx, label in enumerate(RGB_CLASS_MAPPING)])
```
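Putting the pieces together, the whole pipeline looks like the sketch below. The RGB_CLASS_MAPPING contents here are hypothetical placeholders for illustration (the real mapping comes from the question's dataset), and a plain NumPy array stands in for the PIL image, since np.array() accepts both:

```python
import numpy as np

# Hypothetical class mapping, for illustration only.
RGB_CLASS_MAPPING = {
    'background': {'R': 0,   'G': 0,   'B': 0},
    'road':       {'R': 128, 'G': 64,  'B': 128},
    'car':        {'R': 0,   'G': 0,   'B': 142},
}
NUM_CLASSES = len(RGB_CLASS_MAPPING)

# Unroll labels and indices (iterating a dict yields its keys).
CLASS_IDX, CLASS_LABELS = zip(*[(idx, label)
                                for idx, label in enumerate(RGB_CLASS_MAPPING)])

# Build the 3-D lookup table once, up front.
FASTER_MASK_LUT_IDX = np.zeros((256, 256, 256))
for idx, label in zip(CLASS_IDX, CLASS_LABELS):
    rgb = RGB_CLASS_MAPPING[label]
    FASTER_MASK_LUT_IDX[rgb['R'], rgb['G'], rgb['B']] = idx / NUM_CLASSES

def ImageSegmenter(masked_img):
    masked_img = np.array(masked_img)   # PIL image -> (H, W, 3) uint8 array
    return FASTER_MASK_LUT_IDX[masked_img[:, :, 0],
                               masked_img[:, :, 1],
                               masked_img[:, :, 2]]

# Toy 1x3 "image": one pixel of each hypothetical class.
toy = np.array([[[0, 0, 0], [128, 64, 128], [0, 0, 142]]], dtype=np.uint8)
class_img = ImageSegmenter(toy)
```

The LUT is built once and reused, so the per-image cost is just the array conversion and one indexing pass.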