I have a 2D gaussian whose center has been destroyed by pixel saturation. I need the center to be filled in, because a poorly filled-in center will confuse a neural network I'm trying to train. See below:
The scattered nan values I can handle fairly easily, but the large cluster in the gaussian’s center I cannot.
I've tried various methods to correct this, but none of them fill in the gaussian correctly.
Here are some other similar answers that I’ve tried:
Python Image Processing – How to remove certain contour and blend the value with surrounding pixels?
https://docs.astropy.org/en/stable/convolution/index.html
These work well for the small discrete nans floating around the image, but don’t adequately address the center cluster.
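For reference, here is a minimal numpy-only sketch of the kind of kernel-based infilling those links describe (normalized convolution over a small window; the function name `convolve_nan_fill` and the kernel size are my own choices, not astropy's API). It also shows why a single pass cannot reach the middle of a large NaN cluster: pixels whose whole window is NaN have no valid neighbors to average.

```python
import numpy as np

def convolve_nan_fill(img, ksize=3):
    """Replace NaNs with the mean of the valid pixels in a ksize x ksize
    window (normalized convolution). Isolated NaNs fill well; the center
    of a NaN cluster wider than the kernel stays NaN after one pass."""
    filled = img.copy()
    nan_mask = np.isnan(img)
    vals = np.where(nan_mask, 0.0, img)          # NaNs contribute 0 to the sum
    valid = (~nan_mask).astype(float)            # ...and 0 to the weight
    pad = ksize // 2
    vp = np.pad(vals, pad)
    wp = np.pad(valid, pad)
    h, w = img.shape
    vsum = np.zeros_like(img)
    wsum = np.zeros_like(img)
    # Accumulate the windowed sums with shifted slices (no Python per-pixel loop).
    for dy in range(ksize):
        for dx in range(ksize):
            vsum += vp[dy:dy + h, dx:dx + w]
            wsum += wp[dy:dy + h, dx:dx + w]
    fillable = nan_mask & (wsum > 0)             # NaNs with at least one valid neighbor
    filled[fillable] = vsum[fillable] / wsum[fillable]
    return filled
```

Running this repeatedly does eventually close a large hole, but each pass only copies rim values inward, which is exactly why the filled center comes out flat instead of peaked.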
This is what I get with convolution infilling: I’ve taken slices of the centers as well.
I do actually have a reference image that does not have NaNs. However, the scaling of the pixel values is not constant between the two images, so I've written a function that accounts for the different scaling of each pixel.
```python
import numpy as np

def mult_mean_surround(s_arr, c_arr, coord):
    # Offsets of the 8 surrounding pixels (4-connected plus diagonals).
    directions = np.array([[1, 0], [-1, 0], [0, 1], [0, -1],
                           [1, 1], [1, -1], [-1, -1], [-1, 1]])
    s = []
    for i in directions:
        try:
            if not np.isnan(s_arr[coord[0] + i[0], coord[1] + i[1]]):
                # Ratio of the saturated image to the reference image
                # at this neighbor, i.e. the local scale factor.
                s.append(s_arr[coord[0] + i[0], coord[1] + i[1]]
                         / c_arr[coord[0] + i[0], coord[1] + i[1]])
        except IndexError:
            pass  # neighbor lies outside the image
    if len(s) != 0:
        # Fill the pixel with the reference value, rescaled by the
        # mean scale factor of the valid neighbors.
        s_arr[coord[0], coord[1]] = c_arr[coord[0], coord[1]] * np.mean(s)
```
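If the per-pixel Python loop is the bottleneck, the same neighbor-ratio idea can be vectorized with shifted array slices instead of reaching for cupy. This is a sketch under my own assumptions (the function name is hypothetical, it fills in one pass and returns a copy rather than mutating in place, and it would need to be called repeatedly to grow into a large hole):

```python
import numpy as np

def mult_mean_surround_vec(s_arr, c_arr):
    """For every NaN pixel, estimate the local scale as the mean of
    s/c over its 8 valid neighbors, then fill with c * scale."""
    ratio = s_arr / c_arr                        # per-pixel scale estimate
    valid = ~np.isnan(ratio)
    r = np.where(valid, ratio, 0.0)
    rp = np.pad(r, 1)
    wp = np.pad(valid.astype(float), 1)
    h, w = s_arr.shape
    rsum = np.zeros_like(r)
    wsum = np.zeros_like(r)
    # Sum the ratios and neighbor counts over the 8-neighborhood.
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            if dy == 1 and dx == 1:
                continue                         # skip the center pixel itself
            rsum += rp[dy:dy + h, dx:dx + w]
            wsum += wp[dy:dy + h, dx:dx + w]
    out = s_arr.copy()
    fillable = np.isnan(s_arr) & (wsum > 0)
    out[fillable] = c_arr[fillable] * rsum[fillable] / wsum[fillable]
    return out
```

Each pass is a handful of whole-array operations, so it should be far faster than calling the per-pixel function in a loop over coordinates.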
It copies the corresponding pixel value from the reference image and rescales it by the mean ratio of the valid neighboring pixels.
Ideally, it would look something like this: the center is brighter than the rim, and it looks more like a gaussian. However, this method is also substantially slower than the others, so I'm stuck on one issue or the other. I've tried boosting the speed with cupy, with no luck, as shown here: Boosting algorithm with cupy
If anyone has any helpful ideas, that would be great.
Answer
I am assuming that you are filling the ‘hole’ with only one gaussian.
First, make a mask of all the NaNs, i.e. NaN = 1, not-NaN = 0.
You can do a neighbor-count check to remove all mask pixels with no neighbors, then use a clustering algorithm (like DBSCAN) to find the largest cluster of pixels.
Calculate the centroid, width (max x - min x), and height (max y - min y) of the resulting cluster.
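DBSCAN (e.g. `sklearn.cluster.DBSCAN` on the NaN coordinates) works here, but since the mask lives on a pixel grid, a plain 8-connected flood fill finds the same clusters with no extra dependency, and keeping only the largest component also implicitly discards the isolated single-pixel NaNs. A sketch, where `largest_nan_cluster` is a hypothetical helper of my own:

```python
import numpy as np
from collections import deque

def largest_nan_cluster(img):
    """Return (centroid_y, centroid_x), width (max x - min x) and
    height (max y - min y) of the largest 8-connected NaN cluster."""
    mask = np.isnan(img)
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # Breadth-first flood fill over the 8-neighborhood.
        comp, queue = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            comp.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
        if len(comp) > len(best):
            best = comp
    ys, xs = np.array(best).T
    return (ys.mean(), xs.mean()), xs.max() - xs.min(), ys.max() - ys.min()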
You can then use the following code:
```python
import math

def gaussian_fit(query_x, query_y, centroid_x, centroid_y,
                 filter_w, filter_h, sigma_at_edge=1.0):
    # Distance from the centroid, normalized so the cluster edge
    # sits at sigma_at_edge standard deviations along each axis.
    x_coord = (query_x - centroid_x) * 2 / (filter_w * sigma_at_edge)
    y_coord = (query_y - centroid_y) * 2 / (filter_h * sigma_at_edge)
    return math.exp(-1.0 * (x_coord ** 2 + y_coord ** 2))
```
You may need to rescale the result by some constant.
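For example, the rescaling step could look like the following, where `peak` stands in for that constant (e.g. estimated from the reference image or the rim brightness) and `fill_hole` is a hypothetical helper of my own:

```python
import math
import numpy as np

def gaussian_fit(query_x, query_y, centroid_x, centroid_y,
                 filter_w, filter_h, sigma_at_edge=1.0):
    x_coord = (query_x - centroid_x) * 2 / (filter_w * sigma_at_edge)
    y_coord = (query_y - centroid_y) * 2 / (filter_h * sigma_at_edge)
    return math.exp(-1.0 * (x_coord ** 2 + y_coord ** 2))

def fill_hole(img, centroid, width, height, peak):
    """Fill every NaN pixel with a gaussian centered on the cluster
    centroid, rescaled so the profile reaches `peak` at the center."""
    out = img.copy()
    cy, cx = centroid
    for y, x in zip(*np.nonzero(np.isnan(img))):
        out[y, x] = peak * gaussian_fit(x, y, cx, cy, width, height)
    return out
```

This fills the hole with a smooth peaked profile rather than the flat, rim-level values that one-pass convolution infilling produces.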