I have a 3D array holding voxels from an MRI dataset. The model could be stretched along one or more directions, e.g. the voxel size (x, y, z) could be 0.5×0.5×2 mm. Now I want to resample the 3D array into an array holding 1×1×1 mm voxels. For this I need to make the x/y dimensions smaller and the z dimension bigger.
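A minimal sketch of one way to do this with scipy.ndimage.zoom, assuming the array axes are ordered (x, y, z) and the original spacing is 0.5×0.5×2 mm; the per-axis zoom factor is the original spacing divided by the target spacing:

```python
import numpy as np
from scipy import ndimage

# Hypothetical input: axes ordered (x, y, z), spacing 0.5 x 0.5 x 2 mm.
voxels = np.random.rand(256, 256, 64)

original_spacing = np.array([0.5, 0.5, 2.0])  # mm per voxel along x, y, z
target_spacing = np.array([1.0, 1.0, 1.0])    # desired isotropic spacing

# Zoom factor per axis = old spacing / new spacing
# (x/y shrink by 0.5, z grows by 2.0).
factors = original_spacing / target_spacing

resampled = ndimage.zoom(voxels, factors, order=1)  # order=1: trilinear interpolation
print(voxels.shape, "->", resampled.shape)          # (256, 256, 64) -> (128, 128, 128)
```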
Expanding/Zooming in a numpy array
I have the following array: I want to expand it to this array: So I’m using the following command, based on this question and answer: Resampling a numpy array representing an image. However, what I’m getting is this: I want the expansion to be by exactly 3, or whatever the zoom factor is, but currently it’s different for each
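For an exact integer expansion factor, one common alternative to ndimage.zoom (whose output size can differ by a sample depending on rounding) is to repeat each element along both axes; a minimal sketch, assuming a 2-D array and a factor of 3:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
factor = 3

# Repeat each row and each column `factor` times: the output shape is exactly
# (rows * factor, cols * factor), i.e. (6, 6) here.
expanded = np.repeat(np.repeat(a, factor, axis=0), factor, axis=1)

# Equivalent block expansion via a Kronecker product:
expanded_kron = np.kron(a, np.ones((factor, factor), dtype=a.dtype))
```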
Python open jp2 medical images – Scipy, glymur
I am trying to read and tile a jp2 image file. The image is RGB 98176 x 80656 pixels (it is medical image data). When trying to read the image with glymur I get this error: I understand the image is too big. What I need is to read the image data by tiles and save them elsewhere and in
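A minimal sketch of reading the image region by region rather than all at once, assuming a glymur version that supports partial decoding via slicing of a Jp2k object; the file name, tile size, and output paths are placeholders:

```python
import numpy as np
import glymur

jp2 = glymur.Jp2k("huge_image.jp2")   # hypothetical file name
rows, cols = jp2.shape[0], jp2.shape[1]
tile = 4096                           # hypothetical tile edge length in pixels

for r in range(0, rows, tile):
    for c in range(0, cols, tile):
        # Slicing a Jp2k object decodes only the requested region.
        block = jp2[r:min(r + tile, rows), c:min(c + tile, cols)]
        np.save(f"tile_r{r}_c{c}.npy", block)  # save each tile elsewhere
```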
Problems installing scipy and sklearn
I am quite new to programming and I am struggling a lot to install scipy; I get the following errors. I did not have any problems installing other libraries: Failed cleaning build dir for scipy and Failed building wheel for scipy. With sklearn I am facing the problem: Failed building wheel for scikit-learn. I just found this info on the web regarding this
Understanding scipy deconvolve
I’m trying to understand scipy.signal.deconvolve. From the mathematical point of view a convolution is just multiplication in Fourier space, so I would expect that for two functions f and g: Deconvolve(Convolve(f,g), g) == f. In numpy/scipy this is either not the case, or I’m missing an important point. Although there are some questions related to deconvolve on SO
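A minimal sketch of the round trip that does hold: scipy.signal.deconvolve performs polynomial division, so feeding it the *full* convolution and the same kernel recovers f as the quotient (assuming the kernel's first coefficient is nonzero):

```python
import numpy as np
from scipy import signal

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, 0.5])

conv = np.convolve(f, g, mode="full")        # length len(f) + len(g) - 1
recovered, remainder = signal.deconvolve(conv, g)

print(np.allclose(recovered, f))             # True
print(np.allclose(remainder, 0.0))           # True (no noise, exact division)
```

If the convolution was computed with mode="same" or "valid", the edges are cut off and this exact recovery no longer works, which is a common source of the mismatch described above.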
How can I get descriptive statistics of a NumPy array?
I use the following code to create a numpy ndarray. The file has 9 columns, and I explicitly type each column: Now I would like to get some descriptive statistics for each column (min, max, stdev, mean, median, etc.). Shouldn’t there be an easy way to do this? I tried this: but it returns an error: TypeError: cannot perform reduce with flexible
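That TypeError typically comes from calling reducers on a structured (record) array; a minimal sketch of computing the statistics field by field, assuming the array was built with named columns (the field names below are placeholders for the 9 real ones):

```python
import numpy as np

# Hypothetical structured array standing in for the 9-column file.
data = np.array([(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)],
                dtype=[("col_a", "f8"), ("col_b", "f8")])

for name in data.dtype.names:
    col = data[name]                      # plain 1-D array for this field
    if np.issubdtype(col.dtype, np.number):
        print(name,
              "min", col.min(),
              "max", col.max(),
              "mean", col.mean(),
              "median", np.median(col),
              "std", col.std())
```

Alternatively, pandas.DataFrame(data).describe() produces the same kind of per-column summary in a single call.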
Scikit-learn train_test_split with indices
How do I get the original indices of the data when using train_test_split()? What I have is the following. But this does not give the indices of the original data. One workaround is to add the indices to data (e.g. data = [(i, d) for i, d in enumerate(data)]), then pass them inside train_test_split, and then expand again. Are
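A minimal sketch of a cleaner approach: train_test_split accepts any number of aligned arrays, so an index array can simply be split alongside the data (the names are placeholders, and the import path assumes a sklearn version with model_selection):

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.random.rand(10, 3)            # hypothetical features
labels = np.random.randint(0, 2, 10)    # hypothetical labels
indices = np.arange(len(data))          # original row positions

X_train, X_test, y_train, y_test, idx_train, idx_test = train_test_split(
    data, labels, indices, test_size=0.3, random_state=0)

# idx_train / idx_test now map each split row back to its original index.
```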
Computing the correlation coefficient between two multi-dimensional arrays
I have two arrays that have the shapes N × T and M × T. I’d like to compute the correlation coefficient across T between every possible pair of rows n and m (from N and M, respectively). What’s the fastest, most Pythonic way to do this? (Looping over N and M would seem to me to be neither fast
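A minimal sketch of a vectorized version, assuming Pearson correlation over the T axis: z-score each row (population std, ddof=0) and take a matrix product, which yields the full N × M correlation matrix without Python loops:

```python
import numpy as np

N, M, T = 5, 4, 100
A = np.random.rand(N, T)
B = np.random.rand(M, T)

# Standardize each row: subtract its mean, divide by its std (ddof=0).
A_z = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
B_z = (B - B.mean(axis=1, keepdims=True)) / B.std(axis=1, keepdims=True)

corr = A_z @ B_z.T / T      # corr[n, m] == Pearson r between A[n] and B[m]

# Optional spot check against scipy for one pair:
# from scipy.stats import pearsonr
# assert np.isclose(corr[0, 0], pearsonr(A[0], B[0])[0])
```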
Dendrogram through scipy given a similarity matrix
I have computed a Jaccard similarity matrix with Python. I want to cluster from the highest similarities to the lowest; however, no matter what linkage function I use it produces the same dendrogram! I have a feeling that the function assumes my matrix holds the original observations, but I have already computed the similarity matrix myself. Is there any way to pass
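A minimal sketch of the usual fix: scipy's linkage expects either raw observations or a condensed distance matrix, so convert the precomputed similarity matrix to distances, condense it with squareform, and pass that to linkage (the 1 - similarity conversion assumes similarities lie in [0, 1]):

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Hypothetical symmetric Jaccard similarity matrix with ones on the diagonal.
sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

dist = 1.0 - sim                            # similarity -> distance
condensed = squareform(dist, checks=False)  # condensed 1-D form linkage expects

Z = linkage(condensed, method="average")
dendrogram(Z)
plt.show()
```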
How do I bin and categorize numbers in Python?
I’m not sure if binning is the correct term, but I want to implement the following for a project I am working on: I have an array or maybe a dict describing boundaries and/or regions, for example: The areas are indexed from 0 to 100 (for example). I want to classify each area into a color (that is less than
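A minimal sketch of one way to do the classification with np.digitize, assuming the boundaries and color labels below (they are placeholders for whatever the project actually uses):

```python
import numpy as np

boundaries = [25, 50, 75]                    # hypothetical region edges on a 0-100 scale
colors = ["blue", "green", "yellow", "red"]  # one label per resulting bin

areas = np.array([3, 30, 50, 74, 99])        # values to classify

# np.digitize returns, for each value, the index of the bin it falls into:
# 0 for < 25, 1 for [25, 50), 2 for [50, 75), 3 for >= 75.
bins = np.digitize(areas, boundaries)
labels = [colors[i] for i in bins]
print(list(zip(areas.tolist(), labels)))
```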