
Converting dictionary with known indices to a multidimensional array

I have a dictionary with entries keyed as {(k,i): value, ...}. I want to convert this dictionary into a 2D array where the element at position [k,i] is the value stored in the dictionary under the key (k,i). The rows will not all be the same length (e.g. row k = 4 may go up to index i = 60 while row k = 24 may only go up to index i = 31). Because of this asymmetry, it is fine to fill the remaining entries of a shorter row with 0 so that the result is a rectangular matrix.


Answer

Here’s an approach –

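A minimal sketch of this idea, assuming the dictionary is named d, its keys are non-negative integer (k, i) pairs, and its values share a numeric dtype (none of these names are fixed by the question):

import numpy as np

# Split the dictionary into an (n, 2) array of indices and an (n,) array of values
keys = np.array(list(d.keys()))
vals = np.array(list(d.values()))

# The output must be one larger than the maximum index along each axis
out_shape = tuple(keys.max(axis=0) + 1)

# Start from zeros so that missing (k, i) positions are padded with 0
out = np.zeros(out_shape, dtype=vals.dtype)
out[keys[:, 0], keys[:, 1]] = vals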

We could also use sparse matrices to get the final output, e.g. a coordinate-format (COO) sparse matrix. This is memory-efficient as long as the result is kept sparse. So, the last step could be replaced with something like this –

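One way to do that, assuming SciPy is available and reusing the keys, vals and out_shape arrays from above:

from scipy.sparse import coo_matrix

# Build the matrix directly from row indices, column indices and values
out_sparse = coo_matrix((vals, (keys[:, 0], keys[:, 1])), shape=out_shape)

# Densify only if a regular 2D array is actually required
out = out_sparse.toarray()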

Sample run –

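For illustration, with a small made-up dictionary (not data from the question):

d = {(0, 1): 5, (2, 3): 7, (1, 0): 2}

keys = np.array(list(d.keys()))            # [[0, 1], [2, 3], [1, 0]]
vals = np.array(list(d.values()))          # [5, 7, 2]
out_shape = tuple(keys.max(axis=0) + 1)    # (3, 4)

out_sparse = coo_matrix((vals, (keys[:, 0], keys[:, 1])), shape=out_shape)
out_sparse.toarray()
# array([[0, 5, 0, 0],
#        [2, 0, 0, 0],
#        [0, 0, 0, 7]])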

To make it generic for ndarrays of any number of dimensions, we can use linear indexing together with np.put to assign values into the output array. Thus, in our first approach, just replace the last step of assigning values with something like this –

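Something along these lines, reusing keys, vals and out_shape from the first approach (the keys can now be tuples of any length):

# Convert each n-dimensional index into a flat (linear) index into out,
# then scatter the values with np.put
out = np.zeros(out_shape, dtype=vals.dtype)
np.put(out, np.ravel_multi_index(keys.T, out_shape), vals)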

Sample run –

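For illustration, with a small made-up 3D dictionary:

d = {(0, 0, 1): 3, (1, 2, 0): 4}

keys = np.array(list(d.keys()))
vals = np.array(list(d.values()))
out_shape = tuple(keys.max(axis=0) + 1)    # (2, 3, 2)

out = np.zeros(out_shape, dtype=vals.dtype)
np.put(out, np.ravel_multi_index(keys.T, out_shape), vals)
# out[0, 0, 1] -> 3, out[1, 2, 0] -> 4, all other entries are 0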