Why is ‘scipy.sparse.linalg.spilu’ less efficient than ‘scipy.linalg.lu’ for a sparse matrix?

I posted this question on https://scicomp.stackexchange.com but received no attention there; as soon as I get an answer on either site, I will update the other. I have a matrix B which is sparse, and I am trying to use scipy.sparse.linalg.spilu, a function specialized for sparse matrices, to factorize B. Could you please explain why this function is significantly less efficient than scipy.linalg.lu, which handles general matrices? Thank you so much! The computation time is Answer Your matrix B is not sparse at all: more than half of its elements are non-zero. Of course spilu is less efficient
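The answer above can be illustrated with a quick density check. This is a minimal sketch with a hypothetical 200×200 matrix (the question's actual B is not shown); the point is that spilu only pays off when the vast majority of entries are zero:

```python
import numpy as np
from scipy.linalg import lu
from scipy.sparse import csc_matrix, random as sparse_random
from scipy.sparse.linalg import spilu

# Hypothetical matrix: uniformly random data is almost entirely non-zero,
# so it is dense in the sense that matters to spilu.
rng = np.random.default_rng(0)
B_dense = rng.random((200, 200))
density = np.count_nonzero(B_dense) / B_dense.size
print(f"density = {density:.2f}")  # ~1.00: not sparse at all

# scipy.linalg.lu factorizes the dense array directly
p, l, u = lu(B_dense)

# spilu wants a CSC sparse matrix; feeding it a dense matrix wrapped in
# a sparse container pays all the sparse bookkeeping cost with no gain
ilu = spilu(csc_matrix(B_dense))

# A genuinely sparse matrix (1% non-zeros) is the case spilu targets
S = sparse_random(200, 200, density=0.01, format="csc", random_state=0)
```

Checking `np.count_nonzero(B) / B.size` before reaching for the sparse machinery is usually the fastest way to diagnose this mismatch.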

Improvement on copying array elements in numpy

I have a question regarding variable assignment and memory allocation in a simple case. Imagine that I am initialising a state vector x with initial value x0. I then make iterative updates to that state vector via a buffer array X, and after each iteration I store the new state vector in a storage list L. An outline of my initial implementation can be found in the following snippet: Which would print that L holds Answer Because when x is appended to L, only a reference to X[-1, :] is added, not the actual value. The solution I could find is
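The aliasing bug and its fix can be sketched as follows. This is a minimal reconstruction: x0, X, and L are the names from the question, but the shapes and the update rule are assumptions:

```python
import numpy as np

x0 = np.array([1.0, 2.0])
X = np.zeros((3, 2))
X[-1, :] = x0
L = []

for _ in range(3):
    X[-1, :] += 1.0        # iterative update of the state buffer
    x = X[-1, :]           # x is a *view* into X, not a copy
    L.append(x)            # BUG: L holds references to the same memory

# Every entry of L now shows the final state [4., 5.]
print(L)

# Fix: append an independent copy of the current state
X[-1, :] = x0
L_fixed = []
for _ in range(3):
    X[-1, :] += 1.0
    L_fixed.append(X[-1, :].copy())  # .copy() snapshots the values

print(L_fixed)  # [2., 3.], [3., 4.], [4., 5.] as intended
```

`np.copy(X[-1, :])` or `np.array(X[-1, :])` would work equally well; the key is that basic slicing returns a view, so every element appended without a copy tracks later in-place updates.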

How to add 2D np array to front of 3D np array?

I have a 3D numpy array and I want to add a 2D np array of 0’s to the front of it. I want to add another array B so that: I’ve tried np.append(B, A), but it returns a 2D array. Answer You can do it using numpy.vstack and by reshaping your array. For instance: By the way, you can create your array A more efficiently:
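A minimal sketch of the vstack-plus-reshape approach; the shapes here are assumptions, since the question's actual arrays are not shown:

```python
import numpy as np

# Assumed shapes: A is (2, 3, 4); B is a (3, 4) block of zeros to prepend
A = np.arange(24).reshape(2, 3, 4)
B = np.zeros((3, 4))

# Give B a leading axis of length 1 so it stacks along the first axis
result = np.vstack([B[np.newaxis, :, :], A])
print(result.shape)  # (3, 3, 4)

# Equivalent with np.concatenate (B[None] is shorthand for the reshape):
result2 = np.concatenate([B[None], A], axis=0)
```

`np.append(B, A)` flattens by default unless an `axis` argument is given and the shapes already match, which is why it returned a 2D array.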

Creating a tumbling window in Python

Just wondering if there is a way to construct a tumbling window in Python. For example, if I have a list/ndarray listA = [3,2,5,9,4,6,3,8,7,9], how could I find the maximum of the first 3 items (3,2,5) -> 5, then of the next 3 items (9,4,6) -> 9, and so on? Sort of like breaking it up into sections and finding the max of each. So the final result would be the list [5,9,8,9]. Answer Approach #1: One-liner for windowed max using np.maximum.reduceat – Becomes more compact with np.r_ – Approach #2: Generic ufunc way Here’s a function for generic ufuncs and that
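Approach #1 can be sketched directly on the question's data; note the last window holds only one leftover element (9), which reduceat handles naturally:

```python
import numpy as np

listA = [3, 2, 5, 9, 4, 6, 3, 8, 7, 9]
a = np.asarray(listA)
W = 3  # tumbling window size

# np.maximum.reduceat takes the max over each [start, next_start) slice;
# the starts are simply every W-th index
out1 = np.maximum.reduceat(a, np.arange(0, len(a), W))

# Same thing, more compactly, using np.r_ slice notation for the starts
out2 = np.maximum.reduceat(a, np.r_[:len(a):W])

print(out1)  # [5 9 8 9]
```

For sums, mins, or products, swap `np.maximum` for `np.add`, `np.minimum`, or `np.multiply`; reduceat works with any binary ufunc, which is what Approach #2 generalizes.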

Manipulating a 2D matrix using numpy boolean indexing takes a long time

I’ve generated a huge amount of random data like so: which is a 100,000 by 1,000 matrix(!) I’m generating a new matrix where, for each row, each column is True if the mean of all the columns beforehand (minus the expectation of a Bernoulli RV with p=0.25) is greater than or equal to some epsilon, like so: After doing so, I’m generating a 1-D array (finally!) where each column represents how many True values I had in the same column of the matrix, and then I’m dividing every column by some number (exp_numer = 100,000). Also, I have 5 different epsilons which I iterate
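The per-column loop over prefixes is what makes this slow; the running mean of every prefix can be computed in one shot with a cumulative sum. A minimal sketch under assumptions (smaller shapes than the question's, an absolute-deviation test, and made-up variable names):

```python
import numpy as np

# Smaller shapes for illustration; the question uses (100_000, 1_000)
rng = np.random.default_rng(0)
n_rows, n_cols, p = 1_000, 200, 0.25
data = rng.random((n_rows, n_cols)) < p          # Bernoulli(p) samples

# Running mean of each row up to and including each column, vectorized:
# one cumsum along axis=1 replaces the per-prefix recomputation
counts = np.arange(1, n_cols + 1)
running_mean = np.cumsum(data, axis=1) / counts

eps = 0.05
mask = np.abs(running_mean - p) >= eps           # deviation test per cell

# Fraction of rows exceeding eps in each column: the final 1-D array
fraction = mask.sum(axis=0) / n_rows
print(fraction.shape)  # (200,)
```

The five epsilons can then reuse the same `running_mean`, so the expensive part runs once instead of once per epsilon.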

Taking the min value of the last N days

I have this data frame: I want to show the min value of the last n days (say, n = 4), using the Date column, and excluding the value of the current day. A similar solution was provided by jezrael (that one calculates the mean, not the min). Expected result: Answer Use a solution similar to @Chris’s, with a custom lambda function in GroupBy.apply, and finally join back to the original with DataFrame.join:
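For a single series, the core idea can be sketched with a date-based rolling window: shifting by one row excludes the current day before rolling. The column names and values here are assumptions, since the question's frame is not shown; with multiple groups this would sit inside the GroupBy.apply the answer describes:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.date_range("2021-01-01", periods=6, freq="D"),
    "Value": [5, 3, 8, 1, 9, 2],
})

n = 4
# Shift by one row so the current day is excluded, then take a
# date-based rolling window of n days and its minimum
s = df.set_index("Date")["Value"].shift(1)
df["MinLastN"] = s.rolling(f"{n}D").min().to_numpy()
print(df)
```

Using the `"4D"` offset string (rather than an integer window) means the window is measured on the Date index, so gaps in the calendar are handled correctly; the first row has no prior days and comes out NaN.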

Efficiently compare running total for month to total for month

I have a dataframe (df). It contains predicted daily data from a model, up until the end of 2020. As each day passes in the year, actual and id data are added to each row. There are multiple names for each day. I want to add an additional column named payout. The payout should be 0 unless the month-to-date sum of actual has passed the sum of predicted. I.e., for Nir, we can see the sum of predicted is 4200, so the payout should be 0 until the sum of actual passes 4200. Once that threshold is passed,
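The month-to-date comparison can be sketched with a grouped cumulative sum. The frame below is hypothetical (column names, the Nir rows, and the rule that the payout equals actual once the threshold is crossed are all assumptions reconstructed from the truncated text):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Nir"] * 4,
    "date": pd.date_range("2020-06-01", periods=4, freq="D"),
    "predicted": [1000, 1100, 1050, 1050],
    "actual": [1500, 1400, 1600, 1700],
})

# Group by name and calendar month
grp = [df["name"], df["date"].dt.to_period("M")]
mtd_actual = df.groupby(grp)["actual"].cumsum()            # running MTD actual
total_predicted = df.groupby(grp)["predicted"].transform("sum")  # 4200 for Nir

# Payout is the actual amount only once MTD actual passes total predicted
df["payout"] = df["actual"].where(mtd_actual > total_predicted, 0)
print(df)
```

Here the cumulative actual is [1500, 2900, 4500, 6200] against a predicted total of 4200, so the payout column is 0, 0, 1600, 1700, matching the behaviour described above.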

applying .astype() for specific indices doesn’t work

I want to change the first column to int in an ndarray [[ 1. 6.218 2.974 0. ] [ 2. 32.881 8.66 0. ] [ 3. 38.94 35.843 0. ] [ 4. 8.52 35.679 0. ] [ 5. 52.902 49.538 0. ]] float64 int32 float64 I tried to use deepcopy, but with no success. Any help would be appreciated; I didn’t find any similar questions. Answer Having just the index of the row (+1) as an element of the array seems redundant to me; maybe you don’t need the first column at all. Otherwise, the best option to me is to use two arrays:
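The reason `.astype()` on a slice "doesn't work" is that it returns a new array rather than converting in place, and a plain ndarray cannot hold different dtypes per column anyway. A minimal sketch of the two-array answer, plus a structured-array alternative (the field names are assumptions):

```python
import numpy as np

arr = np.array([
    [1., 6.218, 2.974, 0.],
    [2., 32.881, 8.66, 0.],
    [3., 38.94, 35.843, 0.],
])

# Option 1: keep two arrays with the dtypes you actually want
ids = arr[:, 0].astype(np.int32)   # separate int32 array
values = arr[:, 1:]                # remaining float64 columns
print(ids.dtype, values.dtype)     # int32 float64

# Option 2: a structured array stores a dtype per field
structured = np.zeros(len(arr), dtype=[("id", "i4"), ("vals", "f8", 3)])
structured["id"] = arr[:, 0]
structured["vals"] = arr[:, 1:]
```

If the first column really is just the row index plus one, dropping it and using `np.arange(1, len(arr) + 1)` on demand is simpler still, as the answer suggests.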

Working with 2 arrays to populate a third one in numpy

I am trying to work with two arrays in a certain way in Python. Let’s say I have a target array C = [2, 7, 15, 25, 40]. I want to find an output value Y for each element x of C, under these conditions: if x < A[0], then Y = B[0]; if A[i] < x < 2*A[i] < A[i+1] or A[i] < x < A[i+1] < 2*A[i], then Y = x; if A[i] < 2*A[i] < x < A[i+1], then Y = B[i+1]; if x > A[end], then Y = x; where i is the maximum possible index satisfying
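One reading of these conditions can be sketched as a lookup per element. The question is truncated and A and B are not shown, so the example values below and the interpretation of i (the largest index with A[i] <= x) are assumptions:

```python
import numpy as np

# Hypothetical sorted thresholds A, fallback values B, and the targets C
A = np.array([1, 5, 10, 20, 30])
B = np.array([2, 6, 12, 22, 32])
C = np.array([2, 7, 15, 25, 40])

def lookup(x, A, B):
    if x < A[0]:                                  # below the first threshold
        return B[0]
    if x > A[-1]:                                 # beyond the last threshold
        return x
    i = np.searchsorted(A, x, side="right") - 1   # max i with A[i] <= x
    if x < 2 * A[i]:                              # within doubling range
        return x
    return B[i + 1]                               # past 2*A[i], before A[i+1]

Y = np.array([lookup(x, A, B) for x in C])
print(Y)
```

With these assumed inputs, x = 2 falls exactly on 2*A[0] and so maps to B[1]; how boundary equalities should resolve is something the (truncated) question would need to pin down.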

how to perform a backwards correlation/convolution in python

I am trying to perform a backwards correlation (at least, that’s what I think it’s called) on 2 matrices to get a resultant matrix. Note: a backwards convolution works too, because I’m applying this to a CNN. I have the following two matrices: vals: w0: I essentially want to apply a sliding window, except that in this case all the values of w0 are multiplied by the scalar value at each point in vals, then added to adjacent values. Assuming a stride of 1 and padding of “same” (w.r.t. vals), the following code gives the result I want: Resulting in: Which
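The scatter-and-add operation described here is what CNN literature calls a transposed convolution: each scalar in vals stamps a scaled copy of w0 into the output, and overlapping stamps sum. A minimal sketch with stride 1 (the 2×2 matrices are assumptions, since the question's vals and w0 are not shown):

```python
import numpy as np

vals = np.array([[1., 2.],
                 [3., 4.]])
w0 = np.array([[1., 0.],
               [0., 1.]])

H, W = vals.shape
kh, kw = w0.shape

# Scatter-add: each scalar in vals stamps vals[i, j] * w0 into the output;
# overlapping stamps accumulate, giving the "full" result
out = np.zeros((H + kh - 1, W + kw - 1))
for i in range(H):
    for j in range(W):
        out[i:i + kh, j:j + kw] += vals[i, j] * w0

print(out)  # crop a centred H x W slice of this for "same" padding
```

This loop is mathematically the full convolution of vals with w0, so `scipy.signal.convolve2d(vals, w0, mode="full")` computes the same thing without the explicit loops once the sizes grow.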