This is my dataset, to which I would like to fit a closed curve, just like in this post. Here is the visualized dataset: However, these are the results I get no matter how I sort my array. I pinned down a few problems with my dataset but don’t know how to deal with them: Many x and y values
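The question excerpt is truncated, but a common way to fit a closed curve to scattered points is a periodic parametric spline via `scipy.interpolate.splprep` with `per=1`. A minimal sketch on synthetic circular data (the real dataset is not shown above):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic noisy points around a circle, standing in for the dataset
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
x = np.cos(theta) + 0.05 * rng.normal(size=theta.size)
y = np.sin(theta) + 0.05 * rng.normal(size=theta.size)

# FITPACK's periodic fit expects the first and last points to coincide
x = np.r_[x, x[0]]
y = np.r_[y, y[0]]

# per=1 produces a periodic (closed) spline; s > 0 smooths over noise
# and tolerates duplicated x/y values, which splrep alone would reject
tck, u = splprep([x, y], per=1, s=0.1)
xs, ys = splev(np.linspace(0, 1, 200), tck)
```

Because the spline is parametric, the points do not need to be sortable by x at all, which is often the underlying problem with closed-curve data.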
Tag: scipy
How to use np.unique on big arrays?
I work with geospatial images in tif format. Thanks to the rasterio lib I can exploit these images as numpy arrays of dimension (nb_bands, x, y). Here I manipulate an image that contains patches of unique values that I would like to count (they were generated with the scipy.ndimage.label function). My idea was to use the unique method of numpy
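A minimal sketch of counting labelled patches with `np.unique(..., return_counts=True)`, on a small stand-in array rather than a real rasterio band:

```python
import numpy as np
from scipy import ndimage

# Small stand-in for one raster band (the real data would come from rasterio)
band = np.array([[0, 0, 1],
                 [1, 1, 0],
                 [0, 1, 1]])

# Label connected patches of nonzero pixels, as described in the question
labels, n_patches = ndimage.label(band)

# Count pixels per patch; label 0 is the background
values, counts = np.unique(labels, return_counts=True)

# For very large arrays, np.bincount is usually faster than np.unique
# because labels are small non-negative integers:
fast_counts = np.bincount(labels.ravel())
```

On big arrays, `np.bincount` avoids the sort that `np.unique` performs, which is often the practical answer to the memory/speed concern in the title.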
Scipy minimize with constraints that have no simple expression
I am trying to find the values that minimize a least squares function. The issue is that a solution may be valid or not in a way that cannot be given as a simple expression of the values. Instead, we can check the validity by calling a function. What I tried to do was to set the sum of squares
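One common workaround when validity can only be checked by calling a function is to fold that check into the objective as a penalty. A sketch, where `residuals` and `is_valid` are hypothetical stand-ins for the question's actual functions:

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    # Hypothetical least-squares objective: distance to the target [1, 2]
    return np.sum((x - np.array([1.0, 2.0])) ** 2)

def is_valid(x):
    # Hypothetical black-box validity check with no closed-form expression
    return x[0] + x[1] <= 4.0

def penalized(x):
    # Return a large penalty for invalid points so the optimizer avoids them
    return residuals(x) if is_valid(x) else 1e6

# Nelder-Mead copes better with the discontinuity the penalty introduces
# than gradient-based methods do
res = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead")
```

The penalty makes the objective discontinuous at the validity boundary, so a derivative-free method is the safer choice here.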
Problem with linear constraints in scipy: all elements of the population are getting rejected
I am using scipy differential evolution. I have to set the following linear constraints: 0 < x1+x2+x3+x4 <= 1 and x2+x3 = 1. I have set the matrix A = [0 1 1 0], B = [1], linear_constraint = LinearConstraint(A, B, B, True). I have also set the lower and upper bounds to 0 and 1. However, during each iteration, the output of the objective function is inf, whereas differential evolution is not calling
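Note that a single row `A = [0 1 1 0]` only encodes the equality x2+x3 = 1; the sum constraint needs its own row. A sketch with both constraints in one `LinearConstraint` and a hypothetical sum-of-squares objective (the question's real objective is not shown):

```python
import numpy as np
from scipy.optimize import differential_evolution, LinearConstraint

# Row 1: x1+x2+x3+x4, row 2: x2+x3
A = np.array([[1, 1, 1, 1],
              [0, 1, 1, 0]])
lower = np.array([0.0, 1.0])   # 0 <= x1+x2+x3+x4  and  x2+x3 >= 1
upper = np.array([1.0, 1.0])   # x1+x2+x3+x4 <= 1  and  x2+x3 <= 1
constraint = LinearConstraint(A, lower, upper)

bounds = [(0, 1)] * 4

def objective(x):
    # Hypothetical objective: minimize the sum of squares
    return np.sum(x ** 2)

result = differential_evolution(objective, bounds,
                                constraints=(constraint,), seed=1)
```

With x2+x3 = 1 forced and the total capped at 1, the optimum here is x = [0, 0.5, 0.5, 0]. The strict inequality 0 < sum cannot be expressed exactly; `lower = 0` is the usual approximation.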
Interpolation of points along the spline using scipy.interpolate.splrep
I’m working on the task of interpolating points along the lanes in images. A sample image with annotated points (the image is not from the actual dataset, and the spacing between points is not actual either) is included here. I’m trying to use splrep from scipy, and below are the steps I’m taking. My current observations are: When I plot the
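One pitfall worth noting: `splrep` fits y = f(x) and needs strictly increasing x, which lane points in image coordinates often violate. The parametric `splprep` sidesteps this. A sketch on hypothetical lane pixel coordinates (not the question's data):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical annotated lane points in pixel coordinates
x = np.array([10.0, 12.0, 15.0, 21.0, 30.0, 42.0])
y = np.array([300.0, 250.0, 200.0, 150.0, 100.0, 50.0])

# splprep parametrizes the curve by a single parameter u instead of x,
# so it also handles lanes that curve back on themselves; s=0 interpolates
# exactly through the annotated points
tck, u = splprep([x, y], s=0)
x_new, y_new = splev(np.linspace(0, 1, 100), tck)
```

With `s=0` the spline passes exactly through every annotated point; raising `s` trades that for smoothness.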
For loops to iterate through columns of a CSV
I’m very new to python and programming in general (this is my first programming language; I started about a month ago). I have a CSV file with data ordered like this (CSV file data at the bottom). There are 31 columns of data. The first column (wavelength) must be read in as the independent variable (x), and for the first
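A minimal sketch of the usual pattern: read the rows, transpose them into columns, treat the first column as x, and loop over the rest. The two-sample CSV below is a stand-in for the question's 31-column file:

```python
import csv
import io

# Stand-in for the real file: first column is wavelength, the rest are samples
data = io.StringIO("wavelength,s1,s2\n400,0.1,0.2\n410,0.3,0.4\n")

reader = csv.reader(data)
header = next(reader)                     # column names
rows = [[float(v) for v in row] for row in reader]
columns = list(zip(*rows))                # transpose rows into columns

x = columns[0]                            # independent variable (wavelength)
for name, y in zip(header[1:], columns[1:]):
    print(name, y)                        # one dependent column per iteration
```

With a real file you would replace the `io.StringIO` with `open("file.csv", newline="")`; `zip(*rows)` is the idiomatic row-to-column transpose.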
Is there a rule of thumb to know if I’m modifying a value or a referenced value?
Consider the following: How am I supposed to know that modifying hull.points (or foo, a reference to hull.points) modifies pts? The documentation only says: The inspector in PyCharm also tells me that both foo and hull.points are ndarrays, and nothing in the code, documentation, or inspector tells me that my variables are, in fact, pointers referencing
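There is no naming convention that reveals this, but numpy can answer it at runtime: `np.shares_memory` reports whether two arrays overlap in memory, which is exactly the "will modifying one change the other?" question. A sketch using `scipy.spatial.ConvexHull` as in the question:

```python
import numpy as np
from scipy.spatial import ConvexHull

pts = np.random.default_rng(0).random((10, 2))
hull = ConvexHull(pts)
foo = hull.points

# True here: hull.points refers to the same buffer as pts, so writing
# through foo would also change pts
print(np.shares_memory(foo, pts))

# A second clue: a view's .base attribute points at the owning array
print(foo is pts or foo.base is pts)
```

As a rule of thumb, numpy operations that can avoid copying (slicing, `reshape`, attribute access like this) usually return views; when in doubt, check with `np.shares_memory` before mutating.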
Extrapolating using Pandas and curve_fit: error “func() takes 3 positional arguments but 4 were given”
I’m using the code from another post to extrapolate values. I changed func() so that it is linear, not cubic; however, I get the error “func() takes 3 positional arguments but 4 were given”. Extrapolate values in Pandas DataFrame is where I got the original code. My question is how would I change it so it works with a
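That `TypeError` typically means the number of parameters `curve_fit` passes (e.g. from a leftover initial guess `p0` sized for the cubic) no longer matches the new function's signature. A sketch of a consistent linear fit on hypothetical data:

```python
import numpy as np
from scipy.optimize import curve_fit

# A linear model takes exactly two parameters after x; if p0 is supplied,
# its length must match, or curve_fit raises the "takes N positional
# arguments" TypeError from the question
def func(x, a, b):
    return a * x + b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])   # exactly y = 2x + 1

popt, pcov = curve_fit(func, x, y)
a, b = popt
```

When switching from the original cubic to a linear model, any `p0=[...]` of length three carried over from the old code must shrink to length two as well.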
Initialize high dimensional sparse matrix
I want to initialize a 300,000 x 300,000 sparse matrix using sklearn, but it requires memory as if it were not sparse: it gives the error: which is the same error as if I initialize using numpy: Even when I go to a very low density, it reproduces the error: Is there a more memory-efficient way to create such a sparse
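sklearn itself builds on `scipy.sparse`, whose constructors take only a shape and allocate memory proportional to the stored entries, never the full n×n block. A sketch at the question's scale:

```python
import numpy as np
from scipy import sparse

n = 300_000

# lil_matrix allocates per-row structures only, not an n*n dense block,
# and is convenient for incremental assignment
m = sparse.lil_matrix((n, n), dtype=np.float64)
m[0, 1] = 3.0

# For arithmetic, convert to CSR; memory stays proportional to the number
# of stored nonzeros (nnz)
csr = m.tocsr()
```

The dense-memory error usually comes from passing a dense array (e.g. `np.zeros((n, n))`) into a sparse constructor; building from a shape, or from `(data, (row, col))` triples, avoids materializing the dense matrix entirely.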
Command line ipython with --pylab + more imports
The --pylab command line argument makes ipython a quick but powerful calculator in the terminal window, which I use quite often. Is there a way to pass other useful imports to ipython via the command line, such as which makes it even more convenient to use? Answer If you have installed sympy, you get a script that starts ipython with sympy imports.
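Beyond sympy's dedicated script, a general approach is an IPython profile whose `exec_lines` run your imports at startup; the profile name below is arbitrary, and the imports are examples:

```shell
# Create a profile once; this writes a config file skeleton
ipython profile create pylab_extra

# In ~/.ipython/profile_pylab_extra/ipython_config.py, add:
#   c.InteractiveShellApp.exec_lines = [
#       "import numpy as np",
#       "import scipy as sp",
#       "import sympy",
#   ]

# Launch with pylab plus the extra startup imports
ipython --pylab --profile=pylab_extra
```

This keeps the one-word command-line convenience of `--pylab` while letting the profile carry any number of additional imports.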