
Is there a way to get the error in fitting parameters from scipy.stats.norm.fit?

I have some data to which I have fitted a normal distribution using the fit method of the scipy.stats.norm object, like so:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.random.normal(size=50000)

fig, ax = plt.subplots()

nbins = 75
mu, sigma = norm.fit(x)
n, bins, patches = ax.hist(x, nbins, density=True, facecolor='grey', alpha=0.5, label='before')
y0 = norm.pdf(bins, mu, sigma)  # line of best fit
ax.plot(bins, y0, 'k--', linewidth=2, label='fit before')
ax.set_title(r'$\mu$={}, $\sigma$={}'.format(mu, sigma))

plt.show()

I would now like to extract the uncertainty/error in the fitted mu and sigma values. How can I go about this?


Answer

You can use scipy.optimize.curve_fit: this method not only returns the estimated optimal values of the parameters, but also the corresponding covariance matrix. From the curve_fit documentation:

popt : array

Optimal values for the parameters so that the sum of the squared residuals of f(xdata, *popt) - ydata is minimized

pcov : 2d array

The estimated covariance of popt. The diagonals provide the variance of the parameter estimate. To compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov)).

How the sigma parameter affects the estimated covariance depends on the absolute_sigma argument (see the curve_fit documentation).

If the Jacobian matrix at the solution does not have full rank, then the 'lm' method returns a matrix filled with np.inf; the 'trf' and 'dogbox' methods, on the other hand, use the Moore-Penrose pseudoinverse to compute the covariance matrix.
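As a minimal sketch of that pattern (the straight-line model and the xdata/ydata arrays here are just illustrative placeholders, not part of the original question), the one-standard-deviation errors come from the diagonal of pcov:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example: fit a straight line y = a*x + b to some noisy data
def line(x, a, b):
    return a * x + b

xdata = np.linspace(0, 10, 100)
ydata = 2.0 * xdata + 1.0 + np.random.normal(scale=0.5, size=xdata.size)

popt, pcov = curve_fit(line, xdata, ydata)   # optimal parameters and covariance matrix
perr = np.sqrt(np.diag(pcov))                # one-standard-deviation errors on a and b

print(popt)  # fitted [a, b]
print(perr)  # uncertainties on [a, b]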

You can calculate the standard deviation errors of the parameters from the square roots of the diagonal elements of the covariance matrix as follows:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.optimize import curve_fit

x = np.random.normal(size=50000)
fig, ax = plt.subplots()
nbins = 75
n, bins, patches = ax.hist(x, nbins, density=True, facecolor='grey', alpha=0.5, label='before')

centers = 0.5 * (bins[1:] + bins[:-1])  # bin centres used as x-values for the fit
pars, cov = curve_fit(lambda x, mu, sig: norm.pdf(x, loc=mu, scale=sig), centers, n, p0=[0, 1])

ax.plot(centers, norm.pdf(centers, *pars), 'k--', linewidth=2, label='fit before')
ax.set_title(r'$\mu={:.4f}\pm{:.4f}$, $\sigma={:.4f}\pm{:.4f}$'.format(
    pars[0], np.sqrt(cov[0, 0]), pars[1], np.sqrt(cov[1, 1])))

plt.show()

This results in the following plot:

(Plot: histogram of the samples with the fitted normal PDF overlaid; the title shows the fitted mu and sigma with their one-standard-deviation uncertainties.)
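If you just want the numbers rather than putting them in the plot title, the same quantities can be pulled out directly (a small sketch reusing pars and cov from the code above):

perr = np.sqrt(np.diag(cov))  # one-standard-deviation errors on [mu, sigma]
print('mu    = {:.4f} +/- {:.4f}'.format(pars[0], perr[0]))
print('sigma = {:.4f} +/- {:.4f}'.format(pars[1], perr[1]))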

User contributions licensed under: CC BY-SA