I wanted to fit a logistic curve to some data. I used the general equation of the logistic curve as my model:

```python
import numpy as np

def generate_data(t, A, K, C, Q, B, v):
    # Model curve; A, K, C, Q, B and v are the parameters to fit
    y = A + (K - A) * ((C + Q * np.exp(-B * t)) ** (1 / v))
    return y
```
Here A, K, C, Q, B and v were the parameters I wanted to find. I used the scipy.optimize.least_squares function to get their values and generate my curve.
This was the function I passed as the fun argument:

```python
def fun(x, t, y):
    # Residuals: model(t; x) - y, with x = [A, K, C, Q, B, v]
    return x[0] + (x[1] - x[0]) * ((x[2] + x[3] * np.exp(-x[4] * t)) ** (1 / x[5])) - y
```
And this is how I called the actual optimisation function (x0 is my initial guess for the parameters):

```python
from scipy.optimize import least_squares
res_lsq = least_squares(fun, x0, args=(x_coord, y_coord))
```
Visually, the curve fit the data excellently.
I then calculated the covariance matrix by this method:

```python
J = res_lsq.jac                  # Jacobian at the solution
cov = np.linalg.inv(J.T.dot(J))
```
And then the variance using this method:

```python
var = np.sqrt(np.diagonal(cov))  # square roots of the diagonal entries
```
The problem is this: these were the values of my parameters:
Parameters= [ 1.94624122 5.66832958 5.21005677 -4.87025481 0.02520876 0.15057123 ]
And these were my variance values.
Variance= [3.38436210e-01 3.94262000e+03 8.30968350e+02 7.76773161e+02 6.44604446e-05 6.49474460e-04]
One value is 3942 for a parameter that is 5.66. What do these values mean? Do they actually show how well the curve fits the data? How do I get such a quantity, like an analogue of a p-value, etc.?
I was facing a similar issue.
I found a similar question.
There they explain that you need to multiply the cov that you mentioned by the mean squared error, that is `sum[(f(x)-y)^2]/(N-n)`, where N is the length of the data and n is the number of parameters you are fitting. It will probably yield a small number, and your variance will probably decrease.
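For reference, here is a minimal sketch of that correction, reusing res_lsq and y_coord from your question; the names residuals, mse and perr below are just illustrative:

```python
import numpy as np

# res_lsq is the object returned by least_squares above;
# res_lsq.fun holds the residuals f(x) - y at the solution.
J = res_lsq.jac
residuals = res_lsq.fun
N = len(y_coord)                        # number of data points
n = len(res_lsq.x)                      # number of fitted parameters

mse = np.sum(residuals**2) / (N - n)    # sum[(f(x)-y)^2] / (N - n)
cov = np.linalg.inv(J.T.dot(J)) * mse   # scaled covariance matrix
perr = np.sqrt(np.diag(cov))            # approximate standard error of each parameter
```

The square roots of the diagonal of the scaled matrix are then approximate one-sigma uncertainties for the parameters, comparable to what scipy.optimize.curve_fit reports via its pcov output when no sigma is supplied.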
All the best,
Murilo