Problem Description
I can't get this boxcar fit working... I get "OptimizeWarning: Covariance of the parameters could not be estimated", and the output coefficients are not improved beyond the starting guess.
import numpy as np
from scipy.optimize import curve_fit

def box(x, *p):
    height, center, width = p
    # height inside the box, 0 outside; edges at center +/- width/2
    return height * (center - width/2 < x) * (x < center + width/2)

x = np.linspace(-5, 5)
y = (-2.5 < x) * (x < 2.5) + np.random.random(len(x)) * .1  # noisy boxcar
coeff, var_matrix = curve_fit(box, x, y, p0=[1, 0, 2])
The output coefficients are [1.04499699, 0., 2.]; note that the second and third have not changed from the starting guess at all.
I suspect that this functional form is not amenable to the Levenberg-Marquardt algorithm used by curve_fit, which is kind of annoying because I like this function. In Mathematica it would be trivial to specify a Monte Carlo optimization instead.
Answer
You are right. Generally, gradient-based optimization is not well suited for functions with sharp edges. The gradient is estimated by perturbing the function parameters just a little and looking at the change in fitting quality. However, moving an edge just a little results in a zero gradient if it does not cross a data point (the short numerical check after this list makes the effect concrete):
- A: it is easy to fit the amplitude, because a small change in height immediately leads to a change in the residuals.
- B: it is hard to fit edges because a small change in position does not affect the residuals (unless the change is big enough to make the edge cross a data point).
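As a quick numerical check, here is a minimal sketch (reusing box(), x and y from the question; the cost() helper is introduced here only for illustration): perturbing the edge position by a finite-difference-sized step leaves the quadratic cost unchanged, so the estimated gradient comes out exactly zero.
import numpy as np

def box(x, *p):
    height, center, width = p
    return height * (center - width/2 < x) * (x < center + width/2)

x = np.linspace(-5, 5)
y = (-2.5 < x) * (x < 2.5)  # noise-free data is enough for this check

def cost(center):
    # sum of squared residuals as a function of the edge position only
    return np.sum((box(x, 1.0, center, 2.0) - y)**2)

eps = 1e-6  # a typical finite-difference step
print((cost(eps) - cost(-eps)) / (2 * eps))  # prints 0.0 -- no slope to follow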
Using a stochastic method should work better. SciPy has the differential_evolution function, which uses a genetic algorithm and is therefore related to Monte Carlo methods. However, it is less trivial to use than curve_fit: you need to specify a cost function and ranges for the parameters:
from scipy.optimize import differential_evolution

res = differential_evolution(lambda p: np.sum((box(x, *p) - y)**2),  # quadratic cost function
                             [[0, 2], [-5, 5], [0.1, 10]])           # parameter bounds
It's still a one-liner :)
import matplotlib.pyplot as plt
from scipy.optimize import differential_evolution

coeff, var_matrix = curve_fit(box, x, y, p0=[1, 0, 2])
res = differential_evolution(lambda p: np.sum((box(x, *p) - y)**2), [[0, 2], [-5, 5], [0.1, 10]])
plt.step(x, box(x, *coeff), where='mid', label='curve_fit')
plt.step(x, box(x, *res.x), where='mid', label='diff-ev')
plt.plot(x, y, '.')
plt.legend()
plt.show()
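As a usage note, differential_evolution returns a standard SciPy OptimizeResult, so the fit can be inspected directly:
print(res.x)        # fitted [height, center, width]
print(res.fun)      # final value of the quadratic cost
print(res.success)  # whether the optimizer reports convergence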