Question
I'd like to minimize a set of equations where the variables are known with their uncertainties. In essence I'd like to test the hypothesis that the given measured variables conform to the formula constraints given by the equations. This seems like something I should be able to do with scipy-optimize. For example I have three equations:
8 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4
4 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4
1 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4
And four measured unknowns with their 1-sigma uncertainty:
x1 = 0.246 ± 0.007
x2 = 0.749 ± 0.010
x3 = 1.738 ± 0.009
x4 = 2.248 ± 0.007
Looking for any pointers in the right direction.
Answer
This is my approach. Assuming x1-x4 are approximately normally distributed around their means (with the given 1-sigma uncertainties), the problem turns into minimizing the sum of squared normalized errors subject to three linear equality constraints. Therefore, we can attack it with scipy.optimize.fmin_slsqp():
In [19]:
import numpy as np
import scipy.optimize as so

def eq_f1(x):
    return (x * np.array([0.5, 1.0, 1.5, 2.0])).sum() - 8

def eq_f2(x):
    return (x * np.array([0.0, 0.0, 1.0, 1.0])).sum() - 4

def eq_f3(x):
    return (x * np.array([1.0, 1.0, 0.0, 0.0])).sum() - 1

def error_f(x):
    # Sum of squared residuals, each normalized by its 1-sigma uncertainty
    error = (x - np.array([0.246, 0.749, 1.738, 2.248])) / np.array([0.007, 0.010, 0.009, 0.007])
    return (error * error).sum()
In [20]:
so.fmin_slsqp(error_f, np.array([0.246, 0.749, 1.738, 2.248]), eqcons=[eq_f1, eq_f2, eq_f3])
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.17576389592
Iterations: 4
Function evaluations: 32
Gradient evaluations: 4
Out[20]:
array([ 0.25056582, 0.74943418, 1.74943418, 2.25056582])
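As a side note, `fmin_slsqp` is the older functional interface; the same minimization can be written with the more general `scipy.optimize.minimize` using `method="SLSQP"`. The sketch below is an equivalent formulation (the matrix `A`, vector `b`, and the chi-square goodness-of-fit step are my additions, not part of the original answer); since the objective is the sum of squared normalized residuals, its minimum can be compared against a chi-square distribution with 3 degrees of freedom (one per constraint) to address the hypothesis-testing part of the question:

```python
import numpy as np
from scipy import optimize, stats

# Measured values and their 1-sigma uncertainties (from the question)
mu = np.array([0.246, 0.749, 1.738, 2.248])
sigma = np.array([0.007, 0.010, 0.009, 0.007])

# Objective: chi-square, i.e. the sum of squared normalized residuals
def chi_square(x):
    r = (x - mu) / sigma
    return r @ r

# The three linear constraints, expressed as A @ x - b = 0
A = np.array([[0.5, 1.0, 1.5, 2.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])
b = np.array([8.0, 4.0, 1.0])
constraints = {"type": "eq", "fun": lambda x: A @ x - b}

res = optimize.minimize(chi_square, mu, method="SLSQP", constraints=constraints)
print(res.x)    # constrained best-fit values, matching the output above
print(res.fun)  # minimum chi-square, ~2.176 as above

# Goodness-of-fit: upper-tail probability of a chi-square distribution
# with 3 degrees of freedom (one per constraint)
p_value = stats.chi2.sf(res.fun, df=3)
print(p_value)
```

A small p-value here would indicate that the measurements are inconsistent with the constraints; a moderate one (as in this case) means the data are compatible with the hypothesized equations.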