I'm getting some problematic experimental results from my function implementations, and I'd like someone else to verify whether the functions I'm using are logically sound.
Context: for a certain programming/math problem, I need to compute the average of a function over a continuous interval to a given precision of 10 decimal places. The function is fairly complicated and involves two dimensions, so I would rather perform a summation approximation than work out the continuous average myself (which would require integrating the function over both dimensions).
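In other words, the one-dimensional building block I am trying to approximate is the mean value of f over [a, b]:

$$\bar{f} = \frac{1}{b - a} \int_a^b f(x)\,dx \;\approx\; \frac{1}{n} \sum_{i=1}^{n} f(x_i), \qquad n \approx \frac{b - a}{\delta},$$

where the x_i are sample points spread across [a, b] (exactly which points are used depends on the rule).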
To help with this, I have been working on a set of approximation functions in Rust. Depending on the implementation, they use a Riemann sum, the trapezoidal rule, or Simpson's rule to compute the discrete average of a function f over the interval a..b with a fixed increment delta:
pub fn riemann_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64
{
    let n = ((b - a) / delta) as usize;
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .sum::<f64>() / (n as f64)
}

pub fn trapezoidal_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64
{
    let n = ((b - a) / delta) as usize;
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .collect::<Vec<f64>>()
        .windows(2)
        .map(|xs| xs[0] + xs[1])
        .sum::<f64>() / (2.0 * (n as f64))
}

pub fn simpson_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64
{
    let n = ((b - a) / delta) as usize;
    (0..n)
        .map(|i| (i as f64) * delta)
        .map(f)
        .collect::<Vec<f64>>()
        .windows(3)
        .map(|xs| xs[0] + 4.0 * xs[1] + xs[2])
        .sum::<f64>() / (6.0 * (n as f64))
}
(Side note: the simpson_avg function I wrote above is actually the average of two offset applications of Simpson's rule, because that keeps the program simpler. For now I don't believe that is part of the problem.)

I tested the error convergence of each method as I let delta approach zero, using a few test functions, x.atan(), x.powi(4), and x, integrated over the range 0.0 to 1.0:

delta        riemann_err  trap_err     simpson_err
(smaller is better)
x.atan():
0.1000000000 0.0396869230 0.0763276780 0.1136616747
0.0100000000 0.0039311575 0.0078330229 0.0117430951
0.0010000000 0.0003927407 0.0007851897 0.0011777219
0.0001000000 0.0000392703 0.0000785377 0.0001178060
0.0000100000 0.0000073928 0.0000113197 0.0000152467
0.0000010000 0.0000003927 0.0000007854 0.0000011781
0.0000001000 0.0000000393 0.0000000785 0.0000001178
x.powi(4):
0.1000000000 0.0466700000 0.0794750000 0.1081733333
0.0100000000 0.0049666670 0.0097696470 0.0145089140
0.0010000000 0.0004996667 0.0009976697 0.0014950090
0.0001000000 0.0000499967 0.0000999767 0.0001499500
0.0000100000 0.0000129997 0.0000179993 0.0000229989
0.0000010000 0.0000005000 0.0000010000 0.0000015000
0.0000001000 0.0000000500 0.0000001000 0.0000001500
x:
0.1000000000 0.0500000000 0.0950000000 0.1400000000
0.0100000000 0.0050000000 0.0099500000 0.0149000000
0.0010000000 0.0005000000 0.0009995000 0.0014990000
0.0001000000 0.0000500000 0.0000999950 0.0001499900
0.0000100000 0.0000100000 0.0000149999 0.0000199999
0.0000010000 0.0000005000 0.0000010000 0.0000015000
0.0000001000 0.0000000500 0.0000001000 0.0000001500
I expected Simpson's rule to converge fastest for all of these functions, but as you can see it converges worst, and the Riemann sum performs best.
That makes no sense to me, especially for the simple polynomial examples, where Simpson's rule should clearly give a better (or at least equivalent) approximation. I'm guessing this means there is a very subtle problem in the logic/formulas of my functions, or that I'm running into floating-point precision errors. I'd appreciate some help diagnosing this.
Best answer
As Shepmaster pointed out, you need to look closely at how far your iterators go.

riemann_avg needs to iterate over all x in a ..= b and then use the average of each pair of neighbouring points (dividing a sum of n+1 elements by n would also be wrong!), so basically sum [ f(a+0.5*delta), f(a+1.5*delta), ..., f(b-delta/2) ].

trapezoidal_avg just needs to include the end point; otherwise it is fine.

simpson_avg is wrong on several levels. According to wikipedia: Composite Simpson's rule, you cannot use every 3-tuple window, only every second one; so you need an odd number of points, at least 3.
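For reference, the composite rule from that Wikipedia page, with an even number of subintervals n and step h = (b - a)/n, reads

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + \dots + 4 f(x_{n-1}) + f(x_n)\Big],$$

and dividing by b - a = n·h to turn the integral into an average gives the 3.0 * (n as f64) denominator used below.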
I also used the FloatIterator from my https://stackoverflow.com/a/47869373/1478356.
Playground
extern crate itertools;

use itertools::Itertools;

/// produces: [ linear_interpol(start, end, i/steps) | i <- 0..steps ]
///
/// linear_interpol(a, b, p) = (1 - p) * a + p * b
pub struct FloatIterator {
    current: u64,
    current_back: u64,
    steps: u64,
    start: f64,
    end: f64,
}

impl FloatIterator {
    /// results in `steps` items
    pub fn new(start: f64, end: f64, steps: u64) -> Self {
        FloatIterator {
            current: 0,
            current_back: steps,
            steps: steps,
            start: start,
            end: end,
        }
    }

    /// results in `length` items. To use the same delta as `new` increment
    /// `length` by 1.
    pub fn new_with_end(start: f64, end: f64, length: u64) -> Self {
        FloatIterator {
            current: 0,
            current_back: length,
            steps: length - 1,
            start: start,
            end: end,
        }
    }

    /// calculates number of steps from (end - start) / step
    pub fn new_with_step(start: f64, end: f64, step: f64) -> Self {
        let steps = ((end - start) / step).abs().round() as u64;
        Self::new(start, end, steps)
    }

    pub fn length(&self) -> u64 {
        self.current_back - self.current
    }

    fn at(&self, pos: u64) -> f64 {
        let f_pos = pos as f64 / self.steps as f64;
        (1. - f_pos) * self.start + f_pos * self.end
    }

    /// panics (in debug) when len doesn't fit in usize
    fn usize_len(&self) -> usize {
        let l = self.length();
        debug_assert!(l <= ::std::usize::MAX as u64);
        l as usize
    }
}

impl Iterator for FloatIterator {
    type Item = f64;

    fn next(&mut self) -> Option<Self::Item> {
        if self.current >= self.current_back {
            return None;
        }
        let result = self.at(self.current);
        self.current += 1;
        Some(result)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        let l = self.usize_len();
        (l, Some(l))
    }

    fn count(self) -> usize {
        self.usize_len()
    }
}

impl DoubleEndedIterator for FloatIterator {
    fn next_back(&mut self) -> Option<Self::Item> {
        if self.current >= self.current_back {
            return None;
        }
        self.current_back -= 1;
        let result = self.at(self.current_back);
        Some(result)
    }
}

impl ExactSizeIterator for FloatIterator {
    fn len(&self) -> usize {
        self.usize_len()
    }
}
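// Quick sanity check of the endpoint handling above (values follow from `at`):
// FloatIterator::new(0.0, 1.0, 4) yields 0.0, 0.25, 0.5, 0.75 (end point excluded),
// while FloatIterator::new_with_end(0.0, 1.0, 5) yields 0.0, 0.25, 0.5, 0.75, 1.0
// (end point included) - the averaging functions below rely on the latter.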
pub fn riemann_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    let n = ((b - a) / delta) as usize;
    let n = n.max(1);
    // start with:
    // [a, a+delta, ..., b-delta, b]
    // then for all neighbors (x, y) sum up f((x+y)/2)
    FloatIterator::new_with_end(a, b, n as u64 + 1)
        .tuple_windows()
        .map(|(a, b)| 0.5 * (a + b))
        .map(f)
        .sum::<f64>() / (n as f64)
}
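// Note: averaging neighbouring points before applying f makes this the midpoint rule,
// which is exact for linear functions - hence the all-zero f(x) = x errors below.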
pub fn trapezoidal_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    let n = ((b - a) / delta) as usize;
    let n = n.max(1);
    // start with:
    // [a, a+delta, ..., b-delta, b]
    // then for all neighbors (x, y) sum up f(x) + f(y)
    FloatIterator::new_with_end(a, b, n as u64 + 1)
        .map(f)
        .tuple_windows()
        .map(|(a, b)| a + b)
        .sum::<f64>() / (2.0 * (n as f64))
}
pub fn simpson_avg<F>(a: f64, b: f64, delta: f64, f: F) -> f64
where
    F: Fn(f64) -> f64,
{
    let n = ((b - a) / delta) as usize;
    let n = n.max(2); // need at least 3 points in the iterator
    let n = n + (n % 2); // need odd number of points in iterator
    FloatIterator::new_with_end(a, b, n as u64 + 1)
        .map(f)
        .tuple_windows()
        .step(2)
        .map(|(a, m, b)| a + 4.0 * m + b)
        .sum::<f64>() / (3.0 * (n as f64))
}
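// Rough check against the output below: for f(x) = x^4 on [0, 1] with delta = 1.0,
// n becomes 2, so the average is (f(0) + 4*f(0.5) + f(1)) / (3 * 2)
// = (0 + 0.25 + 1) / 6 ≈ 0.20833, an error of about -0.00833 versus the exact 0.2.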
fn compare<F, G>(a: f64, b: f64, f: F, g: G)
where
    F: Fn(f64) -> f64,
    G: Fn(f64) -> f64,
{
    // g is the antiderivative of f, so g(b) - g(a) is the integral of f over [a, b];
    // because b - a = 1 in the tests below, that integral is also the exact average.
    let correct = g(b) - g(a);
    println!("Expected result: {:0.10}", correct);
    println!(
        "{:13} {:13} {:13} {:13}",
        "delta", "riemann_err", "trapez_err", "simpson_err"
    );
    for d in 0..8 {
        let delta = 10.0f64.powi(-d);
        let r = riemann_avg(a, b, delta, &f);
        let t = trapezoidal_avg(a, b, delta, &f);
        let s = simpson_avg(a, b, delta, &f);
        println!(
            "{:+0.10} {:+0.10} {:+0.10} {:+0.10}",
            delta,
            correct - r,
            correct - t,
            correct - s,
        );
    }
}
fn main() {
    let start = 0.;
    let end = 1.;

    println!("f(x) = atan(x)");
    compare(
        start,
        end,
        |x| x.atan(),
        |x| x * x.atan() - 0.5 * (1. + x * x).ln(),
    );
    println!("");

    println!("f(x) = x^4");
    compare(start, end, |x| x.powi(4), |x| 0.2 * x.powi(5));
    println!("");

    println!("f(x) = x");
    compare(start, end, |x| x, |x| 0.5 * x * x);
}
f(x) = atan(x)
Expected result: 0.4388245731
delta riemann_err trapez_err simpson_err
+1.0000000000 -0.0248230359 +0.0461254914 -0.0011735268
+0.1000000000 -0.0002086380 +0.0004170148 -0.0000014072
+0.0100000000 -0.0000020834 +0.0000041667 -0.0000000001
+0.0010000000 -0.0000000208 +0.0000000417 -0.0000000000
+0.0001000000 -0.0000000002 +0.0000000004 +0.0000000000
+0.0000100000 -0.0000000000 +0.0000000000 -0.0000000000
+0.0000010000 -0.0000000000 +0.0000000000 -0.0000000000
+0.0000001000 +0.0000000000 +0.0000000000 -0.0000000000
f(x) = x^4
Expected result: 0.2000000000
delta riemann_err trapez_err simpson_err
+1.0000000000 +0.1375000000 -0.3000000000 -0.0083333333
+0.1000000000 +0.0016637500 -0.0033300000 -0.0000133333
+0.0100000000 +0.0000166664 -0.0000333330 -0.0000000013
+0.0010000000 +0.0000001667 -0.0000003333 -0.0000000000
+0.0001000000 +0.0000000017 -0.0000000033 +0.0000000000
+0.0000100000 +0.0000000000 -0.0000000000 -0.0000000000
+0.0000010000 +0.0000000000 -0.0000000000 -0.0000000000
+0.0000001000 +0.0000000000 +0.0000000000 +0.0000000000
f(x) = x
Expected result: 0.5000000000
delta riemann_err trapez_err simpson_err
+1.0000000000 +0.0000000000 +0.0000000000 +0.0000000000
+0.1000000000 -0.0000000000 -0.0000000000 -0.0000000000
+0.0100000000 +0.0000000000 +0.0000000000 +0.0000000000
+0.0010000000 +0.0000000000 +0.0000000000 +0.0000000000
+0.0001000000 -0.0000000000 -0.0000000000 +0.0000000000
+0.0000100000 -0.0000000000 -0.0000000000 +0.0000000000
+0.0000010000 -0.0000000000 -0.0000000000 +0.0000000000
+0.0000001000 +0.0000000000 +0.0000000000 -0.0000000000
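For what it's worth, this matches the textbook error orders of the three rules (midpoint and trapezoid second order, Simpson fourth order), which is why the Simpson column now bottoms out at the printed precision almost immediately:

$$E_{\text{midpoint}} = O(\delta^2), \qquad E_{\text{trapezoid}} = O(\delta^2), \qquad E_{\text{Simpson}} = O(\delta^4).$$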
Original question on Stack Overflow (math: Riemann sum converging faster than higher order polynomial approximation): https://stackoverflow.com/questions/48798401/