Question
I understand that floating-point calculations have accuracy issues, and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it to produce the same result? What factors might affect this?
- The time between calculations?
- The current state of the CPU?
- Different hardware?
- Language / platform / operating system?
- Solar flares?
I have a simple physics simulation and would like to record sessions so that they can be replayed. If the calculations can be relied on, then I should only need to record the initial state plus any user input, and I should always be able to reproduce the final state exactly. If the calculations are not reliable, errors at the start may have huge implications by the end of the simulation.
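As an illustration of why exact replay demands the exact same sequence of operations (this sketch is mine, not part of the original question or answer): IEEE 754 addition is not associative, so merely regrouping the same three constants changes the result in the last bit.

```python
# Floating-point addition is not associative: summing the same three
# values in a different grouping gives a result that differs in the
# last bit, which is exactly the kind of drift that would break a
# record-and-replay scheme if the operation order were not identical.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
```

Accumulated over millions of simulation steps, a one-bit difference like this is what makes a replayed session diverge from the recorded one.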
I am currently working in Silverlight, though I would be interested to know whether this question can be answered in general.
Update: The initial answers indicate yes, but apparently this isn't entirely clear-cut, as discussed in the comments on the selected answer. It looks like I will have to run some tests and see what happens.
Answer
From what I understand, you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs is unlikely to result in buggy behavior.
Specific problems I am aware of:
Some operating systems allow you to set the mode of the floating-point processor in ways that break compatibility.
Floating-point intermediate results often use 80-bit precision in registers, but only 64 bits in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms give you a way to force all results to be truncated to the in-memory precision.
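Python can't demonstrate x87 register spilling directly, but the effect can be sketched with an analogous precision drop (an assumption of this example, not the 80-bit mechanism itself): rounding an intermediate value to single precision, standing in for a 64-bit spill of an 80-bit register, changes the final answer. The `to_float32` helper is hypothetical, introduced only for this demonstration.

```python
import struct

def to_float32(x):
    # Round a Python float (an IEEE 754 double) to single precision and
    # back, mimicking the precision loss when a higher-precision
    # intermediate is spilled to a narrower memory slot.
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 0.1
kept = x * 3.0                 # full precision carried through the multiply
spilled = to_float32(x) * 3.0  # intermediate rounded before the multiply
print(kept == spilled)         # False: the rounding step leaks into the result
```

The same principle is why the 80-bit/64-bit mismatch matters: whether and where the compiler inserts a rounding step decides the final bits, so two builds of identical source can disagree.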
Standard library functions may change between versions. I gather that there are some not uncommonly encountered examples of this in gcc 3 vs gcc 4.
The IEEE 754 standard itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
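On the NaN point: IEEE 754 fixes the exponent field of a NaN (all ones) but leaves the payload bits of the significand largely unspecified, so two values can both test as NaN while having different binary representations. A minimal Python sketch (mine, not from the original answer):

```python
import math
import struct

def bits(v):
    # View a double's raw 64-bit pattern as an integer.
    return struct.unpack('<Q', struct.pack('<d', v))[0]

# Two bit patterns that are both quiet NaNs but differ in the payload.
quiet = struct.unpack('<d', struct.pack('<Q', 0x7FF8000000000000))[0]
payload = struct.unpack('<d', struct.pack('<Q', 0x7FF8000000000001))[0]

print(math.isnan(quiet), math.isnan(payload))  # True True
print(bits(quiet) == bits(payload))            # False: same "value", different bits
```

This matters for replay schemes that compare or hash raw state bytes: two runs can be numerically identical yet differ at the byte level if NaNs with different payloads appear.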