Problem description
Note: I've used the Matlab tag just in case they maintain the same precision. (From what I can tell both programs are very similar.)
As a follow-up to a previous question of mine (here), I'm trying to determine the level of precision I need to set (in a C++ program which I'm currently converting from Scilab code) in order to mimic the accuracy of the Scilab program. Essentially, so that both programs produce the same (or very similar) results.
When computing a floating point calculation in Scilab, what is the level of precision maintained?
I've read (here and a few other places) that when working with floating point arithmetic in C++, a double can only accurately maintain somewhere around 16 decimal digits, for example:
     4   8   12  16
     v   v   v   v
0.947368421052631578 long double
0.947368421052631526 double
How similar is this accuracy when compared to Scilab?
Re-posting comment as an answer:
IEEE 754 double-precision floating point numbers are the standard representation in most common languages, like MATLAB, C++ and SciLab:
so I don't expect you would need to do anything special to represent the precision, other than using C++ doubles (unless your SciLab code is using high-precision floats).
Note that the representations of two different IEEE 754 compliant implementations can differ after 16 significant digits:
MATLAB:
>> fprintf('%1.30f\n',1/2342317.0)
0.000000426927695952341190000000
Python:
>> "%1.30f" % (1/2342317,)
'0.000000426927695952341193713560'