Look at the following code:
class Test
{
    public static void main(String abc[])
    {
        for (int N = 1; N <= 1_000_000_000; N = N * 10)
        {
            long t1 = System.nanoTime();
            start(N);
            long t2 = System.nanoTime() - t1;
            System.out.println("Time taken for " + N + " : " + t2);
        }
    }

    public static void start(int N)
    {
        int j = 1;
        for (int i = 0; i <= N; i++)
            j = j * i;
    }
}
The above program produces this output:
Time taken for 1 : 7267
Time taken for 10 : 3312
Time taken for 100 : 7908
Time taken for 1000 : 51181
Time taken for 10000 : 432124
Time taken for 100000 : 4313696
Time taken for 1000000 : 9347132
Time taken for 10000000 : 858
Time taken for 100000000 : 658
Time taken for 1000000000 : 750
Questions:
1.) Why does N = 1 take an unusually long time, longer than N = 10 (and sometimes even longer than N = 100)?
2.) Why do N = 10M and above take even less time?
The pattern noted above is consistent; it persists even after many iterations.
Is there any connection with memoization here?
Edit:
Thank you for the answers. I thought of replacing the method call with the actual loop. But now there is no JIT optimization. Why not? Does putting the statements inside a method help the JIT optimize them?
The modified code is as follows:
class test
{
    public static void main(String abc[])
    {
        for (int k = 1; k <= 3; k++)
        {
            for (int N = 1; N <= 1_000_000_000; N = N * 10)
            {
                long t1 = System.nanoTime();
                int j = 1;
                for (int i = 0; i <= N; i++)
                    j = j * i;
                long t2 = System.nanoTime() - t1;
                System.out.println("Time taken for " + N + " : " + t2);
            }
        }
    }
}
Edit 2:
Output of the modified code above:
Time taken for 1 : 2160
Time taken for 10 : 1142
Time taken for 100 : 2651
Time taken for 1000 : 19453
Time taken for 10000 : 407754
Time taken for 100000 : 4648124
Time taken for 1000000 : 12859417
Time taken for 10000000 : 13706643
Time taken for 100000000 : 136928177
Time taken for 1000000000 : 1368847843
Time taken for 1 : 264
Time taken for 10 : 233
Time taken for 100 : 332
Time taken for 1000 : 1562
Time taken for 10000 : 17341
Time taken for 100000 : 136869
Time taken for 1000000 : 1366934
Time taken for 10000000 : 13689017
Time taken for 100000000 : 136887869
Time taken for 1000000000 : 1368178175
Time taken for 1 : 231
Time taken for 10 : 242
Time taken for 100 : 328
Time taken for 1000 : 1551
Time taken for 10000 : 13854
Time taken for 100000 : 136850
Time taken for 1000000 : 1366919
Time taken for 10000000 : 13692465
Time taken for 100000000 : 136833634
Time taken for 1000000000 : 1368862705
Best answer
Because that is the first time the VM sees the code: it may decide to just interpret it, or it may spend a little time JIT-compiling it to native code, probably without much optimization. This is one of the "gotchas" of benchmarking Java.
By the later iterations, the JIT has optimized the code much harder, reducing it to almost nothing.
In particular, if you run this code several times (just the loop version), you will see the effect of the JIT compiler's optimizations:
Time taken for 1 : 3732
Time taken for 10 : 1399
Time taken for 100 : 3266
Time taken for 1000 : 26591
Time taken for 10000 : 278508
Time taken for 100000 : 2496773
Time taken for 1000000 : 4745361
Time taken for 10000000 : 933
Time taken for 100000000 : 466
Time taken for 1000000000 : 933
Time taken for 1 : 933
Time taken for 10 : 467
Time taken for 100 : 466
Time taken for 1000 : 466
Time taken for 10000 : 933
Time taken for 100000 : 466
Time taken for 1000000 : 933
Time taken for 10000000 : 467
Time taken for 100000000 : 467
Time taken for 1000000000 : 466
Time taken for 1 : 467
Time taken for 10 : 467
Time taken for 100 : 466
Time taken for 1000 : 466
Time taken for 10000 : 466
Time taken for 100000 : 467
Time taken for 1000000 : 466
Time taken for 10000000 : 466
Time taken for 100000000 : 466
Time taken for 1000000000 : 466
As you can see, after the first sweep the loop takes the same time regardless of the input (modulo noise: it is basically always ~460ns or ~933ns, and which of the two is unpredictable), which means the JIT has effectively optimized the loop away.
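Why is the JIT allowed to delete the loop entirely? Its result is never used, and the result is in fact provably always 0, because the first iteration multiplies j by i = 0. A minimal sketch of that second point (the class and method names are hypothetical):

```java
// Demonstrates that the question's loop always computes 0:
// the first pass multiplies j by i = 0, and 0 times anything stays 0.
class ZeroProduct {
    public static int product(int N) {
        int j = 1;
        for (int i = 0; i <= N; i++) {
            j = j * i;  // j becomes 0 on the first iteration and never recovers
        }
        return j;
    }

    public static void main(String[] args) {
        for (int N = 1; N <= 1000; N = N * 10) {
            System.out.println("product(" + N + ") = " + product(N));  // always 0
        }
    }
}
```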
If you actually returned j, and changed the initial value of i to 1 instead of 0, you would see the results you expect. The reason for changing the initial value of i to 1 is that otherwise the JIT can spot that you always return 0.

Related question on Stack Overflow (java - Strange behaviour of Java multiplication): https://stackoverflow.com/questions/17761515/
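A minimal sketch of that fixed benchmark (a hypothetical rewrite, not the poster's code; the upper bound is lowered to 10_000_000 to keep the run short): i starts at 1 and j is returned, so the JIT cannot prove the result constant, and the return values are accumulated into a sink that is printed so the calls stay live:

```java
// Hypothetical fixed benchmark: the loop's result is returned and used,
// so the JIT cannot dead-code-eliminate the loop.
class LoopBench {
    public static int start(int N) {
        int j = 1;
        for (int i = 1; i <= N; i++) {  // start at 1: i = 0 would pin j to 0
            j = j * i;                  // overflows int for larger N, which is fine here
        }
        return j;
    }

    public static void main(String[] args) {
        long sink = 0;  // accumulate results so the calls remain observable
        for (int k = 1; k <= 3; k++) {  // repeat the sweep so the JIT warms up
            for (int N = 1; N <= 10_000_000; N = N * 10) {
                long t1 = System.nanoTime();
                sink += start(N);
                long t2 = System.nanoTime() - t1;
                System.out.println("Time taken for " + N + " : " + t2);
            }
        }
        System.out.println("sink = " + sink);  // print the sink so it is truly used
    }
}
```

With this shape, after the warm-up sweep the times should scale roughly linearly with N instead of collapsing to a constant, matching the pattern in Edit 2 above.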