Question
Can someone explain this one to me:
If you're lazy, I tested A) vs B):
A)
var innerHTML = "";
items.forEach(function(item) {
  innerHTML += item;
});
B)
var innerHTML = items.join("");
Where items for both tests is the same 500-element array of strings, with each string being random and between 100 and 400 characters in length.
A) ends up being 10x faster. How can this be? I always thought concatenating using join("") was an optimization trick. Is there something flawed with my tests?
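A minimal sketch of a benchmark along the lines described above (the randomString helper and the Date.now() timing harness are my assumptions, not the asker's original test code):

```javascript
// Hypothetical re-creation of the benchmark described in the question.
// randomString and the timing harness are illustrative assumptions.
function randomString(minLen, maxLen) {
  var len = minLen + Math.floor(Math.random() * (maxLen - minLen + 1));
  var s = "";
  for (var i = 0; i < len; i++) {
    s += String.fromCharCode(97 + Math.floor(Math.random() * 26)); // a-z
  }
  return s;
}

var items = [];
for (var i = 0; i < 500; i++) {
  items.push(randomString(100, 400));
}

// A) incremental concatenation
var t0 = Date.now();
var a = "";
items.forEach(function (item) { a += item; });
var tA = Date.now() - t0;

// B) join
t0 = Date.now();
var b = items.join("");
var tB = Date.now() - t0;

console.log("A (+=):   " + tA + " ms");
console.log("B (join): " + tB + " ms");
```

Both approaches produce the same string, so only the timings differ between runs and engines.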
Answer
Using join("") was an optimization trick for composing large strings on IE6, to avoid O(n**2) buffer copies. It was never expected to be a huge performance win for composing small strings, since the O(n**2) cost only really dominates the overhead of an array for largish n.
Modern interpreters get around this by using "dependent strings". See this Mozilla bug for an explanation of dependent strings and some of their advantages and drawbacks.
Basically, modern interpreters know about a number of different kinds of strings:

1. an array of characters
2. a slice (substring) of another string
3. a concatenation of two other strings
This makes concatenation and substring O(1), at the cost of sometimes keeping too much of a substringed buffer alive, resulting in inefficiency or complexity in the garbage collector.
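The idea can be sketched with plain objects (names and structure are illustrative; real engines such as SpiderMonkey differ in detail): concatenation and substring each build a small descriptor node in O(1), and characters are only copied when the string is actually materialized.

```javascript
// Toy sketch of rope/"dependent string" nodes. concat() and slice()
// run in O(1): they record structure instead of copying characters.
function flat(s) {
  return { kind: "flat", s: s, length: s.length };
}
function concat(a, b) {
  return { kind: "concat", a: a, b: b, length: a.length + b.length };
}
function slice(base, start, len) {
  return { kind: "slice", base: base, start: start, length: len };
}

// Characters are copied only when the string is actually needed.
function materialize(node) {
  if (node.kind === "flat") return node.s;
  if (node.kind === "concat") return materialize(node.a) + materialize(node.b);
  return materialize(node.base).substr(node.start, node.length);
}

var greeting = concat(flat("hello, "), flat("world!")); // O(1), no copy yet
var sub = slice(greeting, 7, 5);                        // O(1), no copy yet

console.log(materialize(greeting)); // "hello, world!"
console.log(materialize(sub));      // "world"
```

Note the drawback mentioned above: `sub` keeps the whole `greeting` structure reachable even though it only needs 5 characters, which is exactly the kind of retained-buffer cost the garbage collector has to deal with.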
Some modern interpreters have played around with the idea of further decomposing (1) into byte[]s for ASCII-only strings, and arrays of uint16s when a string contains a UTF-16 code unit that can't fit into one byte. But I don't know whether that idea has actually made it into any interpreter.