Question
In learning how to implement a robust hash table, I am wondering what algorithm v8 uses to create the hash from a key, and what it uses as the key (for string keys). Most examples of hash table key generation are hello-world examples and are subject to collision attacks and other problems. I am looking for an example of how a production system, such as v8, implements a hash table.
Looking through the v8 source has been helpful, but it is a bit over my head.
Answer
V8's string hashing implementation is in src/string-hasher-inl.h. Its core is the following function, which "adds" a new character to the "running hash" and is invoked in a loop for every character in the string:
uint32_t StringHasher::AddCharacterCore(uint32_t running_hash, uint16_t c) {
  running_hash += c;                     // mix in the next character
  running_hash += (running_hash << 10);  // spread its bits upward
  running_hash ^= (running_hash >> 6);   // fold high bits back down
  return running_hash;
}
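For context, here is a minimal, self-contained sketch of how a per-character step like this is typically combined into a full string hash. The loop plus the final avalanche step follow the classic Jenkins "one-at-a-time" scheme that this function resembles; the helper names (FinalizeHash, HashString) and the finalization constants are illustrative assumptions for the example, not V8's exact API.

#include <cstdint>
#include <cstdio>
#include <string>

// Per-character mixing step, as quoted above.
static uint32_t AddCharacterCore(uint32_t running_hash, uint16_t c) {
  running_hash += c;
  running_hash += (running_hash << 10);
  running_hash ^= (running_hash >> 6);
  return running_hash;
}

// Final avalanche step of the Jenkins one-at-a-time hash family
// (illustrative; a production engine applies a similar finalization
// after the per-character loop).
static uint32_t FinalizeHash(uint32_t running_hash) {
  running_hash += (running_hash << 3);
  running_hash ^= (running_hash >> 11);
  running_hash += (running_hash << 15);
  return running_hash;
}

// Hash a whole string by feeding it one character at a time.
static uint32_t HashString(const std::string& s, uint32_t seed = 0) {
  uint32_t hash = seed;  // seed is 0 here; see the note on salting below
  for (unsigned char ch : s) {
    hash = AddCharacterCore(hash, static_cast<uint16_t>(ch));
  }
  return FinalizeHash(hash);
}

int main() {
  std::printf("%08x\n", HashString("hello"));
  std::printf("%08x\n", HashString("world"));
}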
By definition, every hash table can have collisions. Whether collision attacks are a concern depends on the application (for example, a server processing client input might want to guard against collision-based DoS attacks, whereas a web browser typically doesn't need to care: if a client-side script wanted to create useless CPU load, it could simply run for (;;) {} -- but why would any script want to do that?).
A possible defense against collision attacks is to initialize ("salt") every string's "running hash" with a random value (e.g. one chosen at application startup) instead of 0. That way attackers cannot predict which strings might have colliding hashes.
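As a rough illustration of that idea (not V8's actual code; the seed handling here is an assumption made for the example), the same loop from the sketch above can simply start from a process-wide random seed instead of 0:

#include <cstdint>
#include <random>
#include <string>

// Process-wide random seed, chosen once at startup (illustrative).
static const uint32_t kHashSeed = std::random_device{}();

// Same per-character step as above, but started from a random seed so
// that attackers cannot precompute colliding keys offline.
static uint32_t SaltedHashString(const std::string& s) {
  uint32_t hash = kHashSeed;
  for (unsigned char ch : s) {
    hash += ch;
    hash += (hash << 10);
    hash ^= (hash >> 6);
  }
  // Finalization omitted for brevity; see the earlier sketch.
  return hash;
}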