Problem description
Today I ran my filesystem indexing script to refresh the RAID file index, and after 4 hours it crashed with the following error:
[md5:] 241613/241627 97.5%
[md5:] 241614/241627 97.5%
[md5:] 241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)
<--- Last few GCs --->
11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]: ,
>)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/usr/bin/node]
2: 0xe2c5fc [/usr/bin/node]
3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
7: 0x3629ef50961b
The server is equipped with 16 GB RAM and 24 GB SSD swap. I highly doubt my script exceeded 36 GB of memory. At least it shouldn't have.
The script creates an index of files, stored as an Array of Objects with file metadata (modification dates, permissions, etc.; no big data).
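Roughly, each entry looks like this (a simplified sketch; the real field names in my script may differ):

    const fs = require('fs');

    // Build one index entry from fs.Stats: plain metadata only,
    // no file contents are kept in memory.
    function makeEntry(filePath) {
        const stats = fs.statSync(filePath);
        return {
            path: filePath,                    // file location
            size: stats.size,                  // size in bytes
            mtime: stats.mtime.toISOString(),  // modification date
            mode: stats.mode                   // permission bits
        };
    }

    // The index itself is an Array of such Objects, one per file.
    const index = ['/etc/hosts'].map(makeEntry);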
Here's the full script code: http://pastebin.com/mjaD76c3
I've already experienced weird Node issues with this script in the past, which forced me, for example, to split the index into multiple files, since Node was glitching when working with such big data as a single String. Is there any way to improve Node.js memory management for huge datasets?
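One thing I've been considering is to stop building the output as one giant String (the stack trace above dies inside Join, i.e. while array.join() materializes the whole index as a single String) and to stream entries to disk instead; a rough sketch of the idea, not my actual code:

    const fs = require('fs');

    // Instead of fs.writeFileSync(file, entries.join('\n')), which
    // materializes the whole index as one huge in-heap String, write
    // the entries one line at a time.
    function writeIndex(file, entries) {
        const out = fs.createWriteStream(file);
        for (const entry of entries) {
            // Each JSON.stringify call creates only one small string.
            out.write(JSON.stringify(entry) + '\n');
        }
        out.end();
        // Note: a production version should also respect backpressure
        // by waiting for the 'drain' event when write() returns false.
    }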
Answer

If I remember correctly, V8 has a strict default limit on memory usage of around 1.7 GB, if you do not increase it manually.
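You can check the limit your own Node build actually uses with the built-in v8 module (a quick check; the exact number varies by Node/V8 version and architecture):

    // Prints the configured V8 heap limit in MB.
    const v8 = require('v8');
    const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
    console.log('heap limit: ' + limitMb.toFixed(0) + ' MB');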
In one of our products we followed this solution in our deploy script:
node --max-old-space-size=4096 yourFile.js
There is also a flag for the new space, but as I read here: a-tour-of-v8-garbage-collection, the new space only collects newly created short-term data, while the old space contains all referenced data structures, which should be the best option in your case.
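If you can't easily change the command line itself, the same flag can also be passed through the environment (standard Node behavior since v8.0; adjust the value to your machine):

NODE_OPTIONS="--max-old-space-size=4096" node yourFile.js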