First, please forgive what may be a very naive question.
My task is to pick the right NoSQL database for my project.
I am inserting and updating records in a table (column family) with a high degree of concurrency.

Then I ran into this:

INFO 11:55:20,924 Writing Memtable-scan_request@314832703(496750/1048576 serialized/live bytes, 8204 ops)
 INFO 11:55:21,084 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-14-Data.db (115527 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=24665321)
 INFO 11:55:21,085 Writing Memtable-scan_request@721424982(1300975/2097152 serialized/live bytes, 21494 ops)
 INFO 11:55:21,191 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-15-Data.db (304269 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=26554523)
 WARN 11:55:21,268 Heap is 0.829968311377531 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN 11:55:21,268 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
 INFO 11:55:25,451 Enqueuing flush of Memtable-scan_request@714386902(324895/843149 serialized/live bytes, 5362 ops)
 INFO 11:55:25,452 Writing Memtable-scan_request@714386902(324895/843149 serialized/live bytes, 5362 ops)
 INFO 11:55:25,490 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-16-Data.db (76213 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=27025950)
 WARN 11:55:30,109 Heap is 0.9017950505664833 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically



java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8849.hprof ...
Heap dump file created [1359702396 bytes in 105.277 secs]
 WARN 12:25:26,656 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
 INFO 12:25:26,657 Enqueuing flush of Memtable-scan_request@728952244(419985/1048576 serialized/live bytes, 6934 ops)

It's worth noting that before this happened, I was able to insert and update about 6 million records. I'm running Cassandra on a single node. Despite the hints in the log, I can't decide which configuration to change. I did check the bin/cassandra shell script, and I can see it does quite a lot of work before arriving at the -Xms and -Xmx values.

Kindly advise.

Best Answer

First, you can run

ps -ef|grep cassandra

to see what -Xmx is set to for the Cassandra process. The default -Xms and -Xmx values are derived from the amount of system memory.
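For example, on a node left at a 1 GB default heap, the flag can be isolated from the ps output like this (the process line below is invented for illustration; on a live node, pipe `ps -ef | grep cassandra` into the same grep):

```shell
# Hypothetical ps output line for a Cassandra daemon (details invented for
# illustration). The grep -o isolates just the -Xmx flag from the java command line.
ps_line='cassandr  8849     1  2 11:50 ?  00:01:23 java -ea -Xms1024M -Xmx1024M -XX:+HeapDumpOnOutOfMemoryError org.apache.cassandra.service.CassandraDaemon'

echo "$ps_line" | grep -o '\-Xmx[0-9]*[MGmg]'   # prints: -Xmx1024M
```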

Check this for details:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#cassandra/operations/ops_tune_jvm_c.html
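For reference, the default sizing rule documented there for Cassandra 1.x works out to max(min(half the RAM, 1024 MB), min(a quarter of the RAM, 8192 MB)). A minimal shell sketch of that rule (the helper name is mine; the real cassandra-env.sh logic also sets HEAP_NEWSIZE and handles more edge cases):

```shell
# Sketch of the documented default heap sizing, in MB:
#   max( min(RAM/2, 1024), min(RAM/4, 8192) )
calc_max_heap_mb() {
    mem_mb=$1
    half=$(( mem_mb / 2 ))
    if [ "$half" -gt 1024 ]; then half=1024; fi        # cap the half-of-RAM branch at 1 GB
    quarter=$(( mem_mb / 4 ))
    if [ "$quarter" -gt 8192 ]; then quarter=8192; fi  # cap the quarter-of-RAM branch at 8 GB
    if [ "$half" -gt "$quarter" ]; then echo "$half"; else echo "$quarter"; fi
}

echo "$(calc_max_heap_mb 4096)M"   # a 4 GB box gets a 1024M heap by default
```

This is why a box with only a few GB of RAM ends up with a heap small enough to fill under a heavy write load.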

You can try increasing MAX_HEAP_SIZE (in conf/cassandra-env.sh) to see if the problem goes away.

For example, you could replace

MAX_HEAP_SIZE="${max_heap_size_in_mb}M"

with

MAX_HEAP_SIZE="2048M"
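One way to apply that edit from the command line. This is a sketch against a stand-in file; run the same sed against your real conf/cassandra-env.sh after backing it up, and note the exact default line can differ between versions:

```shell
# Stand-in file containing the default line, so the edit can be demonstrated safely.
printf 'MAX_HEAP_SIZE="${max_heap_size_in_mb}M"\n' > /tmp/cassandra-env-demo.sh

# Replace whatever MAX_HEAP_SIZE is set to with a fixed 2 GB heap.
sed -i 's/^MAX_HEAP_SIZE=.*/MAX_HEAP_SIZE="2048M"/' /tmp/cassandra-env-demo.sh

grep '^MAX_HEAP_SIZE=' /tmp/cassandra-env-demo.sh   # prints: MAX_HEAP_SIZE="2048M"
```

The new value only takes effect after the Cassandra process is restarted.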

Regarding "cassandra - How to prevent the heap from filling up", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17345063/
