Ceph parameter configuration and tuning. Author: 海风无影
1) Parameters were compared mainly against the official documentation:
http://ceph.com/planet/configure-ceph-rbd-caching-on-openstack-nova/
http://docs.ceph.com/docs/master/rbd/rbd-config-ref/

2) Conclusions drawn from the benchmark results at http://www.kissthink.com/archive/5153.html:
 debugging_off: 0, turns debugging off
 "filestore_op_threads": "2", maximum number of parallel filesystem operation threads; defaults to 2. The benchmark suggests lowering it to 1, but writes improve only about 2%, so it is generally left alone
 "journal_aio": "true", on by default and should stay on; turning it off costs a lot of performance, roughly 10% on XFS and nearly 20% on EXT4
  flush_true: 0, this parameter could not be found in the documentation
  "ms_nocrc": "false", setting it to true gains roughly 2% read performance
  "osd_disk_threads": "1", number of backend disk threads; defaults to 1. Do not raise it: the larger the value, the worse the performance
  "osd_op_threads": "2", number of parallel operation threads for the OSD daemon; defaults to 2. Raising it to 4 improves read performance by about 10% (a runtime-injection sketch follows this list)
3) Reference: http://my.oschina.net/renguijiayi/blog/348258
 
4) XFS mount options set to noatime,nodiratime,inode64,nobarrier
mount -t xfs -o noatime,nodiratime,inode64,nobarrier LABEL=/data1 /data1
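To keep these options across reboots, the matching /etc/fstab entry might look like this (a sketch, assuming the filesystem really carries the label /data1 used above):

LABEL=/data1  /data1  xfs  noatime,nodiratime,inode64,nobarrier  0 0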
Default RBD cache settings:
  "rbd_cache": "false",
  "rbd_cache_writethrough_until_flush":"false",
  "rbd_cache_size": "33554432",
  "rbd_cache_max_dirty": "25165824",
  "rbd_cache_target_dirty": "16777216",
  "rbd_cache_max_dirty_age": "1",
  "rbd_cache_block_writes_upfront":"false",


Combining all of the above, the final adjustments are as follows.
Edit ceph.conf and add the following settings.
Under the [global] section add:
mon osd full ratio = .85
mon osd nearfull ratio = .75
osd op threads = 4
journal aio = true
journal dio = true
journal queue max ops = 10000
journal queue max bytes = 335544320
filestore queue max ops = 5000
filestore queue committing max ops = 5000
filestore queue max bytes = 1048576000
ms nocrc = true
[client]
rbd cache = true
rbd cache size = 67108864
rbd cache writethrough until flush = true
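For scale, the queue and cache sizes above are also byte values: journal queue max bytes rises from the 32 MiB default to 320 MiB (320 × 1024 × 1024 = 335544320), filestore queue max bytes from 100 MiB to 1000 MiB, and the client-side rbd cache doubles from the 32 MiB default to 64 MiB (67108864).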



Edit nova.conf:
disk_cachemodes="network=writeback"
Several options can be used depending on the disk type:
file
block
network
mount
Caching methods available:
default
none
writethrough
writeback
directsync
unsafe
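disk_cachemodes accepts a comma-separated list of type=mode pairs, so several disk types can be configured at once. An illustrative combination (an example only, not a recommendation):

disk_cachemodes="file=directsync,block=none,network=writeback"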
For more detail see http://ceph.com/planet/configure-ceph-rbd-caching-on-openstack-nova/
http://docs.ceph.com/docs/master/rbd/rbd-config-ref/
To apply the change:
sed -i '/#disk_cachemodes/a disk_cachemodes="network=writeback"' /etc/nova/nova.conf
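A quick check that the edit landed (assuming the option is not also set elsewhere in the file):

grep disk_cachemodes /etc/nova/nova.conf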
Verify after a restart:
#virsh dumpxml instance-00000026
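In the dumped domain XML, the RBD disk's driver element should now carry cache='writeback', something like this (attribute order may vary):

<driver name='qemu' type='raw' cache='writeback'/>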
   


Default configuration:
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
{ "config get": "config get <field>: get the config value",
  "config set": "config set <field> <val> [<val> ...]: set a config variable",
  "config show": "dump current config settings",
  "dump_blacklist": "dump blacklisted clients and times",
  "dump_historic_ops": "show slowest recent ops",
  "dump_op_pq_state": "dump op priority queue state",
  "dump_ops_in_flight": "show the ops currently in flight",
  "dump_watchers": "show clients which have active watches, and on which objects",
  "flush_journal": "flush the journal to permanent store",
  "get_command_descriptions": "list available commands",
  "getomap": "output entire object map",
  "git_version": "get git sha1",
  "help": "list available commands",
  "injectdataerr": "inject data error into omap",
  "injectmdataerr": "inject metadata error",
  "log dump": "dump recent log entries to log file",
  "log flush": "flush log entries to log file",
  "log reopen": "reopen log file",
  "objecter_requests": "show in-progress osd requests",
  "perf dump": "dump perfcounters value",
  "perf schema": "dump perfcounters schema",
  "rmomapkey": "remove omap key",
  "setomapheader": "set omap header",
  "setomapval": "set omap key",
  "status": "high-level status of OSD",
  "truncobj": "truncate object to length",
  "version": "get ceph version"}

View the currently running configuration: ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
Save the original configuration before making changes, then compare it with the new one:
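The two dump files compared below can be captured like this (filenames are simply the ones used in the diff; restart osd.0 with the new ceph.conf between the two dumps):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show > osd0.config_old
# restart osd.0 so the new ceph.conf takes effect, then:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show > osd0.config_new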
#diff osd0.config_old osd0.config_new

91c91
<   "ms_nocrc": "false",
---
>   "ms_nocrc": "true",

150,151c150,151
<   "mon_osd_full_ratio": "0.95",
<   "mon_osd_nearfull_ratio": "0.85",
---
>   "mon_osd_full_ratio": "0.85",
>   "mon_osd_nearfull_ratio": "0.75",

400c400
<   "osd_op_threads": "2",
---
>   "osd_op_threads": "4",
551,553c551,553
<   "filestore_queue_max_ops": "50",
<   "filestore_queue_max_bytes": "104857600",
<   "filestore_queue_committing_max_ops": "500",
---
>   "filestore_queue_max_ops": "5000",
>   "filestore_queue_max_bytes": "1048576000",
>   "filestore_queue_committing_max_ops": "5000",
585,586c585,586
<   "journal_queue_max_ops": "300",
<   "journal_queue_max_bytes": "33554432",
---
>   "journal_queue_max_ops": "10000",
>   "journal_queue_max_bytes": "335544320",