I. Principles of MongoDB Distributed Deployment
A MongoDB cluster consists of a number of mongod instances (the shards, which store the data), mongos instances (the query routers), config servers (which hold the cluster metadata), and clients. A replica set should have an odd number of voting members so that a new primary can be elected cleanly when the current primary fails; if the member count is even, an arbiter can be added. An arbiter only votes in elections and neither stores nor serves data, so it needs almost no resources. This deployment does not use arbiters.
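Once the cluster described below is running, a quick way to see all of these components together is to connect to any mongos and run sh.status(), which lists the shards, databases, and chunk distribution (a usage sketch; the VIP and port follow the architecture design in the next section):
mongo --host 172.16.35.200 --port 27017    # connect to a mongos through the LVS VIP
mongos> sh.status()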
Architecture
1. Architecture Design
LVS:   master 172.16.35.130   slave 172.16.35.158
       VIP 172.16.35.200, port 27017

                  server01          server02          server03
                  172.16.35.101     172.16.35.135     172.16.35.151
Replica Set 1     mongod db01       mongod db01       mongod db01
Replica Set 2     mongod db02       mongod db02       mongod db02
Replica Set 3     mongod db03       mongod db03       mongod db03
3 config          mongod Config 1   mongod Config 2   mongod Config 3
3 mongos          mongos1           mongos2           mongos3
2. Host Design
server01  172.16.35.101   mongod db01:27021  priority:3
                          mongod db02:27022  priority:2
                          mongod db03:27023  priority:1
                          mongod server:20000 (config server)
                          mongos1:27017
server02  172.16.35.135   mongod db01:27021  priority:1
                          mongod db02:27022  priority:3
                          mongod db03:27023  priority:2
                          mongod server:20000 (config server)
                          mongos2:27017
server03  172.16.35.151   mongod db01:27021  priority:2
                          mongod db02:27022  priority:1
                          mongod db03:27023  priority:3
                          mongod server:20000 (config server)
                          mongos3:27017
The priorities are staggered so that, in the steady state, each server is the preferred primary of exactly one replica set (server01 for db01, server02 for db02, server03 for db03), spreading write load across the three machines.
MongoDB Deployment Procedure
I. Installation. This is extremely simple: download the tarball, extract it, and it is ready to use; just add the binaries to the command path.
cd /usr/local/src
# Download the MongoDB tarball (an internal mirror of the official release is used here)
wget http://172.16.35.138/package/mongodb-linux-x86_64-rhel70-3.0.3.tgz
tar zxf mongodb-linux-x86_64-rhel70-3.0.3.tgz
mv mongodb-linux-x86_64-rhel70-3.0.3 /usr/local/mongodb
echo 'export PATH=/usr/local/mongodb/bin/:$PATH' >>/etc/profile
source /etc/profile
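To confirm the binaries are now on the PATH, a quick check:
mongod --version
mongo --version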
# Disable transparent hugepages, as recommended by MongoDB (avoids memory-management latency spikes)
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
# Install numactl so mongod can be started with interleaved NUMA memory allocation
yum -y install numactl numactl-devel
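Note that writes under /sys do not persist across reboots. One way to make the setting permanent (a sketch for RHEL 7; adapt to your init system) is to append the same commands to /etc/rc.local:
cat >> /etc/rc.local <<'EOF'
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.d/rc.local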
#
II. Configure the Replica Sets
1. Create the data and log directories
On server01, server02, and server03:
mkdir -p /usr/local/mongodb/{run,conf,sock}
mkdir -p /home/mongodb/data/{db01,db02,db03,server}
mkdir -p /home/mongodb/logs/{db01,db02,db03,server,mongos}
cd /usr/local/mongodb/conf/
2. Configure the replica sets:
# Configuration files: MongoDB 3.0 accepts YAML-format configuration files
# db01.conf
systemLog:
  destination: file
  path: "/home/mongodb/logs/db01/db01.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/home/mongodb/data/db01/"
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: "/usr/local/mongodb/run/db01.pid"
net:
  port: 27021
replication:
  oplogSizeMB: 20
  replSetName: "db01"
sharding:
  clusterRole: shardsvr
# db02.conf
systemLog:
  destination: file
  path: "/home/mongodb/logs/db02/db02.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/home/mongodb/data/db02/"
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: "/usr/local/mongodb/run/db02.pid"
net:
  port: 27022
replication:
  oplogSizeMB: 20
  replSetName: "db02"
sharding:
  clusterRole: shardsvr
# db03.conf
systemLog:
  destination: file
  path: "/home/mongodb/logs/db03/db03.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/home/mongodb/data/db03/"
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: "/usr/local/mongodb/run/db03.pid"
net:
  port: 27023
replication:
  oplogSizeMB: 20
  replSetName: "db03"
sharding:
  clusterRole: shardsvr
#
# Start the services; run the following on each of the three servers:
numactl --interleave=all mongod -f /usr/local/mongodb/conf/db01.conf
numactl --interleave=all mongod -f /usr/local/mongodb/conf/db02.conf
numactl --interleave=all mongod -f /usr/local/mongodb/conf/db03.conf
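Before initiating the replica sets, it is worth confirming that all three mongods are up and listening (a quick verification sketch; netstat is in the net-tools package on RHEL 7):
ps -ef | grep [m]ongod
netstat -lntp | grep -E ':2702[123]'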
# Use mongo to connect to the mongod on port 27021 on one of the hosts (note: it must not be an arbiter of db01):
mongo --port 27021
>config={_id:'db01',members:[{_id:0,host:'172.16.35.101:27021',priority:3},{_id:1,host:'172.16.35.135:27021',priority:2},{_id:2,host:'172.16.35.151:27021',priority:1}]}
>rs.initiate(config)
# Use mongo to connect to the mongod on port 27022 on one of the hosts (note: it must not be an arbiter of db02):
mongo --port 27022
> config={_id:'db02',members:[{_id:0,host:'172.16.35.101:27022',priority:1},{_id:1,host:'172.16.35.135:27022',priority:3},{_id:2,host:'172.16.35.151:27022',priority:2}]}
> rs.initiate(config)
# Use mongo to connect to the mongod on port 27023 on one of the hosts (note: it must not be an arbiter of db03):
mongo --port 27023
> config={_id:'db03',members:[{_id:0,host:'172.16.35.101:27023',priority:2},{_id:1,host:'172.16.35.135:27023',priority:1},{_id:2,host:'172.16.35.151:27023',priority:3}]}
> rs.initiate(config)
> rs.status()
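Once all three sets are initiated, each one elects its highest-priority reachable member as primary. A quick check from the shell on any server (a sketch using the mongo shell's --eval option):
for port in 27021 27022 27023; do
    mongo --port $port --eval 'printjson(rs.isMaster())' | grep -E '"setName"|"ismaster"|"primary"'
done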
3. Configure the config server
# The config server configuration file is as follows:
# server.conf
systemLog:
  destination: file
  path: "/home/mongodb/logs/server/server.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/home/mongodb/data/server/"
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: "/usr/local/mongodb/run/server.pid"
net:
  port: 20000
sharding:
  clusterRole: configsvr
# Start the config service on each of the three servers
numactl --interleave=all mongod -f /usr/local/mongodb/conf/server.conf
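Each config server can be pinged to confirm it is up (a quick check; run on any of the three servers):
mongo --port 20000 --eval 'printjson(db.adminCommand({ping: 1}))'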
4. Configure the router server (mongos)
# The router server configuration file is as follows:
# mongos.conf
systemLog:
  destination: file
  path: "/home/mongodb/logs/mongos/mongos.log"
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/usr/local/mongodb/run/mongos.pid"
net:
  port: 27017
sharding:
  configDB: 172.16.35.101:20000,172.16.35.135:20000,172.16.35.151:20000
# Start the service on each of the three servers
numactl --interleave=all mongos -f /usr/local/mongodb/conf/mongos.conf
5. Configure sharding
mongo --port 27017
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"db01/172.16.35.101:27021,172.16.35.135:27021,172.16.35.151:27021"})
mongos> db.runCommand({addshard:"db02/172.16.35.101:27022,172.16.35.135:27022,172.16.35.151:27022"})
mongos> db.runCommand({addshard:"db03/172.16.35.101:27023,172.16.35.135:27023,172.16.35.151:27023"})
# Specifying only the primary of each set also works; mongos discovers the remaining members:
mongos> db.runCommand({addshard:"db01/172.16.35.101:27021"})
{ "shardAdded" : "db01", "ok" : 1 }
mongos> db.runCommand({addshard:"db02/172.16.35.135:27022"})
{ "shardAdded" : "db02", "ok" : 1 }
mongos> db.runCommand({addshard:"db03/172.16.35.151:27023"})
{ "shardAdded" : "db03", "ok" : 1 }
# Test
6. Enable sharding on the database (work) and the collection (status).
mongos> db.runCommand({enablesharding:"work"})
{ "ok" : 1 }
mongos> db.runCommand({shardcollection:"work.status",key:{_id:1}})
{ "collectionsharded" : "work.status", "ok" : 1 }