1. Replica Set Roles
Official docs: https://docs.mongodb.com/manual/replication/
Primary: handles all writes (and reads by default)
Secondary: replicates from the primary; by default not readable from the shell
Arbiter: optional; stores no data and never becomes primary, it only casts votes
Election mechanism: majority voting. To elect a new primary, the surviving voting members must number more than half of the replica set.
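The majority rule can be sketched as a tiny shell helper (hypothetical, illustration only): with n voting members, electing a primary requires floor(n/2)+1 survivors.

```shell
#!/bin/sh
# majority N -> minimum number of surviving voters needed to elect a primary
majority() {
  echo $(( $1 / 2 + 1 ))
}

majority 3   # a 3-member set needs 2 survivors
majority 5   # a 5-member set needs 3 survivors
majority 4   # an even 4-member set still needs 3, which is why arbiters are added
```

Note that a 4-member set tolerates no more failures than a 3-member set, so even member counts buy nothing for elections.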
2. Cluster Deployment (multiple instances on one host)
#1. Create instance and data directories
mkdir -p /opt/mongo_2801{7,8,9}/{conf,log,pid}
mkdir -p /data/mongo_2801{7,8,9}
#2. Create the config file
cat >/opt/mongo_28017/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo_28017/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/mongo_28017
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/mongo_28017/pid/mongod.pid
net:
  port: 28017
  bindIp: 127.0.0.1,10.0.0.51
replication:
  oplogSizeMB: 1024   # oplog size; records operations for replication and rollback, similar to MySQL's binlog
  replSetName: dba    # replica set name
EOF
#3. Copy it to the other instances
cp /opt/mongo_28017/conf/mongodb.conf /opt/mongo_28018/conf/
cp /opt/mongo_28017/conf/mongodb.conf /opt/mongo_28019/conf/
#4. Replace the port number
sed -i 's#28017#28018#g' /opt/mongo_28018/conf/mongodb.conf
sed -i 's#28017#28019#g' /opt/mongo_28019/conf/mongodb.conf
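Steps 3 and 4 (copy, then rewrite the port) generalize to one loop. A sketch against a throwaway directory so it is safe to run anywhere; substitute the real /opt paths used above:

```shell
#!/bin/sh
# Generate one config per port by copying a base file and rewriting the port,
# mirroring the cp/sed steps above. Uses a temp dir instead of /opt.
base=$(mktemp -d)
mkdir -p "$base/mongo_28017/conf"
printf 'net:\n  port: 28017\n' > "$base/mongo_28017/conf/mongodb.conf"

for port in 28018 28019; do
  mkdir -p "$base/mongo_${port}/conf"
  cp "$base/mongo_28017/conf/mongodb.conf" "$base/mongo_${port}/conf/"
  sed -i "s#28017#${port}#g" "$base/mongo_${port}/conf/mongodb.conf"
done

grep 'port:' "$base"/mongo_*/conf/mongodb.conf
```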
#5. Start all instances
mongod -f /opt/mongo_28017/conf/mongodb.conf
mongod -f /opt/mongo_28018/conf/mongodb.conf
mongod -f /opt/mongo_28019/conf/mongodb.conf
- Initialize the replica set
mongo --port 28017
# Define the replica set configuration document
config = {
  _id: "dba",
  members: [
    {_id: 0, host: "10.0.0.51:28017"},
    {_id: 1, host: "10.0.0.51:28018"},
    {_id: 2, host: "10.0.0.51:28019"},
  ]}
# _id: must match replSetName in the config file
# members: each entry is a set-internal _id plus the node's ip:port
# Apply the configuration
rs.initiate(config)
- Write test data on the primary
mongo --port 28017
db.user_info.insertOne({"name":"json","age":27,"ad":"北京市朝阳区"})
db.user_info.insertOne({"name":"bobo","age":27,"ad":"北京市朝阳区"})
db.user_info.insertOne({"name":"lei","age":28,"ad":"北京市朝阳区"})
db.user_info.insertOne({"name":"bug","age":28,"ad":"北京市朝阳区"})
db.user_info.insertOne({"name":"bobo","age":28,"ad":"北京市朝阳区","sex":"null"})
- Query data from a secondary
mongo --port 28018
# By default a secondary refuses reads; allow them first
rs.slaveOk()
show dbs
show tables
db.user_info.find()
# Make secondaries readable
# Option 1: takes effect for the current session only
rs.slaveOk()
# Option 2: persist it in the shell startup file
cd ~
echo "rs.slaveOk()" >> .mongorc.js   # the mongo shell reads ~/.mongorc.js on every start
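One caveat on option 2: appending with >> adds a duplicate rs.slaveOk() line every time it runs. A guarded variant (a sketch against a temp file standing in for ~/.mongorc.js):

```shell
#!/bin/sh
# Append rs.slaveOk() to the shell startup file only if it is not already there.
rcfile=$(mktemp)   # stand-in for ~/.mongorc.js
add_slaveok() {
  grep -qF 'rs.slaveOk()' "$1" || echo 'rs.slaveOk()' >> "$1"
}
add_slaveok "$rcfile"
add_slaveok "$rcfile"   # second call is a no-op
```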
3. Adjusting Member Priority
- Simulate a failover
mongod -f /opt/mongo_28017/conf/mongodb.conf --shutdown
mongod -f /opt/mongo_28017/conf/mongodb.conf
- Inspect the current replica set configuration
rs.conf()
- Adjust a member's priority
myconfig=rs.conf()                 # capture the current configuration in a variable
myconfig.members[0].priority=100   # members[0] is the member with _id 0; raise its election priority to 100
rs.reconfig(myconfig)              # load the modified configuration into the set
rs.conf()                          # verify the priority changed
- Step the primary down voluntarily
rs.stepDown()                      # the current primary demotes itself to secondary
- Restore the default priority
myconfig=rs.conf()
myconfig.members[0].priority=1
rs.reconfig(myconfig)
4. Adding and Removing Nodes
- Create the new node's directories
mkdir -p /opt/mongo_28010/{conf,log,pid}
mkdir -p /data/mongo_28010
- Copy the config file, change the port, and start the new instance
cp /opt/mongo_28017/conf/mongodb.conf /opt/mongo_28010/conf/
sed -i 's#28017#28010#g' /opt/mongo_28010/conf/mongodb.conf
mongod -f /opt/mongo_28010/conf/mongodb.conf
mongo --port 28010
- Add the new node from the primary
mongo --port 28017
rs.add("10.0.0.51:28010")
- Remove a node
rs.remove("10.0.0.51:28010")
5. Arbiter Nodes
How an arbiter works: it stores no data and serves no reads or writes; it only votes, so it consumes almost no resources.
Use case: a set with an even number of members may fail to reach the majority needed for elections, and adding another data-bearing member just for the vote wastes resources; an arbiter is the cheap way to make the member count odd.
mkdir -p /opt/mongo_28011/{conf,log,pid}
mkdir -p /data/mongo_28011
#2. Copy the config file and change the port
cp /opt/mongo_28017/conf/mongodb.conf /opt/mongo_28011/conf/
sed -i 's#28017#28011#g' /opt/mongo_28011/conf/mongodb.conf
mongod -f /opt/mongo_28011/conf/mongodb.conf
#3. Add the arbiter from the primary
mongo --port 28017
rs.addArb("10.0.0.51:28011")
#4. Log in to the arbiter to check
mongo --port 28011
#5. Simulate failovers
Conclusion: this 5-member set (arbiter included) survives the loss of any 2 members.
# The surviving voting members (arbiter included) must still form a majority of the set.
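The "survives 2 failures" conclusion is just the majority rule applied to a 5-member set. A quick check helper (hypothetical, illustration only):

```shell
#!/bin/sh
# can_elect TOTAL ALIVE -> "yes" if the surviving voters still form a majority
can_elect() {
  total=$1; alive=$2
  # alive > total/2, written with integer arithmetic only
  if [ $(( alive * 2 )) -gt "$total" ]; then echo yes; else echo no; fi
}

can_elect 5 3   # 5 members, 2 dead: a primary can still be elected
can_elect 5 2   # 3 dead: no majority, the set becomes read-only
```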
6. Sharded Clusters
1. Sharding vs. replica sets
A replica set underuses resources, and its capacity is capped by the single primary node.
A sharded cluster spreads load across machines; its capacity is the sum of all shard primaries.
2. Drawbacks of sharding
Ideally it requires a fairly large number of machines.
Configuration and day-to-day operations become complex and difficult.
Planning up front is critical: changing the architecture after the cluster is built is very hard.
(Sharded-cluster deployment diagram)
How sharding works:
1. Router (mongos): the proxy layer; it forwards client requests to the right shards.
2. Shard: a data-bearing node; each shard holds one slice of the cluster's data.
3. Config servers: record which data lives on which shard plus all shard metadata, and answer mongos lookups.
4. Shard key: the rule deciding which shard a document is stored on; the shard key is an index.
Choosing a shard key:
a field that queries frequently hit
a field with high cardinality
Hashed shard keys
Traits: distribution is even and effectively random
id  name   host  sex
1   zhang  SH    boy
2   ya     BJ    boy
3   yaya   SZ    girl
If id is the shard key (hashed index on id):
1 hashes to shard2
2 hashes to shard3
3 hashes to shard1
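The id-to-shard mapping can be imitated with any hash function. A sketch that buckets ids into three "shards" using the POSIX cksum CRC as a stand-in for MongoDB's real hashed index (so the exact shard each id lands on differs from the table above):

```shell
#!/bin/sh
# Bucket ids into 3 shards by hashing, the way a hashed shard key would.
shard_of() {
  crc=$(printf '%s' "$1" | cksum | cut -d' ' -f1)   # CRC of the id's text
  echo "shard$(( crc % 3 + 1 ))"                    # CRC mod 3 picks the shard
}

for id in 1 2 3; do
  echo "id=$id -> $(shard_of "$id")"
done
```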
7. Deploying the Sharded Cluster
- Overall directory, IP, and port planning
1. IP and port plan
db01:
shard1_master db01:28100
shard3_slave db03:28200
shard2_arbiter db02:28300
config server db01:40000
mongos server db01:60000
db02:
shard2_master db02:28100
shard1_slave db01:28200
shard3_arbiter db03:28300
config server db02:40000
mongos server db02:60000
db03:
shard3_master db03:28100
shard2_slave db02:28200
shard1_arbiter db01:28300
config server db03:40000
mongos server db03:60000
2. Service directory plan
/opt/master/{conf,log,pid}
/opt/slave/{conf,log,pid}
/opt/arbiter/{conf,log,pid}
/opt/config/{conf,log,pid}
/opt/mongos/{conf,log,pid}
3. Data directories
/data/master/
/data/slave/
/data/arbiter/
/data/config/
- Build steps
#Overall steps
1. Build the shard replica sets
2. Build the config server replica set
3. Set up the mongos routers
4. Enable sharding on a database
5. Set the shard key on a collection
6. Insert test data
7. Verify the shards are balanced
8. Connect with a GUI tool
--------------------------------------------------------------------------------
#1. Create directories (note: run on all 3 machines)
#config-file directories
mkdir -p /opt/master/{conf,log,pid}
mkdir -p /opt/slave/{conf,log,pid}
mkdir -p /opt/arbiter/{conf,log,pid}
mkdir -p /opt/config/{conf,log,pid}
mkdir -p /opt/mongos/{conf,log,pid}
#Data directories (mongos needs no data directory)
mkdir -p /data/master/
mkdir -p /data/slave/
mkdir -p /data/arbiter/
mkdir -p /data/config/
#2. db01: create config files
#master config file:
cat >/opt/master/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/master/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/master/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/master/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo   # timezone database path
net:
  port: 28100
  bindIp: 127.0.0.1,10.0.0.51
replication:
  oplogSizeMB: 1024
  replSetName: shard1     # which shard replica set this member joins; stagger per the plan
sharding:
  clusterRole: shardsvr   # cluster role: data-bearing shard
EOF
#slave config file:
cat >/opt/slave/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/slave/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/slave/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/slave/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo   # timezone database path
net:
  port: 28200
  bindIp: 127.0.0.1,10.0.0.51
replication:
  oplogSizeMB: 1024
  replSetName: shard3     # which shard replica set this member joins; stagger per the plan
sharding:
  clusterRole: shardsvr   # cluster role: data-bearing shard
EOF
#arbiter config file
cat >/opt/arbiter/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/arbiter/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/arbiter/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/arbiter/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo   # timezone database path
net:
  port: 28300
  bindIp: 127.0.0.1,10.0.0.51
replication:
  oplogSizeMB: 1024
  replSetName: shard2     # which shard replica set this member joins; stagger per the plan
sharding:
  clusterRole: shardsvr   # cluster role: data-bearing shard
EOF
------------------------------------------ db02 config files ----------------------------------------------
#db02: create config files
#master config file:
cat >/opt/master/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/master/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/master/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/master/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28100
  bindIp: 127.0.0.1,10.0.0.52
replication:
  oplogSizeMB: 1024
  replSetName: shard2
sharding:
  clusterRole: shardsvr
EOF
#slave config file:
cat >/opt/slave/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/slave/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/slave/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/slave/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28200
  bindIp: 127.0.0.1,10.0.0.52
replication:
  oplogSizeMB: 1024
  replSetName: shard1
sharding:
  clusterRole: shardsvr
EOF
#arbiter config file
cat >/opt/arbiter/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/arbiter/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/arbiter/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/arbiter/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28300
  bindIp: 127.0.0.1,10.0.0.52
replication:
  oplogSizeMB: 1024
  replSetName: shard3
sharding:
  clusterRole: shardsvr
EOF
--------------------------------------------- db03 config files -----------------------------------------------
#db03: create config files
#master config file:
cat >/opt/master/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/master/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/master/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/master/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28100
  bindIp: 127.0.0.1,10.0.0.53
replication:
  oplogSizeMB: 1024
  replSetName: shard3
sharding:
  clusterRole: shardsvr
EOF
#slave config file:
cat >/opt/slave/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/slave/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/slave/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/slave/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28200
  bindIp: 127.0.0.1,10.0.0.53
replication:
  oplogSizeMB: 1024
  replSetName: shard2
sharding:
  clusterRole: shardsvr
EOF
#arbiter config file
cat >/opt/arbiter/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/arbiter/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/arbiter/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/arbiter/pid/mongodb.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28300
  bindIp: 127.0.0.1,10.0.0.53
replication:
  oplogSizeMB: 1024
  replSetName: shard1
sharding:
  clusterRole: shardsvr
EOF
#3. Start the services (on all 3 machines)
mongod -f /opt/master/conf/mongod.conf
mongod -f /opt/slave/conf/mongod.conf
mongod -f /opt/arbiter/conf/mongod.conf
netstat -lntup|grep mongod
#4. Initialize the shard replica sets
On db01, create the shard1 replica set:
mongo --port 28100
rs.initiate()   # wait for the prompt to show PRIMARY before running the add commands below
rs.add("10.0.0.52:28200")
rs.addArb("10.0.0.53:28300")
On db02, create the shard2 replica set:
mongo --port 28100
rs.initiate()
rs.add("10.0.0.53:28200")
rs.addArb("10.0.0.51:28300")
On db03, create the shard3 replica set:
mongo --port 28100
rs.initiate()
rs.add("10.0.0.51:28200")
rs.addArb("10.0.0.52:28300")
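The three init blocks follow mechanically from the IP/port plan: each shard's secondary and arbiter live on the other two hosts. A helper that prints the commands for each shard (hypothetical; uses the IPs planned in this section):

```shell
#!/bin/sh
# Print the replica-set init commands for one shard from the planning table:
# each shard's master listens on 28100, its slave on 28200, its arbiter on 28300.
print_init() {
  shard=$1; slave_ip=$2; arb_ip=$3
  echo "rs.initiate()                 # run on ${shard}'s master, port 28100"
  echo "rs.add(\"${slave_ip}:28200\")"
  echo "rs.addArb(\"${arb_ip}:28300\")"
}

print_init shard1 10.0.0.52 10.0.0.53
print_init shard2 10.0.0.53 10.0.0.51
print_init shard3 10.0.0.51 10.0.0.52
```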
- Build the config server replica set
#0. Create the directories
mkdir -p /opt/config/{conf,log,pid}
mkdir -p /data/config/
#1. Create the config file (note: run on all 3 machines)
cat >/opt/config/conf/mongod.conf<<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/config/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/config/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /opt/config/pid/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 40000
  bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')   # substitutes this host's IP automatically
replication:
  replSetName: configset    # config server replica set name
sharding:
  clusterRole: configsvr    # cluster role: config server
EOF
#2. Start the service
mongod -f /opt/config/conf/mongod.conf
#3. Initialize the replica set from any one member (gives a one-primary, two-secondary config set)
mongo --port 40000
rs.initiate()
rs.add("10.0.0.52:40000")
rs.add("10.0.0.53:40000")
rs.status()
- Set up the mongos routers
#The 3 mongos are not a replica set; each is an independent, stateless router
1. Create directories
mkdir -p /opt/mongos/{conf,log,pid}
2. Create the config file (deploy and start it on all 3 machines)
cat >/opt/mongos/conf/mongos.conf <<EOF
systemLog:
  destination: file
  logAppend: true
  path: /opt/mongos/log/mongos.log
processManagement:
  fork: true
  pidFilePath: /opt/mongos/pid/mongos.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 60000
  bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
sharding:
  configDB: configset/10.0.0.51:40000,10.0.0.52:40000,10.0.0.53:40000   # config set name plus every member's ip:port
EOF
3. Start mongos
mongos -f /opt/mongos/conf/mongos.conf
4. Log in to mongos
mongo --port 60000
5. Register each shard replica set (its master, slave, and arbiter members); run on any one mongos
#switch to the admin database
use admin
db.runCommand({addShard:'shard1/10.0.0.51:28100,10.0.0.52:28200,10.0.0.53:28300'})
db.runCommand({addShard:'shard2/10.0.0.52:28100,10.0.0.53:28200,10.0.0.51:28300'})
db.runCommand({addShard:'shard3/10.0.0.53:28100,10.0.0.51:28200,10.0.0.52:28300'})
6. List the registered shards
db.runCommand( { listshards : 1 } )
- Configuring hashed sharding
#To shard a collection in a new database, three steps are required:
#1. enable sharding on the database
#2. create a hashed index on the shard key field
#3. shard the collection on that key
#hashed keys spread documents randomly and evenly; shard counts typically differ by only a couple of percent
1. Enable sharding on the database
mongo --port 60000
use admin
db.runCommand( { enablesharding : "oldboy" } )   # enable sharding on the oldboy database
2. Create the hashed index
use oldboy
db.hash.ensureIndex( { id: "hashed" } )   # hashed index on the id field of the hash collection
3. Shard the collection with the hashed key
use admin
sh.shardCollection( "oldboy.hash", { id: "hashed" } )   # shard oldboy.hash on the hashed id index
--------------------------------------------------------------------------------------------------------
4. Generate test data
use oldboy
for(i=1;i<10000;i++){db.hash.insert({"id":i,"name":"shenzheng","age":18});}
5. Verify the shards are balanced
shard1:
mongo db01:28100
use oldboy
db.hash.count()   # total documents on this shard
3349
shard2:
mongo db02:28100
use oldboy
db.hash.count()
3366
shard3:
mongo db03:28100
use oldboy
db.hash.count()
3284
#If the document counts on the three shard primaries differ by no more than a few percent, the sharded cluster is deployed successfully.
8. Common Sharded-Cluster Admin Commands
1. Show full sharding details
db.printShardingStatus()
sh.status()
2. List all shard members
use admin
db.runCommand({ listshards : 1})
3. List databases with sharding enabled
use config
db.databases.find({"partitioned": true })
4. Show the shard keys
use config
db.collections.find().pretty()
5. Show a collection's shard distribution
db.getCollection('range').getShardDistribution()
db.getCollection('hash').getShardDistribution()
9. Correct Startup Order for a Sharded Cluster
1. all config servers
2. all shard members (masters, arbiters, slaves)
3. all mongos routers
(mongos needs the config server replica set reachable when it starts, so it comes up last; shut down in the reverse order, mongos first)
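On each host, the services can be brought up with one script; config servers start first and mongos last, since mongos needs the config replica set to be reachable. A sketch under this section's paths, with a dry-run default so it is safe to inspect:

```shell
#!/bin/sh
# Start every service on one host in dependency order:
# config server first, then the data-bearing shard members, mongos last.
start_all() {
  runner=${1:-echo}   # pass nothing to dry-run (just print the commands)
  $runner mongod -f /opt/config/conf/mongod.conf
  $runner mongod -f /opt/master/conf/mongod.conf
  $runner mongod -f /opt/arbiter/conf/mongod.conf
  $runner mongod -f /opt/slave/conf/mongod.conf
  $runner mongos -f /opt/mongos/conf/mongos.conf
}

start_all echo   # dry run: print the five commands in order
```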