Setting Up a Redis Cluster with Docker

[toc]

1: Start six Redis container instances
  • --privileged=true: run the container with the host's root privileges

  • --net host: use the host machine's IP and ports directly (instead of the default bridge network)

  • --cluster-enabled yes: enable Redis Cluster mode

  • --appendonly yes: enable AOF persistence

docker run -d --name redis-node-1 --net host --privileged=true -v D:\Docker\redis-share\redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-node-2 --net host --privileged=true -v D:\Docker\redis-share\redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-node-3 --net host --privileged=true -v D:\Docker\redis-share\redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383

docker run -d --name redis-node-4 --net host --privileged=true -v D:\Docker\redis-share\redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384

docker run -d --name redis-node-5 --net host --privileged=true -v D:\Docker\redis-share\redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-node-6 --net host --privileged=true -v D:\Docker\redis-share\redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
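The six nearly identical `docker run` commands above can be generated with a loop. A minimal sketch, assuming the same `D:\Docker\redis-share` host path; the commands are printed for review rather than executed:

```shell
# Print the docker run command for redis-node-1 .. redis-node-N.
# Review the output, then pipe it to `sh` to actually start the containers.
gen_nodes() {
  i=1
  while [ "$i" -le "$1" ]; do
    echo "docker run -d --name redis-node-$i --net host --privileged=true" \
         "-v D:\\Docker\\redis-share\\redis-node-$i:/data redis:6.0.8" \
         "--cluster-enabled yes --appendonly yes --port $((6380 + i))"
    i=$((i + 1))
  done
}

gen_nodes 6        # review, then: gen_nodes 6 | sh
```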


2: Enter container redis-node-1 and build the cluster across the six nodes
  • Enter the container

docker exec -it redis-node-1 /bin/bash

  • Build the master/replica topology (use the host machine's IP; the transcript below was captured with 127.0.0.1)

redis-cli --cluster create 192.168.43.176:6381 192.168.43.176:6382 192.168.43.176:6383 192.168.43.176:6384 192.168.43.176:6385 192.168.43.176:6386 --cluster-replicas 1

--cluster-replicas 1 creates one slave (replica) node for each master

root@docker-desktop:/data# redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6385 to 127.0.0.1:6381
Adding replica 127.0.0.1:6386 to 127.0.0.1:6382
Adding replica 127.0.0.1:6384 to 127.0.0.1:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   replicates a13417523319078aeb96fe85814d745349798914
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
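The split shown above (5461 / 5462 / 5461 slots) is just the 16384 hash slots divided as evenly as possible among the three masters. A sketch of the arithmetic, with boundary i at round((i+1) · 16384 / n), computed here with integer math:

```shell
# Reproduce the "Master[i] -> Slots a - b" ranges printed by
# redis-cli --cluster create: the i-th boundary is round((i+1)*16384/n) - 1.
slot_ranges() {
  n=$1; first=0; i=0
  while [ "$i" -lt "$n" ]; do
    # integer rounding: round(x/y) == (2x + y) / (2y) with integer division
    last=$(( ((i + 1) * 16384 * 2 + n) / (2 * n) - 1 ))
    echo "Master[$i] -> Slots $first - $last"
    first=$((last + 1))
    i=$((i + 1))
  done
}

slot_ranges 3
# Master[0] -> Slots 0 - 5460
# Master[1] -> Slots 5461 - 10922
# Master[2] -> Slots 10923 - 16383
```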

3: Connect to 6381 as an entry point and check the cluster status
  • cluster info

  • cluster nodes

root@docker-desktop:/data# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:228
cluster_stats_messages_pong_sent:249
cluster_stats_messages_sent:477
cluster_stats_messages_ping_received:244
cluster_stats_messages_pong_received:228
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:477
127.0.0.1:6381> cluster nodes
af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384@16384 slave 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 0 1649171931198 1 connected
3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381@16381 myself,master - 0 1649171934000 1 connected 0-5460
b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383@16383 master - 0 1649171934212 3 connected 10923-16383
a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382@16382 master - 0 1649171935214 2 connected 5461-10922
93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386@16386 slave b42b70e3d1208919ece928bd49e1c67a41121a0f 0 1649171933208 3 connected
fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385@16385 slave a13417523319078aeb96fe85814d745349798914 0 1649171933000 2 connected
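Each `cluster nodes` line is whitespace-separated: node id, ip:port@cluster-port, flags, master id (for slaves), then timing/slot fields. A small awk sketch that pulls out the replica relationships, fed with two lines from the output above:

```shell
# Extract master -> slave pairs from `cluster nodes` output with awk:
# field 3 holds the flags, field 4 the master's node id for slave lines.
nodes='af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384@16384 slave 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 0 1649171931198 1 connected
3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381@16381 myself,master - 0 1649171934000 1 connected 0-5460'

echo "$nodes" | awk '$3 ~ /slave/ { print $2, "replicates master", $4 }'
# prints: 127.0.0.1:6384@16384 replicates master 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
```

On a live cluster, pipe the real output instead: `redis-cli -p 6381 cluster nodes | awk ...`.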

4: Connect to Redis in cluster mode

redis-cli -p 6381 -c

Without -c, a write to a key whose hash slot lives on another master fails with a MOVED error; with -c, redis-cli follows the redirect automatically.


5: Inspect the cluster information

redis-cli --cluster check host:port

redis-cli --cluster check 127.0.0.1:6381


6: Master/slave failover
  • 1: Stop master 6381; its slave should be promoted to master automatically [TODO: did not succeed in my test]. (Note: promotion only starts after cluster-node-timeout elapses, 15 s by default, and per the transcript above 6381's replica in this run is actually 6384, not 6385 — check `cluster nodes` for your own assignment.)

docker stop redis-node-1

  • 2: Restore the original three-masters / three-slaves layout

1: Start 6381; it now rejoins as a slave of 6385

2: Stop 6385; its slave 6381 is promoted back to master

3: Start 6385; it rejoins as a slave of 6381
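The three restore steps above can be sketched as a script; the `sleep`s give the cluster time to detect each change (assuming the default cluster-node-timeout). Commands are echoed so the sequence can be reviewed first:

```shell
# Sketch of the restore sequence: print each step; remove the `echo`s
# (or pipe the output to sh) to actually execute them.
restore_steps() {
  echo docker start redis-node-1   # 6381 rejoins as a slave
  echo sleep 20                    # wait past cluster-node-timeout
  echo docker stop redis-node-5    # 6385 down; slave 6381 promoted to master
  echo sleep 20
  echo docker start redis-node-5   # 6385 rejoins as a slave of 6381
}

restore_steps
```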


7: Scale the cluster out
  • 1: Create two new nodes, 6387 and 6388

docker run -d --name redis-node-7 --net host --privileged=true -v D:\Docker\redis-share\redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387

docker run -d --name redis-node-8 --net host --privileged=true -v D:\Docker\redis-share\redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388

  • 2: Enter the redis-node-7 container

docker exec -it redis-node-7 /bin/bash

  • 3: Add the new node 6387 (no slots assigned yet) to the existing cluster as a master

Add [ip_x:port_x] to the cluster via existing member [ip_a:port_a]:

redis-cli --cluster add-node ip_x:port_x ip_a:port_a (an existing node, used as the entry point)

redis-cli --cluster add-node 127.0.0.1:6387 127.0.0.1:6381

root@docker-desktop:/data# redis-cli --cluster add-node 127.0.0.1:6387 127.0.0.1:6381
>>> Adding node 127.0.0.1:6387 to cluster 127.0.0.1:6381
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6387 to make it join the cluster.
[OK] New node added correctly.
  • 4: Check the cluster slot allocation (1st time)

redis-cli --cluster check 127.0.0.1:6387

root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6387
# no slots assigned yet
127.0.0.1:6387 (480eb0d7...) -> 0 keys | 0 slots | 0 slaves. 
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 5461 slots | 1 slaves.
127.0.0.1:6381 (3d83a29a...) -> 2 keys | 5461 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6387)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots: (0 slots) master
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • 5: Reshard (reassign slots to the new master)

redis-cli --cluster reshard ip:port (any of the existing master instances)

redis-cli --cluster reshard 127.0.0.1:6381

root@docker-desktop:/data# redis-cli --cluster reshard 127.0.0.1:6381
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots: (0 slots) master
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
 # number of slots to move to the new node
How many slots do you want to move (from 1 to 16384)? 4096
# receiving node: 6387's node id
What is the receiving node ID? 480eb0d7347aad32a55b10ec52375d9651e69030
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Ready to move 4096 slots.
  Source nodes:
    M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
  Destination node:
    M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
       slots: (0 slots) master
  Resharding plan:
    Moving slot 5461 from a13417523319078aeb96fe85814d745349798914
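The 4096 entered at the prompt above is simply the new master's even share of the slot space once the cluster has four masters:

```shell
# Even share of the 16384 hash slots for each of N masters.
fair_share() { echo $((16384 / $1)); }

fair_share 4   # prints 4096, the value entered at the reshard prompt
```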
  • 6: Check the cluster (2nd time)

redis-cli --cluster check 127.0.0.1:6387

root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6387
127.0.0.1:6387 (480eb0d7...) -> 1 keys | 4096 slots | 0 slaves.
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 4096 slots | 1 slaves.
127.0.0.1:6381 (3d83a29a...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6387)
# each of the three original masters handed part of its slot range to 6387
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • 7: Attach slave node 6388 to master 6387

redis-cli --cluster add-node 127.0.0.1:6388 127.0.0.1:6387 --cluster-slave --cluster-master-id 480eb0d7347aad32a55b10ec52375d9651e69030

root@docker-desktop:/data# redis-cli --cluster add-node 127.0.0.1:6388 127.0.0.1:6387 --cluster-slave --cluster-master-id 480eb0d7347aad32a55b10ec52375d9651e69030
>>> Adding node 127.0.0.1:6388 to cluster 127.0.0.1:6387
>>> Performing Cluster Check (using node 127.0.0.1:6387)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6388 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:6387.
[OK] New node added correctly.
  • 8: Check the cluster (3rd time)
root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6388
127.0.0.1:6381 (3d83a29a...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6387 (480eb0d7...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 4096 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6388)
S: eab39975bc2e9efd23cbfd3dd619efeab752c57d 127.0.0.1:6388
   slots: (0 slots) slave
   replicates 480eb0d7347aad32a55b10ec52375d9651e69030
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

8: Scale the cluster in
  • 1: Check the cluster to get 6388's node id
root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6388
127.0.0.1:6381 (3d83a29a...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6387 (480eb0d7...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 4096 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6388)
# [node id of 6388]
S: eab39975bc2e9efd23cbfd3dd619efeab752c57d 127.0.0.1:6388
   slots: (0 slots) slave
   replicates 480eb0d7347aad32a55b10ec52375d9651e69030
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • 2: Remove the slave node 6388 from the cluster

redis-cli --cluster del-node ip:port nodeID

redis-cli --cluster del-node 127.0.0.1:6388 eab39975bc2e9efd23cbfd3dd619efeab752c57d

root@docker-desktop:/data# redis-cli --cluster del-node 127.0.0.1:6388 eab39975bc2e9efd23cbfd3dd619efeab752c57d
>>> Removing node eab39975bc2e9efd23cbfd3dd619efeab752c57d from cluster 127.0.0.1:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
  • 3: Empty 6387's slots and reassign them. // In this example the freed slots all go to 6381

redis-cli --cluster reshard 127.0.0.1:6381

root@docker-desktop:/data# redis-cli --cluster reshard 127.0.0.1:6381
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# [number of slots to reclaim]
How many slots do you want to move (from 1 to 16384)? 4096
# [receiving node id: 6381's node id]
What is the receiving node ID? 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
 # [source node to reclaim slots from: 6387's node id]
Source node #1: 480eb0d7347aad32a55b10ec52375d9651e69030
# [type done to finish]
Source node #2: done

Ready to move 4096 slots.
  Source nodes:
    M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
       slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
  Destination node:
    M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
       slots:[1365-5460] (4096 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 0 from 480eb0d7347aad32a55b10ec52375d9651e69030
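The interactive prompts above can also be answered on the command line: redis-cli's reshard accepts --cluster-from, --cluster-to, --cluster-slots, and --cluster-yes flags. A sketch using the node IDs from this walkthrough (substitute your own); the command is stored and echoed so it can be reviewed before running:

```shell
# Non-interactive form of the reshard above: move all 4096 slots from
# 6387 (source) back to 6381 (destination). Node IDs are from this
# walkthrough; substitute the IDs from your own cluster.
cmd="redis-cli --cluster reshard 127.0.0.1:6381 \
--cluster-from 480eb0d7347aad32a55b10ec52375d9651e69030 \
--cluster-to 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 \
--cluster-slots 4096 --cluster-yes"

echo "$cmd"   # review, then execute with: eval "$cmd"
```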
  • 4: Check the cluster (2nd time)
root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6387
127.0.0.1:6387 (480eb0d7...) -> 0 keys | 0 slots | 0 slaves.
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 4096 slots | 1 slaves.
127.0.0.1:6381 (3d83a29a...) -> 2 keys | 8192 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6387)
# note: at this point 6387 no longer holds any slots
M: 480eb0d7347aad32a55b10ec52375d9651e69030 127.0.0.1:6387
   slots: (0 slots) master
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
   
 # [6381 now owns two slot ranges]
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • 5: Remove the master node 6387

redis-cli --cluster del-node 127.0.0.1:6387 480eb0d7347aad32a55b10ec52375d9651e69030

root@docker-desktop:/data# redis-cli --cluster del-node 127.0.0.1:6387 480eb0d7347aad32a55b10ec52375d9651e69030
>>> Removing node 480eb0d7347aad32a55b10ec52375d9651e69030 from cluster 127.0.0.1:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
  • 6: Check the cluster (3rd time)
root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6381
127.0.0.1:6381 (3d83a29a...) -> 2 keys | 8192 slots | 1 slaves.
127.0.0.1:6382 (a1341752...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (b42b70e3...) -> 2 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7 127.0.0.1:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: a13417523319078aeb96fe85814d745349798914 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: fc40e09aaa16cc553803594185a0c7d9e9d0598b 127.0.0.1:6385
   slots: (0 slots) slave
   replicates a13417523319078aeb96fe85814d745349798914
M: b42b70e3d1208919ece928bd49e1c67a41121a0f 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 93e86133e673e5ae9cea8d0c529a546f710f006d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates b42b70e3d1208919ece928bd49e1c67a41121a0f
S: af55c54373c36edb4f7e60707baf6d3a7ce65c03 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3d83a29a60619a71b9f1b2db2b1a17d2cadcc2c7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.