MySQL (MGR) 06 - MGR Cluster Environment in Single-Primary and Multi-Primary Mode

  1. Clone the VM from MySQL (MGR) 05 and modify its configuration (node 211 shown here; nodes 212 and 213 are done the same way)
[root@xag200 ~]# service mysqld stop
Shutting down MySQL.... SUCCESS!

[root@xag200 ~]# vim /usr/local/mysql/data/auto.cnf
[root@xag200 ~]# cat /usr/local/mysql/data/auto.cnf
[auto]
server-uuid=2116c7e9-63c9-11eb-a1ba-000c299e2211

[root@xag200 ~]# sed -i 's/127.0.0.1/xag211/' /root/.bashrc

[root@xag200 ~]# source  /root/.bash_profile

[root@xag200 ~]# hostname
xag211

[root@xag200 ~]# cat /etc/hostname
xag211

[root@xag200 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NOZEROCONF=yes
HOSTNAME=xag211

[root@xag200 ~]# cat /etc/hosts
...
192.168.40.211 xag211 xag211
192.168.40.212 xag212 xag212
192.168.40.213 xag213 xag213

[root@xag200 ~]# cat  /etc/sysconfig/network-scripts/ifcfg-ens33
...
IPADDR="192.168.40.211"
...

[root@xag200 ~]# reboot
  2. Edit the #Replication framework settings and #Group replication settings sections of /etc/my.cnf as follows (xag212 and xag213 are similar)
    Apart from the three parameters server_id, loose-group_replication_local_address, and report_host, which differ per node, every MGR node uses identical settings (a quick check of the effective values follows the settings block).
#Replication framework settings
server_id = 211
# Unique ID of this server; for replication or multiple instances, server_id must never be the same on two servers.
log_bin = /usr/local/mysql/binlog/binlog
# Binary log of database changes (update, delete, create, ...); the binlog makes incremental recovery possible.
binlog_format = row
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_slave_updates=ON
master_info_repository =TABLE
relay_log_info_repository=TABLE
 
#Group replication settings
# The server must collect the write set of every transaction and encode it as a hash using the XXHASH64 algorithm
transaction_write_set_extraction=XXHASH64
# UUID naming the group this member joins or creates
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
# Do not start group replication automatically at server startup; set to OFF to avoid bootstrapping a second group with the same name on every restart
loose-group_replication_start_on_boot=off
# IP address and port (24901) this member uses to accept incoming connections from other group members
loose-group_replication_local_address="xag211:24901"
# Seed members: the host:port pairs a joining server contacts; other servers join the group only with the members' agreement
loose-group_replication_group_seeds="xag211:24901,xag212:24901,xag213:24901"
loose-group_replication_bootstrap_group=off
report_host=xag211
report_port=3306
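
After restarting mysqld, the values that differ per node can be double-checked from a SQL session. This is an optional sanity check, not part of the original walkthrough; the group_replication_* variables only appear once the plugin from step 3 has been installed.
# Optional per-node sanity check (assumes the group_replication plugin is installed)
SHOW VARIABLES LIKE 'server_id';
SHOW VARIABLES LIKE 'report_host';
SHOW VARIABLES LIKE 'group_replication_local_address';
SHOW VARIABLES LIKE 'group_replication_group_seeds';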

  3. Install MGR (on every node)

Once the configuration is in place, restart the database on each node, install the MGR plugin, and create the replication account (run this on all MGR nodes).

systemctl restart mysqld

mysql.login

INSTALL PLUGIN group_replication SONAME 'group_replication.so';

SET SQL_LOG_BIN=0;
 
CREATE USER repl@'%' IDENTIFIED BY 'repl';
 
GRANT REPLICATION SLAVE ON *.* TO repl@'%';
 
FLUSH PRIVILEGES;
 
SET SQL_LOG_BIN=1;
 
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
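
Before bootstrapping the group, it may be worth confirming that the plugin is ACTIVE and that the repl account exists. This check is a suggestion, not part of the original steps.
# Optional verification (a suggestion, not from the original procedure)
SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME = 'group_replication';
SELECT user, host FROM mysql.user WHERE user = 'repl';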
  4. Start MGR in single-primary mode
    4.1 Add the primary (211) node to the MGR cluster
1) Start MGR; run the following on the primary (211) node
root@xag211:(none) [:30: ] 1 SQL->
SET GLOBAL group_replication_bootstrap_group=ON;
  
START GROUP_REPLICATION;
  
SET GLOBAL group_replication_bootstrap_group=OFF;
  
Check the MGR group membership:
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+-------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+-------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+-------------+

4.2 Add the other nodes to the MGR cluster; run the following on the secondary xag212 (192.168.40.212)

root@xag212:(none) [:36: ] 1 SQL->
START GROUP_REPLICATION;

Check the MGR group membership again (this can be done on any of the three MGR nodes):
root@xag212:(none) [:37: ] 4 SQL->
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
| group_replication_applier | 2126c7e9-63c9-11eb-a1ba-000c299e2212 | xag212      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
  5. Identify the master (primary) node
Method 1
root@xag212:(none) [:50: ] 13 SQL->
SELECT ta.* ,tb.MEMBER_HOST,tb.MEMBER_PORT,tb.MEMBER_STATE FROM 
performance_schema.global_status ta,performance_schema.replication_group_members tb 
WHERE ta.VARIABLE_NAME='group_replication_primary_member' and ta.VARIABLE_VALUE=tb.MEMBER_ID;
+----------------------------------+--------------------------------------+-------------+-------------+--------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+----------------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_primary_member | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
+----------------------------------+--------------------------------------+-------------+-------------+--------------+

Method 2
root@xag212:(none) [:52: ] 18 SQL->
SELECT VARIABLE_VALUE FROM performance_schema.global_status 
WHERE VARIABLE_NAME='group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| 2116c7e9-63c9-11eb-a1ba-000c299e2211 |
+--------------------------------------+

Method 3
root@xag212:(none) [:53: ] 19 SQL->SHOW STATUS LIKE 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 2116c7e9-63c9-11eb-a1ba-000c299e2211 |
+----------------------------------+--------------------------------------+
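
Method 4 (optional): on MySQL 8.0 and later the members table also exposes a MEMBER_ROLE column, so the primary can be read directly. The setup in this article relies on the 5.7-style status variable shown above, so treat this purely as an alternative for newer versions.
# MySQL 8.0+ only; not applicable to the 5.7-style setup used here
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_ROLE
FROM performance_schema.replication_group_members;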

  6. Verify data synchronization and read/write behavior in MGR single-primary mode
Primary
root@xag211:(none) [:55: ] 7 SQL->show variables like '%read_only';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | OFF   |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
| tx_read_only          | OFF   |
+-----------------------+-------+

root@xag211:(none) [:55: ] 8 SQL->use testdb;

root@xag211:testdb [:56: ] 9 SQL->select * from t1;
+----+----------+
| Id | TestName |
+----+----------+
|  1 | a        |
|  2 | b        |
|  3 | c        |
+----+----------+

root@xag211:testdb [:56: ] 10 SQL->insert into t1 values(4,'d');

root@xag211:testdb [:56: ] 11 SQL->select * from t1;
+----+----------+
| Id | TestName |
+----+----------+
|  1 | a        |
|  2 | b        |
|  3 | c        |
|  4 | d        |
+----+----------+

Secondary
root@xag212:(none) [:55: ] 22 SQL->show variables like '%read_only';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | ON    |
| super_read_only       | ON    |
| transaction_read_only | OFF   |
| tx_read_only          | OFF   |
+-----------------------+-------+

root@xag212:(none) [:56: ] 23 SQL->use testdb;

root@xag212:testdb [:57: ] 24 SQL->select * from t1;
+----+----------+
| Id | TestName |
+----+----------+
|  1 | a        |
|  2 | b        |
|  3 | c        |
|  4 | d        |
+----+----------+

root@xag212:testdb [:57: ] 25 SQL->insert into t1 values(5,'e');
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
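
As an extra synchronization check (not in the original article), the executed GTID sets can be compared across the nodes; in a healthy group they converge to the same set.
# Optional: run on every node and compare the results
SELECT @@GLOBAL.gtid_executed;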
  7. Shut down all group members
When shutting down all group members, stop the slaves first and stop the master last.
[root@xag212 ~]# service mysqld stop

root@xag211:testdb [:56: ] 12 SQL->select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

[root@xag211 ~]# systemctl stop mysqld

  8. Start all group members

To bring group replication back up, start the primary first, then start the slave servers one by one.
[root@xag211 ~]# service mysqld start

# group_replication_bootstrap_group was set to OFF in the config file, so it has to be switched to ON manually here.
root@xag211:(none) [:05: ] 1 SQL->show variables like 'group_replication_bootstrap_group';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| group_replication_bootstrap_group | OFF   |
+-----------------------------------+-------+

root@xag211:(none) [:07: ] 3 SQL->set global group_replication_bootstrap_group=on;

root@xag211:(none) [:07: ] 4 SQL->select * from performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier |           |             |        NULL | OFFLINE      |
+---------------------------+-----------+-------------+-------------+--------------+
# Before group replication is started, member_state is OFFLINE. After starting group replication, remember to set group_replication_bootstrap_group back to OFF.
root@xag211:(none) [:08: ] 6 SQL->start group_replication;

root@xag211:(none) [:08: ] 7 SQL->set global group_replication_bootstrap_group=off;

root@xag211:(none) [:09: ] 8 SQL->
select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

Once the primary server is ready, the remaining slave databases can be started and joined to the group one by one.
[root@xag212 ~]# service mysqld start

[root@xag212 ~]# mysql.login

root@xag212:(none) [:13: ] 2 SQL->start group_replication;

# Check after starting
root@xag211:(none) [:12: ] 10 SQL->
select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
| group_replication_applier | 2126c7e9-63c9-11eb-a1ba-000c299e2212 | xag212      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
  9. Add the third node
root@xag213:(none) [:16: ] 1 SQL->START GROUP_REPLICATION;

root@xag213:(none) [:16: ] 2 SQL->
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2116c7e9-63c9-11eb-a1ba-000c299e2211 | xag211      |        3306 | ONLINE       |
| group_replication_applier | 2126c7e9-63c9-11eb-a1ba-000c299e2212 | xag212      |        3306 | ONLINE       |
| group_replication_applier | 2136c7e9-63c9-11eb-a1ba-000c299e2213 | xag213      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

Troubleshooting 1

# If the following error is reported:
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
check the error log:

2019-03-04T09:11:30.683714Z 0 [ERROR] Plugin group_replication reported: 
'This member has more executed transactions than those present in the group. Local transactions: 
87135ebb-3e51-11e9-8931-005056880888:1-2 > 
Group transactions: 851d03bb-3e51-11e9-8f8d-00505688047c:1-2,
8769f936-3e51-11e9-acaa-005056ac6820:1-2,
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-4'
2019-03-04T09:11:30.683817Z 0 [Warning] Plugin group_replication reported: 
'The member contains transactions not present in the group. 
It is only allowed to join due to group_replication_allow_local_disjoint_gtids_join option'
  
Solution:
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;

Then join the MGR cluster again:
mysql> START GROUP_REPLICATION;
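
An alternative approach, assuming the stray local transactions on the joining node are disposable (for example, they came only from local setup statements run before joining), is to clear the node's own GTID history and rejoin. This is a sketch under that assumption, not the article's method, and it must not be used if the node holds data the group still needs.
mysql> STOP GROUP_REPLICATION;
mysql> RESET MASTER;   # discards this node's local binlog/GTID history
mysql> START GROUP_REPLICATION;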

Troubleshooting 2

Check the MGR group membership (can be done on any of the three MGR nodes):
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 851d03bb-3e51-11e9-8f8d-00505688047c | 172.16.60.212 |        3306 | RECOVERING   |
| group_replication_applier | 87135ebb-3e51-11e9-8931-005056880888 | 172.16.60.213 |        3306 | RECOVERING   |
| group_replication_applier | 8769f936-3e51-11e9-acaa-005056ac6820 | 172.16.60.211 |        3306 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
3 rows in set (0.00 sec)
  
The newly added nodes MGR-node2 and MGR-node3 are stuck in the RECOVERING state in the group!
  
Check the log:
[root@MGR-node3 ~]# tail -2000 /var/log/mysqld.log
.....................
.....................
2019-03-04T09:15:35.146740Z 734 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when
reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has
purged binary logs containing GTIDs that the slave requires.', Error_code: 1236
  
Solution:
Log in to the primary 172.16.60.211 and check the GTIDs that have been purged:
[root@MGR-node1 ~]#  mysql -p123456
....................
mysql> show global variables like 'gtid_purged';
+---------------+------------------------------------------+
| Variable_name | Value                                    |
+---------------+------------------------------------------+
| gtid_purged   | 8769f936-3e51-11e9-acaa-005056ac6820:1-2 |
+---------------+------------------------------------------+
1 row in set (0.00 sec)
  
Then run the following on the two secondaries, 172.16.60.212 and 172.16.60.213, to skip these GTIDs:
mysql> STOP GROUP_REPLICATION;
Query OK, 0 rows affected (10.14 sec)
  
mysql> reset master;
Query OK, 0 rows affected (0.06 sec)
  
mysql> set global gtid_purged = '8769f936-3e51-11e9-acaa-005056ac6820:1-2';
Query OK, 0 rows affected (0.24 sec)
  
mysql>  START GROUP_REPLICATION;
Query OK, 0 rows affected, 1 warning (3.49 sec)
  
Check the MGR group membership once more (can be done on any of the three MGR nodes):
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 851d03bb-3e51-11e9-8f8d-00505688047c | 172.16.60.212 |        3306 | ONLINE       |
| group_replication_applier | 87135ebb-3e51-11e9-8931-005056880888 | 172.16.60.213 |        3306 | ONLINE       |
| group_replication_applier | 8769f936-3e51-11e9-acaa-005056ac6820 | 172.16.60.211 |        3306 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
3 rows in set (0.00 sec)
  
The output above shows that all three MGR nodes are ONLINE, the primary is 172.16.60.211, only the primary accepts writes, and the other two MGR nodes are read-only: the MGR single-primary setup is complete.
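
The title also mentions multi-primary mode, which this walkthrough does not demonstrate. As a hedged sketch only: on MySQL 5.7 a group can be switched to multi-primary by stopping group replication on every member, changing the two variables below (or the matching loose-group_replication_* lines in /etc/my.cnf), and then bootstrapping one member and starting the others as in step 8.
# Hedged sketch -- run only while group replication is stopped on all members
SET GLOBAL group_replication_single_primary_mode = OFF;
SET GLOBAL group_replication_enforce_update_everywhere_checks = ON;
# then bootstrap one member and run START GROUP_REPLICATION on the rest, as in step 8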