MySQL: Analyzing a Bug Where Semi-Synchronous Replication Stops Working

Problem Description

MySQL versions: 5.7.16, 5.7.17, 5.7.21

With multiple semi-sync slaves and rpl_semi_sync_master_wait_for_slave_count = 1, the first semi-sync slave starts normally, but after the second one is started there is a high probability that its slave_io_thread stalls: replication status looks healthy (Slave_IO_Running: Yes, Slave_SQL_Running: Yes), yet the slave receives none of the master's binlog.

Steps to Reproduce

1. Configure the master with the following parameters:

rpl_semi_sync_master_wait_for_slave_count = 1
rpl_semi_sync_master_wait_no_slave = OFF
rpl_semi_sync_master_enabled = ON
rpl_semi_sync_master_wait_point = AFTER_SYNC

2. Start semi-sync replication on slave A with START SLAVE; slave A replicates normally.

3. Start semi-sync replication on slave B with START SLAVE; slave B's replication threads look normal, but it does not sync the master's binlog.

Analysis

To get to the bottom of this problem, we need to correlate some other MySQL state, in particular the state of the dump threads on the master:

1. After slave A starts replication, semi-sync is up on the master:

show global status like '%semi%';
+--------------------------------------------+-----------+
| Variable_name | Value |
+--------------------------------------------+-----------+
| Rpl_semi_sync_master_clients | 1
....
| Rpl_semi_sync_master_status | ON
...

The master's dump thread has started as well:

select * from performance_schema.threads where PROCESSLIST_COMMAND='Binlog Dump GTID'\G
*************************** 1. row ***************************
THREAD_ID: 21872
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 21824
PROCESSLIST_USER: universe_op
PROCESSLIST_HOST: 172.16.21.5
PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: Binlog Dump GTID
PROCESSLIST_TIME: 300
PROCESSLIST_STATE: Master has sent all binlog to slave; waiting for more updates
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: TCP/IP
THREAD_OS_ID: 24093

The master's error log also shows that dump thread 21824 started successfully and switched semi-sync replication on:

2018-05-25T11:21:58.385227+08:00 21824 [Note] Start binlog_dump to master_thread_id(21824) slave_server(1045850818), pos(, 4)
2018-05-25T11:21:58.385267+08:00 21824 [Note] Start semi-sync binlog_dump to slave (server_id: 1045850818), pos(, 4)
2018-05-25T11:21:59.045568+08:00 0 [Note] Semi-sync replication switched ON at (mysql-bin.000005, 81892061)

2. After slave B starts replication, the master's semi-sync status still shows only one semi-sync client (Rpl_semi_sync_master_clients = 1):

show global status like '%semi%';
+--------------------------------------------+-----------+
| Variable_name | Value |
+--------------------------------------------+-----------+
| Rpl_semi_sync_master_clients | 1
...
| Rpl_semi_sync_master_status | ON
...

Looking at the master's dump threads again, there are now three of them, but the two newly started ones are stuck in the "starting" state:

select * from performance_schema.threads where PROCESSLIST_COMMAND='Binlog Dump GTID'\G
*************************** 1. row ***************************
THREAD_ID: 21872
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 21824
PROCESSLIST_USER: universe_op
PROCESSLIST_HOST: 172.16.21.5
PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: Binlog Dump GTID
PROCESSLIST_TIME: 695
PROCESSLIST_STATE: Master has sent all binlog to slave; waiting for more updates
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: TCP/IP
THREAD_OS_ID: 24093
*************************** 2. row ***************************
THREAD_ID: 21895
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 21847
PROCESSLIST_USER: universe_op
PROCESSLIST_HOST: 172.16.21.6
PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: Binlog Dump GTID
PROCESSLIST_TIME: 94
PROCESSLIST_STATE: starting
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: TCP/IP
THREAD_OS_ID: 24102

*************************** 3. row ***************************
THREAD_ID: 21898
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 21850
PROCESSLIST_USER: universe_op
PROCESSLIST_HOST: 172.16.21.6
PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: Binlog Dump GTID
PROCESSLIST_TIME: 34
PROCESSLIST_STATE: starting
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: TCP/IP
THREAD_OS_ID: 24966
3 rows in set (0.00 sec)

The master's error log shows that the new dump thread 21847 never came up. A minute later the slave retried (governed by Connect_Retry and Master_Retry_Count) and the master started yet another dump thread, 21850, which also failed to come up; moreover, the zombie thread 21847 could not be stopped:

2018-05-25T11:31:59.586214+08:00 21847 [Note] Start binlog_dump to master_thread_id(21847) slave_server(873074711), pos(, 4)
2018-05-25T11:32:59.642278+08:00 21850 [Note] While initializing dump thread for slave with UUID <f4958715-5ef3-11e8-9271-0242ac101506>, found a zombie dump thread with the same UUID. Master is killing the zombie dump thread(21847).
2018-05-25T11:32:59.642452+08:00 21850 [Note] Start binlog_dump to master_thread_id(21850) slave_server(873074711), pos(, 4)

3. At this point we know the root cause of slave B's stalled slave_io_thread: the corresponding dump thread on the master cannot start. How do we analyze the thread calls further? gstack, or pstack (which is in fact a symlink to gstack), dumps the call stack of every thread in a process, and its usage is simply: gstack <process-id>

4. In the master's gstack output, the stack of OS thread 24102 (the older dump thread for slave B, PROCESSLIST_ID 21847 in the output above) is:

Thread 4 (Thread 0x7f0be0549700 (LWP 24102)):
#0 0x00007f0c03c0f1bd in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f0c03c0ad38 in _L_lock_975 () from /lib64/libpthread.so.0
#2 0x00007f0c03c0ace1 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007f0be7588d84 in native_mutex_lock (mutex=0x7f0be778daf0 <ack_receiver+16>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/thr_mutex.h:84
#4 my_mutex_lock (mp=0x7f0be778daf0 <ack_receiver+16>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/thr_mutex.h:182
#5 inline_mysql_mutex_lock (src_line=160, that=<optimized out>, src_file=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/mysql/psi/mysql_thread.h:730
#6 Ack_receiver::remove_slave (this=0x7f0be778dae0 <ack_receiver>, thd=0x7f0b8800fa20) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_ack_receiver.cc:160
#7 0x00007f0be7588313 in repl_semi_binlog_dump_end (param=0x7f0be0546e90) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_plugin.cc:171
#8 0x0000000000c6013c in Binlog_transmit_delegate::transmit_stop (this=0x1e57040 <delegates_init()::transmit_mem>, thd=<optimized out>, flags=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_handler.cc:666
#9 0x0000000000ed1ade in Binlog_sender::cleanup (this=0x7f0be0547540) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_binlog_sender.cc:175
#10 0x0000000000ed39eb in Binlog_sender::run (this=0x7f0be0547540) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_binlog_sender.cc:280
#11 0x0000000000ecdd12 in mysql_binlog_send (thd=<optimized out>, log_ident=<optimized out>, pos=<optimized out>, slave_gtid_executed=<optimized out>, flags=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_master.cc:415
#12 0x0000000000ed0533 in com_binlog_dump_gtid (thd=0x7f0b8800fa20, packet=0x4 <Address 0x4 out of bounds>, packet_length=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_master.cc:399
#13 0x0000000000cf66ac in dispatch_command (thd=0x7f0b8800fa20, com_data=0x7f0be0548de0, command=COM_BINLOG_DUMP_GTID) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/sql_parse.cc:1643
#14 0x0000000000cf7c94 in do_command (thd=0x7f0b8800fa20) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/sql_parse.cc:997
#15 0x0000000000dc2a34 in handle_connection (arg=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/conn_handler/connection_handler_per_thread.cc:301
#16 0x00000000011cf5f4 in pfs_spawn_thread (arg=0x30f7820) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/storage/perfschema/pfs.cc:2188
#17 0x00007f0c03c08dc5 in start_thread () from /lib64/libpthread.so.0
#18 0x00007f0c028d276d in clone () from /lib64/libc.so.6

And the stack of OS thread 24966 (the newer dump thread, PROCESSLIST_ID 21850):

Thread 12 (Thread 0x7f0be0751700 (LWP 24966)):
#0 0x00007f0c03c0f1bd in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f0c03c0ad38 in _L_lock_975 () from /lib64/libpthread.so.0
#2 0x00007f0c03c0ace1 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007f0be75895cc in native_mutex_lock (mutex=0x7f0be778daf0 <ack_receiver+16>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/thr_mutex.h:84
#4 my_mutex_lock (mp=0x7f0be778daf0 <ack_receiver+16>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/thr_mutex.h:182
#5 inline_mysql_mutex_lock (src_line=138, that=0x7f0be778daf0 <ack_receiver+16>, src_file=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/include/mysql/psi/mysql_thread.h:730
#6 Ack_receiver::add_slave (this=0x7f0be778dae0 <ack_receiver>, thd=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_ack_receiver.cc:138
#7 0x00007f0be7588405 in repl_semi_binlog_dump_start (param=0x7f0be074ecc0, log_file=0x7f0be074ff20 "", log_pos=4) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_plugin.cc:131
#8 0x0000000000c6043a in Binlog_transmit_delegate::transmit_start (this=0x1e57040 <delegates_init()::transmit_mem>, thd=<optimized out>, flags=<optimized out>, log_file=0x7f0be074ff20 "", log_pos=4, observe_transmission=0x7f0be074fecc) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_handler.cc:652
#9 0x0000000000ed1eb8 in Binlog_sender::init (this=0x7f0be074f540) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_binlog_sender.cc:111
#10 0x0000000000ed375b in Binlog_sender::run (this=0x7f0be074f540) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_binlog_sender.cc:200
#11 0x0000000000ecdd12 in mysql_binlog_send (thd=<optimized out>, log_ident=<optimized out>, pos=<optimized out>, slave_gtid_executed=<optimized out>, flags=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_master.cc:415
#12 0x0000000000ed0533 in com_binlog_dump_gtid (thd=0x7f0bb0045770, packet=0x4 <Address 0x4 out of bounds>, packet_length=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/rpl_master.cc:399
#13 0x0000000000cf66ac in dispatch_command (thd=0x7f0bb0045770, com_data=0x7f0be0750de0, command=COM_BINLOG_DUMP_GTID) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/sql_parse.cc:1643
#14 0x0000000000cf7c94 in do_command (thd=0x7f0bb0045770) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/sql_parse.cc:997
#15 0x0000000000dc2a34 in handle_connection (arg=<optimized out>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/sql/conn_handler/connection_handler_per_thread.cc:301
#16 0x00000000011cf5f4 in pfs_spawn_thread (arg=0x3051570) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/storage/perfschema/pfs.cc:2188
#17 0x00007f0c03c08dc5 in start_thread () from /lib64/libpthread.so.0

Both threads are waiting on the Ack_receiver lock, while thread 21875 holds the lock and is sitting in select():

Thread 15 (Thread 0x7f0bce7fc700 (LWP 21875)):
#0 0x00007f0c028c9bd3 in select () from /lib64/libc.so.6
#1 0x00007f0be7589070 in Ack_receiver::run (this=0x7f0be778dae0 <ack_receiver>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_ack_receiver.cc:261
#2 0x00007f0be75893f9 in ack_receive_handler (arg=0x7f0be778dae0 <ack_receiver>) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/plugin/semisync/semisync_master_ack_receiver.cc:34
#3 0x00000000011cf5f4 in pfs_spawn_thread (arg=0x2d68f00) at /export/home/pb2/build/sb_0-19016729-1464157976.67/mysql-5.7.13/storage/perfschema/pfs.cc:2188
#4 0x00007f0c03c08dc5 in start_thread () from /lib64/libpthread.so.0
#5 0x00007f0c028d276d in clone () from /lib64/libc.so.6

In theory select() should not hang here, nor should the logic in Ack_receiver loop forever, so I turned to 黄炎, a source-code expert at our company, for a round of code analysis.

  1. The following code in semisync_master_ack_receiver.cc monopolizes the mutex, starving the other contenders:

void Ack_receiver::run()
{
  ...
  while (1)
  {
    mysql_mutex_lock(&m_mutex);
    ...
    select(...);
    ...
    mysql_mutex_unlock(&m_mutex);
  }
  ...
}

After the call to mysql_mutex_unlock(), other threads should be given a chance to be scheduled.

Experiment: adding sched_yield(); right after the mysql_mutex_unlock() call makes the symptom disappear.

Conclusion

  1. The root cause of the stalled slave_io_thread on the slave is that the corresponding dump thread on the master cannot start;

  2. With rpl_semi_sync_master_wait_for_slave_count = 1, once the first semi-sync slave is up, the master's ack_receiver thread loops continuously checking whether the number of acks received >= rpl_semi_sync_master_wait_for_slave_count. Since that check is true, the ack_receiver is almost never idle and effectively holds the lock the whole time. When the second semi-sync slave starts, the master needs to start a second dump thread, and starting that dump thread requires the ack_receiver lock to be released; it never gets the lock, so the second dump thread can never start.

This bug, together with a patch, was reported by a colleague of ours back in January: https://bugs.mysql.com/bug.php?id=89370. It is fixed in 5.7.24.
