Common Problems and Solutions When Setting Up MySQL MGR

Issues encountered while deploying MGR

In practice I deployed three MGR environments: a single-host multi-instance setup, a multi-host setup within one subnet, and a multi-host setup across subnets. The deployment process is largely the same in all three, but there are a few differences. Here I list the problems I ran into during deployment for your reference; if they happen to solve an issue you hit while deploying, all the better.
01 Common Issue 1
[ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-11'
[ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
[Note] Plugin group_replication reported: 'To force this member into the group you can use the group_replication_allow_local_disjoint_gtids_join option'
Solution:
As the log message suggests, enable the option: set global group_replication_allow_local_disjoint_gtids_join=ON;
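A minimal sketch of the whole sequence on the failing member (the option name is taken from the log message above; note that this variable was deprecated in MySQL 5.7.21 and removed in 8.0, where the divergent GTIDs have to be reconciled instead of overridden):

```sql
-- Run on the member that refused to join (MySQL 5.7 only).
-- This only forces the join; the local transactions flagged in
-- the log remain present on this member alone.
set global group_replication_allow_local_disjoint_gtids_join=ON;
start group_replication;
```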
02 Common Issue 2
[ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-15'
[Warning] Plugin group_replication reported: 'The member contains transactions not present in the group. It is only allowed to join due to group_replication_allow_local_disjoint_gtids_join option'
[Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address localhost.localdomaion:3306.'
Solution:
This differs from Issue 1 in that the problem occurs even though group_replication_allow_local_disjoint_gtids_join is already set to ON. The fix is to run reset master, then reconfigure the recovery channel on the primary and secondary nodes:
change master to master_user='rpl_user', master_password='rpl_pass' for channel 'group_replication_recovery';
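The complete sequence might look like the sketch below. RESET MASTER discards the member's binary logs and local GTID history, so this assumes the extra local transactions flagged in the log are disposable:

```sql
-- On the member whose local GTIDs diverged from the group:
stop group_replication;
reset master;
change master to master_user='rpl_user', master_password='rpl_pass' for channel 'group_replication_recovery';
start group_replication;
```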
03 Common Issue 3
While testing on a single machine, I ran into the following:
[Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
[ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'rpl_user@localhost.localdomaion:' - retry-time: 60 retries: 1, Error_code: 2005
[ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
[ERROR] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'
[Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt /'
Solution:
The cause was that all three hosts in the test environment had been given the same hostname; after changing the hostnames, the problem went away.
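To verify what each member advertises, you can check the group membership table; every member_host value must be unique and DNS-resolvable from the other nodes. If the hostname itself cannot be changed, the report_host option in my.cnf overrides the name a member advertises:

```sql
-- Each member_host below must be unique and resolvable
-- from every other member of the group.
select member_id, member_host, member_port, member_state
from performance_schema.replication_group_members;
```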
04 Common Issue 4
# On the production environment, the following error appeared:
mysql --root@localhost:(none) ::>>start group_replication;
ERROR (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
# The error log contained only one warning:
2019-02-20T07::30.233937Z [Warning] Plugin group_replication reported: 'Group Replication requires slave-preserve-commit-order to be set to ON when using more than 1 applier threads.'
Solution:
mysql --root@localhost:(none) ::>>show variables like "%preserve%";
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| slave_preserve_commit_order | OFF   |
+-----------------------------+-------+
1 row in set (0.01 sec)

mysql --root@localhost:(none) ::>>set global slave_preserve_commit_order=ON;
Query OK, 0 rows affected (0.00 sec)
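SET GLOBAL does not survive a restart, so the setting should also go into my.cnf. A sketch; in MySQL 5.7, slave_preserve_commit_order=ON additionally requires a multi-threaded applier with slave_parallel_type=LOGICAL_CLOCK and log_slave_updates enabled:

```ini
# my.cnf -- the worker count is an example value
slave_parallel_type         = LOGICAL_CLOCK
slave_parallel_workers      = 4
slave_preserve_commit_order = ON
log_slave_updates           = ON
```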
05 Common Issue 5
2019-02-20T08::31.088437Z [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 192.168.9.208 refused. Address is not in the IP whitelist.'
2019-02-20T08::32.088676Z [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 192.168.9.208 refused. Address is not in the IP whitelist.'
Solution:
Configure the group_replication_ip_whitelist parameter in my.cnf.
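A sketch of the my.cnf entry, using the subnet from the log above as an example; every address that group members connect from must be covered. The loose- prefix lets mysqld start even when the Group Replication plugin is not yet installed:

```ini
# my.cnf -- adjust the networks to your environment
loose-group_replication_ip_whitelist = "127.0.0.1/8,192.168.9.0/24"
```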
06 Common Issue 6
2019-02-20T08::44.087492Z [Warning] Plugin group_replication reported: 'read failed'
2019-02-20T08::44.096171Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24801'
2019-02-20T08::14.065775Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
Solution:
Set the group_replication_group_seeds parameter in my.cnf to contain only the other group members' IP addresses and internal communication ports, excluding the local member itself. Listing all members of the group, including the local node, triggered this error; this differs from how MGR is deployed when all members share the same subnet.
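For example, on a member whose own XCom address is 192.168.9.208:24801 (the port comes from the log above; the other IP addresses are placeholders), the cross-subnet configuration would look like:

```ini
# my.cnf on 192.168.9.208 -- the seed list excludes the local member itself
loose-group_replication_local_address = "192.168.9.208:24801"
loose-group_replication_group_seeds   = "192.168.8.101:24801,192.168.8.102:24801"
```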
07 Common Issue 7
[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase07: on local port: '
[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase08: on local port: '
[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase07: on local port: '
Solution:
The required port had not been opened on the firewall; opening it resolved the problem.
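A sketch of the firewall change, assuming firewalld and the ports used in this deployment (3306 for client traffic, 24801 for the group communication port seen in the earlier logs); run it on every member and adjust the ports to your configuration:

```shell
# open the MySQL client port and the MGR group communication port
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=24801/tcp
firewall-cmd --reload
```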
08 Common Issue 8
[Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
[ERROR] Slave I/O for channel 'group_replication_recovery': Master command COM_REGISTER_SLAVE failed: Access denied for user 'rpl_user'@'%' (using password: YES) (Errno: 1045), Error_code: 1597
[ERROR] Slave I/O thread couldn't register on master
[Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position
Solution:
The replication user was missing on one of the nodes. To be safe, run the following on every group node:
create user rpl_user@'%';
grant replication slave on *.* to rpl_user@'%' identified by 'rpl_pass';
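When creating the user on each node separately, it is common to switch off binary logging for these statements so that the user creation does not produce local GTIDs that conflict with the group (a sketch, reusing the article's user name and password):

```sql
-- keep the user creation out of the binary log so it
-- generates no local GTIDs on this member
set sql_log_bin=0;
create user if not exists rpl_user@'%' identified by 'rpl_pass';
grant replication slave on *.* to rpl_user@'%';
flush privileges;
set sql_log_bin=1;
```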
09 Common Issue 9
[ERROR] Failed to open the relay log './localhost-relay-bin.000011' (relay_log_pos ).
[ERROR] Could not find target log file mentioned in relay log info in the index file './work_nat_1-relay-bin.index' during relay log initialization.
[ERROR] Slave: Failed to initialize the master info structure for channel ''; its record may still be present in 'mysql.slave_master_info' table, consider deleting it.
[ERROR] Failed to open the relay log './localhost-relay-bin-group_replication_recovery.000001' (relay_log_pos ).
[ERROR] Could not find target log file mentioned in relay log info in the index file './work_nat_1-relay-bin-group_replication_recovery.index' during relay log initialization.
[ERROR] Slave: Failed to initialize the master info structure for channel 'group_replication_recovery'; its record may still be present in 'mysql.slave_master_info' table, consider deleting it.
[ERROR] Failed to create or recover replication info repositories.
[ERROR] Slave SQL for channel '': Slave failed to initialize relay log info structure from the repository, Error_code:
[ERROR] /usr/local/mysql/bin/mysqld: Slave failed to initialize relay log info structure from the repository
[ERROR] Failed to start slave threads for channel ''
Solution:
This error means the node has, for some reason, lost track of its relay log position; run reset slave to rebuild the relay log metadata.
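Following the fix above, a minimal sketch. RESET SLAVE deletes the relay log files and re-initializes the replication metadata while keeping each channel's connection settings, so the recovery channel does not need to be reconfigured afterwards (unlike RESET SLAVE ALL, which removes them):

```sql
stop group_replication;
reset slave;   -- deletes relay logs, keeps channel connection settings
start group_replication;
```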
Those are the problems I commonly ran into while setting up MySQL MGR, along with their solutions.
Original article: https://cloud.tencent.com/developer/article/1533657