I. Overview
1. Summary: MariaDB Galera Cluster is an architecture that provides a multi-master topology with real-time synchronous replication on top of the MySQL/InnoDB storage engine. The application layer needs no read/write splitting: database read and write load can be distributed across all nodes according to the configured rules. At the data level it is fully compatible with MariaDB, Percona Server and MySQL.
2. Features:
(1) Synchronous replication
(2) Active-active multi-master topology
(3) Reads and writes can be directed to any node in the cluster
(4) Automatic membership control: failed nodes are automatically removed from the cluster
(5) Automatic node joining
(6) True parallel replication, at row level
(7) Direct client connections using the native MySQL interface
(8) Every node holds a complete copy of the data
(9) Data synchronization between the databases is implemented through the wsrep API
3. Limitations:
(1) Replication currently only works for the InnoDB storage engine. Writes to tables using other engines, including the mysql.* tables, are not replicated. DDL statements, however, are replicated, so CREATE USER is replicated, but INSERT INTO mysql.user ... is not.
(2) DELETE is not supported on tables without a primary key. Rows in such tables may be ordered differently on different nodes, so SELECT ... LIMIT ... can return different result sets.
(3) In a multi-master setup LOCK/UNLOCK TABLES is not supported, nor are the locking functions GET_LOCK(), RELEASE_LOCK(), etc.
(4) The query log cannot be written to a table. If the query log is enabled, it can only go to a file.
(5) The maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size; any very large operation, such as a huge LOAD DATA, will be rejected (a short configuration sketch follows this list).
(6) Because the cluster uses optimistic concurrency control, a transaction can still be aborted at COMMIT time. If two transactions write to the same row on different nodes and commit, only one can succeed; the other is aborted. For such a cluster-level abort, the cluster returns a deadlock error code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
(7) XA transactions are not supported, since they could be rolled back at commit time.
(8) The write throughput of the whole cluster is limited by its weakest node; if one node becomes slow, the whole cluster becomes slow. For stable high performance, all nodes should use identical hardware.
(9) A minimum of 3 cluster nodes is recommended.
(10) A problematic DDL statement can break the cluster.
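As a rough illustration of item (5), the transaction-size limits live in the wsrep section of the configuration. The values below are placeholders for illustration only, not recommendations, and the effective defaults depend on the Galera version in use:
# /etc/my.cnf.d/wsrep.cnf (illustrative values only)
wsrep_max_ws_rows=131072          # maximum number of rows a single writeset may touch
wsrep_max_ws_size=1073741824      # maximum writeset size in bytes (1 GiB here)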
II. Architecture
1. The classic keepalived + LVS combination provides front-end load balancing and high availability. Two dedicated hosts can act as master and backup; if the database cluster is small, say two nodes, the combination can also run directly on the database hosts.
2. Five hosts in total: two for the keepalived + LVS master/backup pair, and three database nodes mdb1, mdb2 and mdb3. mdb1 acts as the reference node and executes no client SQL, which brings the following benefits:
(1) Data consistency: since the reference node executes no client SQL, the chance of transaction conflicts on it is minimal. If the cluster is ever found to contain inconsistent data, the data on the reference node should be the most accurate in the cluster.
(2) Data safety: since the reference node executes no client SQL, the chance of a catastrophic event on it is minimal. When the whole cluster goes down, the reference node is the best node from which to restore the cluster.
(3) High availability: the reference node can serve as a dedicated state snapshot transfer (SST) donor. Because it serves no clients, using it for SST does not affect the user experience, and the front-end load balancer needs no reconfiguration. A configuration sketch follows this list.
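One way to realize point (3) is to pin donor selection on the data-serving nodes with wsrep_sst_donor, which names the preferred donor by its wsrep_node_name. This is a minimal sketch, assuming mdb1's wsrep_node_name is set to 'mdb1'; the variable names are standard Galera settings, but the values here are assumptions for this particular topology:
# /etc/my.cnf.d/wsrep.cnf on mdb2 and mdb3 (sketch)
wsrep_node_name=mdb2        # use mdb3 on the third host; set wsrep_node_name=mdb1 on the reference node
wsrep_sst_donor=mdb1        # prefer the reference node as the SST donor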
III. Environment Preparation
1. System and software
System environment:
  OS:            CentOS release 6.5
  Architecture:  x86_64
  Kernel:        2.6.32-431
Software versions:
  keepalived  1.2.13
  lvs         1.24
  mariadb     10.0.16
  socat       1.7.3.0
2. Hosts
  mdb1 (reference node)           172.16.21.180
  mdb2                            172.16.21.181
  mdb3                            172.16.21.182
  ha1 (keepalived+lvs master)     172.16.21.201
  ha2 (keepalived+lvs backup)     172.16.21.202
  VIP                             172.16.21.188
IV. Cluster Installation and Configuration
Using host mdb1 as the example:
1. Configure the hosts file
Edit /etc/hosts and add the following entries:
[root@mdb1 ~]# vi /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.21.201   ha1
172.16.21.202   ha2
172.16.21.180   mdb1
172.16.21.181   mdb2
172.16.21.182   mdb3
2. Prepare the yum repositories
In addition to the official repositories shipped with the system, add the EPEL, Percona and MariaDB repositories.
[root@mdb1 ~]# vi /etc/yum.repos.d/mariadb.repo
# MariaDB 10.0 RedHat repository list - created 2015-03-04 02:45 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/rhel6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
[root@mdb1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@mdb1 ~]# rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
[root@mdb1 ~]# vi /etc/yum.repos.d/percona.repo
[percona]
name = CentOS $releasever - Percona
baseurl = http://repo.percona.com/centos/$releasever/os/$basearch/
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-percona
gpgcheck = 1
[root@mdb1 ~]# wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-percona http://www.percona.com/downloads/RPM-GPG-KEY-percona
[root@mdb1 ~]# yum clean all
3. Install socat
socat is a multipurpose networking relay tool; the name stands for "socket cat", and it can be thought of as a much more capable netcat.
Experience shows that if socat is not installed, the final data synchronization of mariadb-galera-server will fail with an error. Many guides online omit this point, so be sure to install it.
[root@mdb1 ~]# tar -xzvf socat-1.7.3.0.tar.gz
[root@mdb1 ~]# cd socat-1.7.3.0
[root@mdb1 socat-1.7.3.0]# ./configure --prefix=/usr/local/socat
[root@mdb1 socat-1.7.3.0]# make && make install
[root@mdb1 socat-1.7.3.0]# ln -s /usr/local/socat/bin/socat /usr/sbin/
4. Install mariadb, galera and xtrabackup
[root@mdb1 ~]# rpm -e --nodeps mysql-libs
[root@mdb1 ~]# yum install MariaDB-Galera-server galera MariaDB-client xtrabackup
[root@mdb1 ~]# chkconfig mysql on
[root@mdb1 ~]# service mysql start
5. Set the MariaDB root password and harden the installation
[root@mdb1 ~]# /usr/bin/mysql_secure_installation
6. Create the SST account used for database synchronization
[root@mdb1 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 12
Server version: 10.0.16-MariaDB-wsrep-log MariaDB Server, wsrep_25.10.r4144
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> grant all privileges on *.* to sst@'%' identified by '123456';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> quit
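The grant above gives the SST account full privileges for simplicity. A narrower grant is often sufficient for xtrabackup-based SST; the set below is the commonly cited minimum for xtrabackup and is only a sketch, so verify it against your xtrabackup and Galera versions before replacing the broad grant:
-- hedged alternative: minimal privileges usually needed by the xtrabackup SST script
MariaDB [(none)]> grant reload, lock tables, replication client, process on *.* to sst@'%' identified by '123456';
MariaDB [(none)]> flush privileges;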
7. Create the wsrep.cnf file
[root@mdb1 ~]# cp /usr/share/mysql/wsrep.cnf /etc/my.cnf.d/
[root@mdb1 ~]# vi /etc/my.cnf.d/wsrep.cnf
Only the following 4 lines need to be changed:
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_sst_auth=sst:123456
wsrep_sst_method=xtrabackup
Note:
gcomm:// is a special address that is used only when the Galera cluster is bootstrapped for the first time.
Once the cluster is running, if the first node is shut down, then before it is started again gcomm:// must be changed
to the cluster address of one of the other nodes, for example for the next start:
wsrep_cluster_address=gcomm://172.16.21.182:4567
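As an alternative to editing the gcomm:// address back and forth, the first node can be bootstrapped by passing --wsrep-new-cluster at start time, which tells mysqld to form a new cluster regardless of the address list. This is only a sketch; check that the init script on your system forwards extra arguments to mysqld before relying on it:
# bootstrap the very first node of a new cluster (sketch)
[root@mdb1 ~]# service mysql start --wsrep-new-cluster
With this approach wsrep_cluster_address can permanently list the other nodes, and the flag is added only for the initial bootstrap.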
[Figure: cluster bootstrap topology showing Node A and the nodes that later join, up to Node N]
Node A in the figure corresponds to our mdb1, and Node N to host mdb3, which will be added later.
8. Modify /etc/my.cnf
Add the following line:
!includedir /etc/my.cnf.d/
It is also best to specify the datadir path explicitly in /etc/my.cnf:
datadir = /var/lib/mysql
Otherwise you may hit errors complaining that the path cannot be found, so it is safest to add this line.
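For reference, a minimal /etc/my.cnf matching the layout used in this article might look like the sketch below. The extra [mysqld] settings are assumptions based on the usual Galera requirements (row-based binlog format, InnoDB only, interleaved auto-increment locking), not values taken from the original setup, so adapt them to what your packages provide:
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
user = mysql
binlog_format = ROW                  # Galera replicates row events
default_storage_engine = InnoDB      # only InnoDB tables are replicated
innodb_autoinc_lock_mode = 2         # required for parallel row application

!includedir /etc/my.cnf.d/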
9. Disable the iptables firewall and SELinux
Many people fail to bring up the database cluster simply because the firewall is still on or the required ports are not open. The simplest remedy is to flush iptables and disable SELinux.
[root@mdb1 ~]# iptables -F
[root@mdb1 ~]# iptables-save > /etc/sysconfig/iptables
[root@mdb1 ~]# setenforce 0
[root@mdb1 ~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
10. Restart MariaDB
[root@mdb1 ~]# service mysql restart
[root@mdb1 ~]# netstat -tulpn | grep -e 4567 -e 3306
tcp        0      0 0.0.0.0:4567      0.0.0.0:*      LISTEN      11325/mysqld
tcp        0      0 0.0.0.0:3306      0.0.0.0:*      LISTEN      11325/mysqld
At this point the single-node configuration is complete.
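Before adding the other nodes, it can be worth confirming that the bootstrapped node sees itself as a healthy one-node primary cluster; a quick check (at this stage expect wsrep_cluster_size = 1 and wsrep_local_state_comment = Synced):
[root@mdb1 ~]# mysql -uroot -p -e "show status like 'wsrep_cluster_size'"
[root@mdb1 ~]# mysql -uroot -p -e "show status like 'wsrep_local_state_comment'"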
11. Add mdb2 and mdb3 to the cluster
The cluster is chained head to tail; in short, only the IP in gcomm:// differs from node to node: mdb3 -> mdb2 -> mdb1 -> mdb3. In production, consider keeping mdb1 as the reference node that executes no client SQL, to safeguard data consistency and to serve as the recovery point. Concretely:
(1) Install and configure the other two hosts by repeating steps 1-10 above,
(2) except that in step 7 wsrep_cluster_address must point to the corresponding host:
mdb2:wsrep_cluster_address=gcomm://172.16.21.180:4567
mdb3:wsrep_cluster_address=gcomm://172.16.21.181:4567
If more hosts are to join the cluster, continue in the same fashion: point each host's wsrep_cluster_address at the previous host, and point the first host of the cluster at the last host's address.
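The ring layout above is only one way to write the addresses. Galera also accepts a comma-separated list of all members, which saves editing the address when the "previous" host happens to be down; a hedged sketch of that variant, using the same value on every node once the cluster has been bootstrapped:
wsrep_cluster_address=gcomm://172.16.21.180,172.16.21.181,172.16.21.182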
12. Finally, start mdb2 and mdb3
[root@mdb2 ~]# service mysql start
[root@mdb3 ~]# service mysql start
13. Add a Galera Arbitrator to the cluster
Like any other cluster software, a two-node Galera Cluster has to face split-brain situations in extreme cases.
To avoid this problem, Galera introduces the Arbitrator.
An arbitrator node holds no data; its role is to arbitrate when the cluster splits. A cluster may contain several arbitrator nodes.
Adding an arbitrator node to the cluster is simple; just run the following command:
[root@mdb1 ~]# garbd -a gcomm://172.16.21.180:4567 -g my_wsrep_cluster -d
Parameters:
-d  run in daemon mode
-a  cluster address
-g  cluster name
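If the galera package on your system ships the garb init script, the arbitrator can also be managed as a service through /etc/sysconfig/garb instead of launching garbd by hand. The variable names below follow the stock garb script, but this is only a sketch and the file location and names may differ between packages:
# /etc/sysconfig/garb (sketch)
GALERA_NODES="172.16.21.180:4567 172.16.21.181:4567 172.16.21.182:4567"
GALERA_GROUP="my_wsrep_cluster"
# then start it with: service garb start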
14. Confirm that the Galera cluster is installed and running correctly
MariaDB [(none)]> show status like 'ws%';
+------------------------------+----------------------------------------------------------+
| Variable_name                | Value                                                    |
+------------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid       | 64784714-c23a-11e4-b7d7-5edbdea0e62c  (cluster UUID)     |
| wsrep_protocol_version       | 5                                                        |
| wsrep_last_committed         | 94049  (committed transactions)                          |
| wsrep_replicated             | 0                                                        |
| wsrep_replicated_bytes       | 0                                                        |
| wsrep_repl_keys              | 0                                                        |
| wsrep_repl_keys_bytes        | 0                                                        |
| wsrep_repl_data_bytes        | 0                                                        |
| wsrep_repl_other_bytes       | 0                                                        |
| wsrep_received               | 3                                                        |
| wsrep_received_bytes         | 287                                                      |
| wsrep_local_commits          | 0  (locally committed transactions)                      |
| wsrep_local_cert_failures    | 0  (local certification failures)                        |
| wsrep_local_replays          | 0                                                        |
| wsrep_local_send_queue       | 0                                                        |
| wsrep_local_send_queue_avg   | 0.333333  (average send queue length)                    |
| wsrep_local_recv_queue       | 0                                                        |
| wsrep_local_recv_queue_avg   | 0.000000                                                 |
| wsrep_local_cached_downto    | 18446744073709551615                                     |
| wsrep_flow_control_paused_ns | 0                                                        |
| wsrep_flow_control_paused    | 0.000000                                                 |
| wsrep_flow_control_sent      | 0                                                        |
| wsrep_flow_control_recv      | 0                                                        |
| wsrep_cert_deps_distance     | 0.000000  (possible parallelism)                         |
| wsrep_apply_oooe             | 0.000000                                                 |
| wsrep_apply_oool             | 0.000000                                                 |
| wsrep_apply_window           | 0.000000                                                 |
| wsrep_commit_oooe            | 0.000000                                                 |
| wsrep_commit_oool            | 0.000000                                                 |
| wsrep_commit_window          | 0.000000                                                 |
| wsrep_local_state            | 4                                                        |
| wsrep_local_state_comment    | Synced                                                   |
| wsrep_cert_index_size        | 0                                                        |
| wsrep_causal_reads           | 0                                                        |
| wsrep_cert_interval          | 0.000000                                                 |
| wsrep_incoming_addresses     | 172.16.21.180:3306,172.16.21.182:3306,172.16.21.188:3306 |
| wsrep_cluster_conf_id        | 19                                                       |
| wsrep_cluster_size           | 3  (number of cluster members)                           |
| wsrep_cluster_state_uuid     | 64784714-c23a-11e4-b7d7-5edbdea0e62c                     |
| wsrep_cluster_status         | Primary  (primary component)                             |
| wsrep_connected              | ON  (connected to the cluster)                           |
| wsrep_local_bf_aborts        | 0                                                        |
| wsrep_local_index            | 0                                                        |
| wsrep_provider_name          | Galera                                                   |
| wsrep_provider_vendor        | Codership Oy                                             |
| wsrep_provider_version       | 25.3.5(rXXXX)                                            |
| wsrep_ready                  | ON                                                       |
| wsrep_thread_count           | 3                                                        |
+------------------------------+----------------------------------------------------------+
(the notes in parentheses are explanatory annotations, not part of the actual output)
If wsrep_ready is ON, the MariaDB Galera cluster is up and running correctly.
Notes on the monitored status values:
(1) Cluster integrity checks:
wsrep_cluster_state_uuid: should be identical on every node in the cluster; a node with a different value is not connected to the cluster.
wsrep_cluster_conf_id: normally the same on all nodes. A differing value means that node has been temporarily "partitioned"; it should converge again once network connectivity between the nodes is restored.
wsrep_cluster_size: if this matches the expected number of nodes, all cluster nodes are connected.
wsrep_cluster_status: the state of the cluster component. If it is not "Primary", a partition or split-brain condition has occurred.
(2) Node status checks:
wsrep_ready: ON means the node can accept SQL load; if OFF, check wsrep_connected.
wsrep_connected: if this is OFF and wsrep_ready is also OFF, the node is not connected to the cluster (possibly caused by a wrong wsrep_cluster_address or wsrep_cluster_name; see the error log for the exact cause).
wsrep_local_state_comment: if wsrep_connected is ON but wsrep_ready is OFF, this value shows the reason.
(3) Replication health checks:
wsrep_flow_control_paused: the fraction of time replication was paused, i.e. how much the cluster is slowed down by slave lag. Ranges from 0 to 1; the closer to 0 the better, and 1 means replication has stopped completely. Tuning wsrep_slave_threads can improve it.
wsrep_cert_deps_distance: how many transactions can be applied in parallel. wsrep_slave_threads should not be set much higher than this value.
wsrep_flow_control_sent: how many times this node has paused replication.
wsrep_local_recv_queue_avg: the average length of the slave transaction queue; an early sign of a slave-side bottleneck.
The slowest node has the highest wsrep_flow_control_sent and wsrep_local_recv_queue_avg values; lower values are better.
(4) Detecting a slow network:
wsrep_local_send_queue_avg: an early sign of a network bottleneck; if this value is high, a network bottleneck is likely.
(5) Number of conflicts or deadlocks:
wsrep_last_committed: the number of the last committed transaction.
wsrep_local_cert_failures and wsrep_local_bf_aborts: rollbacks, i.e. the number of conflicts detected.
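To avoid scanning the full 'ws%' output, the handful of health counters discussed above can be pulled in a single statement; a small sketch:
MariaDB [(none)]> show global status where variable_name in ('wsrep_ready','wsrep_connected','wsrep_cluster_status','wsrep_cluster_size','wsrep_flow_control_paused','wsrep_local_recv_queue_avg','wsrep_local_send_queue_avg');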
15. Test whether data is synchronized
On each node in turn, create a database and a table, then drop them, and check that the other nodes follow. With a correct configuration everything stays in sync; the detailed commands are omitted here, but a small example follows.
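A minimal round trip along those lines might look like this; the database and table names are arbitrary examples, and the table deliberately has a primary key (see limitation (2) above):
-- on mdb2
MariaDB [(none)]> create database sync_test;
MariaDB [(none)]> create table sync_test.t1 (id int primary key, note varchar(32)) engine=innodb;
MariaDB [(none)]> insert into sync_test.t1 values (1, 'hello from mdb2');
-- on mdb3 the row should already be visible
MariaDB [(none)]> select * from sync_test.t1;
-- clean up from any node; the drop is replicated as well
MariaDB [(none)]> drop database sync_test;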
V. Keepalived + LVS Configuration
1. Install with yum
[root@ha1 ~]# yum install keepalived ipvsadm
[root@ha2 ~]# yum install keepalived ipvsadm
2. keepalived configuration
Configuration on host ha1:
[root@ha1 ~]# vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        xx@xxxx.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id lvs_201
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.21.188/24 dev eth0 label eth0:0
    }
}
virtual_server 172.16.21.188 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 172.16.21.181 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 172.16.21.182 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
Configuration on the backup host ha2:
global_defs {
    notification_email {
        xx@xxxx.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id lvs_202
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.21.188/24 dev eth0 label eth0:0
    }
}
virtual_server 172.16.21.188 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 172.16.21.181 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 172.16.21.182 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
3. LVS script configuration
The following script must be deployed on both real servers:
[root@mdb2 ~]# vi /etc/init.d/lvsdr.sh
#!/bin/bash
# description: configure the real server's lo interface and suppress ARP for the VIP
VIP=172.16.21.188
. /etc/rc.d/init.d/functions
case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p > /dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isloon=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isroon=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isloon" == "" -a "$isroon" == "" ]; then
        echo "LVS-DR real server is not running."
    else
        echo "LVS-DR real server is running."
    fi
    exit 3
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
exit 0
[root@mdb2 ~]# chmod +x /etc/init.d/lvsdr.sh
[root@mdb3 ~]# chmod +x /etc/init.d/lvsdr.sh
4. Start keepalived and LVS
[root@mdb2 ~]# /etc/init.d/lvsdr.sh start
[root@mdb3 ~]# /etc/init.d/lvsdr.sh start
[root@ha1 ~]# service keepalived start
[root@ha2 ~]# service keepalived start
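Once keepalived is up, the forwarding table can be inspected on the active director; expect a virtual service on 172.16.21.188:3306 with the two real servers behind it, and the VIP bound on the MASTER's eth0:0 (output formatting varies by version):
[root@ha1 ~]# ipvsadm -Ln
[root@ha1 ~]# ip addr show eth0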
5. Enable start at boot
[root@mdb2 ~]# echo "/etc/init.d/lvsdr.sh start" >> /etc/rc.d/rc.local
[root@mdb3 ~]# echo "/etc/init.d/lvsdr.sh start" >> /etc/rc.d/rc.local
[root@ha1 ~]# chkconfig keepalived on
[root@ha2 ~]# chkconfig keepalived on
6. Testing
Stop keepalived on the primary ha1, then watch the log and the IP addresses on the backup ha2:
[root@ha1 ~]# service keepalived stop
[root@ha2 ~]# tail -f /var/log/messages
Mar  5 10:36:03 ha2 Keepalived_healthcheckers[11249]: Opening file '/etc/keepalived/keepalived.conf'.
Mar  5 10:36:03 ha2 Keepalived_healthcheckers[11249]: Configuration is using : 14697 Bytes
Mar  5 10:36:03 ha2 Keepalived_vrrp[11250]: Opening file '/etc/keepalived/keepalived.conf'.
Mar  5 10:36:03 ha2 Keepalived_vrrp[11250]: Configuration is using : 63250 Bytes
Mar  5 10:36:03 ha2 Keepalived_vrrp[11250]: Using LinkWatch kernel netlink reflector...
Mar  5 10:36:03 ha2 Keepalived_healthcheckers[11249]: Using LinkWatch kernel netlink reflector...
Mar  5 10:36:03 ha2 Keepalived_healthcheckers[11249]: Activating healthchecker for service [172.16.21.181]:3306
Mar  5 10:36:03 ha2 Keepalived_healthcheckers[11249]: Activating healthchecker for service [172.16.21.182]:3306
Mar  5 10:36:03 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) Entering BACKUP STATE
Mar  5 10:36:03 ha2 Keepalived_vrrp[11250]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Mar  6 08:41:53 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) Transition to MASTER STATE
Mar  6 08:41:54 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) Entering MASTER STATE
Mar  6 08:41:54 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) setting protocol VIPs.
Mar  6 08:41:54 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.21.188
Mar  6 08:41:54 ha2 Keepalived_healthcheckers[11249]: Netlink reflector reports IP 172.16.21.188 added
Mar  6 08:41:59 ha2 Keepalived_vrrp[11250]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.21.188
[root@ha2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:1D:77:9C
          inet addr:172.16.21.202  Bcast:172.16.21.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe1d:779c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2969375670 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2966841735 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:225643845081 (210.1 GiB)  TX bytes:222421642143 (207.1 GiB)
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:1D:77:9C
          inet addr:172.16.21.188  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:55694 errors:0 dropped:0 overruns:0 frame:0
          TX packets:55694 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3176387 (3.0 MiB)  TX bytes:3176387 (3.0 MiB)
Start keepalived on ha1 again and watch ha1's log and IP addresses:
[root@ha1 ~]# service keepalived start
[root@ha1 ~]# tail -f /var/log/messages
Mar  6 08:54:42 ha1 Keepalived[13310]: Starting Keepalived v1.2.13 (10/15,2014)
Mar  6 08:54:42 ha1 Keepalived[13311]: Starting Healthcheck child process, pid=13312
Mar  6 08:54:42 ha1 Keepalived[13311]: Starting VRRP child process, pid=13313
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Netlink reflector reports IP 172.16.21.181 added
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Netlink reflector reports IP fe80::20c:29ff:fe4d:8e83 added
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Netlink reflector reports IP 172.16.21.181 added
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Registering Kernel netlink reflector
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Netlink reflector reports IP fe80::20c:29ff:fe4d:8e83 added
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Registering Kernel netlink command channel
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Registering gratuitous ARP shared channel
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Registering Kernel netlink reflector
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Registering Kernel netlink command channel
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Opening file '/etc/keepalived/keepalived.conf'.
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Opening file '/etc/keepalived/keepalived.conf'.
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Configuration is using : 63252 Bytes
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: Using LinkWatch kernel netlink reflector...
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Configuration is using : 14699 Bytes
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Using LinkWatch kernel netlink reflector...
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Activating healthchecker for service [172.16.21.181]:3306
Mar  6 08:54:42 ha1 Keepalived_healthcheckers[13312]: Activating healthchecker for service [172.16.21.182]:3306
Mar  6 08:54:42 ha1 Keepalived_vrrp[13313]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Mar  6 08:54:43 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) Transition to MASTER STATE
Mar  6 08:54:43 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
Mar  6 08:54:44 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) Entering MASTER STATE
Mar  6 08:54:44 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) setting protocol VIPs.
Mar  6 08:54:44 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.21.188
Mar  6 08:54:44 ha1 Keepalived_healthcheckers[13312]: Netlink reflector reports IP 172.16.21.188 added
Mar  6 08:54:49 ha1 Keepalived_vrrp[13313]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.21.188
[root@ha1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:4D:8E:83
          inet addr:172.16.21.201  Bcast:172.16.21.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe4d:8e83/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2968402607 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2966256067 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:224206102960 (208.8 GiB)  TX bytes:221258814612 (206.0 GiB)
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:4D:8E:83
          inet addr:172.16.21.188  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:54918 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54918 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3096422 (2.9 MiB)  TX bytes:3096422 (2.9 MiB)
This completes the configuration.
VI. Summary
When MySQL multi-master replication comes up, most people probably think of the MySQL+MMM architecture. MariaDB Galera Cluster is a solid replacement for it and is more reliable; for a detailed comparison see http://www.oschina.net/translate/from-mysql-mmm-to-mariadb-galera-cluster-a-high-availability-makeover.
Of course, MariaDB Galera Cluster does not fit every replication scenario; you have to decide based on your own requirements. If data consistency is your main concern and you have many writes and updates but the overall write volume is not huge, MariaDB Galera Cluster suits you. If your workload is read-heavy and read/write splitting is easy to implement, conventional replication is the better choice: it is simple to operate, one master guarantees data consistency, and any number of slaves can serve reads to share the load. As long as consistency and uniqueness are handled properly, replication is the better fit, because MariaDB Galera Cluster follows the "weakest link" principle: under heavy writes, synchronization speed is bounded by the node with the slowest I/O, so overall write throughput will be considerably lower than with replication.
If there are any omissions or errors in this article, corrections are welcome.