How to Set Up a MySQL Cluster with Master-Slave Replication on k8s


Environment

| Name | Version | OS | IP | Notes |
|------|---------|----|----|-------|
| K8S cluster | 1.20.15 | CentOS 7.9 | 192.168.11.21, 192.168.11.22, 192.168.11.23 | .21 is k8s-master, .22 is k8s-node01, .23 is k8s-node02 |
| MySQL | 5.7 | CentOS 7.9 | | one master, two slaves |
| NFS server | | CentOS 7.9 | 192.168.11.24 | shared directory: /nfs |

I. Deploy the NFS Server

On 192.168.11.24:

1. Create the NFS shared directory

```bash
mkdir -p /nfs
```

2. Install the NFS service

```bash
yum -y install nfs-utils rpcbind
```

3. Edit the NFS exports configuration

```bash
## rw = read-write, async = asynchronous writes, no_root_squash = allow remote root to write as root
echo "/nfs  *(rw,async,no_root_squash)" >> /etc/exports
```

4. Start the services

```bash
systemctl enable --now nfs-server
systemctl enable --now rpcbind
```

5. Verify

```bash
showmount -e   ## the export list should contain "/nfs *"; if the command is missing, it is provided by nfs-utils (yum -y install nfs-utils)
```
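If the export is active, the output should look roughly like this (the hostname on the first line will vary):

```
Export list for 192.168.11.24:
/nfs *
```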

On 192.168.11.21/22/23 (all K8S nodes):

1. Install the NFS client

```bash
yum -y install nfs-utils
```

2. Test that the NFS share is visible from the nodes

```bash
showmount -e 192.168.11.24   ## should list "/nfs *"
```

II. Create the PV

On 192.168.11.21:

1. Create a directory for the MySQL YAML manifests

```bash
mkdir -p /webapp
cd /webapp
```

2. Create the NFS provisioner YAML

vim nfs-client.yaml 

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/xngczl/nfs-subdir-external-provisione:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs  ## note this value; it can be customized
            - name: NFS_SERVER
              value: 192.168.11.24   ## change if your NFS server IP differs
            - name: NFS_PATH
              value: /nfs            ## NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.11.24    ## change if your NFS server IP differs
            path: /nfs               ## NFS shared directory
```

Create the RBAC resources

vim nfs-client-rbac.yaml 

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
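Optionally, after applying, you can spot-check that the ServiceAccount ended up with the permissions the provisioner needs, for example:

```bash
## impersonate the ServiceAccount and test one of the ClusterRole's verbs; should print "yes"
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner
```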

Create the StorageClass

vim nfs-client-class.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs  ## required; must match the PROVISIONER_NAME env var in nfs-client.yaml
```

Note that the provisioner field is required and must match the PROVISIONER_NAME value set in nfs-client.yaml, as confirmed by the kubectl get sc output below.

Apply everything:

```bash
kubectl apply -f nfs-client.yaml
kubectl apply -f nfs-client-rbac.yaml
kubectl apply -f nfs-client-class.yaml
kubectl get po,sc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-8579c9d69b-m6vp4   1/1     Running   0          13m

NAME                                             PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  13m
```
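Before moving on to MySQL, it can be worth confirming that dynamic provisioning actually works; a throwaway PVC is enough (the name test-claim here is purely illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: course-nfs-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-claim    ## STATUS should reach Bound, and a matching directory should appear under /nfs
kubectl delete pvc test-claim
```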

III. Write the MySQL YAML Manifests

On 192.168.11.21:

```bash
mkdir -p /webapp/mysql
cd /webapp/mysql
```

Create the ConfigMap


vim mysql-configmap.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
```

This file defines two MySQL configuration files:
1. master.cnf enables log-bin; binary logging must be on before master-slave replication can work.
2. slave.cnf enables super-read-only, so slave nodes can only serve reads and reject all other operations.
Both files are mounted into the MySQL containers as configuration files.
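Once the pods from section IV are running, you can confirm that each one picked up the right file, for example:

```bash
kubectl exec mysql-0 -c mysql -- cat /etc/mysql/conf.d/master.cnf   ## master: should contain log-bin
kubectl exec mysql-1 -c mysql -- cat /etc/mysql/conf.d/slave.cnf    ## slave: should contain super-read-only
```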

Create the MySQL Services

vim mysql-services.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
```
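The first Service is headless (clusterIP: None), which is what gives each StatefulSet pod a stable DNS name such as mysql-0.mysql; mysql-read is a normal Service that load-balances reads across all pods. If you want to verify the DNS records, a temporary pod works (busybox:1.28 is just a convenient image whose nslookup behaves well for this):

```bash
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup mysql-0.mysql
```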

Create the MySQL StatefulSet

vim mysql-statefulset.yaml

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: fxkjnj/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: fxkjnj/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: "course-nfs-storage"
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 0.5Gi
```
  • The xtrabackup tool takes the backup used to seed each replica's initial data.
  • The ncat utility that ships with Linux streams that initial data between containers.
  • MySQL's binlog (log-bin) drives the master-slave replication itself.
  • mysqladmin ping serves as the liveness health check.
  • The ordinal at the end of each pod's hostname determines whether the node is the master or a slave, and the matching configuration file is then copied into the target directory (see the sketch below).
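The master/slave decision in the init container is plain bash; the fragment below isolates that logic so you can see how a hostname such as mysql-2 maps to a server-id and a role (runnable on any machine with bash):

```bash
#!/usr/bin/env bash
# Standalone sketch of the init container's ordinal logic.
hostname="mysql-2"                       # inside the pod this comes from `hostname`
[[ $hostname =~ -([0-9]+)$ ]] || exit 1  # extract the trailing ordinal
ordinal=${BASH_REMATCH[1]}
echo "server-id=$((100 + ordinal))"      # the offset avoids the reserved server-id=0
if [[ $ordinal -eq 0 ]]; then
  echo "role: master (copies master.cnf)"
else
  echo "role: slave (copies slave.cnf)"
fi
```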

IV. Start MySQL

On 192.168.11.21:

```bash
kubectl apply -f mysql-configmap.yaml
kubectl apply -f mysql-services.yaml
kubectl apply -f mysql-statefulset.yaml
kubectl get po -o wide
NAME      READY   STATUS    RESTARTS   AGE      IP            NODE            NOMINATED NODE    READINESS GATES
mysql-0   2/2     Running   0          3h12m    10.244.0.5    k8s-master1     <none>            <none>
mysql-1   2/2     Running   0          3h11m    10.244.1.6    k8s-node02      <none>            <none>
mysql-2   2/2     Running   0          3h10m    10.244.1.5    k8s-node01      <none>            <none>
```
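Before inserting test data, you can also ask a slave directly whether its replication threads are up; both values should be Yes:

```bash
kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"
```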

V. Verify MySQL Master-Slave Replication

On 192.168.11.21:

```bash
kubectl exec -it mysql-0 -- bash     ## enter the mysql-0 pod
  mysql -h mysql-0.mysql             ## connect to the database
    CREATE DATABASE test;            ## create a database and a table
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    \q
  exit
kubectl exec -it mysql-1 -- bash     ## enter the mysql-1 pod
  mysql -h mysql-1.mysql             ## connect to the database
    SELECT * FROM test.messages;     ## check that the data written on the master is visible
```

You should get the following output:

```
+---------+
| message |
+---------+
| hello   |
+---------+
```
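As one more optional check, a write attempted on a slave should be rejected, since slave.cnf sets super-read-only (the exact error text can vary slightly between MySQL builds):

```bash
kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "INSERT INTO test.messages VALUES ('should fail');"
## expected: ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
```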

This concludes the walkthrough of setting up a MySQL cluster with master-slave replication on k8s.

Original article: https://blog.csdn.net/q425453572/article/details/128518257
