
Backing up and restoring etcd cluster v2 API data

2020.01.08

This post covers backing up and restoring etcd v2 API data. The scenario: a 3-node etcd cluster whose v2 API data was backed up with the etcdctl backup command; the cluster has since failed and its data cannot be recovered, so the only option is to restore the backup onto a target etcd cluster.

First, back up the v2 API data of the healthy etcd cluster with the etcdctl backup command.

Backing up etcd v2 API data

// Check the health of the etcd cluster
[root@dev-master-001 shm]# export MASTER01_IP=192.168.100.101 MASTER02_IP=192.168.100.102 MASTER03_IP=192.168.100.103
[root@dev-master-001 shm]# ETCDCTL_API=3 etcdctl \
  --endpoints=http://$MASTER01_IP:2381,http://$MASTER02_IP:2381,http://$MASTER03_IP:2381 \
  endpoint health
http://192.168.100.101:2381 is healthy: successfully committed proposal: took = 2.339929ms
http://192.168.100.102:2381 is healthy: successfully committed proposal: took = 1.904845ms
http://192.168.100.103:2381 is healthy: successfully committed proposal: took = 37.034775ms

// Back up the v2 API data with the etcdctl backup command
[root@dev-master-001 shm]# ETCDCTL_API=2 etcdctl \
  --endpoints=http://$MASTER01_IP:2381,http://$MASTER02_IP:2381,http://$MASTER03_IP:2381 \
  backup \
  --data-dir /var/lib/dev-etcd/ \
  --backup-dir dev-etcd-20200107
2020-01-07 14:28:06.067154 I | ignoring EntryConfChange raft entry
2020-01-07 14:28:06.067303 I | ignoring EntryConfChange raft entry
2020-01-07 14:28:06.067347 I | ignoring EntryConfChange raft entry
2020-01-07 14:28:06.067427 I | ignoring member attribute update on /0/members/40660789f82c2e0b/attributes
2020-01-07 14:28:06.067457 I | ignoring member attribute update on /0/members/eb5f1aa72077efc/attributes
2020-01-07 14:28:06.067490 I | ignoring member attribute update on /0/members/3de7f5ce308a5933/attributes
2020-01-07 14:28:06.067545 I | ignoring member attribute update on /0/members/3de7f5ce308a5933/attributes
2020-01-07 14:28:06.067606 I | ignoring v3 raft entry
2020-01-07 14:28:06.067644 I | ignoring v3 raft entry
2020-01-07 14:28:06.069319 I | ignoring v3 raft entry
2020-01-07 14:28:06.069374 I | ignoring v3 raft entry
2020-01-07 14:28:06.069402 I | ignoring v3 raft entry
2020-01-07 14:28:06.069422 I | ignoring v3 raft entry
2020-01-07 14:28:06.069448 I | ignoring v3 raft entry
2020-01-07 14:28:06.069467 I | ignoring v3 raft entry
2020-01-07 14:28:06.069498 I | ignoring v3 raft entry
2020-01-07 14:28:06.069523 I | ignoring v3 raft entry
2020-01-07 14:28:06.069550 I | ignoring v3 raft entry
2020-01-07 14:28:06.069568 I | ignoring v3 raft entry
2020-01-07 14:28:06.069593 I | ignoring v3 raft entry
2020-01-07 14:28:06.069611 I | ignoring v3 raft entry
2020-01-07 14:28:06.069646 I | ignoring v3 raft entry
2020-01-07 14:28:06.069665 I | ignoring v3 raft entry
2020-01-07 14:28:06.069838 I | ignoring v3 raft entry
2020-01-07 14:28:06.069862 I | ignoring v3 raft entry

// Copy the etcd backup data to the backup server
[root@dev-master-001 shm]# ll dev-etcd-20200107
total 0
drwx------ 4 root root 80 Jan  7 14:28 member
[root@dev-master-001 shm]# cd dev-etcd-20200107
[root@dev-master-001 dev-etcd-20200107]# tar -czf dev-etcd-20200107.tar.gz member
[root@dev-master-001 dev-etcd-20200107]# ll dev-etcd-20200107.tar.gz
-rw-r--r-- 1 root root 335635 Jan  7 15:38 dev-etcd-20200107.tar.gz
[root@dev-master-001 dev-etcd-20200107]# scp dev-etcd-20200107.tar.gz 192.168.200.101:/root/kubernetes/dev-etcd-20200107.tar.gz
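Before relying on the copied archive, it is worth verifying that it arrived intact. A minimal sketch of checksum verification with sha256sum, using a throwaway stand-in archive built in a temp directory (substitute dev-etcd-20200107.tar.gz and your real paths):

```shell
set -e
# Build a stand-in archive just to demonstrate the flow
# (in practice the archive is dev-etcd-20200107.tar.gz).
cd "$(mktemp -d)"
echo demo > member.txt
tar -czf backup.tar.gz member.txt
# Record the checksum before transfer...
sha256sum backup.tar.gz > backup.tar.gz.sha256
# ...copy both the archive and the .sha256 file to the backup
# server (e.g. with scp), then verify on the receiving end:
sha256sum -c backup.tar.gz.sha256
```

The final command exits non-zero if the archive was corrupted in transit, which makes it easy to wire into a transfer script.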

Now suppose the etcd cluster above fails and loses its data; the following sections use the backup file dev-etcd-20200107.tar.gz to restore the v2 API data.

Restoring etcd v2 API data

First, try simply copying the backup data onto the new etcd cluster:

[root@test-master-001 kubernetes]# ll dev-etcd-20200107.tar.gz
-rw-r--r-- 1 root root 335737 Jan  7 14:51 dev-etcd-20200107.tar.gz
[root@test-master-001 kubernetes]# ansible test-master-nodes -m systemd -a "name=etcd state=stopped"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m file -a "path=/var/lib/etcd state=absent"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m file -a "path=/var/lib/etcd state=directory"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m unarchive -a "src=dev-etcd-20200107.tar.gz dest=/var/lib/etcd"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m systemd -a "name=etcd state=started"
  • The following errors are observed:

    [root@test-master-001 kubernetes]# /usr/local/bin/etcd \
    --data-dir=/var/lib/etcd/ \
    --name=test-master-001 \
    --advertise-client-urls=http://192.168.200.101:2381 \
    --listen-client-urls=http://192.168.200.101:2381,http://127.0.0.1:2381 --initial-advertise-peer-urls=http://192.168.200.101:2382 \
    --listen-peer-urls=http://192.168.200.101:2382 \
    --initial-cluster=test-master-001=http://192.168.200.101:2382,test-master-002=http://192.168.200.102:2382,test-master-003=http://192.168.200.103:2382 \
    --initial-cluster-state=new \
    --initial-cluster-token=etcd-cluster \
    --debug
    2020-01-07 15:49:21.070304 I | etcdmain: etcd Version: 3.3.13
    2020-01-07 15:49:21.070342 I | etcdmain: Git SHA: 98d3084
    2020-01-07 15:49:21.070345 I | etcdmain: Go Version: go1.10.8
    2020-01-07 15:49:21.070350 I | etcdmain: Go OS/Arch: linux/amd64
    2020-01-07 15:49:21.070353 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2020-01-07 15:49:21.070389 N | etcdmain: the server is already initialized as member before, starting as etcd member...
    2020-01-07 15:49:21.070446 I | embed: listening for peers on http://192.168.200.101:2382
    2020-01-07 15:49:21.070477 I | embed: listening for client requests on 192.168.200.101:2381
    2020-01-07 15:49:21.070525 I | embed: listening for client requests on 127.0.0.1:2381
    2020-01-07 15:49:21.070867 I | etcdserver: name = test-master-001
    2020-01-07 15:49:21.070879 I | etcdserver: data dir = /var/lib/etcd/
    2020-01-07 15:49:21.070883 I | etcdserver: member dir = /var/lib/etcd/member
    2020-01-07 15:49:21.070887 I | etcdserver: heartbeat = 100ms
    2020-01-07 15:49:21.070890 I | etcdserver: election = 1000ms
    2020-01-07 15:49:21.070893 I | etcdserver: snapshot count = 100000
    2020-01-07 15:49:21.070909 I | etcdserver: advertise client URLs = http://192.168.200.101:2381
    2020-01-07 15:49:21.083574 I | etcdserver: restarting member 6f7eb0614101 in cluster 6f7eb0614102 at commit index 9495
    2020-01-07 15:49:21.084134 I | raft: 6f7eb0614101 became follower at term 13
    2020-01-07 15:49:21.084150 I | raft: newRaft 6f7eb0614101 [peers: [], term: 13, commit: 9495, applied: 0, lastindex: 9496, lastterm: 13]
    2020-01-07 15:49:21.085234 W | auth: simple token is not cryptographically signed
    2020-01-07 15:49:21.085757 I | etcdserver: starting server... [version: 3.3.13, cluster version: to_be_decided]
    2020-01-07 15:49:21.086214 N | etcdserver/membership: set the initial cluster version to 3.2
    2020-01-07 15:49:21.086321 I | etcdserver/api: enabled capabilities for version 3.2
    2020-01-07 15:49:28.086360 E | etcdserver: publish error: etcdserver: request timed out
    2020-01-07 15:49:35.086485 E | etcdserver: publish error: etcdserver: request timed out
    2020-01-07 15:49:42.086609 E | etcdserver: publish error: etcdserver: request timed out
    ^C
    
  • None of the nodes can start, so the data cannot be restored by directly copying the files. This is expected: etcdctl backup rewrites the node ID and cluster ID and skips the raft membership entries (the "ignoring EntryConfChange raft entry" lines above) precisely so that a restored copy cannot rejoin the old cluster, which is why each node comes up with an empty peer list and its publish requests time out.

The correct restore procedure, found in the official etcd v2 documentation, is as follows:

1. Extract the backup file dev-etcd-20200107.tar.gz into the working directory /var/lib/etcd/ on one node of the etcd cluster (here, test-master-001):

// Stop etcd.service on all cluster nodes and empty the working directory /var/lib/etcd/
[root@test-master-001 kubernetes]# ansible test-master-nodes -m systemd -a "name=etcd state=stopped"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m file -a "path=/var/lib/etcd state=absent"
[root@test-master-001 kubernetes]# ansible test-master-nodes -m file -a "path=/var/lib/etcd state=directory"

// Extract the backup data into /var/lib/etcd/ on test-master-001
[root@test-master-001 kubernetes]# tar -zxf dev-etcd-20200107.tar.gz -C /var/lib/etcd/
[root@test-master-001 kubernetes]# ll /var/lib/etcd/
total 4
drwx------ 4 root root 4096 Jan  7 14:28 member

2. Start the etcd service on test-master-001 with the "--force-new-cluster" flag. As the official documentation describes: To restore a backup using the procedure created above, start etcd with the --force-new-cluster option and pointing to the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:

// The "--initial-cluster" flag is removed and "--force-new-cluster" is added
[root@test-master-001 kubernetes]# /usr/local/bin/etcd \
  --data-dir=/var/lib/etcd/ \
  --name=test-master-001 \
  --advertise-client-urls=http://192.168.200.101:2381 \
  --listen-client-urls=http://192.168.200.101:2381,http://127.0.0.1:2381 \
  --initial-advertise-peer-urls=http://192.168.200.101:2382 \
  --listen-peer-urls=http://192.168.200.101:2382 \
  --initial-cluster-state=new \
  --initial-cluster-token=etcd-cluster \
  --force-new-cluster

3. Change the advertised peer URLs of test-master-001 with the etcdctl member update command. Per the documentation: Now that the node is running successfully, change its advertised peer URLs, as the --force-new-cluster option has set the peer URL to the default listening on localhost.

[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member list
6f7eb0614101: name=test-master-001 peerURLs=http://localhost:2380 clientURLs=http://192.168.200.101:2381 isLeader=true
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member update 6f7eb0614101 http://192.168.200.101:2382
Updated member with ID 6f7eb0614101 in cluster
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member list
6f7eb0614101: name=test-master-001 peerURLs=http://192.168.200.101:2382 clientURLs=http://192.168.200.101:2381 isLeader=true
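If this step has to be scripted, the member ID can be extracted from the member list output instead of being copied by hand. A sketch assuming the output format shown above, where everything before the first colon is the ID:

```shell
# Sample line in the format printed by `etcdctl member list` above
line='6f7eb0614101: name=test-master-001 peerURLs=http://localhost:2380 clientURLs=http://192.168.200.101:2381 isLeader=true'
member_id=${line%%:*}   # strip everything from the first colon onward
echo "$member_id"       # 6f7eb0614101
# Against a live cluster this would feed straight into member update, e.g.:
# etcdctl --endpoints=http://192.168.200.101:2381 member update "$member_id" http://192.168.200.101:2382
```

This relies on the single-member cluster producing exactly one line; with more members you would first filter by name.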

4. Add the second etcd node with the etcdctl member add command. The documentation notes: You can then add more nodes to the cluster and restore resiliency. See the add a new member guide for more details. Note: If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.

// Add the second node, test-master-002
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member add test-master-002 http://192.168.200.102:2382
Added member named test-master-002 with ID ff69528fcc000b88 to cluster

ETCD_NAME="test-master-002"
ETCD_INITIAL_CLUSTER="test-master-001=http://192.168.200.101:2382,test-master-002=http://192.168.200.102:2382"
ETCD_INITIAL_CLUSTER_STATE="existing"

// Adjust the "--initial-cluster" flag and start test-master-002
[root@test-master-002 ~]# rm -rf /var/lib/etcd/*
[root@test-master-002 ~]# /usr/local/bin/etcd \
  --data-dir=/var/lib/etcd/ \
  --name=test-master-002 \
  --advertise-client-urls=http://192.168.200.102:2381 \
  --listen-client-urls=http://192.168.200.102:2381,http://127.0.0.1:2381 \
  --initial-advertise-peer-urls=http://192.168.200.102:2382 \
  --listen-peer-urls=http://192.168.200.102:2382 \
  --initial-cluster=test-master-001=http://192.168.200.101:2382,test-master-002=http://192.168.200.102:2382 \
  --initial-cluster-state=existing \
  --initial-cluster-token=etcd-cluster
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member list
6f7eb0614101: name=test-master-001 peerURLs=http://192.168.200.101:2382 clientURLs=http://192.168.200.101:2381 isLeader=true
ff69528fcc000b88: name=test-master-002 peerURLs=http://192.168.200.102:2382 clientURLs=http://192.168.200.102:2381 isLeader=false

5. Add the third etcd node in the same way:

// Add the third node, test-master-003
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member add test-master-003 http://192.168.200.103:2382
Added member named test-master-003 with ID c8eab04c155d4a69 to cluster

ETCD_NAME="test-master-003"
ETCD_INITIAL_CLUSTER="test-master-001=http://192.168.200.101:2382,test-master-003=http://192.168.200.103:2382,test-master-002=http://192.168.200.102:2382"
ETCD_INITIAL_CLUSTER_STATE="existing"
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381 member list
6f7eb0614101: name=test-master-001 peerURLs=http://192.168.200.101:2382 clientURLs=http://192.168.200.101:2381 isLeader=true
c8eab04c155d4a69[unstarted]: peerURLs=http://192.168.200.103:2382
ff69528fcc000b88: name=test-master-002 peerURLs=http://192.168.200.102:2382 clientURLs=http://192.168.200.102:2381 isLeader=false

// Start test-master-003
[root@test-master-003 ~]# rm -rf /var/lib/etcd/*
[root@test-master-003 ~]# /usr/local/bin/etcd \
  --data-dir=/var/lib/etcd/ \
  --name=test-master-003 \
  --advertise-client-urls=http://192.168.200.103:2381 \
  --listen-client-urls=http://192.168.200.103:2381,http://127.0.0.1:2381 \
  --initial-advertise-peer-urls=http://192.168.200.103:2382 \
  --listen-peer-urls=http://192.168.200.103:2382 \
  --initial-cluster=test-master-001=http://192.168.200.101:2382,test-master-002=http://192.168.200.102:2382,test-master-003=http://192.168.200.103:2382 \
  --initial-cluster-state=existing \
  --initial-cluster-token=etcd-cluster
[root@test-master-001 ~]# etcdctl --endpoints=http://192.168.200.101:2381,http://192.168.200.102:2381,http://192.168.200.103:2381 member list
6f7eb0614101: name=test-master-001 peerURLs=http://192.168.200.101:2382 clientURLs=http://192.168.200.101:2381 isLeader=true
c8eab04c155d4a69: name=test-master-003 peerURLs=http://192.168.200.103:2382 clientURLs=http://192.168.200.103:2381 isLeader=false
ff69528fcc000b88: name=test-master-002 peerURLs=http://192.168.200.102:2382 clientURLs=http://192.168.200.102:2381 isLeader=false

6. etcdctl member list shows the cluster has been restored. Now stop the etcd instances that were started from the command line and start them with systemd:

// After stopping all etcd instances with Ctrl+C, start them via systemd:
[root@test-master-001 kubernetes]# ansible test-master-nodes -m systemd -a "name=etcd state=started"
[root@test-master-001 kubernetes]# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.200.101:2381,http://192.168.200.102:2381,http://192.168.200.103:2381 endpoint health
http://192.168.200.101:2381 is healthy: successfully committed proposal: took = 1.521889ms
http://192.168.200.103:2381 is healthy: successfully committed proposal: took = 1.223369ms
http://192.168.200.102:2381 is healthy: successfully committed proposal: took = 1.581474ms

That completes restoring the etcd cluster's v2 API data from a backup file, a good showcase of etcd's robustness and ease of use.
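To keep such backups current, the manual backup steps at the top of this post can be wrapped in a small script and run from cron. A sketch with assumed paths (BACKUP_ROOT, DATA_DIR) that should be adjusted to your environment; the etcd-specific part is guarded so the sketch does nothing destructive on a machine without etcdctl or the data directory:

```shell
set -e
# Assumed locations; BACKUP_ROOT defaults to a temp dir for demonstration,
# point it at a real directory for cron use.
BACKUP_ROOT=${BACKUP_ROOT:-$(mktemp -d)}
DATA_DIR=${DATA_DIR:-/var/lib/dev-etcd}
STAMP=$(date +%Y%m%d-%H%M%S)
DEST="$BACKUP_ROOT/dev-etcd-$STAMP"
mkdir -p "$BACKUP_ROOT"
# Take the v2 backup and pack it, exactly as in the manual steps above.
if command -v etcdctl >/dev/null 2>&1 && [ -d "$DATA_DIR" ]; then
  ETCDCTL_API=2 etcdctl backup --data-dir "$DATA_DIR" --backup-dir "$DEST"
  tar -czf "$DEST.tar.gz" -C "$DEST" member
fi
echo "backup archive: $DEST.tar.gz"
```

Shipping the resulting archive off the cluster (as done with scp above) is what makes a restore like this one possible after total data loss.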
