1. Kubernetes component: etcd
etcd is a key-value data store developed by CoreOS and is currently the default storage backend for Kubernetes; it holds all of the cluster's data. etcd supports running as a distributed cluster, and in production you should provide a scheduled backup mechanism for the etcd data.
Official site: https://etcd.io/
GitHub: https://github.com/etcd-io/etcd
Official documentation: https://etcd.io/docs/
~# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/ #data directory
ExecStart=/usr/local/bin/etcd \ #path to the binary
--name=etcd1 \ #name of the current node
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://192.168.7.101:2380 \ #peer URL advertised to the rest of the cluster
--listen-peer-urls=https://192.168.7.101:2380 \ #port for peer-to-peer cluster communication
--listen-client-urls=https://192.168.7.101:2379,http://127.0.0.1:2379 \ #addresses clients connect to
--advertise-client-urls=https://192.168.7.101:2379 \ #client URL advertised by this member
--initial-cluster-token=etcd-cluster-0 \ #token used when creating the cluster; must be identical on every node in the cluster
--initial-cluster=etcd1=https://192.168.7.101:2380,etcd2=https://192.168.7.102:2380,etcd3=https://192.168.7.103:2380 \ #all nodes in the cluster
--initial-cluster-state=new \ #"new" when creating a new cluster, "existing" when joining one that already exists
--data-dir=/var/lib/etcd #path to the data directory
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
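After editing the unit file, the usual sanity check is to reload systemd and confirm the service comes back healthy (a minimal sketch; paths and unit name follow the example above):
~# systemctl daemon-reload
~# systemctl restart etcd
~# systemctl status etcd --no-pager #confirm the unit is active (running)
~# journalctl -u etcd -n 20 --no-pager #inspect recent etcd log lines if it fails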
2. Common etcdctl commands
Viewing member information:
etcd exposes several API versions. The v1 API has been deprecated. etcd v2 and v3 are essentially two independent applications that share the same Raft protocol code: their interfaces differ, their storage differs, and their data is mutually isolated. In other words, after upgrading from etcd v2 to etcd v3, data written through the v2 API can still only be accessed through the v2 API, and data created through the v3 API can only be accessed through the v3 API.
WARNING:
Environment variable ETCDCTL_API is not set; defaults to etcdctl v2.
#the v2 API is used by default
Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API.
#set the API version explicitly
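For example, the API version can be pinned for a single invocation or exported for the whole shell session (a minimal sketch):
~# ETCDCTL_API=3 etcdctl version #use the v3 API for this one command
~# export ETCDCTL_API=3 #make v3 the default for this shell session
~# etcdctl version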
Use --help to view the help documentation:
root@k8s-master1-etcd1:~# etcdctl --help
NAME:
etcdctl - A simple command line client for etcd3.
USAGE:
etcdctl [flags]
VERSION:
3.5.1
API VERSION:
3.5
COMMANDS:
alarm disarm Disarms all alarms
alarm list Lists all alarms
auth disable Disables authentication
auth enable Enables authentication
auth status Returns authentication status
check datascale Check the memory usage of holding data for different workloads on a given server endpoint.
check perf Check the performance of the etcd cluster
compaction Compacts the event history in etcd
defrag Defragments the storage of the etcd members with given endpoints
del Removes the specified key or range of keys [key, range_end)
elect Observes and participates in leader election
endpoint hashkv Prints the KV history hash for each endpoint in --endpoints
endpoint health Checks the healthiness of endpoints specified in --endpoints flag
endpoint status Prints out the status of endpoints specified in --endpoints flag
get Gets the key or a range of keys
help Help about any command
lease grant Creates leases
lease keep-alive Keeps leases alive (renew)
lease list List all active leases
lease revoke Revokes leases
lease timetolive Get lease information
lock Acquires a named lock
make-mirror Makes a mirror at the destination etcd cluster
member add Adds a member into the cluster
member list Lists all members in the cluster
member promote Promotes a non-voting member in the cluster
member remove Removes a member from the cluster
member update Updates a member in the cluster
move-leader Transfers leadership to another etcd cluster member.
put Puts the given key into the store
role add Adds a new role
role delete Deletes a role
role get Gets detailed information of a role
role grant-permission Grants a key to a role
role list Lists all roles
role revoke-permission Revokes a key from a role
snapshot restore Restores an etcd member snapshot to an etcd directory
snapshot save Stores an etcd node backend snapshot to a given file
snapshot status [deprecated] Gets backend snapshot status of a given file
txn Txn processes all the requests in one transaction
user add Adds a new user
user delete Deletes a user
user get Gets detailed information of a user
user grant-role Grants a role to a user
user list Lists all users
user passwd Changes password of user
user revoke-role Revokes a role from a user
version Prints the version of etcdctl
watch Watches events stream on keys or prefixes
OPTIONS:
--cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
--cert="" identify secure client using this TLS certificate file
--command-timeout=5s timeout for short running command (excluding dial timeout)
--debug[=false] enable client-side debug logging
--dial-timeout=2s dial timeout for client connections
-d, --discovery-srv="" domain name to query for SRV records describing cluster endpoints
--discovery-srv-name="" service name to query when using DNS discovery
--endpoints=[127.0.0.1:2379] gRPC endpoints
-h, --help[=false] help for etcdctl
--hex[=false] print byte strings as hex encoded strings
--insecure-discovery[=true] accept insecure SRV records describing cluster endpoints
--insecure-skip-tls-verify[=false] skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
--insecure-transport[=true] disable transport security for client connections
--keepalive-time=2s keepalive time for client connections
--keepalive-timeout=6s keepalive timeout for client connections
--key="" identify secure client using this TLS key file
--password="" password for authentication (if this option is used, --user option shouldn't include password)
--user="" username[:password] for authentication (prompt if password is not supplied)
-w, --write-out="simple" set the output format (fields, json, protobuf, simple, table)
Verify the status of all current etcd members:
#export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.103"
# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
Display cluster member information:
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table member list --endpoints=https://172.31.7.101:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
Display detailed node status in table form:
export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.103"
for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem; done
View the data stored in etcd:
~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only #list all key names as paths
Pod information:
~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep pod
Namespace information:
~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep namespaces
Controller (deployment) information:
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep deployment
Calico component information:
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep calico
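Note that the values under /registry are written by kube-apiserver in protobuf, so fetching a full key prints mostly binary data rather than readable YAML/JSON (a sketch; the key must exist in your cluster):
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get /registry/namespaces/default #value is protobuf-encoded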
etcd create/read/update/delete operations:
Add data:
root@k8s-master1-etcd1:~# etcdctl put /name "sam"
OK
Query data:
root@k8s-master1-etcd1:~# etcdctl get /name
/name
sam
Update data (putting to an existing key overwrites it):
root@k8s-master1-etcd1:~# etcdctl put /name "sam1"
OK
Verify the change:
root@k8s-master1-etcd1:~# etcdctl get /name
/name
sam1
Delete data:
root@k8s-master1-etcd1:~# etcdctl del /name
1
root@k8s-master1-etcd1:~# etcdctl get /name
root@k8s-master1-etcd1:~#
root@k8s-master1-etcd1:~# kubectl get pods -A |grep net-test1
default net-test1 1/1 Running 5 (36m ago) 4d22h
root@k8s-master1-etcd1:~# etcdctl del /registry/pods/default/net-test1
1
root@k8s-master1-etcd1:~# kubectl get pods -A |grep net-test1
root@k8s-master1-etcd1:~#
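get and del also accept --prefix to operate on a whole range of keys at once; a minimal sketch with made-up keys:
etcdctl put /app/key1 "v1"
etcdctl put /app/key2 "v2"
etcdctl get --prefix /app/ --keys-only #lists /app/key1 and /app/key2
etcdctl del --prefix /app/ #deletes both keys and prints the number removed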
etcd data watch mechanism:
Watch continuously monitors data and proactively notifies the client whenever it changes. The etcd v3 watch mechanism supports watching a fixed key as well as watching a range of keys.
On etcd node1, watch a key. The key does not have to exist yet; it can be created later:
etcdctl watch /data
On etcd node2, modify the data and verify that etcd node1 detects the change:
etcdctl put /data "data v1"
OK
#etcdctl put /data "data v2"
OK
Back on etcd node1, the watch session prints each PUT event together with the key and its new value.
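Because v3 watches cover ranges as well, the same mechanism works for an entire prefix (a minimal sketch):
etcdctl watch --prefix /data/ #fires for every change to any key under /data/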
3. etcd v3 API data backup and restore:
WAL is short for write-ahead log: before the real write operation is executed, a log record is written first, hence "write-ahead".
wal: stores the write-ahead log; its most important role is recording the complete history of data changes. In etcd, every data modification must be written to the WAL before it is committed.
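On disk, the WAL and snapshots live under the member's data directory; assuming --data-dir=/var/lib/etcd as in section 1, the typical v3 layout is:
/var/lib/etcd/member/snap/ #snapshots and the bbolt db file holding the keyspace
/var/lib/etcd/member/wal/ #the .wal segment files with the write-ahead log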
Backing up data with the v3 API:
root@k8s-master1-etcd1:~# etcdctl snapshot save /data/etcd_backup/etcd_backup_20220419.db
{"level":"info","ts":1650383531.7492867,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/data/etcd_backup/etcd_backup_20220419.db.part"}
{"level":"info","ts":1650383531.7510343,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1650383531.751134,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":1650383531.787254,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1650383531.793083,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","size":"3.5 MB","took":"now"}
{"level":"info","ts":1650383531.7934062,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/data/etcd_backup/etcd_backup_20220419.db"}
Snapshot saved at /data/etcd_backup/etcd_backup_20220419.db
root@k8s-master1-etcd1:~# ll /data/etcd_backup/etcd_backup_20220419.db
-rw------- 1 root root 3469344 Apr 19 23:52 /data/etcd_backup/etcd_backup_20220419.db
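Before relying on a snapshot, its integrity can be checked with snapshot status (deprecated in 3.5, as the help output above notes, but still functional; a sketch):
root@k8s-master1-etcd1:~# ETCDCTL_API=3 etcdctl snapshot status --write-out=table /data/etcd_backup/etcd_backup_20220419.db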
Restoring data with the v3 API:
root@k8s-master1-etcd1:~# etcdctl snapshot restore --help
NAME:
snapshot restore - Restores an etcd member snapshot to an etcd directory
USAGE:
etcdctl snapshot restore <filename> [options] [flags]
DESCRIPTION:
Moved to `etcdutl snapshot restore ...`
OPTIONS:
--data-dir="" Path to the data directory
-h, --help[=false] help for restore
--initial-advertise-peer-urls="http://localhost:2380" List of this member's peer URLs to advertise to the rest of the cluster
--initial-cluster="default=http://localhost:2380" Initial cluster configuration for restore bootstrap
--initial-cluster-token="etcd-cluster" Initial cluster token for the etcd cluster during restore bootstrap
--name="default" Human-readable name for this member
--skip-hash-check[=false] Ignore snapshot integrity hash value (required if copied from data directory)
--wal-dir="" Path to the WAL directory (use --data-dir if none given)
GLOBAL OPTIONS:
--cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
--cert="" identify secure client using this TLS certificate file
--command-timeout=5s timeout for short running command (excluding dial timeout)
--debug[=false] enable client-side debug logging
--dial-timeout=2s dial timeout for client connections
-d, --discovery-srv="" domain name to query for SRV records describing cluster endpoints
--discovery-srv-name="" service name to query when using DNS discovery
--endpoints=[127.0.0.1:2379] gRPC endpoints
--hex[=false] print byte strings as hex encoded strings
--insecure-discovery[=true] accept insecure SRV records describing cluster endpoints
--insecure-skip-tls-verify[=false] skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
--insecure-transport[=true] disable transport security for client connections
--keepalive-time=2s keepalive time for client connections
--keepalive-timeout=6s keepalive timeout for client connections
--key="" identify secure client using this TLS key file
--password="" password for authentication (if this option is used, --user option shouldn't include password)
--user="" username[:password] for authentication (prompt if password is not supplied)
-w, --write-out="simple" set the output format (fields, json, protobuf, simple, table)
Restore:
root@k8s-master1-etcd1:~# etcdctl snapshot restore /data/etcd_backup/etcd_backup_20220419.db --data-dir="/data/etcddir/" #restore the data into a new, not-yet-existing directory
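For a multi-member cluster, the restore is typically run on every member with cluster flags matching the unit file from section 1, after which the service is pointed at the new data directory and restarted. A sketch for etcd1 only (names and IPs follow the earlier examples; /var/lib/etcd-restored is a placeholder):
systemctl stop etcd
etcdctl snapshot restore /data/etcd_backup/etcd_backup_20220419.db \
--name etcd1 \
--initial-cluster etcd1=https://192.168.7.101:2380,etcd2=https://192.168.7.102:2380,etcd3=https://192.168.7.103:2380 \
--initial-advertise-peer-urls https://192.168.7.101:2380 \
--data-dir /var/lib/etcd-restored #must not exist yet
#point the unit's WorkingDirectory/--data-dir at the restored directory (or move it into place), then:
systemctl start etcd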
Automated backup:
~# mkdir /data/etcd-backup-dir/ -p
~# cat script.sh
#!/bin/bash
source /etc/profile
DATE=`date +%Y-%m-%d_%H-%M-%S`
ETCDCTL_API=3 /usr/bin/etcdctl snapshot save /data/etcd-backup-dir/etcd-snapshot-${DATE}.db
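The script can then be scheduled with cron; the hourly interval and the /root/script.sh path below are placeholders:
~# crontab -e
0 * * * * /bin/bash /root/script.sh #snapshot etcd once per hour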
4. Using the etcd cluster restore functionality that ships with the kubeasz project
View pod resources:
root@k8s-master1-etcd1:~/yaml# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1 1/1 Running 0 6h37m
default net-test2 1/1 Running 0 6h37m
kube-system calico-kube-controllers-754966f84c-c8d2g 1/1 Running 0 6h49m
kube-system calico-node-csnl7 1/1 Running 0 6h49m
kube-system calico-node-czwwf 1/1 Running 0 6h49m
kube-system calico-node-smmk4 1/1 Running 0 6h49m
kube-system calico-node-wlpl9 1/1 Running 0 6h49m
kube-system coredns-79688b6cb4-kqpgs 1/1 Running 0 3h40m
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-l52xb 1/1 Running 0 149m
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-p7qt6 1/1 Running 0 149m
Back up the etcd cluster with playbook 94 (94.backup.yml):
root@k8s-master1-etcd1:/etc/kubeasz# ll ./playbooks/
total 96
drwxrwxr-x 2 root root 4096 Apr 23 23:08 ./
drwxrwxr-x 12 root root 4096 Apr 23 16:33 ../
-rw-rw-r-- 1 root root 422 Apr 23 16:05 01.prepare.yml
-rw-rw-r-- 1 root root 58 Jan 5 20:19 02.etcd.yml
-rw-rw-r-- 1 root root 209 Jan 5 20:19 03.runtime.yml
-rw-rw-r-- 1 root root 482 Jan 5 20:19 04.kube-master.yml
-rw-rw-r-- 1 root root 218 Jan 5 20:19 05.kube-node.yml
-rw-rw-r-- 1 root root 408 Jan 5 20:19 06.network.yml
-rw-rw-r-- 1 root root 77 Jan 5 20:19 07.cluster-addon.yml
-rw-rw-r-- 1 root root 34 Jan 5 20:19 10.ex-lb.yml
-rw-rw-r-- 1 root root 3893 Jan 5 20:19 11.harbor.yml
-rw-rw-r-- 1 root root 1567 Jan 5 20:19 21.addetcd.yml
-rw-rw-r-- 1 root root 1520 Jan 5 20:19 22.addnode.yml
-rw-rw-r-- 1 root root 1050 Jan 5 20:19 23.addmaster.yml
-rw-rw-r-- 1 root root 3344 Jan 5 20:19 31.deletcd.yml
-rw-rw-r-- 1 root root 2018 Jan 5 20:19 32.delnode.yml
-rw-rw-r-- 1 root root 2071 Jan 5 20:19 33.delmaster.yml
-rw-rw-r-- 1 root root 1891 Jan 5 20:19 90.setup.yml
-rw-rw-r-- 1 root root 1054 Jan 5 20:19 91.start.yml
-rw-rw-r-- 1 root root 934 Jan 5 20:19 92.stop.yml
-rw-rw-r-- 1 root root 1042 Jan 5 20:19 93.upgrade.yml
-rw-rw-r-- 1 root root 1786 Jan 5 20:19 94.backup.yml
-rw-rw-r-- 1 root root 999 Jan 5 20:19 95.restore.yml
-rw-rw-r-- 1 root root 337 Jan 5 20:19 99.clean.yml
#view the help documentation
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl --help
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
list to list all of the managed clusters
checkout <cluster> to switch default kubeconfig of the cluster
new <cluster> to start a new k8s deploy with name 'cluster'
setup <cluster> <step> to setup a cluster, also supporting a step-by-step way
start <cluster> to start all of the k8s services stopped by 'ezctl stop'
stop <cluster> to stop all of the k8s services temporarily
upgrade <cluster> to upgrade the k8s cluster
destroy <cluster> to destroy the k8s cluster
backup <cluster> to backup the cluster state (etcd snapshot)
restore <cluster> to restore the cluster state from backups
start-aio to quickly setup an all-in-one cluster with 'default' settings
Cluster ops:
add-etcd <cluster> <ip> to add a etcd-node to the etcd cluster
add-master <cluster> <ip> to add a master node to the k8s cluster
add-node <cluster> <ip> to add a work node to the k8s cluster
del-etcd <cluster> <ip> to delete a etcd-node from the etcd cluster
del-master <cluster> <ip> to delete a master node from the k8s cluster
del-node <cluster> <ip> to delete a work node from the k8s cluster
Extra operation:
kcfg-adm <cluster> <args> to manage client kubeconfig of the k8s cluster
Use "ezctl help <command>" for more information about a given command.
#start the backup
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl backup k8s-cluster-01
ansible-playbook -i clusters/k8s-cluster-01/hosts -e @clusters/k8s-cluster-
...
...
PLAY RECAP ***************************************************************************************************************************************************************************************************************************
localhost : ok=10 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
At this point there is a backup directory under the kubeasz cluster directory; the etcd snapshots are stored here:
root@k8s-master1-etcd1:/etc/kubeasz# ll clusters/k8s-cluster-01/backup/
total 5472
drwxr-xr-x 2 root root 4096 Apr 23 23:12 ./
drwxr-xr-x 5 root root 4096 Apr 23 19:17 ../
-rw------- 1 root root 2793504 Apr 23 23:12 snapshot.db
-rw------- 1 root root 2793504 Apr 23 23:12 snapshot_202204232312.db
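Note: 95.restore.yml restores from the fixed file name snapshot.db, so to roll back to a specific point in time, first copy the desired timestamped snapshot over snapshot.db (this reflects the kubeasz convention visible above; verify against your kubeasz version):
root@k8s-master1-etcd1:/etc/kubeasz# cp clusters/k8s-cluster-01/backup/snapshot_202204232312.db clusters/k8s-cluster-01/backup/snapshot.db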
Delete a pod to test the restore:
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test2 1/1 Running 0 6h46m
kube-system calico-kube-controllers-754966f84c-c8d2g 1/1 Running 0 6h58m
kube-system calico-node-csnl7 1/1 Running 0 6h58m
kube-system calico-node-czwwf 1/1 Running 0 6h58m
kube-system calico-node-smmk4 1/1 Running 0 6h58m
kube-system calico-node-wlpl9 1/1 Running 0 6h58m
kube-system coredns-79688b6cb4-kqpgs 1/1 Running 0 3h49m
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-l52xb 1/1 Running 0 158m
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-p7qt6 1/1 Running 0 158m
root@k8s-master1-etcd1:/etc/kubeasz# kubectl delete pods -n default net-test1
pod "net-test1" deleted
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test2 1/1 Running 0 6h54m
kube-system calico-kube-controllers-754966f84c-c8d2g 1/1 Running 0 7h7m
kube-system calico-node-csnl7 1/1 Running 0 7h7m
kube-system calico-node-czwwf 1/1 Running 0 7h7m
kube-system calico-node-smmk4 1/1 Running 0 7h7m
kube-system calico-node-wlpl9 1/1 Running 0 7h7m
kube-system coredns-79688b6cb4-kqpgs 1/1 Running 0 3h57m
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-l52xb 1/1 Running 0 166m
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-p7qt6 1/1 Running 0 166m
Restore the etcd cluster with playbook 95 (95.restore.yml; the playbook listing and ezctl help are the same as shown above):
#start the restore
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl restore k8s-cluster-01
ansible-playbook -i clusters/k8s-cluster-01/hosts -e @clusters/k8s-cluster-01/config.yml playbooks/95.restore.yml
2022-04-23 23:38:48 INFO cluster:k8s-cluster-01 restore begins in 5s, press any key to abort:
...
....
PLAY RECAP ***************************************************************************************************************************************************************************************************************************
172.31.7.101 : ok=14 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.7.102 : ok=14 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.7.103 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.7.111 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.7.112 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
#verify the restored data
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1 1/1 Running 0 7h10m
default net-test2 1/1 Running 0 7h10m
kube-system calico-kube-controllers-754966f84c-c8d2g 1/1 Running 2 (53s ago) 7h23m
kube-system calico-node-csnl7 1/1 Running 0 7h23m
kube-system calico-node-czwwf 1/1 Running 0 7h23m
kube-system calico-node-smmk4 1/1 Running 0 7h23m
kube-system calico-node-wlpl9 1/1 Running 0 7h23m
kube-system coredns-79688b6cb4-kqpgs 1/1 Running 0 4h14m
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-l52xb 1/1 Running 0 3h3m
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-p7qt6 1/1 Running 0 3h3m