07 - The etcd Component and Common etcdctl Commands

1. Kubernetes component - etcd:
etcd, developed by CoreOS, is currently the default key-value store used by Kubernetes to hold all
cluster data. etcd supports distributed clustering; in production you must provide a periodic
backup mechanism for the etcd data.
Website: https://etcd.io/
GitHub: https://github.com/etcd-io/etcd
Official docs:

~# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/ #data directory
ExecStart=/usr/local/bin/etcd \ #path to the binary
  --name=etcd1 \ #name of this node
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.7.101:2380 \ #peer URL advertised to the rest of the cluster
  --listen-peer-urls=https://192.168.7.101:2380 \ #port for cluster-internal (peer) traffic
  --listen-client-urls=https://192.168.7.101:2379,http://127.0.0.1:2379 \ #addresses clients connect to
  --advertise-client-urls=https://192.168.7.101:2379 \ #client URL advertised to clients
  --initial-cluster-token=etcd-cluster-0 \ #bootstrap token; must be identical on every node in the cluster
  --initial-cluster=etcd1=https://192.168.7.101:2380,etcd2=https://192.168.7.102:2380,etcd3=https://192.168.7.103:2380 \ #all cluster members
  --initial-cluster-state=new \ #"new" when bootstrapping a new cluster, "existing" when joining one that already exists
  --data-dir=/var/lib/etcd #data directory path
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2. Common etcdctl commands
View member information

etcd exposes several API versions. v1 is deprecated; v2 and v3 are essentially two independent applications that share the same raft implementation: their interfaces differ, their storage differs, and their data is mutually isolated. In other words, after upgrading from etcd v2 to v3, data written through the v2 API can still only be read through the v2 API, and data created through the v3 API can only be read through the v3 API.

WARNING:

Environment variable ETCDCTL_API is not set; defaults to etcdctl v2. #v2 is the default in this release
Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API. #choose the API version
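Because v2 and v3 data are invisible to each other, it is safest to pin the API version explicitly on every invocation rather than rely on the tool's default (which changed from v2 to v3 in etcd 3.4). A minimal sketch:

```shell
# Resolve the API version explicitly, falling back to v3 when the variable
# is unset (the behaviour of etcdctl 3.4+); then pass it per invocation.
api="${ETCDCTL_API:-3}"
echo "invoking etcdctl with the v${api} API"
# e.g.  ETCDCTL_API=3 etcdctl member list
```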

Use --help to view the built-in help text

root@k8s-master1-etcd1:~# etcdctl --help
NAME:
etcdctl - A simple command line client for etcd3.
USAGE:
etcdctl [flags]
VERSION:
3.5.1
API VERSION:
3.5
COMMANDS:
alarm disarm Disarms all alarms
alarm list Lists all alarms
auth disable Disables authentication
auth enable Enables authentication
auth status Returns authentication status
check datascale Check the memory usage of holding data for different workloads on a given server endpoint.
check perf Check the performance of the etcd cluster
compaction Compacts the event history in etcd
defrag Defragments the storage of the etcd members with given endpoints
del Removes the specified key or range of keys [key, range_end)
elect Observes and participates in leader election
endpoint hashkv Prints the KV history hash for each endpoint in --endpoints
endpoint health Checks the healthiness of endpoints specified in --endpoints flag
endpoint status Prints out the status of endpoints specified in --endpoints flag
get Gets the key or a range of keys
help Help about any command
lease grant Creates leases
lease keep-alive Keeps leases alive (renew)
lease list List all active leases
lease revoke Revokes leases
lease timetolive Get lease information
lock Acquires a named lock
make-mirror Makes a mirror at the destination etcd cluster
member add Adds a member into the cluster
member list Lists all members in the cluster
member promote Promotes a non-voting member in the cluster
member remove Removes a member from the cluster
member update Updates a member in the cluster
move-leader Transfers leadership to another etcd cluster member.
put Puts the given key into the store
role add Adds a new role
role delete Deletes a role
role get Gets detailed information of a role
role grant-permission Grants a key to a role
role list Lists all roles
role revoke-permission Revokes a key from a role
snapshot restore Restores an etcd member snapshot to an etcd directory
snapshot save Stores an etcd node backend snapshot to a given file
snapshot status [deprecated] Gets backend snapshot status of a given file
txn Txn processes all the requests in one transaction
user add Adds a new user
user delete Deletes a user
user get Gets detailed information of a user
user grant-role Grants a role to a user
user list Lists all users
user passwd Changes password of user
user revoke-role Revokes a role from a user
version Prints the version of etcdctl
watch Watches events stream on keys or prefixes
OPTIONS:
--cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
--cert="" identify secure client using this TLS certificate file
--command-timeout=5s timeout for short running command (excluding dial timeout)
--debug[=false] enable client-side debug logging
--dial-timeout=2s dial timeout for client connections
-d, --discovery-srv="" domain name to query for SRV records describing cluster endpoints
--discovery-srv-name="" service name to query when using DNS discovery
--endpoints=[127.0.0.1:2379] gRPC endpoints
-h, --help[=false] help for etcdctl
--hex[=false] print byte strings as hex encoded strings
--insecure-discovery[=true] accept insecure SRV records describing cluster endpoints
--insecure-skip-tls-verify[=false] skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
--insecure-transport[=true] disable transport security for client connections
--keepalive-time=2s keepalive time for client connections
--keepalive-timeout=6s keepalive timeout for client connections
--key="" identify secure client using this TLS key file
--password="" password for authentication (if this option is used, --user option shouldn't include password)
--user="" username[:password] for authentication (prompt if password is not supplied)
-w, --write-out="simple" set the output format (fields, json, protobuf, simple, table)

Verify the status of all current etcd members:

export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.103"
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 /usr/local/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    endpoint health
done

Show cluster member information:

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table member list \
  --endpoints=https://172.31.7.101:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem

Show detailed node status as a table:

export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.103"
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table endpoint status \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem
done

View the data stored in etcd:

~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only #list all keys as paths

Pod information:

~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep  pod

Namespace information:

~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep namespaces

Controller (Deployment) information:

root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep deployment

Calico component information:

root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep calico
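The greps above work because Kubernetes stores every namespaced object under a predictable key path of the form /registry/<resource>/<namespace>/<name> (as the delete example later in this section also shows). A small sketch of splitting such a key into its fields:

```shell
# Split a Kubernetes registry key into its resource / namespace / name fields
key="/registry/pods/default/net-test1"
resource=$(echo "$key" | cut -d/ -f3)
namespace=$(echo "$key" | cut -d/ -f4)
name=$(echo "$key" | cut -d/ -f5)
echo "$resource $namespace $name"   # pods default net-test1
```

Note that cluster-scoped resources (e.g. namespaces themselves) omit the namespace segment.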

etcd data CRUD (create, read, update, delete):
Add data

root@k8s-master1-etcd1:~# etcdctl put /name "sam"
OK

Query data

root@k8s-master1-etcd1:~# etcdctl get /name
/name
sam

Update data (putting to an existing key simply overwrites, i.e. updates, it)

root@k8s-master1-etcd1:~# etcdctl put /name "sam1"
OK

Verify the change

root@k8s-master1-etcd1:~# etcdctl get /name
/name
sam1

Delete data

root@k8s-master1-etcd1:~# etcdctl del /name
1
root@k8s-master1-etcd1:~# etcdctl get /name
root@k8s-master1-etcd1:~#

root@k8s-master1-etcd1:~# kubectl get pods -A |grep net-test1
default                net-test1                                    1/1     Running   5 (36m ago)   4d22h
root@k8s-master1-etcd1:~# etcdctl del /registry/pods/default/net-test1
1
root@k8s-master1-etcd1:~# kubectl get pods -A |grep net-test1
root@k8s-master1-etcd1:~#

The etcd watch mechanism:

etcd continuously monitors keys and proactively notifies clients as soon as the data changes. The etcd v3 watch mechanism supports watching a fixed key as well as watching a range of keys.
Watch a key on etcd node1 (the key does not have to exist yet; it can be created later):
etcdctl watch /data
Modify the data on etcd node2 and verify that etcd node1 detects the change:

etcdctl put  /data "data v1"
OK
#etcdctl put  /data "data v2"
OK

Verify on etcd node1: the watch session prints each PUT event for /data together with the new value.
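A watch stream can also drive automation by piping it into a loop. Since a live `etcdctl watch /data` blocks, the sketch below feeds the loop a canned stream in the same simple output format (event type, key, and value on separate lines); in practice you would pipe the real watch command instead:

```shell
# Consume watch output line by line; with the default simple format each PUT
# arrives as three lines: the event type, the key, then the new value.
handle_events() {
  while IFS= read -r line; do
    echo "event-line: $line"
  done
}
# Simulated stream standing in for: etcdctl watch /data | handle_events
handle_events <<'EOF'
PUT
/data
data v2
EOF
```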

3. etcd v3 API data backup and restore:

WAL stands for write-ahead log: before any real write is executed, it is first recorded in a log.
The wal directory holds these write-ahead logs, whose main purpose is to record the complete history of all data changes. In etcd, every data modification must be written to the WAL before it is committed.

Back up data with the v3 API:

root@k8s-master1-etcd1:~# etcdctl snapshot save /data/etcd_backup/etcd_backup_20220419.db
{"level":"info","ts":1650383531.7492867,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/data/etcd_backup/etcd_backup_20220419.db.part"}
{"level":"info","ts":1650383531.7510343,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1650383531.751134,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":1650383531.787254,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1650383531.793083,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","size":"3.5 MB","took":"now"}
{"level":"info","ts":1650383531.7934062,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/data/etcd_backup/etcd_backup_20220419.db"}
Snapshot saved at /data/etcd_backup/etcd_backup_20220419.db
root@k8s-master1-etcd1:~# ll /data/etcd_backup/etcd_backup_20220419.db
-rw------- 1 root root 3469344 Apr 19 23:52 /data/etcd_backup/etcd_backup_20220419.db
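Before trusting a fresh backup (or rotating older copies), it is worth verifying that the snapshot file actually exists and is non-empty; a minimal sketch, using the path from the backup above:

```shell
# Print the size and return 0 when the snapshot file is present and non-empty;
# otherwise complain on stderr and return 1.
check_snap() {
  if [ -s "$1" ]; then
    echo "ok: $1 ($(wc -c < "$1") bytes)"
  else
    echo "bad: $1" >&2
    return 1
  fi
}
check_snap /data/etcd_backup/etcd_backup_20220419.db || true
```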

Restore data with the v3 API:

root@k8s-master1-etcd1:~# etcdctl snapshot restore --help
NAME:
 snapshot restore - Restores an etcd member snapshot to an etcd directory

USAGE:
 etcdctl snapshot restore <filename> [options] [flags]

DESCRIPTION:
 Moved to `etcdctl snapshot restore ...`

OPTIONS:
 --data-dir=""                                             Path to the data directory
 -h, --help[=false]                                            help for restore
 --initial-advertise-peer-urls="http://localhost:2380"     List of this member's peer URLs to advertise to the rest of the cluster
 --initial-cluster="default=http://localhost:2380"         Initial cluster configuration for restore bootstrap
 --initial-cluster-token="etcd-cluster"                    Initial cluster token for the etcd cluster during restore bootstrap
 --name="default"                                          Human-readable name for this member
 --skip-hash-check[=false]                                 Ignore snapshot integrity hash value (required if copied from data directory)
 --wal-dir=""                                              Path to the WAL directory (use --data-dir if none given)

GLOBAL OPTIONS:
 --cacert=""                               verify certificates of TLS-enabled secure servers using this CA bundle
 --cert=""                                 identify secure client using this TLS certificate file
 --command-timeout=5s                      timeout for short running command (excluding dial timeout)
 --debug[=false]                           enable client-side debug logging
 --dial-timeout=2s                         dial timeout for client connections
 -d, --discovery-srv=""                        domain name to query for SRV records describing cluster endpoints
 --discovery-srv-name=""                   service name to query when using DNS discovery
 --endpoints=[127.0.0.1:2379]              gRPC endpoints
 --hex[=false]                             print byte strings as hex encoded strings
 --insecure-discovery[=true]               accept insecure SRV records describing cluster endpoints
 --insecure-skip-tls-verify[=false]        skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
 --insecure-transport[=true]               disable transport security for client connections
 --keepalive-time=2s                       keepalive time for client connections
 --keepalive-timeout=6s                    keepalive timeout for client connections
 --key=""                                  identify secure client using this TLS key file
 --password=""                             password for authentication (if this option is used, --user option shouldn't include password)
 --user=""                                 username[:password] for authentication (prompt if password is not supplied)
 -w, --write-out="simple"                      set the output format (fields, json, protobuf, simple, table)

Restore

root@k8s-master1-etcd1:~# etcdctl snapshot restore /data/etcd_backup/etcd_backup_20220419.db --data-dir="/data/etcddir/" #restore into a new, not-yet-existing directory
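On a multi-node cluster, every member restores the same snapshot but with its own --name and peer URL plus identical --initial-cluster* bootstrap flags (all listed in the help output above). A sketch using the node names and IPs from the unit file earlier; it only prints the per-member commands as a dry run (drop the echo wrapper to execute them on each node):

```shell
# Dry run: generate the restore command for each member of a 3-node cluster.
SNAP=/data/etcd_backup/etcd_backup_20220419.db
CLUSTER="etcd1=https://192.168.7.101:2380,etcd2=https://192.168.7.102:2380,etcd3=https://192.168.7.103:2380"
restore_cmds() {
  for m in etcd1=192.168.7.101 etcd2=192.168.7.102 etcd3=192.168.7.103; do
    name=${m%%=*}; ip=${m#*=}
    echo "etcdctl snapshot restore $SNAP --name=$name" \
         "--initial-cluster=$CLUSTER --initial-cluster-token=etcd-cluster-0" \
         "--initial-advertise-peer-urls=https://$ip:2380 --data-dir=/data/etcddir"
  done
}
restore_cmds
```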

Automated backups

~# mkdir -p /data/etcd-backup-dir/
~# cat script.sh
#!/bin/bash
source /etc/profile
DATE=`date +%Y-%m-%d_%H-%M-%S`
ETCDCTL_API=3 /usr/bin/etcdctl snapshot save /data/etcd-backup-dir/etcd-snapshot-${DATE}.db

When restoring, remember to change the data directory.
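To run the script automatically, a crontab entry along these lines (the schedule and the /root/script.sh path are assumptions for illustration) can be added with `crontab -e`:

```shell
# m  h   dom mon dow  command  -- run the etcd snapshot script daily at 02:30
30   2   *   *   *    /bin/bash /root/script.sh
```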

4. Restore an etcd cluster with kubeasz's built-in restore feature
View the pod resources

root@k8s-master1-etcd1:~/yaml# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                net-test1                                    1/1     Running   0          6h37m
default                net-test2                                    1/1     Running   0          6h37m
kube-system            calico-kube-controllers-754966f84c-c8d2g     1/1     Running   0          6h49m
kube-system            calico-node-csnl7                            1/1     Running   0          6h49m
kube-system            calico-node-czwwf                            1/1     Running   0          6h49m
kube-system            calico-node-smmk4                            1/1     Running   0          6h49m
kube-system            calico-node-wlpl9                            1/1     Running   0          6h49m
kube-system            coredns-79688b6cb4-kqpgs                     1/1     Running   0          3h40m
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-l52xb   1/1     Running   0          149m
kubernetes-dashboard   kubernetes-dashboard-fb8648fd9-p7qt6         1/1     Running   0          149m

Back up the etcd cluster with playbook 94

root@k8s-master1-etcd1:/etc/kubeasz# ll ./playbooks/
total 96
drwxrwxr-x  2 root root 4096 Apr 23 23:08 ./
drwxrwxr-x 12 root root 4096 Apr 23 16:33 ../
-rw-rw-r--  1 root root  422 Apr 23 16:05 01.prepare.yml
-rw-rw-r--  1 root root   58 Jan  5 20:19 02.etcd.yml
-rw-rw-r--  1 root root  209 Jan  5 20:19 03.runtime.yml
-rw-rw-r--  1 root root  482 Jan  5 20:19 04.kube-master.yml
-rw-rw-r--  1 root root  218 Jan  5 20:19 05.kube-node.yml
-rw-rw-r--  1 root root  408 Jan  5 20:19 06.network.yml
-rw-rw-r--  1 root root   77 Jan  5 20:19 07.cluster-addon.yml
-rw-rw-r--  1 root root   34 Jan  5 20:19 10.ex-lb.yml
-rw-rw-r--  1 root root 3893 Jan  5 20:19 11.harbor.yml
-rw-rw-r--  1 root root 1567 Jan  5 20:19 21.addetcd.yml
-rw-rw-r--  1 root root 1520 Jan  5 20:19 22.addnode.yml
-rw-rw-r--  1 root root 1050 Jan  5 20:19 23.addmaster.yml
-rw-rw-r--  1 root root 3344 Jan  5 20:19 31.deletcd.yml
-rw-rw-r--  1 root root 2018 Jan  5 20:19 32.delnode.yml
-rw-rw-r--  1 root root 2071 Jan  5 20:19 33.delmaster.yml
-rw-rw-r--  1 root root 1891 Jan  5 20:19 90.setup.yml
-rw-rw-r--  1 root root 1054 Jan  5 20:19 91.start.yml
-rw-rw-r--  1 root root  934 Jan  5 20:19 92.stop.yml
-rw-rw-r--  1 root root 1042 Jan  5 20:19 93.upgrade.yml
-rw-rw-r--  1 root root 1786 Jan  5 20:19 94.backup.yml
-rw-rw-r--  1 root root  999 Jan  5 20:19 95.restore.yml
-rw-rw-r--  1 root root  337 Jan  5 20:19 99.clean.yml
#view the ezctl help
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl --help
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
    list                             to list all of the managed clusters
    checkout    <cluster>            to switch default kubeconfig of the cluster
    new         <cluster>            to start a new k8s deploy with name 'cluster'
    setup       <cluster>  <step>    to setup a cluster, also supporting a step-by-step way
    start       <cluster>            to start all of the k8s services stopped by 'ezctl stop'
    stop        <cluster>            to stop all of the k8s services temporarily
    upgrade     <cluster>            to upgrade the k8s cluster
    destroy     <cluster>            to destroy the k8s cluster
    backup      <cluster>            to backup the cluster state (etcd snapshot)
    restore     <cluster>            to restore the cluster state from backups
    start-aio                        to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd    <cluster>  <ip>      to add a etcd-node to the etcd cluster
    add-master  <cluster>  <ip>      to add a master node to the k8s cluster
    add-node    <cluster>  <ip>      to add a work node to the k8s cluster
    del-etcd    <cluster>  <ip>      to delete a etcd-node from the etcd cluster
    del-master  <cluster>  <ip>      to delete a master node from the k8s cluster
    del-node    <cluster>  <ip>      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm    <cluster>  <args>    to manage client kubeconfig of the k8s cluster

Use "ezctl help <command>" for more information about a given command.

#run the backup
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl backup k8s-cluster-01
ansible-playbook -i clusters/k8s-cluster-01/hosts -e @clusters/k8s-cluster-
...
...
PLAY RECAP ***************************************************************************************************************************************************************************************************************************
localhost                  : ok=10   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

At this point the kubeasz cluster directory contains a backup directory, which is where the etcd snapshots are stored:


root@k8s-master1-etcd1:/etc/kubeasz# ll clusters/k8s-cluster-01/backup/
total 5472
drwxr-xr-x 2 root root    4096 Apr 23 23:12 ./
drwxr-xr-x 5 root root    4096 Apr 23 19:17 ../
-rw------- 1 root root 2793504 Apr 23 23:12 snapshot.db
-rw------- 1 root root 2793504 Apr 23 23:12 snapshot_202204232312.db
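Each backup leaves a timestamped snapshot_*.db copy behind, so this directory grows over time. A small sketch of pruning all but the newest N copies (the keep count and absolute path are assumptions):

```shell
# Remove all but the $2 newest snapshot_*.db files in directory $1
prune_snaps() {
  ls -1t "$1"/snapshot_*.db 2>/dev/null | tail -n +"$(( $2 + 1 ))" | xargs -r rm -f
}
prune_snaps /etc/kubeasz/clusters/k8s-cluster-01/backup 7
```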

Delete a pod to test the restore

root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                net-test2                                    1/1     Running   0          6h46m
kube-system            calico-kube-controllers-754966f84c-c8d2g     1/1     Running   0          6h58m
kube-system            calico-node-csnl7                            1/1     Running   0          6h58m
kube-system            calico-node-czwwf                            1/1     Running   0          6h58m
kube-system            calico-node-smmk4                            1/1     Running   0          6h58m
kube-system            calico-node-wlpl9                            1/1     Running   0          6h58m
kube-system            coredns-79688b6cb4-kqpgs                     1/1     Running   0          3h49m
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-l52xb   1/1     Running   0          158m
kubernetes-dashboard   kubernetes-dashboard-fb8648fd9-p7qt6         1/1     Running   0          158m
root@k8s-master1-etcd1:/etc/kubeasz# kubectl delete pods -n default net-test1
pod "net-test1" deleted

root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                net-test2                                    1/1     Running   0          6h54m
kube-system            calico-kube-controllers-754966f84c-c8d2g     1/1     Running   0          7h7m
kube-system            calico-node-csnl7                            1/1     Running   0          7h7m
kube-system            calico-node-czwwf                            1/1     Running   0          7h7m
kube-system            calico-node-smmk4                            1/1     Running   0          7h7m
kube-system            calico-node-wlpl9                            1/1     Running   0          7h7m
kube-system            coredns-79688b6cb4-kqpgs                     1/1     Running   0          3h57m
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-l52xb   1/1     Running   0          166m
kubernetes-dashboard   kubernetes-dashboard-fb8648fd9-p7qt6         1/1     Running   0          166m

Restore the etcd cluster with playbook 95


#run the restore
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl restore k8s-cluster-01
ansible-playbook -i clusters/k8s-cluster-01/hosts -e @clusters/k8s-cluster-01/config.yml playbooks/95.restore.yml
2022-04-23 23:38:48 INFO cluster:k8s-cluster-01 restore begins in 5s, press any key to abort:
...
....

PLAY RECAP ***************************************************************************************************************************************************************************************************************************
172.31.7.101               : ok=14   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
172.31.7.102               : ok=14   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
172.31.7.103               : ok=10   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
172.31.7.111               : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
172.31.7.112               : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
#verify the deleted pod is back
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE
default                net-test1                                    1/1     Running   0             7h10m
default                net-test2                                    1/1     Running   0             7h10m
kube-system            calico-kube-controllers-754966f84c-c8d2g     1/1     Running   2 (53s ago)   7h23m
kube-system            calico-node-csnl7                            1/1     Running   0             7h23m
kube-system            calico-node-czwwf                            1/1     Running   0             7h23m
kube-system            calico-node-smmk4                            1/1     Running   0             7h23m
kube-system            calico-node-wlpl9                            1/1     Running   0             7h23m
kube-system            coredns-79688b6cb4-kqpgs                     1/1     Running   0             4h14m
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-l52xb   1/1     Running   0             3h3m
kubernetes-dashboard   kubernetes-dashboard-fb8648fd9-p7qt6         1/1     Running   0             3h3m
