I. What is a Static Pod?
A static Pod is managed directly by the kubelet daemon on a specific node, without the API server observing it; this is unlike Pods managed by the control plane (for example, through a Deployment).
The kubelet watches each static Pod and restarts it if it crashes.
A static Pod is always bound to the kubelet on one specific node.
For every static Pod, the kubelet automatically tries to create a mirror Pod on the Kubernetes API server.
This means the static Pods running on a node are visible through the API server, but cannot be controlled through it.
Common static Pods:
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
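A mirror Pod is only a read-only reflection: deleting it through the API does not stop the workload, because the kubelet immediately recreates the mirror object while the container keeps running. A quick check (assuming the kubeadm cluster examined later in this article, with a control-plane node named k8s-master01); to actually stop a static Pod you must remove its manifest file from the node:
# kubectl delete pod etcd-k8s-master01 -n kube-system
# kubectl get pod etcd-k8s-master01 -n kube-system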
II. Stacked etcd clusters
Stacked etcd cluster: the cluster has three masters, and each master runs an etcd Pod that is typically self-managed by the kubelet.
The kubelet reads a configuration that defines a static pod path. The kubelet's job is to maintain Pod lifecycles, and it has several ways of loading Pod manifests: one is to watch the API server; another is to scan the local static pod path and directly load any Pod manifests it finds in that directory.
In this model, etcd is deployed statically on every master.
Benefits of the stacked topology: the apiserver and etcd are tightly coupled, so all apiserver requests reach etcd over loopback. Reads are served locally with no network hop at all, and keeping the components together makes them easier to manage and maintain.
etcd persists heavily to disk and therefore demands good disk I/O; the other control-plane components have no comparable requirement, so co-locating them is fine.
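The loopback binding is visible on a kubeadm stacked control-plane node: the apiserver's own static Pod manifest points at the local etcd member (a check assuming the default kubeadm layout):
# grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
    - --etcd-servers=https://127.0.0.1:2379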
III. How to create a Static Pod?
1. Static manifest files
This can be seen by inspecting the kubelet's service and configuration files:
A running kubelet periodically scans the configured directory (/etc/kubernetes/manifests in the example below) for changes, and adds or removes Pods as manifest files appear in or disappear from that directory.
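On a kubeadm-provisioned node this directory is set by the staticPodPath field of the kubelet configuration file (a check assuming the default kubeadm paths):
# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests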
# ls -l /etc/kubernetes/manifests
total 16
-rw-------. 1 root root 1776 Mar 3 2022 etcd.yaml
-rw-------. 1 root root 2665 Mar 3 2022 kube-apiserver.yaml
-rw-------. 1 root root 2679 Mar 3 2022 kube-controller-manager.yaml
-rw-------. 1 root root 1182 Mar 3 2022 kube-scheduler.yaml
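Creating a static Pod is therefore just a matter of writing a manifest into that directory; the kubelet picks it up with no kubectl involved, and deleting the file stops the Pod. A minimal sketch (the nginx Pod below is a hypothetical example, not part of the cluster above):
# cat <<'EOF' > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
EOF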
2. HTTP requests
The kubelet periodically downloads a file from the address given by the --manifest-url= flag and interprets it as a JSON/YAML Pod definition.
From then on it behaves as with --pod-manifest-path=: the kubelet re-downloads the file from time to time and starts or terminates the static Pod as the file changes.
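A sketch of the corresponding kubelet invocation (the URL and token are hypothetical, and note that --manifest-url has long been deprecated in favor of the file-based method):
kubelet --manifest-url=https://example.com/static-pods.yaml \
        --manifest-url-header="Authorization:Bearer mytoken"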
To recap: a static Pod is managed by the kubelet and exists only on a specific node. These Pods cannot be managed through the API server and cannot be associated with a ReplicationController, Deployment, or DaemonSet; they are created by the kubelet and run only on the node where that kubelet lives.
IV. An etcd cluster built from static Pods
# kubectl get pod -A | grep etcd
kube-system etcd-k8s-master01 1/1 Running 6 441d
kube-system etcd-k8s-master02 1/1 Running 2 441d
kube-system etcd-k8s-master03 1/1 Running 2 441d
# kubectl describe pod etcd-k8s-master01 -n kube-system
Name: etcd-k8s-master01
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: k8s-master01/192.168.32.118
Start Time: Fri, 04 Mar 2022 20:33:41 +0800
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash: dc79ee34c5f742495f1acbb752c1a2c9
kubernetes.io/config.mirror: dc79ee34c5f742495f1acbb752c1a2c9
kubernetes.io/config.seen: 2022-03-03T17:14:15.273839067+08:00
kubernetes.io/config.source: file
Status: Running
IP: 192.168.32.118
IPs:
IP: 192.168.32.118
Controlled By: Node/k8s-master01
Containers:
etcd:
Container ID: docker://4b3cd07cde9c1470f8df250848c44dee03eec7bd815e9bd0cce13831d35e13ff
Image: harbor.example.com/k8s/etcd:3.4.3-0
Image ID: docker-pullable://harbor.example.com/k8s/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://192.168.32.118:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://192.168.32.118:2380
--initial-cluster=k8s-master01=https://192.168.32.118:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://192.168.32.118:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://192.168.32.118:2380
--name=k8s-master01
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Sat, 06 May 2023 18:14:25 +0800
Ready: True
Restart Count: 6
Liveness: http-get http://127.0.0.1:2381/health delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
运行"kubectl describe pod etcd-k8s-master01 -n kube-system"命令来获取有关名为"etcd-k8s-master01"的Kubernetes Pod的详细信息。
下面是对输出的解读:
- Name: the Pod is named "etcd-k8s-master01".
- Namespace: the Pod lives in the "kube-system" namespace.
- Priority: the Pod's priority is "2000000000".
- Priority Class Name: the Pod belongs to the "system-cluster-critical" priority class.
- Node: the Pod runs on the node "k8s-master01", whose IP address is "192.168.32.118".
- Start Time: the Pod started at "Fri, 04 Mar 2022 20:33:41 +0800".
- Labels: the Pod carries the labels "component=etcd" and "tier=control-plane".
- Annotations: the annotations include "kubernetes.io/config.hash", "kubernetes.io/config.mirror", "kubernetes.io/config.seen", and "kubernetes.io/config.source" (here "file", marking this as a static Pod loaded from disk).
- Status: the Pod status is "Running".
- IP: the Pod's IP address is "192.168.32.118", the node IP, since control-plane static Pods use host networking.
- IPs: the IP list contains that single address.
- Controlled By: the Pod is controlled by "Node/k8s-master01", i.e. it is a mirror Pod owned by the node object rather than by any controller.
- Containers: the Pod contains one container, named "etcd".
- Container ID: the container's ID, "docker://4b3cd07cde9c1470f8df250848c44dee03eec7bd815e9bd0cce13831d35e13ff".
- Image: the container image, "harbor.example.com/k8s/etcd:3.4.3-0".
- Image ID: the pullable digest of that image.
- Port: the container exposes no ports.
- Host Port: the container exposes no host ports.
- Command: the container's entrypoint is the etcd binary plus its flags.
- State: the container state is "Running".
- Started: the container started at "Sat, 06 May 2023 18:14:25 +0800".
- Ready: the container is ready.
- Restart Count: the container has restarted 6 times.
- Liveness: the liveness probe is an HTTP GET against "http://127.0.0.1:2381/health" with a 15s initial delay, 15s timeout, 10s period, 1 success to pass, and up to 8 consecutive failures before the probe fails.
- Environment: no environment variables are set.
- Mounts: the container mounts two host paths, "/etc/kubernetes/pki/etcd" and "/var/lib/etcd".
- Conditions: the Pod conditions "Initialized", "Ready", "ContainersReady", and "PodScheduled" are all True.
- Volumes: the Pod defines two HostPath volumes, "etcd-certs" and "etcd-data".
- QoS Class: the Pod's QoS class is "BestEffort" (no resource requests or limits are set).
- Node-Selectors: no node selectors are set.
- Tolerations: the Pod tolerates any taint with the NoExecute effect (empty key), so it is never evicted.
- Events: no recent events.
etcd
--advertise-client-urls=https://192.168.32.118:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://192.168.32.118:2380
--initial-cluster=k8s-master01=https://192.168.32.118:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://192.168.32.118:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://192.168.32.118:2380
--name=k8s-master01
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
These are the etcd flags configured for this cluster member:
- --advertise-client-urls=https://192.168.32.118:2379: the client URL this member advertises, HTTPS on 192.168.32.118 port 2379.
- --cert-file=/etc/kubernetes/pki/etcd/server.crt: path to the etcd server certificate.
- --client-cert-auth=true: require client certificate authentication.
- --data-dir=/var/lib/etcd: directory where etcd stores its data.
- --initial-advertise-peer-urls=https://192.168.32.118:2380: the peer URL this member advertises to the rest of the cluster, HTTPS on port 2380.
- --initial-cluster=k8s-master01=https://192.168.32.118:2380: the initial cluster membership; at bootstrap time it contained only this one member, k8s-master01.
- --key-file=/etc/kubernetes/pki/etcd/server.key: private key for the server certificate.
- --listen-client-urls=https://127.0.0.1:2379,https://192.168.32.118:2379: URLs on which etcd accepts client traffic, on both loopback and the node IP, port 2379.
- --listen-metrics-urls=http://127.0.0.1:2381: plain-HTTP metrics and health listener on loopback port 2381.
- --listen-peer-urls=https://192.168.32.118:2380: URL on which etcd accepts peer traffic, port 2380.
- --name=k8s-master01: this member's name within the cluster.
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt: certificate used for peer-to-peer communication.
- --peer-client-cert-auth=true: require client certificate authentication on peer connections.
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key: private key for the peer certificate.
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt: CA certificate used to verify peers.
- --snapshot-count=10000: the number of committed transactions after which etcd writes a snapshot to disk (not the number of snapshots kept).
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt: CA certificate used to verify client certificates.
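Note that the metrics listener on port 2381 is plain HTTP; this is exactly the endpoint the liveness probe above uses, and you can query it by hand on the node without any client certificate (etcd 3.4 answers with a small JSON document):
# curl -s http://127.0.0.1:2381/health
{"health":"true"}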
# kubectl describe pod etcd-k8s-master02 -n kube-system
Name: etcd-k8s-master02
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: k8s-master02/192.168.32.184
Start Time: Thu, 03 Mar 2022 17:15:27 +0800
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash: d4316f95efc48355acb3f803f3d95a67
kubernetes.io/config.mirror: d4316f95efc48355acb3f803f3d95a67
kubernetes.io/config.seen: 2022-03-03T17:15:27.225519413+08:00
kubernetes.io/config.source: file
Status: Running
IP: 192.168.32.184
IPs:
IP: 192.168.32.184
Controlled By: Node/k8s-master02
Containers:
etcd:
Container ID: docker://45e8486e1c64295b023c4877f8964dcce6e7e858dfccc825094d6223316ad14c
Image: harbor.example.com/k8s/etcd:3.4.3-0
Image ID: docker-pullable://harbor.example.com/k8s/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://192.168.32.184:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://192.168.32.184:2380
--initial-cluster=k8s-master01=https://192.168.32.118:2380,k8s-master02=https://192.168.32.184:2380
--initial-cluster-state=existing
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://192.168.32.184:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://192.168.32.184:2380
--name=k8s-master02
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Sat, 05 Mar 2022 10:46:30 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 04 Mar 2022 20:34:35 +0800
Finished: Sat, 05 Mar 2022 10:45:57 +0800
Ready: True
Restart Count: 2
Liveness: http-get http://127.0.0.1:2381/health delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
# kubectl describe pod etcd-k8s-master03 -n kube-system
Name: etcd-k8s-master03
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: k8s-master03/192.168.32.120
Start Time: Thu, 03 Mar 2022 17:16:16 +0800
Labels: component=etcd
tier=control-plane
Annotations: kubernetes.io/config.hash: 2212c0fbea168e53916ce8b43d634ef2
kubernetes.io/config.mirror: 2212c0fbea168e53916ce8b43d634ef2
kubernetes.io/config.seen: 2022-03-03T17:16:16.429580794+08:00
kubernetes.io/config.source: file
Status: Running
IP: 192.168.32.120
IPs:
IP: 192.168.32.120
Controlled By: Node/k8s-master03
Containers:
etcd:
Container ID: docker://2499992d276162b5964f8c23cb84f070a61b5e27fd4b0d398aa72ec51c6a3f33
Image: harbor.example.com/k8s/etcd:3.4.3-0
Image ID: docker-pullable://harbor.example.com/k8s/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216
Port: <none>
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://192.168.32.120:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://192.168.32.120:2380
--initial-cluster=k8s-master01=https://192.168.32.118:2380,k8s-master03=https://192.168.32.120:2380,k8s-master02=https://192.168.32.184:2380
--initial-cluster-state=existing
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://192.168.32.120:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://192.168.32.120:2380
--name=k8s-master03
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
Started: Sat, 05 Mar 2022 10:45:46 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 04 Mar 2022 20:34:08 +0800
Finished: Sat, 05 Mar 2022 10:45:08 +0800
Ready: True
Restart Count: 2
Liveness: http-get http://127.0.0.1:2381/health delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/pki/etcd from etcd-certs (rw)
/var/lib/etcd from etcd-data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki/etcd
HostPathType: DirectoryOrCreate
etcd-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/etcd
HostPathType: DirectoryOrCreate
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
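Comparing the three manifests, the members differ only in their advertised IPs, their --name, and their --initial-cluster/--initial-cluster-state flags: master02 and master03 carry --initial-cluster-state=existing because kubeadm joined them one at a time to an already-running cluster. A quick way to inspect the per-member values on any master (assuming the default kubeadm manifest path):
# grep -E 'initial-cluster|--name=' /etc/kubernetes/manifests/etcd.yaml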
[root@k8s-master01 ~]# ll /etc/kubernetes/pki/
total 60
-rw-r--r--. 1 root root 1233 Mar 3 2022 apiserver.crt
-rw-r--r--. 1 root root 1090 Mar 3 2022 apiserver-etcd-client.crt
-rw-------. 1 root root 1675 Mar 3 2022 apiserver-etcd-client.key
-rw-------. 1 root root 1679 Mar 3 2022 apiserver.key
-rw-r--r--. 1 root root 1099 Mar 3 2022 apiserver-kubelet-client.crt
-rw-------. 1 root root 1679 Mar 3 2022 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1025 Mar 3 2022 ca.crt
-rw-------. 1 root root 1679 Mar 3 2022 ca.key
drwxr-xr-x. 2 root root 4096 Mar 3 2022 etcd
-rw-r--r--. 1 root root 1038 Mar 3 2022 front-proxy-ca.crt
-rw-------. 1 root root 1675 Mar 3 2022 front-proxy-ca.key
-rw-r--r--. 1 root root 1058 Mar 3 2022 front-proxy-client.crt
-rw-------. 1 root root 1675 Mar 3 2022 front-proxy-client.key
-rw-------. 1 root root 1675 Mar 3 2022 sa.key
-rw-------. 1 root root 451 Mar 3 2022 sa.pub
[root@k8s-master01 ~]# ll /etc/kubernetes/pki/etcd/
total 32
-rw-r--r--. 1 root root 1017 Mar 3 2022 ca.crt
-rw-------. 1 root root 1679 Mar 3 2022 ca.key
-rw-r--r--. 1 root root 1094 Mar 3 2022 healthcheck-client.crt
-rw-------. 1 root root 1675 Mar 3 2022 healthcheck-client.key
-rw-r--r--. 1 root root 1139 Mar 3 2022 peer.crt
-rw-------. 1 root root 1675 Mar 3 2022 peer.key
-rw-r--r--. 1 root root 1139 Mar 3 2022 server.crt
-rw-------. 1 root root 1679 Mar 3 2022 server.key
# wget https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
# tar -zxf etcd-v3.4.3-linux-amd64.tar.gz
# cp etcd-v3.4.3-linux-amd64/etcdctl /usr/local/bin
# chmod a+x /usr/local/bin/etcdctl
# rm -rf etcd-v3.4.3-linux-amd64
# alias etcdctl="ETCDCTL_API=3 /usr/local/bin/etcdctl \
--endpoints=192.168.32.118:2379,192.168.32.184:2379,192.168.32.120:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key=/etc/kubernetes/pki/etcd/healthcheck-client.key"
# etcdctl member list
6e09d29c207ffc94, started, k8s-master01, https://192.168.32.118:2380, https://192.168.32.118:2379, false
711b911203cfba91, started, k8s-master03, https://192.168.32.120:2380, https://192.168.32.120:2379, false
b04d0e2fbb23fcb9, started, k8s-master02, https://192.168.32.184:2380, https://192.168.32.184:2379, false
# etcdctl member list -w table
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
|        ID        | STATUS  |     NAME     |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
| 6e09d29c207ffc94 | started | k8s-master01 | https://192.168.32.118:2380 | https://192.168.32.118:2379 |      false |
| 711b911203cfba91 | started | k8s-master03 | https://192.168.32.120:2380 | https://192.168.32.120:2379 |      false |
| b04d0e2fbb23fcb9 | started | k8s-master02 | https://192.168.32.184:2380 | https://192.168.32.184:2379 |      false |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
# etcdctl endpoint status -w table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.32.118:2379 | 6e09d29c207ffc94 |   3.4.3 |   75 MB |     false |      false |        31 |  275288366 |          275288366 |        |
| 192.168.32.184:2379 | b04d0e2fbb23fcb9 |   3.4.3 |   75 MB |     false |      false |        31 |  275288366 |          275288366 |        |
| 192.168.32.120:2379 | 711b911203cfba91 |   3.4.3 |   75 MB |      true |      false |        31 |  275288366 |          275288366 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@master01 ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://192.168.32.118:2379 member list
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
|        ID        | STATUS  |     NAME     |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
| 6e09d29c207ffc94 | started | k8s-master01 | https://192.168.32.118:2380 | https://192.168.32.118:2379 |      false |
| 711b911203cfba91 | started | k8s-master03 | https://192.168.32.120:2380 | https://192.168.32.120:2379 |      false |
| b04d0e2fbb23fcb9 | started | k8s-master02 | https://192.168.32.184:2380 | https://192.168.32.184:2379 |      false |
+------------------+---------+--------------+-----------------------------+-----------------------------+------------+
[root@master01 ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://192.168.32.118:2379 endpoint status
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         192.168.32.118:2379 | 6e09d29c207ffc94 |   3.4.3 |   75 MB |     false |      false |        31 |  275791713 |          275791713 |        |
|         192.168.32.184:2379 | b04d0e2fbb23fcb9 |   3.4.3 |   75 MB |     false |      false |        31 |  275791713 |          275791713 |        |
|         192.168.32.120:2379 | 711b911203cfba91 |   3.4.3 |   75 MB |      true |      false |        31 |  275791713 |          275791713 |        |
| https://192.168.32.118:2379 | 6e09d29c207ffc94 |   3.4.3 |   75 MB |     false |      false |        31 |  275791713 |          275791713 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
From the etcdctl endpoint status output we can see that 192.168.32.120:2379 (IS LEADER = true) is the current leader of the etcd cluster.
Which etcdctl commands are commonly used to inspect an etcd cluster?
etcdctl is the command-line tool that ships with etcd for managing and operating a cluster. Some commonly used commands (a usage sketch follows the list):
- Check cluster status: etcdctl endpoint status. Prints per-endpoint status, including member ID, version, DB size, and the leader/learner flags.
- List cluster members: etcdctl member list. Prints every member's ID, name, peer and client addresses, and status.
- Read a key: etcdctl get <key>. Prints the value of the given key.
- List all keys under a prefix: etcdctl get <prefix> --prefix. etcd v3 has no directories; this prints every key/value pair whose key starts with the prefix.
- Write a key: etcdctl put <key> <value>. Sets the key to the given value.
- Delete a key: etcdctl del <key>. Removes the key.
- Watch a key: etcdctl watch <key>. Blocks and prints change events whenever the key is modified.
- Find the current leader: etcdctl endpoint status --write-out=table. The IS LEADER column identifies the leader.
These are only the most common commands; many others exist for managing and operating an etcd cluster. Run etcdctl help for the full list.
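A quick usage sketch against the cluster above (assuming the etcdctl alias with TLS flags defined earlier is still active; the /demo keys are hypothetical):
# etcdctl put /demo/hello world
OK
# etcdctl get /demo/hello
/demo/hello
world
# etcdctl get /demo --prefix
/demo/hello
world
# etcdctl del /demo/hello
1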
V. References
Kubernetes miscellany: static Pods
https://blog.csdn.net/wzj_110/article/details/109036049
How to find which etcd node is the leader?
https://www.orchome.com/10139
Etcd high-availability clusters and performance tuning
https://blog.csdn.net/qq_34556414/article/details/125659853
Kubernetes control-plane components: etcd
https://www.cnblogs.com/liconglong/p/16842924.html