Deploying a Kubernetes Cluster with kubeadm (Three Masters, Two Workers)

1. Introduction to kubeadm

kubeadm is the official community tool for quickly deploying Kubernetes clusters. It brings up a cluster with two commands:

#1. Create the master node
kubeadm init
#2. Join worker nodes to the current cluster
kubeadm join [master IP and port]

1.1. Installation Requirements

  1. One or more machines running CentOS 7.x x86_64
  2. Hardware: 2+ CPU cores, 4 GB+ RAM, 30 GB+ disk
  3. Full network connectivity between all nodes
  4. Outbound internet access
  5. Swap disabled
  6. Preflight check failures can be skipped with "--ignore-preflight-errors=...", but at least 2 CPU cores are recommended for a k8s deployment
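The requirements above can be sanity-checked with a short script before running kubeadm. This is a minimal sketch; the `preflight_report` helper name and its report-only behavior are illustrative, not part of kubeadm's own preflight checks:

```shell
# Minimal preflight sketch: report CPU count, total memory, and swap status
# against the requirements above (2+ CPUs, 4 GB+ RAM, swap disabled).
preflight_report() {
  local cpus mem_kb swap_kb
  cpus=$(nproc)
  mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
  echo "cpus=${cpus} mem_kb=${mem_kb} swap_kb=${swap_kb}"
  [ "${cpus}" -ge 2 ]         || echo "WARN: fewer than 2 CPUs"
  [ "${mem_kb}" -ge 3900000 ] || echo "WARN: less than ~4 GB RAM"
  [ "${swap_kb}" -eq 0 ]      || echo "WARN: swap is enabled"
}
preflight_report
```

It only warns; kubeadm's real preflight will still enforce its own checks at init time.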

1.2. Installation Goals

  • Install Docker, kubeadm, kubelet, and kubectl on all nodes
  • Deploy the Kubernetes masters
  • Deploy a container network plugin
  • Deploy the Kubernetes worker nodes and join them to the cluster
  • Deploy the Dashboard web UI to view Kubernetes resources visually

1.3. Cluster Plan

Role          IP            Components
VIP           172.30.2.100  virtual IP used for kubeadm master initialization
k8s-master1   172.30.2.101  docker, kubeadm, kubelet, kubectl
k8s-master2   172.30.2.102  docker, kubeadm, kubelet, kubectl
k8s-master3   172.30.2.103  docker, kubeadm, kubelet, kubectl
k8s-worker1   172.30.2.201  docker, kubeadm, kubelet, kubectl
k8s-worker2   172.30.2.202  docker, kubeadm, kubelet, kubectl

2. Kubernetes Cluster Installation

2.1. OS Preparation on All Nodes

#1. Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
#2. Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
#3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
# or
sed -ri '/.*swap.*/d' /etc/fstab
#4. DNS settings (adjust to your environment)
cat >> /etc/resolv.conf << EOF
nameserver 114.114.114.114
nameserver 8.8.8.8
EOF
#5. Pass bridged IPv4 traffic to the iptables chains
# First confirm the br_netfilter module is loaded:
lsmod | grep br_netfilter
modprobe br_netfilter
# Then run:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF
sysctl --system  # apply
#6. Time synchronization
yum install -y ntpdate wget
ntpdate time.windows.com

2.2. Steps on the Master and Worker Nodes

#1. Set the hostname on every node
hostnamectl set-hostname <hostname>
#2. Add hosts entries on the master nodes
cat >> /etc/hosts << EOF
172.30.2.101 k8s-master1
172.30.2.102 k8s-master2
172.30.2.103 k8s-master3
172.30.2.201 k8s-worker1
172.30.2.202 k8s-worker2
EOF
#3. Create an LVM volume on the worker nodes
pvcreate /dev/sdb
vgcreate vg_node /dev/sdb
lvcreate -n lv_node -l 100%FREE vg_node
mkfs.xfs /dev/vg_node/lv_node
mount /dev/mapper/vg_node-lv_node /opt
sed -i '$a /dev/mapper/vg_node-lv_node /opt     xfs  defaults        0 0' /etc/fstab

2.3. Install docker/kubeadm/kubelet/kubectl on All Nodes

This deployment installs the default package versions available from yum.

Kubernetes needs a container runtime behind its container runtime interface; this example uses Docker.

Container runtime installation reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/

#1. Configure the docker and kubernetes repositories
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast  # refresh the yum cache
#2. Install docker-ce, kubeadm, kubelet, and kubectl
# kubeadm installs the latest version; the latest Kubernetes release has validated Docker versions up to 19.03
yum list docker-ce --showduplicates | sort -r  # list available docker-ce 19.03 versions
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11 kubelet kubeadm kubectl kubernetes-cni
# Create the /etc/docker directory
sudo mkdir -p /etc/docker
#3. Configure the registry mirror and set the cgroup driver to systemd
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["172.30.2.254"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# "insecure-registries": ["172.30.2.254"] allows access to a non-HTTPS registry
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo systemctl enable docker
# Confirm the cgroup driver is systemd
docker info | grep "Cgroup Driver"
# Cgroup Driver: systemd
#4. On the worker nodes, relocate Docker's image and container storage
# /opt/data/ is on the large-capacity disk mounted earlier
docker info | grep "Docker Root Dir"
systemctl stop docker
mkdir -p /opt/data
mv /var/lib/docker /opt/data/
ln -s /opt/data/docker /var/lib/docker
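Before restarting Docker it is worth confirming the symlink really resolves to the relocated directory. A small sketch; the `verify_link` helper name is illustrative:

```shell
# Check that a path is a symlink resolving to the expected target directory.
verify_link() {
  local link="$1" target="$2"
  [ -L "$link" ] && [ "$(readlink -f "$link")" = "$(readlink -f "$target")" ]
}
# e.g. verify_link /var/lib/docker /opt/data/docker && echo "relocated"
```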
#5. Restart docker and enable kubelet at boot
systemctl restart docker && systemctl enable --now kubelet
#6. Check versions
docker --version
# Docker version 19.03.11, build 42e35e61f3
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

2.4. Build High Availability Across the Master Nodes

Master high availability is essentially a reverse proxy in front of all the kube-apiservers; you could use a cloud SLB or a dedicated proxy VM. In this example, nginx (stream upstream) plus keepalived is deployed on every master node to proxy kube-apiserver.

2.4.1. Enable IPVS for kube-proxy

# IPVS stands for IP Virtual Server
#1. Run the following on all master nodes
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
#2. Confirm the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack_ipv4 should all be listed

2.4.2. Deploy nginx and keepalived

#1. Install nginx and keepalived on all master nodes
yum -y install nginx keepalived
systemctl start keepalived && systemctl enable keepalived
systemctl start nginx && systemctl enable nginx

2.4.3. Configure the Nginx upstream Reverse Proxy

#1. Configure nginx.conf on all master nodes
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}
stream {
    log_format proxy '\$remote_addr \$remote_port - [\$time_local] \$status \$protocol '
                     '"\$upstream_addr" "\$upstream_bytes_sent" "\$upstream_connect_time"';
    access_log /var/log/nginx/nginx-proxy.log proxy;
    # set to the master IP addresses
    upstream kubernetes_lb {
        server 172.30.2.101:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.102:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.103:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}
EOF
# On the other master nodes:
scp -r 172.30.2.101:/etc/nginx/nginx.conf /etc/nginx/
#2. Check the nginx configuration syntax, then reload nginx
nginx -t
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful
nginx -s reload

2.4.4. keepalived Configuration

#1. Configure keepalived.conf on all master nodes
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from root@k8s.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1        # router_id must differ on each machine
}

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"    ## path of the nginx health-check script
    interval 2                                 ## check interval in seconds
    weight -20                                 ## subtract 20 from priority when the check fails
}

vrrp_instance VI_1 {
    state MASTER           # set to BACKUP on the other nodes
    interface ens32        # network interface name; change to match your NIC
    virtual_router_id 88
    advert_int 1
    priority 110           # set to 109 and 108 on the other nodes
    authentication {
        auth_type PASS
        auth_pass 1234abcd
    }

    track_script {
        chk_nginx          # run the nginx health check
    }

    virtual_ipaddress {
        172.30.2.100/22    # the virtual IP address
    }
}
EOF
# Notes:
#1> Change ens32 in "interface ens32" to the node's actual interface name
#2> Set router_id on the three nodes to LVS_1, LVS_2, LVS_3
#3> Set state on the three nodes to MASTER, BACKUP, BACKUP
#4> Set priority on the three nodes to 110, 109, 108
#2. Create the nginx_check.sh script
cat > /etc/keepalived/nginx_check.sh <<EOF
#!/bin/bash
export LANG="en_US.UTF-8"
if [ ! -f "/run/nginx.pid" ]; then
    /usr/bin/systemctl restart nginx
    sleep 2
    if [ ! -f "/run/nginx.pid" ]; then
       /bin/kill -9 \$(head -n 1 /var/run/keepalived.pid)
    fi
fi
EOF
chmod a+x /etc/keepalived/nginx_check.sh
# On the other master nodes:
scp 172.30.2.101:/etc/keepalived/keepalived.conf /etc/keepalived/
scp 172.30.2.101:/etc/keepalived/nginx_check.sh /etc/keepalived/
#3. Restart keepalived on all master nodes
systemctl restart keepalived
# Watch the logs
journalctl -f -u keepalived
#4. From any node on the same network, verify the VIP answers
ping 172.30.2.100
#5. From any node on the same network, verify port 7443 on the VIP is reachable
ssh -v -p 7443 172.30.2.100
# This line in the output means the port is reachable:
# debug1: Connection established.
The HA VIP is now in place; next comes initialization of the masters.
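As an alternative to ssh -v, a plain TCP probe can confirm the VIP's port from any node. A minimal sketch using bash's built-in /dev/tcp; the `check_endpoint` helper name is illustrative, and 172.30.2.100:7443 is this cluster's VIP endpoint:

```shell
# Probe a TCP endpoint; prints "reachable" or "unreachable".
check_endpoint() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
# e.g. check_endpoint 172.30.2.100 7443
```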

2.5. kubeadm Initialization on the master1 Node

2.5.1. Generate the kubeadm-init.yaml File

#1. On master1, generate the default config
kubeadm config print init-defaults > kubeadm-init.yaml
#2. Edit kubeadm-init.yaml
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.30.2.101      # this node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                   # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "172.30.2.100:7443"   # the cluster endpoint: the VIP and its port
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # k8s.gcr.io is unreachable here; use a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0                  # set to the actual kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                  # add the pod network
scheduler: {}
---                                         # add the kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
#3. Pull the images first
kubeadm config images pull --config kubeadm-init.yaml

Pull the same set of images on the other master nodes with docker pull.

#4. Run kubeadm init
kubeadm init --config kubeadm-init.yaml
# Initialization output:

[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350

#1. After kubeadm init completes, run locally:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
#2. Command to join additional master nodes (used in section 2.6)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane
#3. Command to join worker nodes (used in section 2.8)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
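If the --discovery-token-ca-cert-hash value is ever lost, it can be recomputed from the cluster CA certificate on any master. This is a sketch of the standard openssl recipe; the `ca_cert_hash` wrapper name is illustrative:

```shell
# Compute the --discovery-token-ca-cert-hash value from a CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}
# e.g. ca_cert_hash /etc/kubernetes/pki/ca.crt
```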

2.6. Steps on the Other Two Master Nodes

#1. Copy the certificates to master2 and master3
mkdir -p /etc/kubernetes/pki/etcd
scp -r 172.30.2.101:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp -r 172.30.2.101:/etc/kubernetes/admin.conf /etc/kubernetes/
#2. On master2 and master3, run:
kubeadm join 172.30.2.100:7443 --v=5 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane \
    --ignore-preflight-errors=all
#3. On master1, check pod and svc status; all pods should be Running
kubectl get pod,svc --all-namespaces -o wide
#4. On any master node, verify the deployment
kubectl get node
# Output:
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   28m     v1.20.2
k8s-master2   NotReady   control-plane,master   4m13s   v1.20.2
k8s-master3   NotReady   control-plane,master   30s     v1.20.2
# The nodes stay NotReady until the CNI network plugin is installed
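While waiting for the network plugin, a small helper can count how many nodes are still not Ready from `kubectl get node` output. A sketch; the `not_ready_count` helper name is illustrative:

```shell
# Count nodes whose STATUS column is not "Ready" (skips the header line).
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}
# e.g. kubectl get node | not_ready_count
```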

2.7. Install the CNI Network Plugin

#1. Download the manifest on one of the master nodes
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If the download fails:
yum provides dig  # find the package that provides dig
yum install -y bind-utils
dig @DNSIP raw.githubusercontent.com  # DNSIP = your DNS server's IP
# add the resolved IP to /etc/hosts
#2. Apply the manifest
kubectl apply -f kube-flannel.yml
#3. Test that CoreDNS works
kubectl run -it --rm dns-test --image=busybox:1.28.4 -- sh
/# nslookup kubernetes
/# ping kubernetes
/# nslookup 163.com
/# ping 163.com

#4. Verify the nodes again
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   37m     v1.20.2
k8s-master2   Ready    control-plane,master   13m     v1.20.2
k8s-master3   Ready    control-plane,master   9m59s   v1.20.2

2.8. Join the Worker Nodes

#1. On all worker nodes, run:
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
# Follow the logs with journalctl
journalctl -f -u kubelet
#2. On any master node, verify the workers joined successfully
kubectl get node | grep worker
# Output:
k8s-worker1   NotReady   <none>                 16s   v1.20.2
k8s-worker2   NotReady   <none>                 10s   v1.20.2
# NotReady means the node's kube-flannel and kube-proxy pods have not finished deploying; check with:
kubectl -n kube-system get pods
# Verify again; all nodes should now be Ready
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   44m     v1.20.2
k8s-master2   Ready    control-plane,master   20m     v1.20.2
k8s-master3   Ready    control-plane,master   16m     v1.20.2
k8s-worker1   Ready    <none>                 3m29s   v1.20.2
k8s-worker2   Ready    <none>                 2m56s   v1.20.2

The kubeadm cluster deployment is now complete.

2.9. Troubleshooting

# Problem 1: error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition. To see the stack trace of this error execute with --v=5 or higher
# Fix:
kubeadm reset -f
docker rm -f $(docker ps -a -q )
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

2.10. Removing Worker Nodes

# Remove a worker node
#1. On a master node:
# mark the node unschedulable
kubectl cordon k8s-worker1
# evict the pods running on the node (drain also leaves the node cordoned)
kubectl drain k8s-worker1 --ignore-daemonsets
# delete the worker node
kubectl delete node k8s-worker1

#2. On the k8s-worker1 node:
kubeadm reset -f
docker rm -f $(docker ps -a -q )
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
ip link delete cni0
ip link delete flannel.1

3. Deploying Applications on Kubernetes

3.1. Deploy the Dashboard and Verify the Cluster

#1. Download the Dashboard yaml file
# project page: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
#2. By default the Dashboard is only reachable inside the cluster; change the Service to NodePort to expose it externally:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard

---
...

kubectl apply -f recommended.yaml
#3. Verify
kubectl -n kubernetes-dashboard get pod,svc
# the pod in Running state means the deployment succeeded
#4. Access in a browser via any worker node IP
https://NodeIP:30001
#5. Create a service account and bind it to the default cluster-admin role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
#6. Log in to the Dashboard with the token printed above
https://NodeIP:30001
# the UI language can be changed in Settings

3.2. Using etcd v3.4.13

#1. Download the etcd release on the master1 node
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
#2. Unpack etcd-v3.4.13-linux-amd64.tar.gz
tar -xzf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/bin/
#3. Using etcdctl
# --- check cluster health
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints="https://172.30.2.101:2379,https://172.30.2.102:2379,https://172.30.2.103:2379" endpoint health
# set the etcdctl environment variables (note: append to ~/.bashrc, do not overwrite it)
cat <<EOF | sudo tee -a ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
EOF
source ~/.bashrc
#1> show cluster status as a table
etcdctl --endpoints="https://172.30.2.101:2379" -w table endpoint --cluster status
#2> list all keys
etcdctl --endpoints="https://172.30.2.101:2379" --keys-only=true get --from-key ''
# or
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /
#3> list keys with a given prefix
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /registry/pods/

#4> print a specific key's value as JSON
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=false -w json get /registry/pods/kube-system/etcd-k8s-master1
# more etcdctl commands: https://github.com/etcd-io/etcd/tree/master/etcdctl

4. Kubernetes Client Tooling

4.1. Deploy the kubectl Tool on Windows

#1. Download the Windows kubectl binary (here on a master node)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/windows/amd64/kubectl.exe
# place the downloaded kubectl.exe in d:\kubectlv1.20.2 on the Windows machine
# latest stable version: https://storage.googleapis.com/kubernetes-release/release/stable.txt
#2. Create the .kube directory
# on Windows, open cmd and go to your user profile directory
cd C:\Users\<username>
md .kube
# copy $HOME/.kube/config from a master node into the Windows .kube directory
#3. Add d:\kubectlv1.20.2 to the Windows PATH environment variable
#4. Make sure the Windows machine is on the same network as the k8s cluster, then in cmd run:
kubectl get pod,svc --all-namespaces

References:
https://luyanan.com/article/info/19821386744192 (adding new master and worker nodes)

https://blog.csdn.net/liuyunshengsir/article/details/105149866 (scaling worker nodes up and down)

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/ (container runtimes)

https://www.cnblogs.com/abner123/p/13324014.html

https://www.jianshu.com/p/dacfda2289eb?utm_campaign=haruki&utm_content=note&utm_medium=reader_share&utm_source=weixin
