08-Kubernetes cluster upgrade and adding master and node nodes

I. Upgrading the Kubernetes cluster
1. Upgrading the master nodes
1.1 Download the binaries for the target version
GitHub download page for the Kubernetes binaries
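A minimal sketch of the download step (assuming the v1.23.5 server tarball for linux/amd64; this URL pattern is what the GitHub release notes link to):

cd ~ && mkdir -p kubernetes_1.23.5 && cd kubernetes_1.23.5
wget https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
# the binaries land in kubernetes/server/bin, which matches the prompts below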

1.2 Take the master being upgraded out of the load balancer first. On each node, edit /etc/kube-lb/conf/kube-lb.conf and comment out the upstream entry for that master:

root@k8s-node2:/etc/kube-lb/conf# cat /etc/kube-lb/conf/kube-lb.conf
user root;
worker_processes 1;

error_log  /etc/kube-lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream backend {
        server 172.31.7.101:6443    max_fails=2 fail_timeout=3s;
        #server 172.31.7.102:6443    max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
#systemctl restart kube-lb
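After editing the file, restart kube-lb and make sure the node still reaches the API server through the remaining backend (a quick sanity check, not part of the original transcript):

systemctl restart kube-lb
ss -tnp | grep 6443    # established connections should now target 172.31.7.101 only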

1.3 Stop the services on the master being upgraded

root@k8s-master1-etcd1:~/kubernetes_1.23.5/kubernetes/server/bin# systemctl stop  kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet

1.4 Copy the master components (kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet and kubectl) from the downloaded binaries to /usr/local/bin

cp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl /usr/local/bin
# restart the services
systemctl start kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
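A hedged check that all components came back up on the new version:

systemctl is-active kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet
kube-apiserver --version    # should print Kubernetes v1.23.5
kubelet --version           # should print Kubernetes v1.23.5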

Then adjust the load balancer configuration again and upgrade the other master node in the same way.
Finally, verify that both masters are now at v1.23.5:

root@k8s-master1-etcd1:~# kubectl get nodes
NAME           STATUS                     ROLES    AGE   VERSION
172.31.7.101   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.102   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.111   Ready                      node     30h   v1.23.1
172.31.7.112   Ready                      node     30h   v1.23.1

2. Upgrading the node nodes
(1) First drain all pods from node1. The first attempt fails because of DaemonSet-managed pods, pods with local storage, and a pod not managed by any controller, so the command is retried with --ignore-daemonsets, --delete-emptydir-data and --force:

root@k8s-master1-etcd1:~# kubectl drain  172.31.7.111
node/172.31.7.111 cordoned
error: unable to drain node "172.31.7.111" due to error:[cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-wlpl9, cannot delete Pods with local storage (use --delete-emptydir-data to override): kubernetes-dashboard/dashboard-metrics-scraper-799d786dbf-l52xb, velero-system/velero-6755cb8697-l2x99, cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/net-test1], continuing command...
There are pending nodes to be drained:
 172.31.7.111
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-wlpl9
cannot delete Pods with local storage (use --delete-emptydir-data to override): kubernetes-dashboard/dashboard-metrics-scraper-799d786dbf-l52xb, velero-system/velero-6755cb8697-l2x99
cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/net-test1
root@k8s-master1-etcd1:~# kubectl drain  172.31.7.111 --ignore-daemonsets --delete-emptydir-data --force
node/172.31.7.111 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/net-test1; ignoring DaemonSet-managed Pods: kube-system/calico-node-wlpl9
evicting pod velero-system/velero-6755cb8697-l2x99
evicting pod kube-system/coredns-79688b6cb4-kqpgs
evicting pod default/net-test1
evicting pod kubernetes-dashboard/dashboard-metrics-scraper-799d786dbf-l52xb
pod/velero-6755cb8697-l2x99 evicted
pod/dashboard-metrics-scraper-799d786dbf-l52xb evicted
pod/coredns-79688b6cb4-kqpgs evicted
pod/net-test1 evicted
node/172.31.7.111 drained
root@k8s-master1-etcd1:~# kubectl get nodes
NAME           STATUS                     ROLES    AGE   VERSION
172.31.7.101   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.102   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.111   Ready,SchedulingDisabled   node     30h   v1.23.1
172.31.7.112   Ready                      node     30h   v1.23.1

(2) Stop the kubelet and kube-proxy services on node1

root@k8s-node1:/etc/kube-lb/conf# systemctl stop kubelet kube-proxy.service

(3) Copy the new kubelet and kube-proxy binaries to /usr/local/bin on node1

root@k8s-master1-etcd1:~/kubernetes_1.23.5/kubernetes/server/bin# scp kubelet kube-proxy 172.31.7.111:/usr/local/bin
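A quick check that node1 now has the new binaries (an assumed command, run from the master over the same SSH trust used for scp):

ssh 172.31.7.111 'kubelet --version && kube-proxy --version'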

(4) Start the kubelet and kube-proxy services on node1

root@k8s-node1:/etc/kube-lb/conf# systemctl start kubelet kube-proxy.service
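Confirm both services are active before putting the node back into scheduling (a minimal check):

systemctl is-active kubelet kube-proxy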

(5) Uncordon the node so it can be scheduled again

root@k8s-master1-etcd1:~/kubernetes_1.23.5/kubernetes/server/bin# kubectl uncordon 172.31.7.111
node/172.31.7.111 uncordoned

Upgrade node2 with the same procedure.
Verify:

root@k8s-master1-etcd1:~/kubernetes_1.23.5/kubernetes/server/bin# kubectl get nodes
NAME           STATUS                     ROLES    AGE   VERSION
172.31.7.101   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.102   Ready,SchedulingDisabled   master   30h   v1.23.5
172.31.7.111   Ready                      node     30h   v1.23.5
172.31.7.112   Ready                      node     30h   v1.23.5

3. Adding master and node nodes with the kubeasz project
(1) Copy the required binaries to /etc/kubeasz/bin, and set up passwordless SSH to the nodes being added (see the sketch after the cp command). The leading backslash in \cp bypasses any cp -i alias so existing files are overwritten without prompting.

\cp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl /etc/kubeasz/bin/
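A minimal sketch of the passwordless-SSH step (assuming password login for root is still enabled on the new hosts; 172.31.7.103 and 172.31.7.113 are the master and node added below):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # skip if the key already exists
ssh-copy-id root@172.31.7.103
ssh-copy-id root@172.31.7.113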

(2) Add a master

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl --help
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
    list                             to list all of the managed clusters
    checkout    <cluster>            to switch default kubeconfig of the cluster
    new         <cluster>            to start a new k8s deploy with name 'cluster'
    setup       <cluster>  <step>    to setup a cluster, also supporting a step-by-step way
    start       <cluster>            to start all of the k8s services stopped by 'ezctl stop'
    stop        <cluster>            to stop all of the k8s services temporarily
    upgrade     <cluster>            to upgrade the k8s cluster
    destroy     <cluster>            to destroy the k8s cluster
    backup      <cluster>            to backup the cluster state (etcd snapshot)
    restore     <cluster>            to restore the cluster state from backups
    start-aio                        to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd    <cluster>  <ip>      to add a etcd-node to the etcd cluster
    add-master  <cluster>  <ip>      to add a master node to the k8s cluster
    add-node    <cluster>  <ip>      to add a work node to the k8s cluster
    del-etcd    <cluster>  <ip>      to delete a etcd-node from the etcd cluster
    del-master  <cluster>  <ip>      to delete a master node from the k8s cluster
    del-node    <cluster>  <ip>      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm    <cluster>  <args>    to manage client kubeconfig of the k8s cluster

Use "ezctl help <command>" for more information about a given command.
root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl add-master k8s-cluster-01 172.31.7.103
2022-04-24 23:16:50 INFO add 172.31.7.103 into 'kube_master' group
2022-04-24 23:16:50 INFO start to add a master node:172.31.7.103 into cluster:k8s-cluster-01
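Once the playbook finishes, the new master should register itself; a quick check (the full node listing is shown at the end of this section):

kubectl get nodes | grep 172.31.7.103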

(3) Add a node

root@k8s-master1-etcd1:/etc/kubeasz# ./ezctl add-node k8s-cluster-01 172.31.7.113
2022-04-24 23:37:22 INFO add 172.31.7.113 into 'kube_node' group
2022-04-24 23:37:22 INFO start to add a work node:172.31.7.113 into cluster:k8s-cluster-01
# verify
root@k8s-master1-etcd1:/etc/kubeasz# kubectl get nodes
NAME           STATUS                     ROLES    AGE     VERSION
172.31.7.101   Ready,SchedulingDisabled   master   31h     v1.23.5
172.31.7.102   Ready,SchedulingDisabled   master   31h     v1.23.5
172.31.7.103   Ready,SchedulingDisabled   master   21m     v1.23.5
172.31.7.111   Ready                      node     31h     v1.23.5
172.31.7.112   Ready                      node     31h     v1.23.5
172.31.7.113   Ready                      node     2m24s   v1.23.5

