K8s: connection to the server apiserver.cluster.local:6443 was refused

[root@k8s-master-node1 ~]# kubectl get nodes

The connection to the server apiserver.cluster.local:6443 was refused - did you specify the right host or port?
Error 1: the connection to apiserver.cluster.local:6443 was refused
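A "connection refused" error means nothing is listening on the port, which points at the apiserver process itself rather than DNS or routing. A minimal sketch of such a reachability probe, using only the standard library (the host and port mirror the error message above; adjust for your cluster):

```python
# Probe TCP reachability of the apiserver endpoint.
# A refused connection returns False quickly, matching the kubectl error.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and DNS failures
        return False

print(port_open("apiserver.cluster.local", 6443))
```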

Check the service status

[root@k8s-master-node1 ~]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 11-cgroup.conf
Active: active (running) since Tue 2023-10-17 15:48:47 CST; 8min ago

[root@k8s-master-node1 ~]# systemctl status docker

● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2023-10-17 15:42:29 CST; 15min ago
Docs: https://docs.docker.com

Errors in the logs

[root@k8s-master-node1 ~]# tail -f /var/log/messages

Oct 17 15:56:32 k8s-master-node1 kubelet: E1017 15:56:32.843485 9392 pod_workers.go:747] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-master-node1_kube-system(834e34c21cf001993a4bf8da41841c6c)"" pod="kube-system/kube-apiserver-k8s-master-node1" podUID=834e34c21cf001993a4bf8da41841c6c
Oct 17 15:56:32 k8s-master-node1 kubelet: E1017 15:56:32.941662 9392 kubelet.go:2407] "Error getting node" err="node "k8s-master-node1" not found"
Oct 17 15:56:33 k8s-master-node1 kubelet: E1017 15:56:33.042258 9392 kubelet.go:2407] "Error getting node" err="node "k8s-master-node1" not found"
Oct 17 15:56:33 k8s-master-node1 kubelet: E1017 15:56:33.142402 9392 kubelet.go:2407] "Error getting node" err="node "k8s-master-node1" not found"
Oct 17 15:56:33 k8s-master-node1 kubelet: E1017 15:56:33.243163 9392 kubelet.go:2407] "Error getting node" err="node "k8s-master-node1" not found"
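The repeated "Error getting node" lines are a downstream symptom: the kubelet cannot look itself up because the apiserver container is crash-looping. The CrashLoopBackOff line itself names the failing container and the current back-off interval; a small sketch that pulls those fields out (a shortened sample of the line logged above is inlined so it runs standalone):

```python
# Extract the container name and back-off interval from a kubelet
# CrashLoopBackOff log message.
import re

sample = ('E1017 15:56:32.843485 9392 pod_workers.go:747] "Error syncing pod, '
          'skipping" err="failed to StartContainer for kube-apiserver with '
          'CrashLoopBackOff: back-off 5m0s restarting failed '
          'container=kube-apiserver pod=kube-apiserver-k8s-master-node1_kube-system"')

m = re.search(r'back-off (\S+) restarting failed container=(\S+)', sample)
if m:
    print(f"container={m.group(2)} backoff={m.group(1)}")
# container=kube-apiserver backoff=5m0s
```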

Fix 1:
The kube-apiserver.yaml file had been modified. The original file content is as follows:

[root@k8s-master-node1 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.60.250:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.60.250
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --event-ttl=720h
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/16
    - --service-node-port-range=1024-65535
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.22.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.60.250
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.60.250
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.60.250
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/localtime
      name: localtime
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/localtime
      type: File
    name: localtime
status: {}
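Since the root cause here was an edited static pod manifest, a quick consistency check can catch this class of mistake: the --advertise-address flag and every probe host in the manifest should agree. A sketch using only line matching, with no YAML parser required (manifest_consistent is a name invented here for illustration):

```python
# Check that --advertise-address and all probe "host:" fields in a
# kube-apiserver static pod manifest point at the same address.
import re

def manifest_consistent(text: str) -> bool:
    adv = re.search(r'--advertise-address=(\S+)', text)
    hosts = re.findall(r'host:\s*(\S+)', text)
    return bool(adv) and bool(hosts) and all(h == adv.group(1) for h in hosts)

good = """
    - --advertise-address=192.168.60.250
      livenessProbe:
        httpGet:
          host: 192.168.60.250
"""
bad = good.replace("host: 192.168.60.250", "host: 192.168.60.251")
print(manifest_consistent(good), manifest_consistent(bad))  # True False
```

Run against the real file with `manifest_consistent(open("/etc/kubernetes/manifests/kube-apiserver.yaml").read())`; a False result means the probes are targeting an address the apiserver never binds, which produces exactly the CrashLoopBackOff seen above.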

The logs return to normal

[root@k8s-master-node1 ~]# tail -f /var/log/messages

Oct 17 16:10:38 k8s-master-node1 docker-compose: harbor-portal | 172.18.0.10 - - [17/Oct/2023:08:10:38 +0000] "GET / HTTP/1.1" 200 1167 "-" "curl/7.78.0"
Oct 17 16:10:38 k8s-master-node1 docker-compose: nginx | 127.0.0.1 - "GET / HTTP/1.1" 200 1167 "-" "curl/7.78.0" 0.000 0.001 .
Oct 17 16:10:42 k8s-master-node1 docker-compose: harbor-portal | 172.18.0.2 - - [17/Oct/2023:08:10:42 +0000] "GET / HTTP/1.1" 200 532 "-" "Go-http-client/1.1"
Oct 17 16:10:42 k8s-master-node1 docker-compose: registry | 172.18.0.2 - - [17/Oct/2023:08:10:42 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1"
Oct 17 16:10:42 k8s-master-node1 docker-compose: registryctl | 172.18.0.2 - - [17/Oct/2023:08:10:42 +0000] "GET /api/health HTTP/1.1" 200 9

Verification

[root@k8s-master-node1 ~]# kubectl get nodes

NAME               STATUS     ROLES                         AGE   VERSION
k8s-master-node1   Ready      control-plane,master,worker   16d   v1.22.1
k8s-worker-node1   NotReady   worker                        16d   v1.22.1
