Installing Kubernetes with kubeadm

1 Planning

10.199.150.95 k8s-master
10.199.140.186 k8s-node
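
So that the two machines can resolve each other by hostname, a common step is to add the planned entries to /etc/hosts on both nodes (a minimal sketch, assuming no DNS records exist for these names):

# Run on both machines; the entries follow the plan above.
cat >> /etc/hosts <<'EOF'
10.199.150.95  k8s-master
10.199.140.186 k8s-node
EOF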

2 Installing the master node

2.1 Configure the Docker and Kubernetes yum repositories

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum install docker-ce kubelet kubeadm kubectl   # installs the three components; start each of them after installation, as sketched below
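
For example, with systemd (a sketch; kubelet will restart in a loop until kubeadm init has run, which is expected at this stage):

systemctl enable --now docker
systemctl enable --now kubelet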

2.2 Initialization

Note: kubeadm init pulls its images from Google's registry (k8s.gcr.io). If you can reach it, initialize directly; otherwise, pull the images from the Aliyun mirror first and retag them with docker tag.
Image download script:

#!/bin/bash
# The image names below omit the "k8s.gcr.io/" prefix; replace the versions
# with the ones reported by `kubeadm config images list`.
images=(
    kube-apiserver:v1.17.0
    kube-controller-manager:v1.17.0
    kube-scheduler:v1.17.0
    kube-proxy:v1.17.0
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io/<image>,
# then remove the mirror tag.
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
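
The versions to substitute into the array can be listed beforehand with:

kubeadm config images list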

Then simply run kubeadm init --apiserver-advertise-address 10.199.150.95 --pod-network-cidr=10.244.0.0/16 to initialize.

Note: if you pin a release during initialization with --kubernetes-version v1.14.0, double-check the number first, because the version passed to kubeadm init must not be lower than the installed Kubernetes (kubelet/kubeadm) version.
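
For example, the locally installed versions can be checked before pinning one:

kubeadm version -o short
kubelet --version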

[root@hz-sb-zrrwt-199-150-95 ~]# kubeadm init --apiserver-advertise-address 10.199.150.95 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
W1231 11:18:50.951755   24547 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1231 11:18:50.952105   24547 version.go:102] falling back to the local client version: v1.17.0
W1231 11:18:50.952626   24547 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 11:18:50.952636   24547 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hz-sb-zrrwt-199-150-95 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.199.150.95]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hz-sb-zrrwt-199-150-95 localhost] and IPs [10.199.150.95 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hz-sb-zrrwt-199-150-95 localhost] and IPs [10.199.150.95 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1231 11:22:02.406479   24547 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1231 11:22:02.407516   24547 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.046149 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hz-sb-zrrwt-199-150-95 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hz-sb-zrrwt-199-150-95 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: s8xdrc.s5shirapkgokv0d9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.199.150.95:6443 --token s8xdrc.s5shirapkgokv0d9 \
    --discovery-token-ca-cert-hash sha256:fb5f66850477f6e680b24f75e5d50c8c9c0fc85193f68750860b08fcdd593a9d
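
Save the join command above. If the token expires later (the default lifetime is 24 hours), a fresh join command can be printed on the master with:

kubeadm token create --print-join-command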

No network plugin has been installed at this point, so the node status is NotReady:

[root@hz-sb-zrrwt-199-150-95 ~]# kubectl get node
NAME                     STATUS     AGE
hz-sb-zrrwt-199-150-95   NotReady   31m
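
The reason shows up in the node's conditions, for example:

kubectl describe node hz-sb-zrrwt-199-150-95 | grep -A 4 Conditions

Until a CNI plugin is deployed, the Ready condition stays False because the network plugin is not ready.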

2.3 Install the flannel network plugin

docker pull quay.io/coreos/flannel:v0.11.0-amd64

Install the network plugin. Download the manifest to the server:
https://github.com/coreos/flannel/blob/701c2e8749714022758d5360fbe627005901349c/Documentation/kube-flannel.yml
Running it with the following command produces an error:
kubectl create -f kube-flannel-legacy.yml
error: error validating "kube-flannel-legacy.yml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
Change rbac.authorization.k8s.io/v1beta1 in the yml file to rbac.authorization.k8s.io/v1 and run it again.
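
The edit can be done in one step with sed (a sketch, assuming the file name used above; keep a backup if unsure):

sed -i 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' kube-flannel-legacy.yml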
[root@hz-sb-zrrwt-199-150-95 ~]# kubectl create -f kube-flannel-legacy.yml --validate=false
podsecuritypolicy "psp.flannel.unprivileged" created
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds-amd64" created
daemonset "kube-flannel-ds-arm64" created
daemonset "kube-flannel-ds-arm" created
daemonset "kube-flannel-ds-ppc64le" created
daemonset "kube-flannel-ds-s390x" created

2.4 Verification

[root@hz-sb-zrrwt-199-150-95 ~]# kubectl get node
NAME                     STATUS    AGE
hz-sb-zrrwt-199-150-95   Ready     1h
[root@hz-sb-zrrwt-199-150-95 ~]# kubectl get pod -n kube-system
NAME                                             READY     STATUS             RESTARTS   AGE
coredns-9d85f5447-ct5c8                          0/1       CrashLoopBackOff   9          1h
coredns-9d85f5447-vhtc5                          0/1       CrashLoopBackOff   9          1h
etcd-hz-sb-zrrwt-199-150-95                      1/1       Running            0          1h
kube-apiserver-hz-sb-zrrwt-199-150-95            1/1       Running            0          1h
kube-controller-manager-hz-sb-zrrwt-199-150-95   1/1       Running            0          1h
kube-flannel-ds-amd64-8l2x6                      1/1       Running            0          19m
kube-proxy-tr9qf                                 1/1       Running            0          1h
kube-scheduler-hz-sb-zrrwt-199-150-95            1/1       Running            0          1h

The DNS pods are failing; inspect one of them for details:

[root@hz-sb-zrrwt-199-150-95 ~]# kubectl describe pod coredns-9d85f5447-ct5c8 -n kube-system
Name:           coredns-9d85f5447-ct5c8
Namespace:      kube-system
Node:           hz-sb-zrrwt-199-150-95/10.199.150.95
Start Time:     Tue, 31 Dec 2019 16:26:53 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=9d85f5447
...
  FirstSeen     LastSeen        Count   From                                    SubObjectPath                   Type            Reason                  Message
  ---------     --------        -----   ----                                    -------------                   --------        ------                  -------
  1h            28m             27      {default-scheduler }                                                    Warning         FailedScheduling        0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  27m           27m             1       {default-scheduler }                                                    Normal          Scheduled               Successfully assigned kube-system/coredns-9d85f5447-ct5c8 to hz-sb-zrrwt-199-150-95
  26m           26m             5       {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Warning         Unhealthy               Liveness probe failed: Get http://172.17.67.2:8080/health: dial tcp 172.17.67.2:8080: connect: no route to host
  26m           26m             1       {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Normal          Killing                 Container coredns failed liveness probe, will be restarted
  27m           26m             2       {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Normal          Pulled                  Container image "registry.aliyuncs.com/google_containers/coredns:1.6.5" already present on machine
  27m           26m             2       {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Normal          Created                 Created container coredns
  27m           26m             2       {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Normal          Started                 Started container coredns
  15m           7m              29      {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Warning         BackOff                 Back-off restarting failed container
  27m           2m              103     {kubelet hz-sb-zrrwt-199-150-95}        spec.containers{coredns}        Warning         Unhealthy               Readiness probe failed: Get http://172.17.67.2:8181/ready: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
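
The pod's own logs usually narrow this down further, for example:

kubectl logs -n kube-system coredns-9d85f5447-ct5c8

The "no route to host" probe failures above commonly point to host firewall rules (firewalld/iptables) blocking access to the pod network, so checking those on the node is a reasonable next step.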

3 Installing the worker node

yum install docker-ce kubelet kubeadm kubectl   # install the components first
Then join the managed node to the master:

[root@hz-sb-zdcs-199-140-186 yum.repos.d]# kubeadm join 10.199.150.95:6443 --token s8xdrc.s5shirapkgokv0d9  --discovery-token-ca-cert-hash sha256:fb5f66850477f6e680b24f75e5d50c8c9c0fc85193f68750860b08fcdd593a9d
W1231 17:20:56.950832    8155 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The node has joined the cluster successfully. The two warnings appear because the docker and kubelet services are not enabled at boot; running systemctl enable docker.service and systemctl enable kubelet.service clears them.

On the master, run kubectl get nodes to check: the new node has joined, but its status has not yet turned Ready; wait for the transition to finish.

[root@hz-sb-zrrwt-199-150-95 ~]# kubectl get nodes
NAME                     STATUS     AGE
hz-sb-zdcs-199-140-186   NotReady   7m
hz-sb-zrrwt-199-150-95   Ready      1h
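
Instead of polling, the status change can be watched, for example:

kubectl get nodes -w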