k8s Installation

Version: 1.10.4

References:
https://kubernetes.io/docs/tasks/tools/install-kubectl/

https://www.kubernetes.org.cn/3808.html

Use the Aliyun mirror as the yum source

Create /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
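
After saving the repo file, a quick way to refresh the metadata and confirm yum can see the new repo (a sketch):

yum makecache fast
yum repolist enabled | grep -i kubernetes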

Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet 
systemctl start kubelet
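
A quick sanity check that the expected 1.10.x packages landed (a sketch):

rpm -q kubelet kubeadm kubectl
kubeadm version

Note that until kubeadm init runs, the kubelet will restart in a loop because it has no configuration yet; that is expected at this stage.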

Configure iptables

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
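
To confirm the values took effect (if the keys are missing, the bridge module may need to be loaded first with modprobe br_netfilter):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables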

Configure the cgroup driver on the master node

Make sure the kubelet and Docker use the same cgroup driver.
Check Docker's side with docker info | grep -i cgroup:


[root@node205 sysctl.d]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

Check the kubelet configuration, and switch its driver to cgroupfs to match Docker:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload systemd and restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet
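
Both sides should now report cgroupfs; a quick check (sketch):

docker info | grep -i cgroup
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf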

Command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Initialize the master

Since Flannel will be used as the pod network later, pass the matching CIDR at init time.
Command:
kubeadm init --pod-network-cidr=10.244.0.0/16

The initialization hung at:

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Cause: the images are pulled from k8s.gcr.io, which is blocked by the firewall.
There are two common workarounds:

  1. Pull through a proxy outside the firewall
  2. Pull the images from a domestic Docker registry and retag them

We take the second approach here. Many thanks to QQ user 雨夜 for the script below, which pulls the matching image versions locally:

# Aliyun mirror namespace hosting copies of the k8s.gcr.io images
REGISTRY_NAME=registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd

# image:tag list kubeadm needs for Kubernetes v1.10.4
IMAGES_NAME="kube-proxy-amd64:v1.10.4
kube-controller-manager-amd64:v1.10.4
kube-scheduler-amd64:v1.10.4
kube-apiserver-amd64:v1.10.4
etcd-amd64:3.1.12
k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s-dns-sidecar-amd64:1.14.8
k8s-dns-kube-dns-amd64:1.14.8
pause-amd64:3.1
heapster-amd64:v1.5.3
kubernetes-dashboard-amd64:v1.8.3
heapster-influxdb-amd64:v1.3.3"

# pull each image from the mirror, then retag it with the
# k8s.gcr.io name that kubeadm expects
for i in $IMAGES_NAME
do
  docker pull $REGISTRY_NAME/$i
  docker tag $REGISTRY_NAME/$i k8s.gcr.io/$i
done

# Flannel normally lives on quay.io, so it gets its own pull/retag
docker pull $REGISTRY_NAME/flannel:v0.10.0-amd64
docker tag $REGISTRY_NAME/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Run the script; Docker pulls all the required images and retags them locally.
docker images now shows:

REPOSITORY                                                                     TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64                                                    v1.10.4             3f9ff47d0fca        2 weeks ago         97.1MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-proxy-amd64                v1.10.4             3f9ff47d0fca        2 weeks ago         97.1MB
k8s.gcr.io/kube-controller-manager-amd64                                       v1.10.4             1a24f5586598        2 weeks ago         148MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-controller-manager-amd64   v1.10.4             1a24f5586598        2 weeks ago         148MB
k8s.gcr.io/kube-apiserver-amd64                                                v1.10.4             afdd56622af3        2 weeks ago         225MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-apiserver-amd64            v1.10.4             afdd56622af3        2 weeks ago         225MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-scheduler-amd64            v1.10.4             6fffbea311f0        2 weeks ago         50.4MB
k8s.gcr.io/kube-scheduler-amd64                                                v1.10.4             6fffbea311f0        2 weeks ago         50.4MB
k8s.gcr.io/heapster-amd64                                                      v1.5.3              f57c75cd7b0a        7 weeks ago         75.3MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/heapster-amd64                  v1.5.3              f57c75cd7b0a        7 weeks ago         75.3MB
hello-world                                                                    latest              e38bc07ac18e        2 months ago        1.85kB
k8s.gcr.io/etcd-amd64                                                          3.1.12              52920ad46f5b        3 months ago        193MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/etcd-amd64                      3.1.12              52920ad46f5b        3 months ago        193MB
k8s.gcr.io/kubernetes-dashboard-amd64                                          v1.8.3              0c60bcf89900        4 months ago        102MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kubernetes-dashboard-amd64      v1.8.3              0c60bcf89900        4 months ago        102MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/flannel                         v0.10.0-amd64       f0fad859c909        4 months ago        44.6MB
quay.io/coreos/flannel                                                         v0.10.0-amd64       f0fad859c909        4 months ago        44.6MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-dnsmasq-nanny-amd64     1.14.8              c2ce1ffb51ed        5 months ago        40.9MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                                         1.14.8              c2ce1ffb51ed        5 months ago        40.9MB
k8s.gcr.io/k8s-dns-sidecar-amd64                                               1.14.8              6f7f2dc7fab5        5 months ago        42.2MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-sidecar-amd64           1.14.8              6f7f2dc7fab5        5 months ago        42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64                                              1.14.8              80cc5ea4b547        5 months ago        50.5MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-kube-dns-amd64          1.14.8              80cc5ea4b547        5 months ago        50.5MB
k8s.gcr.io/pause-amd64                                                         3.1                 da86e6ba6ca1        6 months ago        742kB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/pause-amd64                     3.1                 da86e6ba6ca1        6 months ago        742kB
k8s.gcr.io/heapster-influxdb-amd64                                             v1.3.3              577260d221db        9 months ago        12.5MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/heapster-influxdb-amd64         v1.3.3              577260d221db        9 months ago        12.5MB
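
The mirror-prefixed rows are just duplicate tags pointing at the same image layers. If desired, they can be dropped after retagging; a sketch using the same variables as the script above:

for i in $IMAGES_NAME
do
  docker rmi $REGISTRY_NAME/$i   # removes only the extra tag, not the image
done
docker rmi $REGISTRY_NAME/flannel:v0.10.0-amd64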

Re-run the init command:
kubeadm init --pod-network-cidr=10.244.0.0/16

Note: kubeadm reset must be run before re-initializing, otherwise init will fail.
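
So the full retry sequence is:

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16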

This time it succeeds:


Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.30.16.205:6443 --token ngaatd.59490lbqvjl68dul --discovery-token-ca-cert-hash sha256:3ad459ffdd0e92008304864a56f3ed19938a4ce2603cfbecac060f60d0358d0b

Install and configure the Flannel network

Command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

This failed with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl get node fails with the same error.

Cause: kubectl does not know where the API server is.
Point it at /etc/kubernetes/admin.conf, as the master init output above suggested:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

or export it directly:
export  KUBECONFIG=/etc/kubernetes/admin.conf
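
To make the export survive new shell sessions, it can be appended to the shell profile, e.g.:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc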

Re-run the network installation:

[root@node205 ~]# kubectl apply -f kube-flannel-v0.10.0.yml 
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created

Verify the master node

The master node deployment is now complete.
Check the cluster state with kubectl:


[root@node205 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node205   Ready     master    1h        v1.10.4

[root@node205 ~]# kubectl cluster-info
Kubernetes master is running at https://10.30.16.205:6443
KubeDNS is running at https://10.30.16.205:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@node205 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-node205                      1/1       Running   0          1h
kube-system   kube-apiserver-node205            1/1       Running   0          1h
kube-system   kube-controller-manager-node205   1/1       Running   0          1h
kube-system   kube-dns-86f4d74b45-lzds4         3/3       Running   0          1h
kube-system   kube-flannel-ds-2k866             1/1       Running   0          2m
kube-system   kube-proxy-7flzg                  1/1       Running   0          1h
kube-system   kube-scheduler-node205            1/1       Running   0          1h

Install and configure the worker nodes

  1. Install the packages:
yum install -y kubelet kubeadm kubectl
  2. Enable kubelet on boot:
systemctl enable kubelet
  3. Add the node with kubeadm join:
kubeadm join 10.30.16.205:6443 --token ngaatd.59490lbqvjl68dul --discovery-token-ca-cert-hash sha256:3ad459ffdd0e92008304864a56f3ed19938a4ce2603cfbecac060f60d0358d0b

This command comes straight from the master init output.

[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.30.16.205:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.30.16.205:6443"
[discovery] Requesting info from "https://10.30.16.205:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.30.16.205:6443"
[discovery] Successfully established connection with API Server "10.30.16.205:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
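
A side note in case the join fails later: the bootstrap token from the init output expires after 24 hours by default. A fresh join command can be printed on the master with (available on recent kubeadm versions):

kubeadm token create --print-join-command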

Running kubectl get nodes on the worker fails with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix: copy admin.conf over from the master node.
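
A minimal sketch, assuming root SSH access to the master (10.30.16.205):

mkdir -p $HOME/.kube
scp root@10.30.16.205:/etc/kubernetes/admin.conf $HOME/.kube/config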

The cgroup driver change applies to worker nodes as well; see the master section above.

Now kubectl get nodes shows both nodes:

[root@node206 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node205   Ready     master    2h        v1.10.4
node206   Ready     <none>    2m        v1.10.4

Install and configure the dashboard

  1. Download the config file: https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml
  2. Change the Service type to NodePort to expose it externally:
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
  3. Create it: kubectl create -f kubernetes-dashboard.yaml

  4. Check status:
    kubectl get pods --all-namespaces

  5. Check logs:
    kubectl describe po kubernetes-dashboard --namespace=kube-system
    kubectl logs -f kubernetes-dashboard-latest-3243398-thc7k -n kube-system

  6. Delete:
    kubectl delete -f kubernetes-dashboard.yaml
    In this version, the single command above removes everything.
    If pods keep getting re-created, delete the deployment directly:
    kubectl delete deployment kubernetes-dashboard --namespace=kube-system

  7. Grant the dashboard admin rights over the cluster.
    Create an RBAC file:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Create it with kubectl create -f:
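
For example, assuming the RBAC file above was saved as dashboard-admin-rbac.yaml (the filename is arbitrary):

kubectl create -f dashboard-admin-rbac.yaml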

Check the secrets:

[root@node205 dashboard]# kubectl get secret -o wide --all-namespaces | grep dash
kube-system   kubernetes-dashboard-admin-token-k68zs           kubernetes.io/service-account-token   3         20s
kube-system   kubernetes-dashboard-certs                       Opaque                                0         10m
kube-system   kubernetes-dashboard-key-holder                  Opaque                                2         9d
kube-system   kubernetes-dashboard-token-fs5q4                 kubernetes.io/service-account-token   3         10m

kubernetes-dashboard-admin-token-k68zs is the token with admin privileges; view its value:

kubectl describe secret kubernetes-dashboard-admin-token-k68zs -n kube-system

Copy the token, choose the Token option on the login page, and paste it in to sign in.
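
The dashboard itself is served at https://<node-ip>:30443, the NodePort configured earlier. The token can also be printed in one step (a sketch; the grep pattern assumes the secret name shown above):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')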
