The biggest obstacle to installing K8S inside China is arguably the gcr images: for well-known reasons the gcr registries are unreachable, so the required images must be prepared ahead of time or the installation cannot proceed. Fortunately, many sites, aliyun among them, mirror the gcr images; we can pull from such a mirror and then re-tag the images locally.
1. Install a CRI
We use Docker as the runtime here. The installation itself is not covered in detail; for ubuntu and centos, see the official documentation, or install quickly with the convenience script:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
2. Prepare the gcr images
Here we use aliyun's mirror, with a load.sh script that reads images.list, pulls each image, and re-tags it.
images.list:
kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
pause:3.1 k8s.gcr.io/pause:3.1
kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.12.1
tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0
load.sh:
#!/bin/bash
# Pull every image in images.list from the aliyun mirror, re-tag it under
# its original gcr name, then drop the mirror-side tag.
while read -r name target
do
    src_image="registry.cn-hangzhou.aliyuncs.com/google_containers/${name}"
    echo "${src_image} -> ${target}"
    docker pull "${src_image}"
    docker tag "${src_image}" "${target}"
    docker rmi "${src_image}"
done < images.list
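The per-line split performed by the loop can be sanity-checked without touching docker. The sketch below is an illustration (not part of the original script): it wraps the same logic in a dry-run function that only prints the docker commands it would execute, fed by a one-entry sample list.

```shell
#!/bin/bash
# Dry run of the load.sh loop: split each line into the mirror-side image
# name and the target gcr tag, and echo the docker commands instead of
# running them.
dry_run() {
  local MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
  while read -r name target; do
    echo "docker pull ${MIRROR}/${name}"
    echo "docker tag ${MIRROR}/${name} ${target}"
    echo "docker rmi ${MIRROR}/${name}"
  done
}

dry_run <<'EOF'
pause:3.1 k8s.gcr.io/pause:3.1
EOF
```

Each input line yields exactly three commands: pull from the mirror, tag with the gcr name, and remove the mirror tag.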
After running load.sh on both the master and worker nodes, docker images should show something like:
k8s.gcr.io/kube-proxy v1.12.1 61afff57f010 5 weeks ago 96.6MB
k8s.gcr.io/kube-scheduler v1.12.1 d773ad20fd80 5 weeks ago 58.3MB
k8s.gcr.io/kube-controller-manager v1.12.1 aa2dd57c7329 5 weeks ago 164MB
k8s.gcr.io/kube-apiserver v1.12.1 dcb029b5e3ad 5 weeks ago 194MB
gcr.io/kubernetes-helm/tiller v2.11.0 ac5f7ee9ae7e 6 weeks ago 71.8MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 7 weeks ago 220MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 2 months ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.12.1 0dab2435c100 2 months ago 122MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 10 months ago 742kB
apiserver, scheduler, and the other control-plane images need not be pulled on worker nodes. If you are targeting a different K8S version, use the matching version of kubeadm and run
kubeadm config images list
to see exactly which image versions are needed (this command contacts Google's servers, so if it fails, retry a few times).
With that, the gcr images are ready. The remaining images are not hosted on gcr, so no proxy is needed; they can simply be downloaded during installation, or prepared in advance if you prefer.
3. Install the tools
At this point we need kubeadm, kubelet, and kubectl on both the master and worker nodes; again we use aliyun's mirror as the apt source:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt update
apt install -V kubeadm=1.12.1-00 kubectl=1.12.1-00 kubelet=1.12.1-00
apt-mark hold kubelet kubeadm kubectl
The trailing apt-mark hold is necessary: without it, the three packages would be upgraded on the next apt update and the versions would drift apart. When done, run kubeadm version to confirm the installed version.
Install kubernetes-cni; apt-cache madison shows the available versions:
apt-cache madison kubernetes-cni
kubernetes-cni | 0.7.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubernetes-cni | 0.6.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubernetes-cni | 0.5.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubernetes-cni | 0.3.0.1-07a8a2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
apt-get install kubernetes-cni=0.6.0-00
4. Initialize the master
On the master node, run:
kubeadm init --pod-network-cidr=10.244.0.0/16
Because we use flannel as the CNI below, --pod-network-cidr is set to 10.244.0.0/16; other CNIs require their own values. After a successful run, record the last line of the output, which has the form:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
Workers run this command to join the cluster. If you lose it, or the token expires later (tokens are valid for 24 hours), that is fine: you can recover the token with
kubeadm token list
or create a new one with
kubeadm token create
The hash can be obtained by running the following on the master:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
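The trailing sed in that pipeline only strips openssl's "(stdin)= " prefix so that the bare hex digest remains. A quick standalone demo of that final stage, using a throwaway input string rather than cluster data:

```shell
# openssl prints digests as "(stdin)= <hex>" (or "SHA2-256(stdin)= <hex>"
# on newer versions); sed 's/^.* //' keeps only the hex part after the
# last space.
printf 'hello' | openssl dgst -sha256 -hex | sed 's/^.* //'
# prints 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

The same stripping works regardless of the openssl version's exact prefix, which is why the pipeline above is portable.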
If kubeadm init fails, or was run with the wrong parameters, run kubeadm reset (preferably followed by a reboot) and then init again; diagnose the failure from the specific error messages.
Next we set up the environment. Open .bashrc in your home directory and append the following three lines:
export KUBECONFIG=/etc/kubernetes/admin.conf
alias kc=kubectl
source <(kubectl completion bash | sed s/kubectl/kc/g)
Save the file, then reload it:
source .bashrc
These lines point KUBECONFIG at admin.conf so kubectl can read that config and reach the apiserver, alias kubectl to kc, and enable bash completion (automatic word completion) for the alias. Once done, you can check pod status with:
kc get pod --all-namespaces -o wide
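The source <(...) line above relies on bash process substitution: <(cmd) expands to a path whose contents are cmd's output, so generated shell code can be sourced straight into the current shell. A minimal illustration of the mechanism (the greet function is made up for this demo):

```shell
#!/bin/bash
# Process substitution: the inner command's output appears as a readable
# file, which `source` then loads into the current shell.
source <(echo 'greet() { echo "hi $1"; }')
greet k8s
# prints "hi k8s"
```

This is exactly how the completion script for the kc alias gets loaded without ever writing a temporary file.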
5. Install the flannel network plugin
Now we apply the CNI network; as noted above, we use flannel. First run
sysctl net.bridge.bridge-nf-call-iptables=1
which passes bridged IPv4 traffic to the iptables chains. Then apply the flannel manifest:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
6. Install kubernetes-dashboard
Download kubernetes-dashboard.yaml:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.12.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
7. Configure kubernetes-dashboard permissions
Create a service account for the dashboard and bind it to the built-in cluster-admin role (this grants full cluster access, which is fine for testing but too broad for production).
dashboard-admin.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-dashboard
namespace: kube-system
ClusterRoleBinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
8. Retrieve the kubernetes-dashboard token
kubectl get secret -n kube-system:
kubernetes-dashboard-certs Opaque 0 107m
kubernetes-dashboard-key-holder Opaque 2 107m
kubernetes-dashboard-token-qbjfd kubernetes.io/service-account-token 3 107m
kubectl describe secret kubernetes-dashboard-token-qbjfd -n kube-system:
Name: kubernetes-dashboard-token-qbjfd
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 79549b62-9747-11e9-a6f0-000c292ece1b
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1xYmpmZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc5NTQ5YjYyLTk3NDctMTFlOS1hNmYwLTAwMGMyOTJlY2UxYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.0xm0TkFTJgzGmAUiTDRomaAnsA8fvSolGbkeUVMNVdTLBxmSAb0kLsmobgEqVvQkFeYGu4pisZ0hPkdPf7xtYDk4QVqnW2ov553dnuYUorhQRtqf-jA28u_-j9apfpSRkGSl30bjpJlmEXlilccfeKDitBTjMvKqRy51eRpqiseGhLwLxAoDZgRLM7g1mkpuzLGritI90AVEZFXJAwPmZU8G31s8EWR69Yv5yDcxAjKGDhf85q6UdCwS9Xkl10GMSeHrdwpem478FGriLWzsdmYUYwiNfE8E7ijwW3xit7z1NoeTUOtPRDakV7YGHHjHAuFZfo6hK_13-OxO9M5bUw