[toc]
1. Basic Cluster Setup
1.1 Environment
3 servers:
2 CPU cores, 2 GB RAM, 100 GB disk, no swap partition
Deployment architecture:
1 Master node: kube-scheduler, kube-apiserver, kube-controller-manager, etcd
2 Worker nodes: kubelet, kube-proxy, docker
1.2 Server Configuration
1.2.1 Pre-installation Preparation
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap:
swapoff -a    # temporary
vim /etc/fstab    # permanent: comment out the swap line
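A non-interactive alternative for the permanent part, assuming "swap" appears only on the swap entry in /etc/fstab:
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out every line mentioning swap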
Add hostname-to-IP mappings (remember to set each server's hostname; see the commands after the hosts block):
cat <<EOF >> /etc/hosts
192.168.30.130 master.alec.com master
192.168.30.131 node01.alec.com node01
192.168.30.132 node02.alec.com node02
EOF
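Set each machine's hostname to match, for example with hostnamectl (run the corresponding command on each server):
hostnamectl set-hostname master.alec.com    # on the master
hostnamectl set-hostname node01.alec.com    # on node01
hostnamectl set-hostname node02.alec.com    # on node02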
Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
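These keys only exist while the br_netfilter kernel module is loaded; if sysctl --system reports them as unknown, load the module first:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load again on boot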
Time synchronization:
......
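A minimal sketch using chrony on CentOS 7 (any NTP mechanism works equally well):
yum -y install chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources    # verify that time sources are reachable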
1.2.2 Configure the Yum Repositories
curl https://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/Centos-7.repo
curl https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo > /etc/yum.repos.d/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
curl https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg > rpm-package-key.gpg
rpm --import rpm-package-key.gpg
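A quick check that all three repositories are usable before installing anything:
yum clean all
yum repolist    # the CentOS, docker-ce-stable, and kubernetes repos should all appear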
1.2.3 Install Docker on All Servers
yum -y install docker-ce-18.06.1.ce-3.el7
Configure a Docker registry mirror:
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://rfj1yucr.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
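Also enable Docker to start on boot; otherwise kubeadm's preflight check emits the [WARNING Service-Docker] message seen in the init output below:
systemctl enable docker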
1.3 Master Node Installation
1.3.1 Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 kubernetes-cni-0.6.0
systemctl enable kubelet
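Optionally, pre-pull the images kubeadm needs so the init step below runs faster; the init preflight log mentions 'kubeadm config images pull' for this (passing --image-repository to this subcommand is assumed to work in this kubeadm release):
kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.13.3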
1.3.2 Initialize the Master Node
kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.13.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Output of the init run:
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.alec.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.30.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.alec.com localhost] and IPs [192.168.30.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.alec.com localhost] and IPs [192.168.30.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.509734 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.alec.com" as an annotation
[mark-control-plane] Marking the node master.alec.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.alec.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ddapmo.80e0jspn862598ls
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.30.130:6443 --token ddapmo.80e0jspn862598ls --discovery-token-ca-cert-hash sha256:c673cef0087763b18296a9b471e87641e2907c0d5a4308b4c7d0575c4ab34641
Set up the kubeconfig (as instructed by the output above):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
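A quick sanity check that kubectl can now reach the API server:
kubectl get cs    # scheduler, controller-manager, and etcd-0 should report Healthy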
Install the flannel Pod network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Check the node status:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.alec.com NotReady master 101m v1.13.3
node01.alec.com NotReady <none> 100m v1.13.3
node02.alec.com NotReady <none> 100m v1.13.3
All nodes show NotReady because flannel is not running yet.
Check the flannel pod status:
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-qxb6z 0/1 Pending 0 3h27m
coredns-78d4cf999f-s8gqd 0/1 Pending 0 3h27m
etcd-master.alec.com 1/1 Running 0 3h26m
kube-apiserver-master.alec.com 1/1 Running 0 3h26m
kube-controller-manager-master.alec.com 1/1 Running 0 3h26m
kube-flannel-ds-amd64-ffhn9 0/1 Init:ImagePullBackOff 0 14m
kube-proxy-7tg8j 1/1 Running 0 3h27m
kube-scheduler-master.alec.com 1/1 Running 0 3h26m
The flannel pod is stuck in Init:ImagePullBackOff, which is why kubectl get node shows NotReady.
Fix: pull the flannel image manually:
docker pull quay.io/coreos/flannel:v0.11.0-amd64
Pulling from the official registry may fail; pull from a domestic mirror instead:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
After pulling, retag the image to match the official name:
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
Check the pod status again: flannel is now Running and the node status has changed to Ready.
1.4 Node Installation
1.4.1 Install kubeadm and kubelet
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubernetes-cni-0.6.0
systemctl enable kubelet
1.4.2 Join the Node to the Cluster
Take the join command from the kubeadm init output and run it:
kubeadm join 192.168.30.130:6443 --token ddapmo.80e0jspn862598ls --discovery-token-ca-cert-hash sha256:c673cef0087763b18296a9b471e87641e2907c0d5a4308b4c7d0575c4ab34641
Output of the join:
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[discovery] Trying to connect to API Server "192.168.30.130:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.30.130:6443"
[discovery] Requesting info from "https://192.168.30.130:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.30.130:6443"
[discovery] Successfully established connection with API Server "192.168.30.130:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1.alec.com" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Check the nodes from the Master:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master.alec.com Ready master 4h28m v1.13.3
node01.alec.com NotReady <none> 50s v1.13.3
Wait a while: once the node has pulled the flannel image (apply the mirror workaround from section 1.3 if the pull fails), its status changes to Ready.
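If the token from the init output has expired (bootstrap tokens are valid for 24 hours by default), generate a fresh join command on the master:
kubeadm token create --print-join-command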
1.5 Test the k8s Cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc -o wide
Browse to http://<any-node-IP>:<NodePort> (the NodePort is shown in the svc output).
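The same check can be scripted; a small sketch that reads the NodePort via jsonpath and curls one of this cluster's nodes:
PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.30.131:$PORT    # expect an HTTP 200 from nginx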
1.6 Deploy the Dashboard
The image referenced by the default kubernetes-dashboard.yaml is hosted outside China and cannot be pulled from domestic networks.
By default the Dashboard is reachable only from inside the cluster; change its Service to type NodePort to expose it externally.
- You can deploy it as shown below;
- Or set up a local Docker registry first, then change "image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" in kubernetes-dashboard.yaml to point at the local registry.
On every node, pull the image from Aliyun and retag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Fetch the kubernetes-dashboard.yaml file:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml and add type: NodePort to the Service spec to expose the pod's port:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then run kubectl apply -f kubernetes-dashboard.yaml to create the pods.
Check:
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
......
kubernetes-dashboard-57df4db6b-np4gl 1/1 Running 0 11m
[root@master ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
......
kubernetes-dashboard NodePort 10.1.23.57 <none> 443:32692/TCP 5m21s
Browse to https://<any-node-IP>:<NodePort> (32692 in this example; the Dashboard serves HTTPS).
Chrome refuses the Dashboard's default self-signed certificate; regenerate the certificate to work around this:
[root@master ~]# mkdir key && cd key
# generate a self-signed certificate
[root@master key]# openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
.....................+++
...............................................+++
e is 65537 (0x10001)
[root@master key]# openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.30.130'
[root@master key]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=192.168.30.130
Getting Private key
# delete the original certificate secret
[root@master key]# kubectl delete secret kubernetes-dashboard-certs -n kube-system
# create a new certificate secret
[root@master key]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# find the dashboard pod
[root@master key]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
......
kubernetes-dashboard-57df4db6b-np4gl 1/1 Running 0 21m
# restart the pod (delete it; the Deployment recreates it)
[root@master key]# kubectl delete pod kubernetes-dashboard-57df4db6b-np4gl -n kube-system
Create a service account and bind it to the built-in cluster-admin cluster role (this produces a login token for the Dashboard):
[root@master key]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@master key]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
List the secrets:
[root@master key]# kubectl -n kube-system get secret
NAME TYPE DATA AGE
......
dashboard-admin-token-9f6xv kubernetes.io/service-account-token 3 2m1s
......
View the token:
[root@master key]# kubectl describe secrets -n kube-system dashboard-admin-token-9f6xv
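The secret name above is generated, so a convenience one-liner that looks it up automatically (using the dashboard-admin account created above):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Paste the printed token into the Dashboard login page.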