Pre-installation preparation
1. One or more hosts; here three machines are prepared
Master node  192.168.0.10  master  2 cores / 4 GB  CentOS Linux release 7.7.1908 (Core)
Worker node  192.168.0.11  node1   2 cores / 4 GB  CentOS Linux release 7.7.1908 (Core)
Worker node  192.168.0.12  node2   2 cores / 4 GB  CentOS Linux release 7.7.1908 (Core)
2. Verify that the MAC address and product_uuid are unique on every node
You can get the MAC addresses of the network interfaces with ip link or ifconfig -a
You can view the product_uuid with sudo cat /sys/class/dmi/id/product_uuid
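As a sketch, assuming root SSH access from one machine to all three nodes, the values can be collected side by side for comparison:
for h in 192.168.0.10 192.168.0.11 192.168.0.12; do
  echo "== $h =="
  ssh root@$h "ip link show | grep link/ether; cat /sys/class/dmi/id/product_uuid"
done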
3. Check that the required ports are free
Master node: TCP 6443 (Kubernetes API server), 2379-2380 (etcd), 10250 (kubelet API), 10251 (kube-scheduler), 10252 (kube-controller-manager)
Worker nodes: TCP 10250 (kubelet API), 30000-32767 (NodePort Services)
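One simple check that nothing is already listening on these ports (ss ships with the iproute package on CentOS 7); empty output means the ports are free:
ss -tln | grep -E '6443|2379|2380|10250|10251|10252'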
4. Synchronize the time on all nodes
yum install ntpdate -y
ntpdate 0.asia.pool.ntp.org
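Optionally, keep the clocks in sync with a periodic job (a sketch assuming the root crontab is used; adjust the interval to taste):
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate 0.asia.pool.ntp.org > /dev/null 2>&1') | crontab -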
5. Set the hostname; run the corresponding command on each of the three nodes
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
6. Edit the hosts file on all three nodes
192.168.0.10 k8s-master
192.168.0.11 k8s-node1
192.168.0.12 k8s-node2
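For example, the three entries can be appended on every node in one step:
cat >> /etc/hosts <<EOF
192.168.0.10 k8s-master
192.168.0.11 k8s-node1
192.168.0.12 k8s-node2
EOF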
7. Disable the firewall and SELinux on all nodes
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
8. On CentOS 7, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration; run on all nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
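If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is most likely not loaded yet; loading it first makes the keys available:
modprobe br_netfilter
lsmod | grep br_netfilter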
9. Disable swap
swapoff -a
Comment out the swap mount line in /etc/fstab
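One way to comment that line out non-interactively (note that this pattern matches any line containing the word swap, so double-check /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab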
Install Docker CE
Run on all nodes
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker
systemctl enable docker
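Once the service is running, it is worth confirming the installed version and the cgroup driver (the kubeadm preflight check in the init log below warns when the Docker version has not been validated):
docker --version
docker info | grep -i 'cgroup driver'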
Install kubeadm, kubelet and kubectl
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on every machine in the cluster and does things such as starting pods and containers.
kubectl: the command line utility used to talk to the cluster.
1. Add the package repository
Run on every node
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install the packages, and enable kubelet to start at boot
yum install -y kubelet-1.17.4 kubectl-1.17.4 kubeadm-1.17.4 kubernetes-cni-1.17.4 --skip-broken
systemctl enable kubelet && systemctl start kubelet
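A quick sanity check that all three tools were pinned to the same release:
kubeadm version
kubelet --version
kubectl version --client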
From the installation output you can see that three dependencies were also installed: cri-tools, kubernetes-cni and socat:
Upstream bumped the cni dependency to version 0.6.0 as of Kubernetes 1.9, and it is still at that version in the current 1.12 release
socat is a dependency of kubelet
cri-tools is the command line tool for the CRI (Container Runtime Interface)
Create the cluster
1. Initialize the master node
kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.0.10
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.506947 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: k9bohf.6zl3ovmlkf4iwudg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.0.10:6443 --token 57edgs.6a7xok9dhrh9oiyj \
--discovery-token-ca-cert-hash sha256:ac17b6ef5bf693314453274eb05c6a78c5e99b44c8d2f2d1b64f3b75a0aeb6ff
To make kubectl work for your non-root user, run the following commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
2. Install a pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After the pod network is installed, you can check whether the CoreDNS pods are working properly by running:
kubectl get pods --all-namespaces -o wide
3. Join the worker nodes
On each worker node, run the join command printed in the kubeadm init log.
kubeadm join 192.168.0.10:6443 --token 57edgs.6a7xok9dhrh9oiyj \
--discovery-token-ca-cert-hash sha256:ac17b6ef5bf693314453274eb05c6a78c5e99b44c8d2f2d1b64f3b75a0aeb6ff
Renew the token
By default tokens expire after 24 hours; create a new token as follows:
kubeadm token create
kubeadm token list
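If the discovery hash is needed as well, kubeadm can print the complete join command in one step:
kubeadm token create --print-join-command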
4. Check the node status
A status of Ready means the node is communicating with the master normally
[root@master ~]# kubectl get nodes
5. Allow the master node to run workloads (optional)
By default, for security reasons, your cluster will not schedule pods on the master. If you want to be able to schedule pods on the master, for example for a single-machine Kubernetes cluster used for development, run:
kubectl describe node master | grep Taint
kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
6. Control your cluster from a machine other than the master (optional)
Run on a worker node:
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
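Alternatively, a minimal sketch of making this permanent on that machine by copying the file into the default kubeconfig location (assuming $HOME/.kube does not already contain a config you want to keep):
mkdir -p $HOME/.kube
scp root@<master ip>:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes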
Tear down the cluster
Run on the control-plane node:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm-installed state:
kubeadm reset
The reset process does not reset or clean up iptables rules or IPVS tables. If you want to reset iptables, you must do so manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If you want to reset the IPVS tables, run:
ipvsadm -C