CentOS 7 Kubernetes Cluster Deployment
Set the timezone: timedatectl set-timezone Asia/Shanghai
Preparation before installing the k8s cluster:
Network environment:
| Node | Hostname | IP |
|---|---|---|
| master | k8s_master | 192.168.192.128 |
| node1 | k8s_client1 | 192.168.192.129 |
| node2 | k8s_client2 | 192.168.192.130 |
CentOS 7 release:
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Disable firewalld (CentOS 7 ships firewalld rather than the iptables service):
Stop it: systemctl stop firewalld
Disable it at boot: systemctl disable firewalld
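Optionally, a quick check to confirm the firewall really is off:
firewall-cmd --state
systemctl is-enabled firewalld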
1. Install base services on all three hosts:
[root@localhost ~]#yum -y update
[root@localhost ~]#yum -y install net-tools wget vim ntp
[root@localhost ~]#systemctl enable ntpd
[root@localhost ~]#systemctl start ntpd
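Optionally, verify that time synchronization is working before moving on (output varies by environment):
ntpq -p
timedatectl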
2. Set the hostname on each of the three hosts:
master
hostnamectl --static set-hostname k8s_master
node1
hostnamectl --static set-hostname k8s_client1
node2
hostnamectl --static set-hostname k8s_client2
3. Configure /etc/hosts; run the following on each of the three hosts (append, so the existing localhost entries are preserved):
cat <<EOF >> /etc/hosts
192.168.192.128 k8s_master
192.168.192.129 k8s_client1
192.168.192.130 k8s_client2
EOF
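A quick, optional smoke test to confirm the three names resolve from every host:
ping -c 1 k8s_master
ping -c 1 k8s_client1
ping -c 1 k8s_client2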
4. Deploy the master:
[root@localhost ~]#yum -y install etcd
Edit the configuration file /etc/etcd/etcd.conf:
[root@k8s_master kubernetes]# cat /etc/etcd/etcd.conf | grep -v "^#"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"
Enable the service at boot, start it, then verify its status:
[root@k8s_master kubernetes]#systemctl enable etcd
[root@k8s_master kubernetes]#systemctl start etcd
Check etcd health:
[root@k8s_master kubernetes]# etcdctl -C http://k8s_master:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
[root@k8s_master kubernetes]# etcdctl -C http://k8s_master:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
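Optionally, a simple read/write smoke test with the etcd v2 etcdctl commands also works (the /test key below is just an example and can be removed afterwards):
etcdctl set /test "hello"
etcdctl get /test
etcdctl rm /test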
5. Install Kubernetes
[root@k8s_master ~]# yum install kubernetes
Edit the apiserver configuration file (on the master node):
[root@k8s_master kubernetes]# cat /etc/kubernetes/apiserver | grep -v "^#"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.192.128:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
Edit the config file (on the master node):
[root@k8s_master kubernetes]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.192.128:8080"
Enable the services at boot and start them (on the master node):
[root@k8s_master ~]#systemctl enable kube-apiserver kube-controller-manager kube-scheduler
[root@k8s_master ~]#systemctl start kube-apiserver kube-controller-manager kube-scheduler
Check the listening ports (on the master node):
[root@k8s_master kubernetes]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 22398/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 6641/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 6899/master
tcp6 0 0 :::10251 :::* LISTEN 24193/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 24128/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 22398/etcd
tcp6 0 0 :::10252 :::* LISTEN 24143/kube-controll
tcp6 0 0 :::8080 :::* LISTEN 24128/kube-apiserve
tcp6 0 0 :::22 :::* LISTEN 6641/sshd
tcp6 0 0 :::23 :::* LISTEN 1/systemd
tcp6 0 0 ::1:25 :::* LISTEN 6899/master
tcp6 0 0 :::4001 :::* LISTEN 22398/etcd
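Besides checking the ports, the API server can be queried directly over the insecure port configured above (a simple smoke test; adjust the address to your master):
curl http://192.168.192.128:8080/version
kubectl -s http://192.168.192.128:8080 get componentstatuses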
6. Configure the node hosts as follows:
Edit the config and kubelet configuration files:
[root@k8s_client1 ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.192.128:8080"
[root@k8s_client1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.192.129"
KUBELET_API_SERVER="--api-servers=http://192.168.192.128:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Enable the services at boot and start them:
[root@k8s_client1 ~]# systemctl enable kubelet kube-proxy
[root@k8s_client1 ~]# systemctl start kubelet kube-proxy
Check the listening ports:
[root@k8s_client1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 97902/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 97633/kube-proxy
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 22376/etcd
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 96328/kube-apiserve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 6651/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 6869/master
tcp6 0 0 :::10250 :::* LISTEN 97902/kubelet
tcp6 0 0 :::10251 :::* LISTEN 96430/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 96328/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 22376/etcd
tcp6 0 0 :::10252 :::* LISTEN 96419/kube-controll
tcp6 0 0 :::10255 :::* LISTEN 97902/kubelet
tcp6 0 0 :::22 :::* LISTEN 6651/sshd
tcp6 0 0 :::23 :::* LISTEN 1/systemd
tcp6 0 0 ::1:25 :::* LISTEN 6869/master
tcp6 0 0 :::4001 :::* LISTEN 22376/etcd
tcp6 0 0 :::4194 :::* LISTEN 97902/kubelet
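The kubelet also exposes a local health endpoint on port 10248 (visible in the listing above); a quick check against the default address:
curl http://127.0.0.1:10248/healthz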
7. Check cluster status
On the master, list the cluster nodes and their status:
[root@k8s_master kubernetes]# kubectl get node
NAME STATUS AGE
192.168.192.129 Ready 2h
192.168.192.130 Ready 2h
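For more detail on a node (conditions, capacity, addresses), kubectl describe can be used; the node name below comes from the table at the top of this guide:
kubectl describe node 192.168.192.129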
8. Install flannel
flannel is an overlay-network tool from CoreOS that solves cross-host communication for Docker clusters. The basic idea: reserve a network range in advance, give each host a slice of it, and assign every container a distinct IP from its host's slice; all containers then behave as if they were on a single, directly connected network, while the underlying traffic is encapsulated and forwarded over UDP/VXLAN.
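The behavior described above is driven by a small JSON document that flannel reads from etcd. A sketch of such a document is shown below; the steps that follow write only the Network key, so SubnetLen and the VXLAN backend are optional assumptions, not part of the original setup:
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan" }
}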
Install flannel on the master and on the nodes:
[root@k8s_master ~]#yum install flannel
Configure flannel (the nodes use the same configuration as the master, pointing at the master's IP):
[root@k8s_master kubernetes]# cat /etc/sysconfig/flanneld | grep -v "^#"
FLANNEL_ETCD_ENDPOINTS="http://192.168.192.128:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
Write the network range into etcd (Docker's default subnet, 172.17.0.0/16). Since etcd runs only on the master, this key only needs to be created once, on the master; the nodes pick it up through flannel:
[root@k8s_master ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
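You can confirm the key was written by reading it back (optional):
etcdctl get /atomic.io/network/config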
Enable the service at boot and start it. When starting, delete the old docker0 bridge first so Docker can recreate it inside the flannel-assigned subnet: sudo ip link delete docker0
[root@k8s_master ~]# systemctl enable flanneld
[root@k8s_master ~]# systemctl start flanneld
Restart the services on the master and on the nodes:
master:
for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
done
node:
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
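Once everything has been restarted, a rough way to verify the overlay network is to check on each host that flannel0 and docker0 fall inside the per-host subnet recorded by flannel, and then list the subnet leases in etcd from the master. The commands below are a verification sketch, not part of the original steps:
cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0
etcdctl ls /atomic.io/network/subnets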