Setting up a three-node Kubernetes (k8s) environment

Network environment:

Node    Hostname      IP
Master  k8s_master    192.168.56.216
Node1   k8s_client1   192.168.56.217
Node2   k8s_client2   192.168.56.219

CentOS 7 version:


[root@k8s_master ~]# cat /etc/redhat-release

CentOS Linux release 7.6.1810 (Core)

Disable firewalld (run on all three servers):


systemctl stop firewalld

systemctl disable firewalld

Install base packages on all three servers:


[root@k8s_master ~]#yum -y update

[root@k8s_master ~]# yum -y install net-tools wget vim ntp

[root@k8s_master ~]#systemctl enable ntpd

[root@k8s_master ~]#systemctl start ntpd

Set the hostname on each of the three servers:

Master


hostnamectl --static set-hostname k8s_master

Node1


hostnamectl --static set-hostname k8s_client1

Node2


hostnamectl --static set-hostname k8s_client2
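If passwordless SSH from the master is available (an assumption, not part of the steps above), the three commands can be generated once as a dry run and reviewed before execution; the IP-to-name pairs mirror the table at the top:

```shell
# Dry run: generate one hostnamectl command per machine and print it;
# pipe each line to sh (or run it by hand) once the output looks right.
CMDS=$(while read -r ip name; do
  printf 'ssh root@%s hostnamectl --static set-hostname %s\n' "$ip" "$name"
done <<'EOF'
192.168.56.216 k8s_master
192.168.56.217 k8s_client1
192.168.56.219 k8s_client2
EOF
)
echo "$CMDS"
```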

Configure name resolution in /etc/hosts; run on each of the three servers:


cat <<EOF > /etc/hosts

192.168.56.217 k8s_client1

192.168.56.219 k8s_client2

192.168.56.216 k8s_master

EOF
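Note that the heredoc above overwrites /etc/hosts entirely, discarding the stock localhost entries. A more cautious sketch (the mktemp scratch path is illustrative) builds the candidate file first, localhost lines included, so it can be reviewed before being copied into place:

```shell
# Build the candidate hosts file in a scratch location, keeping the
# default localhost entries, then inspect it before copying over /etc/hosts.
HOSTS_OUT=$(mktemp)
cat <<EOF > "$HOSTS_OUT"
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.56.216 k8s_master
192.168.56.217 k8s_client1
192.168.56.219 k8s_client2
EOF
grep -c 'k8s_' "$HOSTS_OUT"   # expect 3 cluster entries
```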

Install Docker (run on each of the three hosts):


[root@k8s_master ~]# yum -y install docker

Enable Docker at boot and start the service:


[root@k8s_master ~]#systemctl enable docker

[root@k8s_master ~]#systemctl start docker

Check the Docker version:


[root@k8s_master ~]# docker version

Client:

Version:        1.13.1

API version:    1.26

Package version: docker-1.13.1-103.git7f2769b.el7.centos.x86_64

Go version:      go1.10.3

Git commit:      7f2769b/1.13.1

Built:          Sun Sep 15 14:06:47 2019

OS/Arch:        linux/amd64

Server:

Version:        1.13.1

API version:    1.26 (minimum version 1.12)

Package version: docker-1.13.1-103.git7f2769b.el7.centos.x86_64

Go version:      go1.10.3

Git commit:      7f2769b/1.13.1

Built:          Sun Sep 15 14:06:47 2019

OS/Arch:        linux/amd64

Experimental:    false

Deploying the master:

Install etcd:


[root@k8s_master ~]# yum -y install etcd

Edit the configuration file /etc/etcd/etcd.conf:

[root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

ETCD_NAME="master"

ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

Enable etcd at boot, start it, and verify its status:

[root@k8s_master ~]#systemctl enable etcd

[root@k8s_master ~]#systemctl start etcd

Check etcd health:


[root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379

cluster is healthy

[root@k8s_master ~]# etcdctl -C http://k8s_master:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379

cluster is healthy
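For scripting, the human-readable cluster-health report can be reduced to a single pass/fail check. The sketch below demonstrates the filter against the captured output above; the grep pattern is an assumption based on that output, and the live form is noted in the comment:

```shell
# Succeed only when etcdctl reports an overall healthy cluster;
# demonstrated here against the captured output from above.
health_output='member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy'
result=$(printf '%s\n' "$health_output" | grep -q '^cluster is healthy' && echo HEALTHY)
echo "$result"   # HEALTHY
# Live form:
#   etcdctl -C http://k8s_master:2379 cluster-health | grep -q '^cluster is healthy'
```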

Install Kubernetes:

[root@k8s_master ~]# yum install kubernetes

The master needs to run the following components:

    Kubernetes API Server

    Kubernetes Controller Manager

    Kubernetes Scheduler

Edit the apiserver configuration file:

[root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.216:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_API_ARGS=""
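One hedged caveat: with the yum-packaged Kubernetes used here, keeping ServiceAccount in the admission-control list without configuring a service-account signing key commonly makes later pod creation fail with "No API token found for service account". If that happens, a frequently used workaround is to drop ServiceAccount from the list (this variant is an assumption for clusters without a --service-account-key-file, not the configuration captured above):

```shell
# /etc/kubernetes/apiserver -- workaround variant (assumes no
# service-account signing key has been configured on this cluster)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```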



Edit the config file:

[root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.56.216:8080"

Enable the services at boot and start them:

[root@k8s_master ~]#systemctl enable kube-apiserver kube-controller-manager kube-scheduler

[root@k8s_master ~]#systemctl start kube-apiserver kube-controller-manager kube-scheduler

Check the listening ports:

[root@k8s_master ~]# netstat -tnlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address          Foreign Address        State      PID/Program name   

tcp        0      0 127.0.0.1:2380          0.0.0.0:*              LISTEN      973/etcd           

tcp        0      0 0.0.0.0:22              0.0.0.0:*              LISTEN      970/sshd           

tcp        0      0 127.0.0.1:25            0.0.0.0:*              LISTEN      1184/master       

tcp6      0      0 :::6443                :::*                    LISTEN      1253/kube-apiserver

tcp6      0      0 :::2379                :::*                    LISTEN      973/etcd           

tcp6      0      0 :::10251                :::*                    LISTEN      675/kube-scheduler 

tcp6      0      0 :::10252                :::*                    LISTEN      674/kube-controller

tcp6      0      0 :::8080                :::*                    LISTEN      1253/kube-apiserver

tcp6      0      0 :::22                  :::*                    LISTEN      970/sshd           

tcp6      0      0 ::1:25                  :::*                    LISTEN      1184/master       

tcp6      0      0 :::4001                :::*                    LISTEN      973/etcd

Deploying the nodes:

Install Docker and Kubernetes on both nodes following the same installation steps as above, then apply the configuration below.

Each of the two nodes needs to run the following components:

kubelet, kube-proxy

Configure each node as follows:

config:

[root@k8s_client1 ~]# cat /etc/kubernetes/config | grep -v "^#"

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.56.216:8080"

kubelet:

[root@k8s_client1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_HOSTNAME="--hostname-override=192.168.56.217"

KUBELET_API_SERVER="--api-servers=http://192.168.56.216:8080"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS=""

Enable the services at boot and start them:

[root@k8s_client1 ~]# systemctl enable kubelet kube-proxy

[root@k8s_client1 ~]# systemctl start kubelet kube-proxy

Check the listening ports:

[root@k8s_client1 ~]# netstat -ntlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address          Foreign Address        State      PID/Program name   

tcp        0      0 0.0.0.0:22              0.0.0.0:*              LISTEN      942/sshd           

tcp        0      0 127.0.0.1:25            0.0.0.0:*              LISTEN      2258/master       

tcp        0      0 127.0.0.1:10248        0.0.0.0:*              LISTEN      17932/kubelet     

tcp        0      0 127.0.0.1:10249        0.0.0.0:*              LISTEN      17728/kube-proxy   

tcp6      0      0 :::10250                :::*                    LISTEN      17932/kubelet     

tcp6      0      0 :::10255                :::*                    LISTEN      17932/kubelet     

tcp6      0      0 :::22                  :::*                    LISTEN      942/sshd           

tcp6      0      0 ::1:25                  :::*                    LISTEN      2258/master       

tcp6      0      0 :::4194                :::*                    LISTEN      17932/kubelet

On the master, list the cluster nodes and their status:

[root@k8s_master ~]# kubectl get node

NAME            STATUS    AGE

127.0.0.1      NotReady  1d

192.168.56.217  Ready      1d

192.168.56.219  Ready      1d

[root@k8s_master ~]# kubectl -s http://k8s_master:8080 get node

NAME            STATUS    AGE

127.0.0.1      NotReady  1d

192.168.56.217  Ready      1d

192.168.56.219  Ready      1d
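The 127.0.0.1 entry above is likely a kubelet that registered under its default hostname and never became Ready; a small filter makes such nodes easy to spot. The sketch runs against the captured listing above, with the live form noted in the comment:

```shell
# List only nodes that are not Ready, demonstrated against the
# captured `kubectl get node` output from above.
kubectl_output='NAME            STATUS    AGE
127.0.0.1       NotReady  1d
192.168.56.217  Ready     1d
192.168.56.219  Ready     1d'
not_ready=$(printf '%s\n' "$kubectl_output" | awk 'NR>1 && $2 != "Ready" {print $1}')
echo "$not_ready"   # 127.0.0.1
# Live form: kubectl get node | awk 'NR>1 && $2 != "Ready" {print $1}'
```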

Installing flannel

Install flannel on the master and on both nodes:

[root@k8s_master ~]#yum install flannel

Configure flannel:

On the master and on both nodes, edit /etc/sysconfig/flanneld.

Master:

[root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"

FLANNEL_ETCD_ENDPOINTS="http://192.168.56.216:2379"

FLANNEL_ETCD_PREFIX="/atomic.io/network"

Node:

[root@k8s_client1 ~]# cat /etc/sysconfig/flanneld | grep -v "^#"

FLANNEL_ETCD_ENDPOINTS="http://192.168.56.216:2379"

FLANNEL_ETCD_PREFIX="/atomic.io/network"

Register the overlay network in etcd (the key path must match FLANNEL_ETCD_PREFIX above):

[root@k8s_master ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.8.0.0/16"}'
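Before writing the network definition into etcd, it can help to confirm the JSON actually parses and carries the intended CIDR. This sketch assumes python3 is available for JSON parsing (it is not part of a stock CentOS 7 install):

```shell
# Validate the flannel network payload locally before pushing it to etcd.
FLANNEL_CONFIG='{"Network":"172.8.0.0/16"}'
network=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["Network"])' "$FLANNEL_CONFIG")
echo "$network"   # 172.8.0.0/16
# then: etcdctl mk /atomic.io/network/config "$FLANNEL_CONFIG"
```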

Enable flanneld at boot on the master and both nodes, then start it:

[root@k8s_master ~]# systemctl enable flanneld

[root@k8s_master ~]# systemctl start flanneld

Restart the dependent services on the master and on the nodes:

Master:

for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES ; done

Node:

[root@k8s_client1 ~]#systemctl restart kube-proxy kubelet docker

Inspect the flannel network:

Master:

[root@k8s_master ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

      valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

      valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 00:0c:29:9b:44:3a brd ff:ff:ff:ff:ff:ff

    inet 192.168.56.133/24 brd 192.168.56.255 scope global noprefixroute dynamic ens33

      valid_lft 1680sec preferred_lft 1680sec

    inet6 fe80::ff3:cfd5:d17f:2ed6/64 scope link noprefixroute

      valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000

    link/ether 52:54:00:bf:f3:02 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

      valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000

    link/ether 52:54:00:bf:f3:02 brd ff:ff:ff:ff:ff:ff

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

    link/ether 02:42:9c:40:42:3c brd ff:ff:ff:ff:ff:ff

    inet 172.8.41.1/24 scope global docker0

      valid_lft forever preferred_lft forever

7: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500

    link/none

    inet 172.8.22.0/16 scope global flannel0

      valid_lft forever preferred_lft forever

    inet6 fe80::7024:a587:f1ca:b212/64 scope link flags 800

      valid_lft forever preferred_lft forever

Node:

[root@k8s_client1 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

      valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

      valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 00:0c:29:8d:2f:af brd ff:ff:ff:ff:ff:ff

    inet 192.168.56.134/24 brd 192.168.56.255 scope global noprefixroute dynamic ens33

      valid_lft 1770sec preferred_lft 1770sec

    inet6 fe80::a382:4358:305f:b3f4/64 scope link noprefixroute

      valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000

    link/ether 52:54:00:e1:97:87 brd ff:ff:ff:ff:ff:ff

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000

    link/ether 52:54:00:e1:97:87 brd ff:ff:ff:ff:ff:ff

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

    link/ether 02:42:a9:b8:f9:e3 brd ff:ff:ff:ff:ff:ff

    inet 172.8.56.1/24 scope global docker0

      valid_lft forever preferred_lft forever

7: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500

    link/none

    inet 172.8.76.0/16 scope global flannel0

      valid_lft forever preferred_lft forever

    inet6 fe80::ed73:14db:8cae:99ca/64 scope link flags 800

      valid_lft forever preferred_lft forever