11.1.1 Kubernetes Overview
Kubernetes is an open-source container cluster management system from Google. Built on Docker, it provides a container scheduling service with resource scheduling, load balancing and failover, service registration, dynamic scaling, and related features: a container-based cloud platform.
Kubernetes, a cloud platform based on Docker containers, is commonly abbreviated as k8s.
By comparison, OpenStack is a KVM-based virtual machine platform.
Official site: https://kubernetes.io/
Upload the software package to the Linux server.
Method 1:
[root@xuegod1 ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg k8s-package.tar.gz
[root@xuegod1 ~]# tar -xf k8s-package.tar.gz
Create your own yum repository:
vim /etc/yum.repos.d/k8s-package.repo
Add the following:
[k8s-package]
name=k8s-package
baseurl=file:///root/k8s-package
enabled=1
gpgcheck=0
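The repo file above can also be generated from a script; a minimal sketch (it writes to a scratch directory standing in for /etc/yum.repos.d, so it is safe to try anywhere):

```shell
# Generate the local k8s-package repo definition.
# REPO_DIR stands in for /etc/yum.repos.d for illustration.
REPO_DIR=$(mktemp -d)
cat > "$REPO_DIR/k8s-package.repo" <<'EOF'
[k8s-package]
name=k8s-package
baseurl=file:///root/k8s-package
enabled=1
gpgcheck=0
EOF
# Sanity-check the result before pointing yum at it.
grep -q '^baseurl=file:///root/k8s-package$' "$REPO_DIR/k8s-package.repo" && echo "repo file ok"
```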
Configure a local yum repository from the install media.
Mount the CD: mount /dev/cdrom /mnt/
[root@xuegod63 ~]# vim /etc/yum.repos.d/centos7.repo
[centos7]
name=CentOS7
baseurl=file:///mnt
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
2) Move the default yum repo aside: mv /etc/yum.repos.d/CentOS-Base.repo /opt/
[root@xuegod1 ~]# scp /etc/yum.repos.d/k8s-package.repo 192.168.24.63:/etc/yum.repos.d/
Copy it to both nodes.
Copy the k8s package tree to the other two Linux servers:
[root@xuegod1 ~]# scp -r /root/k8s-package 192.168.24.63:/root/
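With more than one node, the copies can be driven by a loop; a dry-run sketch that only prints the scp commands (the node list is this lab's two minions; drop the echo to actually copy):

```shell
# Print the scp command for each node; remove the echo to really run them.
NODES="192.168.24.62 192.168.24.63"
CMDS=$(for node in $NODES; do
    echo "scp -r /root/k8s-package root@${node}:/root/"
done)
printf '%s\n' "$CMDS"
```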
11.2.2 Install the k8s components on each node
Note: kubernetes does not currently support the latest Docker release. If Docker is already installed locally, remove it first.
Configure the .68 server as master and etcd:
3) Install the packages:
[root@xuegod1 ~]# yum install -y kubernetes etcd flannel ntp
On the .62 and .63 servers: yum install -y kubernetes flannel ntp
Install the same set on every minion. Installation is now complete.
Now start configuring Kubernetes.
11.3 Configure the etcd and master node: IP addresses and the hosts file
Change the hostnames.
On the .68 server:
echo master > /etc/hostname
[root@xuegod2 ~]# echo node1 > /etc/hostname
Modify the hosts file.
Do this on all three machines.
Add the following:
192.168.24.68 xuegod1
192.168.24.62 xuegod2
192.168.24.63 xuegod3
scp /etc/hosts 192.168.24.62:/etc/
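Instead of overwriting /etc/hosts wholesale, the entries can be appended idempotently; a sketch against a scratch file standing in for /etc/hosts:

```shell
# Append each cluster entry only if it is not already present,
# so re-running the script never duplicates lines.
HOSTS_FILE=$(mktemp)   # stand-in for /etc/hosts
add_hosts() {
    while read -r entry; do
        grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
    done <<'EOF'
192.168.24.68 xuegod1
192.168.24.62 xuegod2
192.168.24.63 xuegod3
EOF
}
add_hosts
add_hosts   # second run is a no-op
```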
Now configure the etcd configuration file:
[root@master ~]# vim /etc/etcd/etcd.conf
Line 9:
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
change to ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.24.68:2379"
Line 20:
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
change to ETCD_ADVERTISE_CLIENT_URLS="http://192.168.24.68:2379"
Filter out the comments to review:
[root@master ~]# grep -v ^# /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.24.68:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.24.68:2379"
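The two hand edits can equally be done with sed; a sketch that works on a copy of the stock settings rather than the real /etc/etcd/etcd.conf:

```shell
# Recreate the two relevant default lines, then apply the same edits
# the vim session above performs.
ETCD_CONF=$(mktemp)   # stand-in for /etc/etcd/etcd.conf
cat > "$ETCD_CONF" <<'EOF'
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF
sed -i \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.24.68:2379"|' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://192.168.24.68:2379"|' \
  "$ETCD_CONF"
```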
3) Start the service:
systemctl enable etcd
systemctl start etcd
Check the cluster's health:
[root@master ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.24.68:2379
cluster is healthy
Check the member list (only one member for now):
[root@master ~]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.24.68:2379 isLeader=true
etcd is now configured successfully.
11.3.3 Configure the master
[root@master ~]# vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://127.0.0.1:8080"
change to: KUBE_MASTER="--master=http://192.168.24.68:8080"
Filter out the comments to check:
[root@master ~]# grep -v ^# !$
grep -v ^# /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.24.68:8080"
11.3.4 Modify the apiserver configuration file
[root@master ~]# vim /etc/kubernetes/apiserver
Line 8: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
change to
Line 8: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
change to KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.24.68:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
change to KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
[root@master ~]# grep -v ^# !$
grep -v ^# /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
11.3.5 Modify the kube-scheduler configuration file
Edit the scheduler configuration file:
[root@master ~]# vim /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
Modify the flanneld file:
[root@node1 ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
change to FLANNEL_ETCD_ENDPOINTS="http://192.168.24.68:2379"
#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--iface=ens33"   # specify the NIC
Write the flannel network configuration into etcd, then start flanneld:
[root@master ~]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
[root@master ~]# etcdctl get /k8s/network/config
{"Network": "10.255.0.0/16"}
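The value stored under /k8s/network/config is plain JSON, so it can be validated locally before etcdctl ever sees it; a quick sketch (assumes python3 is on the PATH):

```shell
# Validate the flannel network config as JSON and pull out the subnet.
FLANNEL_CONFIG='{"Network": "10.255.0.0/16"}'
SUBNET=$(printf '%s' "$FLANNEL_CONFIG" \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["Network"])')
echo "flannel subnet: $SUBNET"
```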
[root@node1 ~]# systemctl start flanneld
On the .62 server:
[root@node1 ~]# vim /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.24.68:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/k8s/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=ens33"
Make a backup:
[root@node1 ~]# cp /etc/sysconfig/flanneld /opt/
Start the service.
11.4.2 Configure kube-proxy on node1
kube-proxy implements the service abstraction: concretely, it handles traffic from pods to services inside the cluster.
[root@node1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://127.0.0.1:8080"
change to KUBE_MASTER="--master=http://192.168.24.68:8080"
[root@node1 ~]# grep -v ^# !$
This file is left at its defaults:
[root@node1 ~]# grep -v ^# /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
11.4.3 Configure kubelet on node1
The kubelet runs on each minion node; it manages pods, including the images and volumes of the containers in each pod.
[root@node1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
change to KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
change to KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_API_SERVER="--api-servers=http://192.168.24.68:8080"
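Because --hostname-override must differ per node, these kubelet edits are worth parameterizing; a sed sketch against a copy of the stock file (NODE_NAME would be node1 here and node2 on the .63 server):

```shell
# Rewrite a copy of the kubelet config for a given node name.
NODE_NAME=node1
KUBELET_CONF=$(mktemp)   # stand-in for /etc/kubernetes/kubelet
cat > "$KUBELET_CONF" <<'EOF'
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
EOF
sed -i \
  -e 's|^KUBELET_ADDRESS=.*|KUBELET_ADDRESS="--address=0.0.0.0"|' \
  -e "s|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME=\"--hostname-override=${NODE_NAME}\"|" \
  -e 's|^KUBELET_API_SERVER=.*|KUBELET_API_SERVER="--api-servers=http://192.168.24.68:8080"|' \
  "$KUBELET_CONF"
```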
Note: https://access.redhat.com/containers/ is Red Hat's container download site.
11.4.4 Start the services
systemctl restart flanneld kube-proxy kubelet docker
[root@node1 ~]# systemctl enable flanneld kube-proxy kubelet docker
Check kube-proxy.
At this point the node can communicate with the master.
Copy node1's configuration files to the .63 server:
[root@node1 ~]# scp /etc/sysconfig/flanneld 192.168.24.63:/etc/sysconfig/
root@192.168.24.63's password:
Check:
[root@node2 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.24.68:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/k8s/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=ens33"
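After copying, it is cheap to verify that every required flanneld key is present; a check sketch run here against a scratch copy of the file:

```shell
# Verify the three settings every node's flanneld file must carry.
FLANNELD_FILE=$(mktemp)   # stand-in for /etc/sysconfig/flanneld
cat > "$FLANNELD_FILE" <<'EOF'
FLANNEL_ETCD_ENDPOINTS="http://192.168.24.68:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--iface=ens33"
EOF
MISSING=0
for key in FLANNEL_ETCD_ENDPOINTS FLANNEL_ETCD_PREFIX FLANNEL_OPTIONS; do
    grep -q "^$key=" "$FLANNELD_FILE" || { echo "missing: $key"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "flanneld config ok"
```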
On the .63 server:
[root@node2 ~]# systemctl restart flanneld.service
[root@node1 ~]# scp /etc/kubernetes/config 192.168.24.63:/etc/kubernetes/
root@192.168.24.63's password:
config 100% 659 28.5KB/s 00:00
[root@node1 ~]# scp /etc/kubernetes/proxy 192.168.24.63:/etc/kubernetes/
root@192.168.24.63's password:
proxy
Start the services.
Copy the kubelet file to the .63 server; it must be edited there (its --hostname-override should become node2):
[root@node1 ~]# scp /etc/kubernetes/kubelet 192.168.24.63:/etc/kubernetes/
root@192.168.24.63's password:
kubelet
On the .63 server:
Start it: [root@node2 ~]# systemctl start kubelet.service
The node is now registered with the master.
On the .63 server, set the services to start at boot:
[root@node2 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@node2 ~]# systemctl enable flanneld kube-proxy kubelet docker
[root@node2 ~]# systemctl status flanneld kube-proxy kubelet docker
Log in to the master (the .68 server) and check the cluster state:
[root@master ~]# kubectl get nodes
NAME STATUS AGE
node1 Ready 40m
node2 Ready 10m
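The `kubectl get nodes` output above is plain columns, so a small awk filter can count the Ready nodes; the sketch below runs on simulated output so it works without a cluster (replace the here-string with the real command's output):

```shell
# Count nodes whose STATUS column reads Ready.
NODES_OUTPUT='NAME      STATUS    AGE
node1     Ready     40m
node2     Ready     10m'
READY_COUNT=$(printf '%s\n' "$NODES_OUTPUT" \
    | awk 'NR>1 && $2=="Ready" {c++} END {print c+0}')
echo "ready nodes: $READY_COUNT"
```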