Deployment and installation options:
1) Compile and install from source (requires a Go build environment)
2) Binary install: fully manual per the docs, or via an Ansible or SaltStack playbook
3) kubeadm install (has network requirements), for v1.0–1.14
4) minikube, for developers and learning
5) yum install, for v1.5.2
This document uses the kubeadm install.
I. Install Docker (run on both the master and node hosts)
1. Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker
# Remove old versions
# Run on both the master and worker nodes
yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
# Install the new version
yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
4. Start Docker and enable it at boot
systemctl enable docker
systemctl start docker
5. Command completion
5.1 Install bash-completion
yum -y install bash-completion
5.2 Load bash-completion
source /etc/profile.d/bash_completion.sh
6. Configure a Docker registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://mq61c1ps.mirror.aliyuncs.com"]
}
EOF
# Restart Docker
systemctl daemon-reload
systemctl restart docker
7. Verify and test
docker --version
docker run hello-world
8. Build a private Docker registry
8.1 Start the registry
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/registry:/var/lib/registry registry
# --restart=always: the container is started automatically whenever Docker restarts
8.2 Edit the configuration file (on both master and node hosts)
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://68rmyzg7.mirror.aliyuncs.com"],
"insecure-registries": ["172.16.214.210:5000"]
}
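A malformed daemon.json prevents Docker from starting, so it is worth validating the JSON before restarting the daemon. A minimal sketch using a temporary copy (the mirror URL and registry address are the example values from above; python3 is assumed to be available):

```shell
# Write a sample daemon.json to a temp file and validate it with Python's
# stdlib JSON tool; on a real host, point the check at /etc/docker/daemon.json.
cat > /tmp/daemon.json << 'EOF'
{
    "registry-mirrors": ["https://68rmyzg7.mirror.aliyuncs.com"],
    "insecure-registries": ["172.16.214.210:5000"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

If the file contains a stray comma or an unquoted key, json.tool reports the line and column of the error instead of printing OK.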
8.3 Restart Docker (on both master and node hosts)
systemctl daemon-reload
systemctl restart docker
8.4 Push a local image to the registry
[root@docker tools]# docker load -i heapster-amd64.tgz
312079130a88: Loading layer [==================================================>] 75.04MB/75.04MB
f7000a0c5705: Loading layer [==================================================>] 281.1kB/281.1kB
Loaded image ID: sha256:f57c75cd7b0aa80b70947ea614c29ad04617dade823ec9b25fcadbed38ddce1c
[root@docker tools]# docker tag f57c75cd7b0a 172.16.214.210:5000/zhangxl/heapster:v1.8.3
[root@docker tools]# docker push 172.16.214.210:5000/zhangxl/heapster:v1.8.3
The push refers to repository [172.16.214.210:5000/zhangxl/heapster]
f7000a0c5705: Pushed
312079130a88: Pushed
v1.8.3: digest: sha256:fc33c690a3a446de5abc24b048b88050810a58b9e4477fa763a43d7df029301a size: 739
8.5 Test pulling the image
[root@backup ~]# docker pull 172.16.214.210:5000/zhangxl/heapster:v1.8.3
v1.8.3: Pulling from zhangxl/heapster
c81d386f75aa: Pull complete
eae62af09b36: Pull complete
Digest: sha256:fc33c690a3a446de5abc24b048b88050810a58b9e4477fa763a43d7df029301a
Status: Downloaded newer image for 172.16.214.210:5000/zhangxl/heapster:v1.8.3
[root@backup ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
172.16.214.210:5000/zhangxl/heapster v1.8.3 f57c75cd7b0a 2 years ago 75.3MB
II. Kubernetes installation prerequisites (run on both the master and node hosts)
1. Configure hostnames
1.1 Set the hostname (run the matching command on each host)
hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02
1.2 Add entries to the hosts file
cat >> /etc/hosts << EOF
172.27.9.131 master
172.27.9.135 node01
172.27.9.136 node02
EOF
2. Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3. Disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap temporarily
swapoff -a
# Disable swap permanently
# To keep it disabled after a reboot, also comment out the swap entry in /etc/fstab
sed -i.bak '/swap/s/^/#/' /etc/fstab
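The sed one-liner above comments out every line mentioning swap and leaves a .bak backup of the original. A quick sanity check against a throwaway copy (the sample fstab entries below are illustrative):

```shell
# Build a sample fstab and apply the same edit; the real target is /etc/fstab.
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i.bak '/swap/s/^/#/' /tmp/fstab.demo
grep '^#' /tmp/fstab.demo   # only the swap line should be commented out
```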
4. Set kernel parameters in /etc/sysctl.conf
# Write the following parameters (note: this overwrites the file; use >> instead of > to append to existing settings)
cat > /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl -p
5. Refresh the yum cache
yum clean all
yum -y makecache
III. Install Kubernetes components (run on both the master and node hosts)
1. Install kubelet, kubeadm, and kubectl
1.1 Install the three packages
# List the available versions
yum list kubelet --showduplicates | sort -r
# Install the chosen version
yum -y install kubeadm-1.14.2 kubectl-1.14.2 kubelet-1.14.2 kubernetes-cni-0.7.5
1.2 Package descriptions
- kubelet runs on every node in the cluster and is responsible for starting Pods and containers
- kubeadm bootstraps the cluster; it is the tool used to initialize and start it
- kubectl is the command-line client for talking to the cluster: with it you deploy and manage applications, inspect resources, and create, delete, and update components
1.3 Enable kubelet at boot and start it (kubelet will restart in a loop until kubeadm init runs; this is expected)
systemctl enable kubelet && systemctl start kubelet
1.4 kubectl command completion
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
2. Write the image script and download the images
Almost all of the Kubernetes components and Docker images are hosted on Google's own servers, which may be unreachable directly. The workaround used here is to pull the images from an Aliyun mirror registry and then re-tag them with the default image names.
vim image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.14.2
images=($(kubeadm config images list --kubernetes-version=$version | awk -F '/' '{print $2}'))
for imagename in ${images[@]}; do
    docker pull $url/$imagename
    docker tag $url/$imagename k8s.gcr.io/$imagename
    docker rmi -f $url/$imagename
done
Here url is the Aliyun mirror registry address and version is the Kubernetes version to install. To download the images, make the script executable, then run it:
chmod u+x image.sh
./image.sh
docker images
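The renaming the script performs can be sketched without Docker or kubeadm installed: each image pulled from the mirror is re-tagged under the default k8s.gcr.io prefix. The image names below are illustrative samples, not the full list the script would process:

```shell
# Show the mirror-name -> default-name mapping used by image.sh
# (sample names only; the real list comes from `kubeadm config images list`).
url=registry.cn-hangzhou.aliyuncs.com/google_containers
for imagename in kube-apiserver:v1.14.2 kube-proxy:v1.14.2 pause:3.1; do
    echo "pull:  $url/$imagename"
    echo "retag: k8s.gcr.io/$imagename"
done
```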
IV. Initialize Kubernetes on the master
1. Run kubeadm init
kubeadm init --apiserver-advertise-address 172.16.214.210 --pod-network-cidr=10.244.0.0/16
# (--apiserver-advertise-address selects the master interface to advertise; --pod-network-cidr sets the Pod network range. 10.244.0.0/16 matches the flannel network add-on used below.)
# Save the kubeadm join command printed in the output; it is needed later to join the nodes to the cluster.
2. Load the environment variables
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
# All commands in this document run as root. For a non-root user, do the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
V. Install the Pod network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
VI. Join the nodes to the cluster
1. On each node, run the join command recorded from the kubeadm init output:
kubeadm join 172.16.214.210:6443 --token 2rsan2.km04r9m1idhrk96s --discovery-token-ca-cert-hash sha256:0350d7a8d4b9acdfb0aa8054caa9a790f57a5335b8c32f810035c0fa4e2d0eaf
2. Check the nodes from the master
kubectl get nodes
3. If the token has expired, create a new one
3.1 List tokens
[root@master ~]# kubeadm token list
The token from the original kubeadm init has expired.
3.2 Generate a new token
[root@master ~]# kubeadm token create
1zl3he.fxgz2pvxa3qkwxln
3.3 Regenerate the CA certificate hash
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
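The same pipeline can be exercised without a cluster by generating a throwaway CA certificate; on a real master the input is /etc/kubernetes/pki/ca.crt. A sketch (the /tmp paths and CN are placeholders):

```shell
# Create a self-signed demo CA, then compute the sha256 hash of its public key
# in DER form -- the same value kubeadm expects after "sha256:".
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The resulting value is what kubeadm join takes as --discovery-token-ca-cert-hash sha256:&lt;hash&gt;.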
VII. Verify cluster functionality
1. Check node status
[root@master kubelet]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 18h v1.14.2
node01 Ready <none> 18h v1.14.2
node02 Ready <none> 18h v1.14.2
2. Create a test manifest
cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
3. Run the test
kubectl create -f nginx-ds.yml
4. Check Pod IP connectivity from each node
$ kubectl get pods -o wide -l app=nginx-ds
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-cls6k 1/1 Running 0 8m16s 10.244.2.16 node02 <none> <none>
nginx-ds-cppft 1/1 Running 0 8m16s 10.244.1.15 node01 <none> <none>
From every node, ping the Pod IPs above to confirm they are reachable:
[root@master kubelet]# ping 10.244.2.16
PING 10.244.2.16 (10.244.2.16) 56(84) bytes of data.
64 bytes from 10.244.2.16: icmp_seq=1 ttl=63 time=2.46 ms
64 bytes from 10.244.2.16: icmp_seq=2 ttl=63 time=0.633 ms
^C
--- 10.244.2.16 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.633/1.551/2.469/0.918 ms
[root@master kubelet]# ping 10.244.1.15
PING 10.244.1.15 (10.244.1.15) 56(84) bytes of data.
64 bytes from 10.244.1.15: icmp_seq=1 ttl=63 time=0.884 ms
64 bytes from 10.244.1.15: icmp_seq=2 ttl=63 time=0.544 ms
64 bytes from 10.244.1.15: icmp_seq=3 ttl=63 time=0.784 ms
64 bytes from 10.244.1.15: icmp_seq=4 ttl=63 time=0.868 ms
64 bytes from 10.244.1.15: icmp_seq=5 ttl=63 time=0.523 ms
5. Check Service IP and port reachability
$ kubectl get svc -l app=nginx-ds
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ds NodePort 10.106.44.99 <none> 80:30789/TCP 10m
From the output:
- Service cluster IP: 10.106.44.99
- Service port: 80
- NodePort: 30789
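The PORT(S) column packs both ports into one field (servicePort:nodePort/protocol). They can be split with plain shell parameter expansion; the value below is copied from the kubectl output above:

```shell
# Parse "80:30789/TCP" into its two ports using shell parameter expansion.
ports="80:30789/TCP"
svc_port=${ports%%:*}          # text before the first ":"  -> 80
node_port=${ports#*:}          # drop the service port ...
node_port=${node_port%%/*}     # ... and the "/TCP" suffix  -> 30789
echo "service port=$svc_port nodePort=$node_port"
```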
6. Check NodePort reachability
On every node, and from outside the cluster, access the service through the NodePort:
External access to <NodeIP>:30789 (for example, curl http://<NodeIP>:30789) returns the nginx welcome page, confirming the NodePort is reachable.