1. Docker Image Build
1.1 Dockerfile Instructions
FROM: the base image
MAINTAINER: author information (deprecated in favor of LABEL)
LABEL: image metadata
RUN: commands executed at build time; chain multiple commands with &&
COPY <src> <dest>: copies files only; archives are not extracted
ADD <src> <dest>: copies files and extracts local tar archives; a file fetched from a URL is not extracted
EXPOSE: the service port(s)
CMD ["command or script run when the container starts"], e.g. CMD ["/apps/nginx/sbin/nginx","-g","daemon off;"]
WORKDIR: set the working directory
USER: the user the container runs commands as
VOLUME: create a mount point for attaching external storage
ENTRYPOINT: similar to CMD, it sets the service or command the container runs. It can be combined with CMD, but once ENTRYPOINT is set the meaning of CMD changes: CMD is no longer run directly; its contents are passed as arguments to ENTRYPOINT, which does the actual execution. For example:
ENTRYPOINT ["/apps/nginx/sbin/nginx"]
CMD ["-g","daemon off;"]
This pair executes /apps/nginx/sbin/nginx -g "daemon off;".
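As a sketch of the override behavior (image name hypothetical): arguments passed to docker run replace CMD and are appended to ENTRYPOINT.
docker run mynginx        # runs /apps/nginx/sbin/nginx -g "daemon off;"
docker run mynginx -t     # CMD is replaced: runs /apps/nginx/sbin/nginx -t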
2. Building an Nginx Image
2.1 Dockerfile
cat Dockerfile
FROM centos:7.9.2009
maintainer "dongxikui"
RUN yum -y install epel-release && yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&useradd nginx -u 2001
ADD nginx-1.18.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.18.0 && ./configure --prefix=/apps/nginx --with-http_sub_module && make && make install
# nginx.conf has been modified to add "daemon off;"
ADD nginx.conf /apps/nginx/conf/nginx.conf
ADD code.tar.gz /data/nginx/html
ADD run_nginx.sh /apps/nginx/sbin/
RUN chmod a+x /apps/nginx/sbin/run_nginx.sh
EXPOSE 80 443
CMD ["/apps/nginx/sbin/run_nginx.sh"]
2.2 Nginx startup script
cat run_nginx.sh
#!/bin/bash
# append Alibaba's public DNS address to /etc/hosts
# (note: a hosts entry normally needs "IP hostname"; "nameserver 223.5.5.5" in /etc/resolv.conf may be the intent)
echo "223.5.5.5" >> /etc/hosts
# nginx.conf carries "daemon off;", so nginx stays in the foreground and keeps the container alive
/apps/nginx/sbin/nginx
2.3 Build the image
docker build -t reg.fchiaas.local/nginx-18:v2 ./
Sending build context to Docker daemon 7.638MB
Step 1/11 : FROM centos:7.9.2009
---> eeb6ee3f44bd
Step 2/11 : maintainer "dongxikui"
---> Using cache
---> d7e456fd95cf
Step 3/11 : RUN yum -y install epel-release && yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&useradd nginx -u 2001
---> Using cache
---> 2834a890f1dd
Step 4/11 : ADD nginx-1.18.0.tar.gz /usr/local/src/
---> Using cache
---> a6d1ac56319b
Step 5/11 : RUN cd /usr/local/src/nginx-1.18.0 && ./configure --prefix=/apps/nginx --with-http_sub_module && make && make install
---> Using cache
---> dd0e7464a1db
Step 6/11 : ADD nginx.conf /apps/nginx/conf/nginx.conf
---> Using cache
---> cc8e9d500708
Step 7/11 : ADD code.tar.gz /data/nginx/html
---> Using cache
---> b1df48e10565
Step 8/11 : ADD run_nginx.sh /apps/nginx/sbin/
---> Using cache
---> 3e4be150267a
Step 9/11 : RUN chmod a+x /apps/nginx/sbin/run_nginx.sh
---> Using cache
---> 6ee59cac7ed5
Step 10/11 : EXPOSE 80 443
---> Using cache
---> 554949fa23a3
Step 11/11 : CMD ["/apps/nginx/sbin/run_nginx.sh"]
---> Using cache
---> 6021b711445b
Successfully built 6021b711445b
Successfully tagged reg.fchiaas.local/nginx-18:v2
2.4 Run a container and test
docker run --rm -p 8081:80 reg.fchiaas.local/nginx-18:v2
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7c088919e3f reg.fchiaas.local/nginx-18:v2 "/apps/nginx/sbin/ru…" 21 seconds ago Up 20 seconds 443/tcp, 0.0.0.0:8081->80/tcp, :::8081->80/tcp naughty_tereshkova
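A quick check from the docker host that the published port actually serves the site (assuming code.tar.gz unpacked an index page):
curl -I http://127.0.0.1:8081/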
3. Container CPU and Memory Limits
--cpus <n>: limit how many CPU cores the container may use
-m <size>M|G: limit how much memory the container may use
CPU test
Before the test:
top - 21:03:48 up 58 min, 3 users, load average: 1.07, 1.05, 0.48
Tasks: 214 total, 1 running, 213 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3907.9 total, 2889.6 free, 324.1 used, 694.3 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 3349.5 avail Mem
Run the test:
root@cncf-docker:~# docker run -it --cpus 1.2 lorel/docker-stress-ng --cpu 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 cpu
top - 21:10:04 up 1:04, 3 users, load average: 0.08, 0.33, 0.32
Tasks: 222 total, 4 running, 218 sleeping, 0 stopped, 0 zombie
%Cpu(s): 60.0 us, 0.2 sy, 0.0 ni, 39.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3907.9 total, 2833.6 free, 357.0 used, 717.4 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 3313.6 avail Mem
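With --cpus 1.2 the CFS quota throttles the two hogs to 1.2 cores in total, which on this host (2 vCPUs, judging by the numbers) is 60% of total capacity and matches the %Cpu(s) 60.0 us reading above. The limit Docker recorded can be read back (container ID hypothetical):
docker inspect --format '{{.HostConfig.NanoCpus}}' <container-id>   # 1200000000 nano-CPUs = 1.2 CPUs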
Memory test
Run the test:
root@cncf-docker:~# docker run -it lorel/docker-stress-ng --vm 2
root@cncf-docker:~# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
5085d3b6638d agitated_rosalind 198.52% 1.006GiB / 3.8GiB 26.47% 946B / 0B 0B / 0B 9
root@cncf-docker:~# docker run -it -m 20m lorel/docker-stress-ng --vm 4
root@cncf-docker:~# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c7f76bb3e9b9 bold_chatelet 146.01% 20MiB / 20MiB 100.00% 876B / 0B 0B / 0B 9
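stress-ng spawned 4 VM workers, but the memory cgroup holds usage at the 20MiB cap (allocations beyond it are reclaimed or OOM-killed inside the container), hence MEM % pinned at 100.00%. The recorded limit can be read back with docker inspect, using the container ID from docker stats:
docker inspect --format '{{.HostConfig.Memory}}' c7f76bb3e9b9   # 20971520 bytes = 20MiB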
4. Main Kubernetes Components
4.1 kube-apiserver
The Kubernetes API Server exposes HTTP REST interfaces for create, read, update, delete, and watch operations on all Kubernetes resource objects (Pod, RC, Service, and so on); it is the data bus and data hub of the entire system.
Functions of the Kubernetes API Server:
provides the REST API for cluster management (including authentication and authorization, data validation, and cluster state changes);
acts as the hub for data exchange and communication between the other components (they query or modify data through the API Server; only the API Server operates on etcd directly);
serves as the entry point for resource quota control;
provides a complete cluster security mechanism.
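As a small illustration of that REST interface (a sketch, assuming kubectl is already configured against the cluster):
kubectl get --raw /healthz        # API server health endpoint
kubectl get --raw /api/v1/nodes   # the raw resource data behind 'kubectl get node'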
4.2 kube-controller-manager
The Controller Manager is the management and control center of the cluster. It manages Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs automated repair flows, ensuring the cluster always stays in the desired working state.
4.3 kube-scheduler
The Kubernetes scheduler is a control-plane process that assigns Pods to nodes. Based on constraints and available resources, it determines, for every Pod in the scheduling queue, the nodes on which that Pod can validly be placed. The scheduler then ranks all the valid nodes and binds the Pod to a suitable one. Multiple different schedulers can be used within one cluster; kube-scheduler is the reference implementation. See the scheduling documentation for more on scheduling and the kube-scheduler component.
4.4 kubelet
The kubelet is the primary "node agent" that runs on every Node. It can register the node with the apiserver using one of the following: the hostname; a flag that overrides the hostname; or logic specific to a cloud provider.
The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a Pod. The kubelet takes sets of PodSpecs provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
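A minimal PodSpec sketch (pod name and image are illustrative), submitted through the apiserver, the main mechanism by which PodSpecs reach the kubelet:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podspec-demo
spec:
  containers:
  - name: web
    image: nginx:1.18.0
EOF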
4.5 kube-proxy
The Kubernetes network proxy runs on every node. It reflects the Services defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. An optional addon provides cluster DNS for these cluster IPs. Services must be created through the apiserver API for the proxy to be configured.
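The Docker-links-compatible environment variables mentioned above can be observed inside any running pod (pod name hypothetical):
kubectl exec <pod-name> -- env | grep KUBERNETES_SERVICE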
4.6 etcd
etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
4.7 Network components
CNI plugins: conform to the Container Network Interface (CNI) specification, designed with an emphasis on interoperability.
Kubernetes follows v0.4.0 of the CNI specification.
Kubenet plugin: implements a basic cbr0 bridge using the bridge and host-local CNI plugins.
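On a node, the configuration the kubelet hands to the CNI plugin conventionally lives under /etc/cni/net.d/ (the directory is the kubelet default; file names vary by plugin):
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist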
5. Kubernetes Cluster Binary Installation
5.1 IP address plan
10.0.0.21 master1 master1.fchiaas.local
10.0.0.22 master2 master2.fchiaas.local
10.0.0.23 master3 master3.fchiaas.local
10.0.0.24 node1 node1.fchiaas.local
10.0.0.25 node2 node2.fchiaas.local
10.0.0.26 node3 node3.fchiaas.local
10.0.0.27 haproxy1 haproxy1.fchiaas.local
10.0.0.28 haproxy2 haproxy2.fchiaas.local
10.0.0.29 etcd01 etcd01.fchiaas.local
10.0.0.30 etcd02 etcd02.fchiaas.local
10.0.0.31 etcd03 etcd03.fchiaas.local
10.0.0.20 myk8s-api.fchiaas.local #VIP
10.0.0.240 reg.fchiaas.local
5.2 Software download
The installer (kubeasz) is available at: https://github.com/easzlab/kubeasz/
wget https://github.com/easzlab/kubeasz/releases/download/3.1.1/ezdown
chmod +x ezdown
Change the Docker version in ezdown:
DOCKER_VER=19.03.15
Run it to download all components and install Docker and other base packages:
./ezdown -D
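kubeasz drives every host over ssh (via ansible), so the deploy host (master1 here) needs password-less root ssh to all nodes before running ezctl; a minimal sketch:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                    # skip if a key already exists
for ip in 10.0.0.{21..31}; do ssh-copy-id root@${ip}; done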
cd /etc/kubeasz
root@master1:/etc/kubeasz# ./ezctl new k8-cluster1
2022-01-03 01:03:38 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8-cluster1
2022-01-03 01:03:38 DEBUG set version of common plugins
2022-01-03 01:03:38 DEBUG cluster k8-cluster1: files successfully created.
2022-01-03 01:03:38 INFO next steps 1: to config '/etc/kubeasz/clusters/k8-cluster1/hosts'
2022-01-03 01:03:38 INFO next steps 2: to config '/etc/kubeasz/clusters/k8-cluster1/config.yml'
root@master1:/etc/kubeasz# cd /etc/kubeasz/clusters/k8-cluster1
Edit the hosts file:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.0.0.29
10.0.0.30
10.0.0.31
# master node(s)
[kube_master]
10.0.0.21
10.0.0.22
# work node(s)
[kube_node]
10.0.0.24
10.0.0.25
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
10.0.0.28 LB_ROLE=backup EX_APISERVER_VIP=10.0.0.20 EX_APISERVER_PORT=6443
10.0.0.27 LB_ROLE=master EX_APISERVER_VIP=10.0.0.20 EX_APISERVER_PORT=6443
# [optional] ntp server for the cluster
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="fchiaas.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"
Edit config.yml:
# NTP servers (important: clocks must be synchronized across the cluster)
ntp_servers:
- "ntp1.aliyun.com"
# - "time1.cloud.tencent.com"
# - "0.cn.pool.ntp.org"
# maximum number of pods per node
MAX_PODS: 400
# install CoreDNS automatically
dns_install: "no"
ENABLE_LOCAL_DNS_CACHE: no
# install metrics-server automatically
metricsserver_install: "no"
# install dashboard automatically
dashboard_install: "no"
Edit 01.prepare.yml:
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
- kube_master
- kube_node
- etcd
# - ex_lb
#- chrony
Run the installation steps:
./ezctl setup k8-cluster1 01
./ezctl setup k8-cluster1 02
Run the following command on one of the etcd hosts to test:
root@etcd03:~# NODE_IPS="10.0.0.29 10.0.0.30 10.0.0.31"
root@etcd03:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://10.0.0.29:2379 is healthy: successfully committed proposal: took = 8.875052ms
https://10.0.0.30:2379 is healthy: successfully committed proposal: took = 9.369287ms
https://10.0.0.31:2379 is healthy: successfully committed proposal: took = 7.231524ms
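Beyond per-endpoint health, the member list confirms the three nodes formed a single cluster (same certificate flags as above):
ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://10.0.0.29:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem member list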
Continue the installation:
./ezctl setup k8-cluster1 03
./ezctl setup k8-cluster1 04
Check the master node status:
root@master1:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
10.0.0.21 Ready,SchedulingDisabled master 42s v1.22.2
10.0.0.22 Ready,SchedulingDisabled master 42s v1.22.2
Continue the installation:
./ezctl setup k8-cluster1 05
./ezctl setup k8-cluster1 06
Step 06 installs the network plugin; once it completes, the basic installation is done.
Test the network:
root@master1:/etc/kubeasz# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.22 | node-to-node mesh | up | 04:17:41 | Established |
| 10.0.0.24 | node-to-node mesh | up | 04:17:42 | Established |
| 10.0.0.25 | node-to-node mesh | up | 04:17:40 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Test with containers:
# kubectl run test1 --image=alpine sleep 500000
kubectl get pods
NAME READY STATUS RESTARTS AGE
test1 1/1 Running 0 20s
kubectl run test2 --image=centos sleep 500000
kubectl run test3 --image=centos sleep 500000
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test1 1/1 Running 0 4m37s 10.200.166.130 10.0.0.24 <none> <none>
test2 1/1 Running 0 60s 10.200.166.131 10.0.0.24 <none> <none>
test3 1/1 Running 0 45s 10.200.104.1 10.0.0.25 <none> <none>
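test2 and test3 landed on different nodes (10.0.0.24 and 10.0.0.25), so pinging test3's pod IP from test2 also exercises the cross-node calico path:
kubectl exec -it test2 -- ping -c 2 10.200.104.1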
Test external connectivity with ping:
root@master1:/etc/kubeasz# kubectl exec -it test1 -- /bin/sh
sh-4.4# ping 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=113 time=11.1 ms
64 bytes from 223.5.5.5: icmp_seq=2 ttl=113 time=10.6 ms
^C
--- 223.5.5.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 10.558/10.830/11.103/0.291 ms
sh-4.4# exit
exit
root@master1:/etc/kubeasz# kubectl exec -it test3 -- /bin/sh
sh-4.4# ping 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=113 time=11.5 ms
64 bytes from 223.5.5.5: icmp_seq=2 ttl=113 time=10.2 ms
^C
--- 223.5.5.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 10.163/10.830/11.498/0.675 ms