1. How it works
master node (Control Plane): the master node controls the entire cluster.
Core components on the master:
- Controller Manager: the controller manager
- etcd: key-value database (like redis) [the cluster's ledger / notebook]
- scheduler: the scheduler
- api-server: the API gateway (all control goes through the api-server)
node (worker node) components:
- kubelet (the foreman): must be installed on every node
- kube-proxy: the proxy; proxies the network
Deploying an application?
A developer uses the CLI to tell the master: we want to deploy a tomcat application.
All developer calls go first to the master node's gateway, the api-server. It is the master's single entry point (the C layer in the MVC pattern).
Incoming requests go to the master's api-server, which hands them to the controller-manager for control.
The controller-manager handles the application deployment.
The controller-manager generates a deployment record, e.g. tomcat --image:tomcat6 --port 8080; it does not actually deploy anything.
The deployment record is stored in etcd.
The scheduler picks up the application to deploy from the etcd database and starts scheduling: it works out which node fits best.
The scheduler writes the computed scheduling decision back into etcd.
The kubelet on every node stays in constant contact with the master (it keeps requesting the latest data from the api-server), so every node's kubelet learns about the decision from the master.
Suppose node2's kubelet ends up receiving the command to deploy.
That kubelet runs the application on its own machine, keeps reporting the application's status back to the master, and an IP gets assigned.
Nodes and the master communicate only through the master's api-server.
The kube-proxy on each machine knows the whole cluster's network. Whenever the node calls out or is called by others, the node's kube-proxy network proxy computes the route automatically and forwards the traffic.
Worker node
Pod:
docker run starts a container; the container is Docker's basic unit, and one application is one container.
kubelet run starts an application called a Pod; the Pod is k8s's basic unit.
A Pod is a further encapsulation of containers.
A single container often cannot represent a complete application, e.g. a blog (php + mysql working together).
So a Pod may contain multiple containers; one Pod represents one basic application.
Analogy: an iPod (watch movies, listen to music, play games) [one basic product, atomic];
a Pod (music container, movie container) [likewise one basic, atomic product].
Kubelet: the foreman. It talks to the master's api-server and starts/stops applications on its own machine; on the master machine it acts as the master's assistant. On every machine, the one actually doing the work is the kubelet.
Kube-proxy: the network proxy on each machine; forwards traffic in and out of the node (see above)
-
Cluster interaction flow
Want k8s to deploy a tomcat?
0. On boot, every node's kubelet, plus the master's scheduler and controller-manager, keep watching the master's api-server for event changes (an endless for loop).
1. The developer uses the command-line tool kubectl: kubectl create deploy tomcat --image=tomcat8 (tell the master to deploy a tomcat application from the tomcat8 image).
2. kubectl sends the request to the api-server, and the api-server saves this creation record to etcd.
3. etcd reports an event to the api-server: someone just stored a record in me (deploy Tomcat [deploy]).
4. The controller-manager, watching the api-server, sees the (deploy Tomcat [deploy]) event.
5. The controller-manager handles the (deploy Tomcat [deploy]) event and generates the Pod deployment info [pod info].
6. The controller-manager hands the [pod info] to the api-server, which saves it to etcd.
7. etcd reports the [pod info] event to the api-server.
8. The scheduler, watching specifically for [pod info], picks it up, computes, and decides which node should run this Pod [scheduled pod info (node: node-02)].
9. The scheduler hands [scheduled pod info (node: node-02)] to the api-server, which saves it to etcd.
10. etcd reports the [scheduled pod info (node: node-02)] event to the api-server.
11. The kubelets watch for [scheduled pod info] events; every node's kubelet receives the [scheduled pod info (node: node-02)] event from the api-server.
12. Each node's kubelet checks whether the event is its business; node-02's kubelet finds that it is.
13. node-02's kubelet starts this pod and reports everything it started back to the master.
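The watch pattern in step 0 is the same mechanism kubectl exposes to clients; a minimal illustration (assumes a working kubectl context on an existing cluster):
kubectl get deployments --watch
# --watch keeps the connection open: the api-server pushes a new line each
# time the resource changes, instead of the client re-polling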
-
k8s cluster installation
Prepare 3 machines
Instance | Public IP | Private IP | Role |
---|---|---|---|
i-wz9gesw0qtscfefs2xyl | 47.106.12.54 | 172.27.228.2 | master |
i-wz9gesw0qtscfefs2xyn | 120.79.96.89 | 172.27.209.121 | worker1 |
i-wz9gesw0qtscfefs2xym | 120.24.212.245 | 172.27.209.122 | worker2 |
Installation methods
- Binary install (recommended for production)
- kubeadm bootstrap (officially recommended)
Rough flow: prepare 3 servers that can reach each other on the private network,
install the Docker container environment,
install kubernetes: all 3 machines install the core components (kubeadm (the cluster-bootstrap tool), kubelet) plus kubectl (the command line developers use)
Pre-install environment
Disable the firewall; on a cloud server, open the required ports in the security group instead
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld
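For reference, the ports the linked page requires to be open (v1.21-era kubeadm docs):
# control plane: 6443 (kube-apiserver), 2379-2380 (etcd), 10250 (kubelet),
#                10251 (kube-scheduler), 10252 (kube-controller-manager)
# workers:       10250 (kubelet), 30000-32767 (NodePort services)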
Set the hostname
ip | set hostname |
---|---|
47.106.12.54 | hostnamectl set-hostname k8s-01 |
120.79.96.89 | hostnamectl set-hostname k8s-02 |
120.24.212.245 | hostnamectl set-hostname k8s-03 |
Check the result
[root@iZwz9gesw0qtscfefs2xynZ ~]# hostnamectl status
Static hostname: k8s-02
Icon name: computer-vm
Chassis: vm
Machine ID: 20200914151306980406746494236010
Boot ID: 6b72c6ee16094f48b50f681a7f0110b5
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1127.19.1.el7.x86_64
Architecture: x86-64
Add hostname resolution
echo "127.0.0.1 $(hostname)" >> /etc/hosts
Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
Allow iptables to see bridged traffic
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%85%81%E8%AE%B8-iptables-%E6%A3%80%E6%9F%A5%E6%A1%A5%E6%8E%A5%E6%B5%81%E9%87%8F
Enable br_netfilter
sudo modprobe br_netfilter
Confirm it loaded
lsmod | grep br_netfilter
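Loading the module alone is not enough; the kubeadm page linked above also sets sysctl flags so bridged traffic is actually passed to iptables. A minimal sketch following that guidance:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system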
Install docker
sudo yum remove docker*
sudo yum install -y yum-utils
Configure the docker yum repo
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker 19.03.9 (two equivalent ways to pin docker-ce 19.03.9)
yum install -y docker-ce-3:19.03.9-3.el7.x86_64 docker-ce-cli-3:19.03.9-3.el7.x86_64 containerd.io
yum install -y docker-ce-19.03.9-3 docker-ce-cli-19.03.9 containerd.io
Start the service
systemctl start docker
systemctl enable docker
Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
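Optional: the worker-join output later warns that Docker is using the cgroupfs driver while systemd is recommended. If you want to fix that here (an addition, not something these notes originally did), extend the same daemon.json:
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker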
Install k8s
Configure the k8s yum repo
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
1. Remove old versions
yum remove -y kubelet kubeadm kubectl
2. List the installable versions
yum list kubelet --showduplicates | sort -r
3. Install pinned versions of kubelet, kubeadm, and kubectl
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
4. Enable kubelet on boot
systemctl enable kubelet && systemctl start kubelet
5. Check kubelet status (it keeps crash-restarting until kubeadm init/join hands it a config; at this stage that is expected)
[root@k8s-01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since 五 2023-10-06 10:17:05 CST; 8s ago
Docs: https://kubernetes.io/docs/
Process: 3854 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 3854 (code=exited, status=1/FAILURE)
10月 06 10:17:05 k8s-01 systemd[1]: Unit kubelet.service entered failed state.
10月 06 10:17:05 k8s-01 systemd[1]: kubelet.service failed.
Initialize the master node (run on the master)
1. List the required images
[root@k8s-01 ~]# kubeadm config images list
W1006 10:22:19.236666 4076 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1006 10:22:19.236766 4076 version.go:103] falling back to the local client version: v1.21.0
k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
2. Pull the images through a mirror
vi images.sh
#!/bin/bash
images=(
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
3. Run the script
chmod +x images.sh && ./images.sh
4. Check the downloaded images
[root@k8s-01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver v1.21.0 4d217480042e 2 years ago 126MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy v1.21.0 38ddd85fe90e 2 years ago 122MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-controller-manager v1.21.0 09708983cc37 2 years ago 120MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-scheduler v1.21.0 62ad3129eca8 2 years ago 50.6MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause 3.4.1 0f8457a4c2ec 2 years ago 683kB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns v1.8.0 296a6d5035e2 2 years ago 42.5MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/etcd 3.4.13-0 0369cf4303ff 3 years ago 253MB
Note: the coredns image for k8s 1.21.0 is special; with the Aliyun mirror it needs extra handling, a re-tag to the coredns/coredns path
docker tag registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns/coredns:v1.8.0
######## kubeadm init: one master ########################
######## kubeadm join: the other workers #################
kubeadm init \
  --apiserver-advertise-address=172.27.228.2 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.21.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
Note on pod-cidr and service-cidr:
CIDR = Classless Inter-Domain Routing; it specifies a reachable network range.
The pod subnet, the service (load-balancing) subnet, and the host IP subnet must not overlap.
Output on success
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.27.228.2:6443 --token aw37ip.t9nsblzyxe49tsco \
--discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1
Follow the prompts above
1. After init completes, copy the kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
2. Or export the environment variable (as root)
export KUBECONFIG=/etc/kubernetes/admin.conf
3. Deploy a pod network
kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
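Calico takes a few minutes to come up; a simple way to watch progress (plain kubectl, -w streams updates):
kubectl get pods -n kube-system -w
# press Ctrl-C once calico-node and calico-kube-controllers show 1/1 Running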
4. List every pod deployed in the cluster
[root@k8s-01 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-594649bd75-64s9q 0/1 CrashLoopBackOff 1 3m8s
kube-system calico-node-bz9f8 0/1 Init:2/3 0 3m20s
kube-system coredns-b98666c6d-fz7cr 1/1 Running 0 34h
kube-system coredns-b98666c6d-g64zs 1/1 Running 0 34h
kube-system etcd-k8s-01 1/1 Running 0 34h
kube-system kube-apiserver-k8s-01 1/1 Running 0 34h
kube-system kube-controller-manager-k8s-01 1/1 Running 0 34h
kube-system kube-proxy-tjsrb 1/1 Running 0 34h
kube-system kube-scheduler-k8s-01 1/1 Running 0 34h
[root@k8s-01 ~]#
5. Check the status of every machine in the cluster
[root@k8s-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane,master 34h v1.21.0
Initialize the worker nodes
1. On the master node, run:
kubeadm token create --print-join-command
[root@k8s-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.27.228.2:6443 --token r1hj55.nllrkk4irqwkgpl2 --discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1
2. Run the printed command on each worker node
[root@k8s-02 ~]# kubeadm join 172.27.228.2:6443 --token r1hj55.nllrkk4irqwkgpl2 --discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
3. On the master node, check the nodes (verify the cluster came up)
[root@k8s-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane,master 34h v1.21.0
k8s-02 Ready <none> 7m29s v1.21.0
k8s-03 Ready <none> 5m16s v1.21.0
4. Label the nodes
In k8s everything is an object: node = machine, pod = application container
[root@k8s-01 ~]# kubectl label node k8s-03 node-role.kubernetes.io/worker3='worker-03'
node/k8s-03 labeled
[root@k8s-01 ~]# kubectl label node k8s-02 node-role.kubernetes.io/worker2='worker-02'
node/k8s-02 labeled
[root@k8s-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane,master 34h v1.21.0
k8s-02 Ready worker2 23m v1.21.0
k8s-03 Ready worker3 21m v1.21.0
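For reference, a trailing dash on the key removes a label again (standard kubectl label syntax, using the key created above):
kubectl label node k8s-02 node-role.kubernetes.io/worker2-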
5. Set ipvs mode
Machines rejoin the k8s cluster automatically after a reboot; a restarted master automatically resumes its place as the control center. kube-proxy defaults to iptables mode, whose performance degrades as the cluster grows (kube-proxy keeps syncing the iptables rules across the whole cluster)
List all resources in the cluster
[root@k8s-01 ~]# kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-594649bd75-64s9q 1/1 Running 5 75m
kube-system pod/calico-node-59gnt 1/1 Running 0 53m
kube-system pod/calico-node-89b8t 1/1 Running 0 55m
kube-system pod/calico-node-bz9f8 1/1 Running 0 75m
kube-system pod/coredns-b98666c6d-fz7cr 1/1 Running 0 35h
kube-system pod/coredns-b98666c6d-g64zs 1/1 Running 0 35h
kube-system pod/etcd-k8s-01 1/1 Running 0 35h
kube-system pod/kube-apiserver-k8s-01 1/1 Running 0 35h
kube-system pod/kube-controller-manager-k8s-01 1/1 Running 0 35h
kube-system pod/kube-proxy-74clv 1/1 Running 0 53m
kube-system pod/kube-proxy-rfth6 1/1 Running 0 55m
kube-system pod/kube-proxy-tjsrb 1/1 Running 0 35h
kube-system pod/kube-scheduler-k8s-01 1/1 Running 0 35h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 35h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 3 3 3 3 3 kubernetes.io/os=linux 34h
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 35h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 34h
kube-system deployment.apps/coredns 2/2 2 2 35h
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-594649bd75 1 1 1 75m
kube-system replicaset.apps/calico-kube-controllers-5d4b78db86 0 0 0 34h
kube-system replicaset.apps/coredns-b98666c6d 2 2 2 35h
Edit the kube-proxy config file and change mode to ipvs. The default is iptables, which becomes slow once the cluster gets large
kubectl edit cm kube-proxy -n kube-system
and set mode: "ipvs"
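Inside the editor, the relevant fragment of the embedded config.conf looks like this (a sketch; the field is empty by default, which means iptables):
# before:  mode: ""
# after:   mode: "ipvs"
kubectl get cm kube-proxy -n kube-system -o yaml | grep 'mode:'   # verify the change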
Restart kube-proxy (kill the old pods; replacements pick up the new config automatically)
[root@k8s-01 ~]# kubectl delete pod kude-proxy-74clv -n kube-system
Error from server (NotFound): pods "kude-proxy-74clv" not found
[root@k8s-01 ~]# kubectl delete pod kube-proxy-74clv -n kube-system
pod "kube-proxy-74clv" deleted
[root@k8s-01 ~]# kubectl delete pod kube-proxy-rfth6 -n kube-system
pod "kube-proxy-rfth6" deleted
[root@k8s-01 ~]# kubectl delete pod kube-proxy-tjsrb -n kube-system
pod "kube-proxy-tjsrb" deleted
[root@k8s-01 ~]# kubectl get pods -A | grep kube-proxy
kube-system kube-proxy-gcxvl 1/1 Running 0 3m22s
kube-system kube-proxy-gqkcg 1/1 Running 0 2m49s
kube-system kube-proxy-jzj9p 1/1 Running 0 3m4s
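Deleting the pods one by one can be replaced with a single label selector; k8s-app=kube-proxy is the label the kube-proxy DaemonSet puts on its pods:
kubectl delete pod -n kube-system -l k8s-app=kube-proxy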
List all resource types in k8s
kubectl api-resources --namespaced=true
[root@k8s-01 ~]# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
Printing a detailed description
[root@k8s-01 ~]# kubectl describe pod my-nginx
Name: my-nginx-6b74b79f57-grk69
Namespace: default
Priority: 0
Node: k8s-02/172.27.209.121
Start Time: Sat, 07 Oct 2023 22:45:36 +0800
Labels: app=my-nginx
pod-template-hash=6b74b79f57
Annotations: cni.projectcalico.org/containerID: ab0f6852275d9cdf43745b7b6af6bb846714b7ccdb874febe7544a8091bdd171
cni.projectcalico.org/podIP: 192.168.179.1/32
cni.projectcalico.org/podIPs: 192.168.179.1/32
Status: Running
IP: 192.168.179.1
IPs:
IP: 192.168.179.1
Controlled By: ReplicaSet/my-nginx-6b74b79f57
Containers:
nginx:
Container ID: docker://54cfa0f83b91f298427a8e4371ebdfcd7f9580bad4f7e4b65e4c36c1361db276
Image: nginx
Image ID: docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 07 Oct 2023 22:45:46 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9zms (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-w9zms:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Create (deploy) a k8s application
[root@k8s-01 ~]# kubectl create deploy my-nginx --image=nginx
deployment.apps/my-nginx created
[root@k8s-01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-6b74b79f57-grk69 1/1 Running 0 16s 192.168.179.1 k8s-02 <none> <none>
[root@k8s-01 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-6b74b79f57-grk69 1/1 Running 0 2m16s 192.168.179.1 k8s-02 <none> <none>
[root@k8s-01 ~]# kubectl exec -it my-nginx-6b74b79f57-grk69 -- /bin/bash
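Two quick checks on the running pod, using only the objects created above (pod name taken from the transcript):
kubectl logs my-nginx-6b74b79f57-grk69                 # nginx access/error log
kubectl exec my-nginx-6b74b79f57-grk69 -- nginx -v     # version of the nginx binary inside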
k8s basics
docker is the runtime environment on every worker node
kubelet controls starting and stopping all containers, keeps the node healthy, and handles the node's interaction with the master
Key components on the master node:
1. kubelet (foreman): required on every node; manages the lifecycle of all pods on the node and talks to the api-server
2. kube-api-server: receives all requests. Any change to the cluster, from the command line or UI, must go through the api-server. The api-server is the cluster's single entry point, internal and external (not counting the ports we later expose for deployed applications)
3. kube-proxy: responsible for the node's network traffic
4. cri: the container runtime environment
Key components on a worker node:
1. kubelet (foreman): required on every node; manages the lifecycle of all pods on the node and talks to the api-server
2. kube-proxy: responsible for the node's network traffic
3. cri: the container runtime environment
Application deployment
1. kubectl create deploy xxxxxx: the command line sends the api-server a request to deploy xxx
2. The api-server saves this request to etcd
kubectl create creates the various objects in a k8s cluster for us
kubectl create --help
kubectl create deployment <name-for-this-deployment> --image=<application-image>
In the end some machine runs a Pod, and that Pod is essentially a container, e.g. k8s_nginx_my-nginx-6b74b79f57-snlr4_default_dbeac79e-1ce9-42c9-bc59-c8ca0412674b_0, which decomposes as:
k8s_ prefix, container name (nginx), pod name (my-nginx-6b74b79f57-snlr4), namespace (default), pod UID (dbeac79e-1ce9-42c9-bc59-c8ca0412674b), restart attempt (0)
Create a deployment with a command
kubectl create deployment my-nginx --image=nginx -- date
Create a deployment named my-nginx that runs the nginx image with 3 replicas
kubectl create deployment my-nginx --image=nginx --replicas=3
Create a deployment named my-nginx that runs the nginx image and expose port 80.
kubectl create deployment my-nginx --image=nginx --port=80
Deployment
1. In k8s, publishing a Deployment creates instances (docker containers) of an application (docker image); each instance is wrapped in a Pod, the smallest manageable unit in k8s
2. Once a Deployment is published to the cluster, it tells k8s how to create and update the application's instances, and the master node schedules those instances onto concrete nodes in the cluster
3. After the instances are created, the Kubernetes Deployment Controller keeps watching them. If a worker node running an instance shuts down or is deleted, the controller recreates the instance on another worker node with the best available resources. This provides a self-healing mechanism for machine failure and maintenance
4. In the era before container orchestration, install scripts were commonly used to start applications, but they could not recover an application from machine failure. By creating application instances and keeping the desired count of them running across the cluster's nodes, a Kubernetes Deployment manages applications in a fundamentally different way
5. A Deployment lives on the master node; when it is published, the master picks suitable worker nodes, creates the Containers there, and wraps each Container in a Pod
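Point 3 (self-healing) is easy to demonstrate: delete a pod by hand and the Deployment Controller replaces it at once (pod name from this cluster; any pod of the Deployment works):
kubectl delete pod my-nginx-6b74b79f57-grk69
kubectl get pods   # a replacement pod with a new suffix is already starting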
k8s scaling
[root@k8s-01 ~]# kubectl get deploy,pod
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 36h
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-grk69 1/1 Running 0 36h
[root@k8s-01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-6b74b79f57-grk69 1/1 Running 0 36h 192.168.179.1 k8s-02 <none> <none>
[root@k8s-01 ~]# kubectl scale --replicas=3 deploy my-nginx
deployment.apps/my-nginx scaled
[root@k8s-01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-6b74b79f57-6lllw 1/1 Running 0 67s 192.168.179.2 k8s-02 <none> <none>
my-nginx-6b74b79f57-grk69 1/1 Running 0 36h 192.168.179.1 k8s-02 <none> <none>
my-nginx-6b74b79f57-w2gtf 1/1 Running 0 67s 192.168.165.193 k8s-03 <none> <none>
[root@k8s-01 ~]#
[root@k8s-01 ~]# watch -n 1 kubectl get deploy,pod
Every 1.0s: kubectl get deploy,pod Mon Oct 9 11:48:52 2023
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 3/3 3 3 37h
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-6lllw 1/1 Running 0 10m
pod/my-nginx-6b74b79f57-grk69 1/1 Running 0 37h
pod/my-nginx-6b74b79f57-w2gtf 1/1 Running 0 10m
# scale down
[root@k8s-01 ~]# kubectl scale --replicas=1 deploy my-nginx
[root@k8s-01 ~]# watch -n 1 kubectl get deploy,pod
Every 1.0s: kubectl get deploy,pod Mon Oct 9 11:50:53 2023
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 37h
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-w2gtf 1/1 Running 0 12m
service and label
A Service routes traffic across a set of Pods. Services are the abstraction that lets Pods die and be replicated in kubernetes without impacting the application; discovery and routing among dependent Pods is handled by the Kubernetes Service. A Service matches its set of Pods using labels and selectors, the grouping primitive that enables logical operations on Kubernetes objects. Labels are key/value pairs attached to objects and can be used in many ways:
- designate objects for development, test, and production
- embed version tags
- classify objects with labels
[root@k8s-01 ~]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-nginx-6b74b79f57-w2gtf 1/1 Running 0 64m app=my-nginx,pod-template-hash=6b74b79f57
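The labels shown above can be used directly as selectors (-l filters objects by label):
kubectl get pods -l app=my-nginx
kubectl get pods -l pod-template-hash=6b74b79f57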
kubectl expose
1. type=ClusterIP
kubectl expose deploy my-nginx --port=8081 --target-port=80 --type=ClusterIP
[root@k8s-01 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-5lv6s 1/1 Running 0 11m
pod/my-nginx-6b74b79f57-p8rld 1/1 Running 0 11m
pod/my-nginx-6b74b79f57-w2gtf 1/1 Running 0 82m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d1h
service/my-nginx ClusterIP 10.96.239.231 <none> 8081/TCP 22s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 3/3 3 3 38h
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-6b74b79f57 3 3 3 38h
[root@k8s-01 ~]# curl 10.96.239.231:8081
[root@k8s-01 ~]# kubectl delete service/my-nginx
service "my-nginx" deleted
2.type=NodePort
[root@k8s-01 ~]# kubectl expose deploy my-nginx --port=8081 --target-port=80 --type=NodePort
service/my-nginx exposed
[root@k8s-01 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-5lv6s 1/1 Running 0 29m
pod/my-nginx-6b74b79f57-p8rld 1/1 Running 0 29m
pod/my-nginx-6b74b79f57-w2gtf 1/1 Running 0 100m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d2h
service/my-nginx NodePort 10.96.97.250 <none> 8081:31819/TCP 42s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 3/3 3 3 38h
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-6b74b79f57 3 3 3 38h
[root@k8s-01 ~]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 7201/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 9838/kube-proxy
tcp 0 0 0.0.0.0:31819 0.0.0.0:* LISTEN 9838/kube-proxy
[root@k8s-01 ~]# netstat -nlpt | grep 31819
tcp 0 0 0.0.0.0:31819 0.0.0.0:* LISTEN 9838/kube-proxy
[root@k8s-02 ~]# netstat -nlpt | grep 31819
tcp 0 0 0.0.0.0:31819 0.0.0.0:* LISTEN 23576/kube-proxy
[root@k8s-03 ~]# netstat -nlpt | grep 31819
tcp 0 0 0.0.0.0:31819 0.0.0.0:* LISTEN 12999/kube-proxy
[root@k8s-01 ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
my-nginx 3/3 3 3 38h
[root@k8s-01 ~]#
Access: public IP + port
http://47.106.12.54:31819/
Scaling
Goal
Scale an application with kubectl.
Scaling a Deployment: we created a Deployment and then exposed its Pods through a Service. That Deployment created only one Pod to run our application; when traffic grows, we need to scale the application to meet the demand.
Scale up
Pods added by scaling automatically join the Service (load-balancing network) that already existed in front of them
kubectl scale --replicas=3 deployment tomcat6
Keep watching the result
watch kubectl get pods -o wide
Rolling updates
Goal: perform a rolling update with kubectl.
A rolling update lets a Deployment be updated with zero downtime by incrementally replacing Pod instances with new ones.
Upgrade the application: tomcat:alpine, tomcat:jre8-alpine
kubectl set image deployment/my-nginx2 nginx=nginx:1.9.1
Combined with Jenkins this enables continuous integration and canary releases
kubectl set image deployment.apps/tomcat6 tomcat=tomcat:jre8-alpine
You can add --record to record the change
Rolling back
Check the history
kubectl rollout history deployment.apps/tomcat6
kubectl rollout history deploy tomcat6
Roll back to a given revision
kubectl rollout undo deployment.apps/tomcat6 --to-revision=1
kubectl rollout undo deploy tomcat6 --to-revision=1
Upgrading the image
[root@k8s-01 ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
my-nginx 3/3 3 3 2d12h
[root@k8s-01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-nginx-6b74b79f57-5lv6s 1/1 Running 0 22h
my-nginx-6b74b79f57-p8rld 1/1 Running 0 22h
my-nginx-6b74b79f57-w2gtf 1/1 Running 0 24h
[root@k8s-01 ~]# kubectl get pod my-nginx-6b74b79f57-w2gtf -o yaml | grep container
cni.projectcalico.org/containerID: 00ec829ae6d6db562b71761b2035e9b0fc47128cb22e7237622f01686fe8bef5
containers:
containerStatuses:
- containerID: docker://428d2e4fb3c615f46b0d15fd323de19d70a7b24507e131b3e6f759cfa9c8f116
[root@k8s-01 ~]# kubectl get pod my-nginx-6b74b79f57-w2gtf -o yaml | grep name
name: my-nginx-6b74b79f57-w2gtf
namespace: default
name: my-nginx-6b74b79f57
name: nginx
name: kube-api-access-lz4mb
- name: kube-api-access-lz4mb
name: kube-root-ca.crt
fieldPath: metadata.namespace
path: namespace
name: nginx
[root@k8s-01 ~]# kubectl get pod my-nginx-6b74b79f57-w2gtf -o yaml | grep image
- image: nginx
imagePullPolicy: Always
image: nginx:latest
imageID: docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
[root@k8s-01 ~]# kubectl set image deploy my-nginx nginx=nginx:1.9.2 --record
deployment.apps/my-nginx image updated
watch kubectl get pod
Rolling back to a previous revision
1. Check the history
[root@k8s-01 ~]# kubectl rollout history deploy my-nginx
deployment.apps/my-nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 kubectl set image deploy my-nginx nginx=nginx:1.9.2 --record=true
2. Roll back to the given revision
[root@k8s-01 ~]# kubectl rollout undo deploy my-nginx --to-revision=1
deployment.apps/my-nginx rolled back
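To wait for the rollback (or any rollout) to finish, kubectl has a built-in status command:
kubectl rollout status deploy my-nginx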
k8s object description files
Declarative API: describe objects in files (Pod -> yaml, Deploy -> yaml, Service -> yaml), then kubectl apply -f xxx.yaml
Deploy a Deployment
apiVersion: apps/v1 #tied to the cluster version; kubectl api-versions lists what the cluster supports
kind: Deployment #the type of this config; here we use a Deployment
metadata: #metadata, i.e. basic attributes and info of the Deployment
  name: nginx-deployment #name of the Deployment
  labels: #labels flexibly locate one or more resources; keys and values are free-form and multiple pairs are allowed
    app: nginx #give this Deployment a label with key app and value nginx
spec: #the description of this Deployment: how you expect it to behave in k8s
  replicas: 1 #create one application instance from this Deployment
  selector: #label selector; works together with the labels above
    matchLabels: #select resources carrying the label app:nginx
      app: nginx
  template: #template for the Pods to select or create
    metadata: #Pod metadata
      labels: #Pod labels; the selector above selects Pods carrying app:nginx
        app: nginx
    spec: #what you expect the Pod to do (i.e. what to deploy inside the pod)
      containers: #the containers to create; the same kind of container as in docker
      - name: nginx #name of the container
        image: nginx:1.7.9 #create the container from image nginx:1.7.9; the container serves port 80 by default
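The Service created earlier with kubectl expose can be declared the same way. A minimal sketch (my assumptions: NodePort type with an auto-assigned node port, and the name nginx-service; the selector matches the app: nginx label from the Deployment above):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   #name is a free choice
spec:
  type: NodePort        #reachable from outside via any node IP + assigned port
  selector:
    app: nginx          #routes to Pods carrying this label
  ports:
  - port: 8081          #cluster-internal service port
    targetPort: 80      #container port inside the Pod
EOF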
Delete the old deployment before deploying via yaml
[root@k8s-01 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-6b74b79f57-2qftn 1/1 Running 0 7h25m
pod/my-nginx-6b74b79f57-7hs8g 1/1 Running 0 7h25m
pod/my-nginx-6b74b79f57-9q5q5 1/1 Running 0 7h25m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d8h
service/my-nginx NodePort 10.96.97.250 <none> 8081:31819/TCP 30h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 3/3 3 3 2d20h
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-697c5bb596 0 0 0 7h47m
replicaset.apps/my-nginx-6b74b79f57 3 3 3 2d20h
replicaset.apps/my-nginx-f56756f49 0 0 0 7h39m
[root@k8s-01 ~]# kubectl delete deployment.apps/my-nginx
deployment.apps "my-nginx" deleted
[root@k8s-01 ~]# kubectl delete service/my-nginx
service "my-nginx" deleted
[root@k8s-01 ~]# kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d8h
[root@k8s-01 ~]# kubectl apply -f deploy.yaml
deployment.apps/nginx-deployment created
[root@k8s-01 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-746fbb99df-z8tnb 1/1 Running 0 50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d8h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 1/1 1 1 51s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-746fbb99df 1 1 1 51s
Change replicas to 3 in deploy.yaml and re-apply
[root@k8s-01 ~]# kubectl apply -f deploy.yaml
deployment.apps/nginx-deployment configured
[root@k8s-01 ~]# kubectl get deploy,pod
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2/3 3 2 12m
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-746fbb99df-bcbkb 1/1 Running 0 24s
pod/nginx-deployment-746fbb99df-r9p6t 1/1 Running 0 24s
pod/nginx-deployment-746fbb99df-z8tnb 1/1 Running 0 12m
Delete what the yaml created
[root@k8s-01 ~]# kubectl delete -f deploy.yaml
deployment.apps "nginx-deployment" deleted
[root@k8s-01 ~]# kubectl get deploy,pod
No resources found in default namespace.
Deploying the k8s Dashboard
1. Deploy the Dashboard UI
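A common way to deploy the Dashboard UI on this cluster generation is applying the upstream recommended manifest (the version tag below is an assumption that matches the k8s 1.21 era):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml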