Kubernetes installation
[root@master ~]# curl -O ftp://172.100.0.11/2-iso/chinaskills_cloud_paas_v2.0.2.iso
[root@master ~]# mount -o loop chinaskills_cloud_paas_v2.0.2.iso /mnt/
[root@master ~]# cp -rvf /mnt/* /opt/
[root@master ~]# umount /mnt/
[root@master ~]# mv /opt/kubeeasy /usr/bin/
[root@master ~]# kubeeasy install dependencies --host 172.100.0.22,172.100.0.23 --user root --password 000000 --offline-file /opt/dependencies/base-rpms.tar.gz # install dependencies
[root@master ~]# kubeeasy check ssh --host 172.100.0.22,172.100.0.23 --user root --password 000000 # test connectivity
[root@master ~]# kubeeasy create ssh-keygen --master 172.100.0.22 --worker 172.100.0.23 --user root --password 000000 # configure passwordless SSH
[root@master ~]# kubeeasy install kubernetes --master 172.100.0.22 --worker 172.100.0.23 --user root --password 000000 --version 1.22.1 --offline-file /opt/kubernetes.tar.gz # install Kubernetes
[root@master ~]# kubeeasy add --virt kubevirt # install KubeVirt
[root@master ~]# kubeeasy add --istio istio # install Istio
[root@master ~]# kubeeasy add --registry harbor # install Harbor, docker-compose and Helm
# Add a node
[root@master ~]# kubeeasy install depend --host 172.100.0.25 --user root --password 000000 --offline-file /opt/dependencies/base-rpms.tar.gz
[root@master ~]# kubeeasy add --worker 172.100.0.25 --user root --password 000000 --offline-file /opt/kubernetes.tar.gz
Kubernetes operations
1. Use kubectl's built-in sorting to list all Pods in the cluster, sorted by the name field.
[root@k8s-master-node1 ~]# kubectl get pod -A --sort-by=.metadata.name
2. After the cluster is deployed, check the expiration time of all cluster certificates.
[root@k8s-master-node1 ~]# kubeadm certs check-expiration
3. After the cluster is deployed, view the status and labels of all nodes.
[root@k8s-master-node1 ~]# kubectl get nodes -o wide --show-labels
4. After the cluster is deployed, view all resources in the cluster.
[root@k8s-master-node1 ~]# kubectl get all -A
5. After the cluster is deployed, view the configuration information of the whole cluster.
[root@k8s-master-node1 ~]# kubectl cluster-info
6. After the cluster is deployed, create a token with unlimited lifetime for the cluster.
[root@k8s-master-node1 ~]# kubeadm token create --ttl=0
[root@k8s-master-node1 ~]# kubeadm token list
7. After the cluster is deployed, view all API resources supported by the current cluster.
[root@k8s-master-node1 ~]# kubectl api-resources --namespaced=true
8. After the cluster is deployed, view all resources in the kube-system namespace.
[root@k8s-master-node1 ~]# kubectl get all -n kube-system
9. Use kubectl's built-in sorting to list all Services in the cluster, sorted by the name field.
[root@k8s-master-node1 ~]# kubectl get svc -A --sort-by=.metadata.name
10. Allow scheduling onto the master node
[root@k8s-master-node1 ~]# kubectl describe nodes k8s-master-node1
[root@k8s-master-node1 ~]# kubectl taint node k8s-master-node1 node-role.kubernetes.io/master-
[root@k8s-master-node1 ~]# kubectl describe nodes k8s-master-node1
[root@k8s-master-node1 ~]# kubectl taint node k8s-master-node1 node-role.kubernetes.io/master:NoSchedule # re-add the taint
NoSchedule: Kubernetes will not schedule Pods onto a Node carrying this taint.
PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a Node carrying this taint.
NoExecute: Kubernetes will not schedule Pods onto the Node, and Pods already running on the Node are evicted.
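For reference, a Pod can still land on a tainted node if it declares a matching toleration; a minimal sketch of the toleration fields, placed under the Pod's spec (field names per the core v1 API):
tolerations:
- key: "node-role.kubernetes.io/master"   # taint key to tolerate
  operator: "Exists"
  effect: "NoSchedule"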
11. Dynamically scale Pods up and down
[root@k8s-master-node1 ~]# kubectl get deployment
[root@k8s-master-node1 ~]# kubectl scale deployment nginx --replicas=2
12. Rolling update and rollback
[root@k8s-master-node1 ~]# kubectl apply -f deployment.yaml --record
[root@k8s-master-node1 ~]# kubectl rollout history deployment nginx-app
[root@k8s-master-node1 ~]# kubectl set image deployment/nginx-app container=172.100.0.22/library/nginx:latest
[root@k8s-master-node1 ~]# kubectl rollout status deployment/nginx-app
[root@k8s-master-node1 ~]# kubectl rollout undo deployment/nginx-app
[root@k8s-master-node1 ~]# kubectl rollout undo deployment/nginx-app --to-revision=3
Rolling update by changing the YAML (kubectl rolling-update only works on ReplicationControllers and has been removed from recent kubectl versions):
[root@k8s-master-node1 ~]# kubectl rolling-update redis -f redis-rc.update.yaml
[root@k8s-master-node1 ~]# kubectl rolling-update redis --image=redis-2.0
13. Node isolation and recovery
Cordon (mark the node unschedulable)
[root@k8s-master-node1 ~]# kubectl cordon node
[root@k8s-master-node1 ~]# kubectl get nodes
Restore the node to a schedulable state
[root@k8s-master-node1 ~]# kubectl uncordon node
[root@k8s-master-node1 ~]# kubectl get nodes
Drain the node
[root@k8s-master-node1 ~]# kubectl get pods -o wide # the node still runs a Pod
[root@k8s-master-node1 ~]# kubectl drain node --ignore-daemonsets --delete-emptydir-data
14. Schedule a Pod to a specific Node
[root@k8s-master-node1 ~]# kubectl get nodes
[root@k8s-master-node1 ~]# kubectl label nodes node test=123
[root@k8s-master-node1 ~]# kubectl label nodes node test-
[root@k8s-master-node1 ~]# kubectl get node --show-labels
[root@k8s-master-node1 ~]# vim deploy-httpv1.yaml
# add nodeSelector under the Pod template's spec (full placement shown in the sketch after this item)
spec.nodeSelector:
  test: "123"
[root@k8s-master-node1 ~]# kubectl apply -f deploy-httpv1.yaml
[root@k8s-master-node1 ~]# kubectl get pods -o wide
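A minimal sketch of where nodeSelector sits inside deploy-httpv1.yaml (names and image are placeholders following the pattern used elsewhere in these notes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-v1
spec:
  selector:
    matchLabels:
      app: http-v1
  template:
    metadata:
      labels:
        app: http-v1
    spec:
      nodeSelector:        # Pods are only scheduled onto nodes carrying this label
        test: "123"
      containers:
      - name: http
        image: 172.100.0.22/library/nginx:latest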
15. By default Kubernetes allows at most 110 Pods per node; for business reasons, raise the default per-node Pod limit to 200.
On k8s-master-node1:
[root@k8s-master-node1 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following line:
Environment="KUBELET_NODE_MAX_PODS=--max-pods=200"
and append to the ExecStart command at the end of the file:
$KUBELET_NODE_MAX_PODS
[root@k8s-master-node1 ~]# systemctl daemon-reload
[root@k8s-master-node1 ~]# systemctl restart kubelet
[root@k8s-master-node1 ~]# kubectl describe nodes k8s-master-node1 k8s-worker-node1 | grep -w pods | grep 200 | wc -l
On k8s-worker-node1:
[root@k8s-worker-node1 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following line:
Environment="KUBELET_NODE_MAX_PODS=--max-pods=200"
and append to the ExecStart command at the end of the file:
$KUBELET_NODE_MAX_PODS
[root@k8s-worker-node1 ~]# systemctl daemon-reload
[root@k8s-worker-node1 ~]# systemctl restart kubelet
16. Kubernetes exposes services via NodePort with a default port range of 30000-32767; change the NodePort range to 20000-65535.
On k8s-master-node1:
[root@k8s-master-node1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=20000-65535
[root@k8s-master-node1 ~]# kubectl describe pod $apiserver_pods -n kube-system |grep service
17. Upgrade the system kernel:
[root@k8s-master-node1 ~]# yum localinstall -y kernel-lt-devel-5.4.193-1.el7.elrepo.x86_64.rpm kernel-lt-5.4.193-1.el7.elrepo.x86_64.rpm
[root@k8s-master-node1 ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg # list the kernel menu entries in order
[root@k8s-master-node1 ~]# vim /etc/default/grub # set the default kernel boot entry to
GRUB_DEFAULT=0
[root@k8s-master-node1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg # regenerate the GRUB configuration
[root@k8s-master-node1 ~]# reboot
[root@k8s-master-node1 ~]# uname -r
18. Switch the CNI network plugin:
[root@k8s-master-node1 ~]# cat /etc/cni/net.d/10-flannel.conflist # check the current network type
[root@k8s-master-node1 ~]# kubectl delete -f /tmp/kubernetes/manifests/kube-flannel.yaml
[root@k8s-master-node1 ~]# ip link delete cni0 # delete the cni0 interface on the master node
[root@k8s-master-node1 ~]# ip link delete flannel.1 # delete the flannel.1 interface on the master node
[root@k8s-worker-node1 ~]# ip link delete cni0 # delete the cni0 interface on the worker node
[root@k8s-worker-node1 ~]# ip link delete flannel.1 # delete the flannel.1 interface on the worker node
[root@k8s-master-node1 ~]# vim /opt/kubernetes/manifests/calico.yaml # change the image addresses
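After the image addresses have been changed, the Calico manifest still has to be applied and the new pods checked; a hedged follow-up (the namespace may differ depending on the manifest):
[root@k8s-master-node1 ~]# kubectl apply -f /opt/kubernetes/manifests/calico.yaml
[root@k8s-master-node1 ~]# kubectl get pods -n kube-system | grep calico # wait until Running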
19. Enable IPVS mode for kube-proxy
[root@k8s-master-node1 ~]# kubectl get cm -n kube-system # list the ConfigMaps stored in kube-system
[root@k8s-master-node1 ~]# kubectl edit cm -n kube-system kube-proxy # edit the kube-proxy ConfigMap
mode: "ipvs"
[root@k8s-master-node1 ~]# kubectl get pod -n kube-system | grep kube-proxy
[root@k8s-master-node1 ~]# kubectl delete pod -n kube-system kube-proxy-8ghd7 kube-proxy-bsjjs
Verify the change took effect
[root@k8s-master-node1 ~]# kubectl get pod -n kube-system | grep kube-proxy # e.g. kube-proxy-9sxnn
[root@k8s-master-node1 ~]# kubectl logs -n kube-system kube-proxy-9sxnn
I0121 07:56:56.645925 1 server_others.go:259] Using ipvs Proxier. # indicates success
kube-proxy now creates the LVS virtual servers automatically; the previous mode used iptables rules instead.
[root@k8s-master-node1 ~]# ipvsadm -Ln
Kubernetes
[root@k8s-master-node1 ~]# kubectl top nodes --use-protocol-buffers # view node resource usage
[root@k8s-master-node1 ~]# kubectl get pod <pod-name> -n <namespace> -o custom-columns=NAME:.metadata.name,"ANNOTATIONS":.metadata.annotations # show Pod info with custom columns
[root@k8s-master-node1 ~]# kubectl get pods <pod-name> -o=custom-columns-file=template.txt # custom-column output defined in a file
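custom-columns-file expects a file whose first line holds the column headers and whose second line holds the matching JSONPath expressions; a minimal example of what template.txt might contain:
NAME          RSRC
metadata.name metadata.resourceVersion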
Istio:
Dashboards:
Grafana http://master_IP:33000
Prometheus http://master_IP:30090
Jaeger http://master_IP:30686
Kiali http://master_IP:20001
[root@k8s-master-node1 ~]# istioctl profile list # list the Istio configuration profiles istioctl can access
[root@k8s-master-node1 ~]# istioctl profile dump demo # show the settings of a profile
[root@k8s-master-node1 ~]# istioctl profile diff default demo # diff two profiles
[root@k8s-master-node1 ~]# istioctl proxy-status # overview of the service mesh
[root@k8s-master-node1 ~]# istioctl proxy-config cluster <pod-name> [flags] # retrieve the cluster configuration of the Envoy instance in a given Pod
[root@k8s-master-node1 ~]# istioctl proxy-config bootstrap <pod-name> [flags] # retrieve the bootstrap configuration of the Envoy instance in a given Pod
[root@k8s-master-node1 ~]# istioctl proxy-config listener <pod-name> [flags] # retrieve the listener configuration of the Envoy instance in a given Pod
[root@k8s-master-node1 ~]# istioctl proxy-config route <pod-name> [flags] # retrieve the route configuration of the Envoy instance in a given Pod
[root@k8s-master-node1 ~]# istioctl proxy-config endpoints <pod-name> [flags] # retrieve the endpoint configuration of the Envoy instance in a given Pod
Harbor:
[root@k8s-master-node1 ~]# systemctl status harbor # check Harbor status
Helm:
[root@k8s-master-node1 ~]# helm version # show version information
[root@k8s-master-node1 ~]# helm list # list the currently installed Charts
[root@k8s-master-node1 ~]# helm search <chart-name> # search for Charts
[root@k8s-master-node1 ~]# helm status redis # show the status of a release
[root@k8s-master-node1 ~]# helm delete --purge <chart-name> # delete a release
[root@k8s-master-node1 ~]# helm create helm_charts # create a Chart
[root@k8s-master-node1 ~]# helm lint # check the Chart syntax
[root@k8s-master-node1 ~]# cd helm_charts && helm package ./ # package the Chart
[root@k8s-master-node1 ~]# helm template helm_charts-xxx.tgz # render the generated YAML
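To actually deploy the packaged chart an install step is needed; a hedged example (the release name is arbitrary and the syntax depends on the Helm version in use):
[root@k8s-master-node1 ~]# helm install myrelease ./helm_charts # Helm v3
[root@k8s-master-node1 ~]# helm install --name myrelease ./helm_charts # Helm v2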
container
pod
1. In the /root directory of the master node, write a YAML file pod-init.yaml to create a Pod with the following requirements:
(1) Pod name: pod-init;
(2) Image: busybox;
(3) Add an Init Container whose job is to create an empty file;
(4) The Pod's container checks whether the file exists and exits if it does not.
[root@k8s-master-node1 ~]# vim pod-init.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-init
spec:
containers:
- name: con
image: 172.100.0.22/library/busybox:latest
command: ["test", "-e", "/tmp/test"]
initContainers:
- name: init-con
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "touch /tmp/test"]
restartPolicy: Never
2. In the /root directory of the master node, write a YAML file pod-live.yaml to create a Pod with the following requirements:
(1) Pod name: liveness-exec;
(2) Image: busybox;
(3) Startup command: /bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy";
(4) Run the command "cat /tmp/healthy" inside the container as the liveness probe, once every 5 seconds;
(5) Start probing 5 seconds after the container starts.
[root@k8s-master-node1 ~]# vim pod-live-exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: liveness-exec
spec:
containers:
- name: con
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy"]
livenessProbe:
exec:
command: ["cat", "/tmp/healthy"]
initialDelaySeconds: 5
periodSeconds: 5
3. In the /root directory of the master node, write a YAML file liveness_httpget.yaml with the following requirements:
(1) Pod name: liveness-http;
(2) Namespace: default;
(3) Image: nginx; port: 80;
(4) On container start, run the command "echo Healthy > /usr/share/nginx/html/healthz";
(5) The httpGet probe requests the path /healthz; the address defaults to the Pod IP and the port uses the container port named http;
(6) Start probing 30 seconds after the container starts;
(7) Run the liveness probe every 3 seconds.
[root@k8s-master-node1 ~]# vim pod-live-get.yaml
apiVersion: v1
kind: Pod
metadata:
name: liveness-http
namespace: default
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
ports:
- name: http
containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Healthy > /usr/share/nginx/html/healthz"]
livenessProbe:
httpGet:
path: /healthz
port: http
initialDelaySeconds: 30
periodSeconds: 3
4. In the /root directory of the master node, write a YAML file to create a Pod with the following requirements:
(1) Pod name: pod-volume;
(2) Image: nginx;
(3) A Volume named cache-volume that mounts the container's /data directory onto the host's /data directory.
[root@k8s-master-node1 ~]# vim pod-volume.yaml # the hostPath used is on whichever node the Pod is scheduled to
apiVersion: v1
kind: Pod
metadata:
name: pod-volume
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
volumeMounts:
- name: cache-volume
mountPath: /data
volumes:
- name: cache-volume
hostPath:
path: /data
[root@master ~]# kubectl exec -it pod-volume -- /bin/bash
5. In the /root directory of the master node, write a YAML file pod.yaml to create a Pod with the following requirements:
(1) Pod name: nginx;
(2) Image: nginx:latest;
(3) The Pod must run in the Guaranteed QoS class, i.e. its requests equal its limits.
[root@k8s-master-node1 ~]# vim pod-qos.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-qos
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
resources:
limits:
cpu: 700m
memory: 200Mi
requests:
cpu: 700m
memory: 200Mi
6. In the /root directory of the master node, write a YAML file pod-host.yaml to create a Pod with the following requirements:
(1) Pod name: hostaliases-pod;
(2) Configure HostAliases for the Pod, adding extra entries to the hosts file: resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3.
[root@k8s-master-node1 ~]# vim pod-host.yaml
apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
containers:
- name: con
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "cat /etc/hosts"]
hostAliases:
- ip: '127.0.0.1'
hostnames:
- 'foo.local'
- 'bar.local'
- ip: '10.1.2.3'
hostnames:
- 'foo.remote'
- 'bar.remote'
restartPolicy: Never
7. In the /root directory of the master node, write a YAML file nginx.yaml to create a Pod with the following requirements:
(1) Pod name: nginx-pod;
(2) Image: nginx;
(3) Image pull policy: IfNotPresent;
(4) Enable process namespace sharing.
[root@k8s-master-node1 ~]# vim pod-NameSpaceShare.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
spec:
containers:
- name: nginx
image: 172.100.0.22/library/nginx:latest
imagePullPolicy: IfNotPresent
- name: shell
image: 172.100.0.22/library/busybox:latest
securityContext:
capabilities:
add:
- SYS_PTRACE
stdin: true # equivalent to docker run -i (interactive)
tty: true # equivalent to docker run -t (allocate a TTY)
shareProcessNamespace: true
[root@master ~]# kubectl attach -it nginx-pod -c shell
/ # ps ax
/ # kill -HUP 7
/ # ps ax
/ # head /proc/7/root/etc/nginx/nginx.conf
8. In the /root directory of the master node, write a YAML file pod-redis-nginx.yaml to create a Pod with the following requirements:
(1) Namespace: default;
(2) Pod name: pod-redis-nginx;
(3) The Pod contains two containers, redis and nginx, using the redis and nginx images respectively.
[root@k8s-master-node1 ~]# vim pod-redis-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-redis-nginx
namespace: default
spec:
containers:
- name: redis
image: 172.100.0.22/library/redis:latest
ports:
- containerPort: 6379
- name: nginx
image: 172.100.0.22/library/nginx:latest
ports:
- containerPort: 80
CronJob
1. In the /root directory of the master node, write a YAML file cronjob.yaml to create a CronJob with the following requirements:
(1) CronJob name: cronjob;
(2) Image: busybox;
(3) The CronJob's .spec must print the current time every minute.
[root@k8s-master-node1 ~]# vim cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "date"]
restartPolicy: OnFailure
ReplicaSet
1. In the /root directory of the master node, write a YAML file replicaset.yaml to create a ReplicaSet with the following requirements:
(1) ReplicaSet name: nginx;
(2) Namespace: default;
(3) Replicas: 3;
(4) Image: nginx.
[root@k8s-master-node1 ~]# vim replicaSet.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: nginx
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
name: app
labels:
app: myapp
spec:
containers:
- name: myapp
image: 172.100.0.22/library/nginx:latest
ports:
- containerPort: 80
DaemonSet
1. In the /root directory of the master node, write a YAML file daemonset.yaml to create a DaemonSet with the following requirements:
(1) DaemonSet name: nginx;
(2) Image: nginx:latest;
(3) Ensure it runs one Pod on every node of the cluster without overriding any existing taints.
[root@k8s-master-node1 ~]# vim daemonSet.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
name: pod
labels:
app: nginx
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
tolerations:
- operator: "Exists"
Deployment
1. Label the master node with "disktype=ssd" and "exam=chinaskill", then in the /root directory of the master node write a YAML file deployment.yaml to create a Deployment with the following requirements:
(1) Deployment name: nginx-deployment;
(2) Pods may only be scheduled onto nodes labelled "disktype=ssd";
(3) Nodes labelled "exam=chinaskill" are preferred during scheduling.
[root@k8s-master-node1 ~]# kubectl label nodes k8s-master-node1 disktype=ssd
[root@k8s-master-node1 ~]# kubectl label nodes k8s-master-node1 exam=chinaskill
[root@k8s-master-node1 ~]# kubectl label nodes k8s-master-node1 exam-
[root@k8s-master-node1 ~]# kubectl get nodes --show-labels
[root@k8s-master-node1 ~]# vim deployment-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: 172.100.0.22/library/nginx:latest
ports:
- containerPort: 80
nodeSelector:
disktype: ssd
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: exam
operator: In
values:
- chinaskill
2. In the /root directory of the master node, write a YAML file nginx-deployment.yaml to create a Deployment with the following requirements:
(1) Deployment name: nginx;
(2) Ensure its replicas run on every node without overriding the nodes' existing tolerations.
Then create the Deployment from this YAML file.
[root@k8s-master-node1 ~]# vim deployment-tolerations.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
name: myapp
labels:
app: nginx
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
ports:
- containerPort: 80
3. In the /root directory of the master node, write a YAML file deployment.yaml to create a Deployment with the following requirements:
(1) Deployment name: nginx-app;
(2) 3 replicas;
(3) Image: nginx:1.15.4.
Then create the Deployment from this YAML file, perform a rolling update of the image to version 1.16.0 while recording the update, and finally roll back to the previous 1.15.4 version.
[root@k8s-master-node1 ~]# vim deployment-rollout.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-app
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
name: pod
labels:
app: myapp
spec:
containers:
- name: container
image: 172.100.0.22/library/nginx:1.15.4
ports:
- containerPort: 80
[root@k8s-master-node1 ~]# kubectl apply -f deployment-rollout.yaml --record
[root@k8s-master-node1 ~]# kubectl rollout history deployment nginx-app
[root@k8s-master-node1 ~]# kubectl set image deployment/nginx-app container=172.100.0.22/library/nginx:1.16.0
[root@k8s-master-node1 ~]# kubectl set image deployment/nginx-app container=172.100.0.22/library/nginx:latest
[root@k8s-master-node1 ~]# kubectl rollout status deployment/nginx-app
[root@k8s-master-node1 ~]# kubectl rollout undo deployment/nginx-app
[root@k8s-master-node1 ~]# kubectl rollout undo deployment/nginx-app --to-revision=1
4. Create a Deployment on the master node with the following requirements:
(1) Deployment name: exam2022;
(2) Image: redis:latest;
(3) Replicas: 7;
(4) Label: app_enb_stage=dev.
[root@k8s-master-node1 ~]# vim deployment-redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: exam2022
labels:
app_enb_stage: dev
spec:
selector:
matchLabels:
app_enb_stage: dev
replicas: 7
template:
metadata:
labels:
app_enb_stage: dev
spec:
containers:
- name: redis
image: 172.100.0.22/library/redis:latest
ports:
- containerPort: 6379
5. In the /root directory of the master node, write a YAML file deployment-nginx.yaml to create a Deployment with the following requirements:
(1) Deployment name: nginx-deployment;
(2) Image: nginx;
(3) Replicas: 2;
(4) Network: hostNetwork;
(5) Container port: 80.
[root@k8s-master-node1 ~]# vim deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginx-pod
labels:
app: nginx
spec:
hostNetwork: true
containers:
- name: nginx-con
image: 172.100.0.22/library/nginx:latest
ports:
- containerPort: 80
Service
[root@k8s-master-node1 ~]# vim deployment-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-app
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
name: pod
labels:
app: nginx
spec:
containers:
- name: container
image: 172.100.0.22/library/nginx:latest
ports:
- name: http
containerPort: 80
1. In the /root directory of the master node, write a YAML file service-cluster.yaml to create a Service with the following requirements:
(1) Service name: exam-service;
(2) In-cluster access port: 80; targetPort: 81;
(3) Protocol: TCP;
(4) Service type: ClusterIP.
[root@k8s-master-node1 ~]# vim service-ClusterIP.yaml
apiVersion: v1
kind: Service
metadata:
name: service-clusterip
namespace: default
spec:
type: ClusterIP
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 81
# the Service only works once targetPort is set to 80, the port the nginx container actually listens on
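A hedged way to check the Service once targetPort matches the container port: the endpoints should list the nginx Pod IPs and the ClusterIP should answer on port 80 (service name as in the YAML above):
[root@k8s-master-node1 ~]# kubectl get endpoints service-clusterip
[root@k8s-master-node1 ~]# curl http://$(kubectl get svc service-clusterip -o jsonpath='{.spec.clusterIP}'):80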
2. In the /root directory of the master node, write a YAML file service-nodeport.yaml with the following requirements:
(1) Service name: nginx-service;
(2) Associate it with the Deployment named nginx;
(3) Expose its port 80 externally as NodePort 30080.
[root@k8s-master-node1 ~]# vim service-NodePort.yaml
apiVersion: v1
kind: Service
metadata:
name: service-nodeport
namespace: default
spec:
type: NodePort
selector:
app: nginx
ports:
- name: http
port: 80
nodePort: 30080
ConfigMap
1. In the /root directory of the master node, write a YAML file to create a Pod that uses a ConfigMap, with the following requirements:
(1) Pod name: exam;
(2) Image: busybox;
(3) Use a ConfigMap and set the variables "DB_HOST=localhost" and "DB_PORT=3306".
[root@k8s-master-node1 ~]# vim configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mycm
data:
host: 'localhost'
port: '3306'
---
apiVersion: v1
kind: Pod
metadata:
name: exam
spec:
containers:
- name: con
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "sleep 3600"]
env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: mycm
key: host
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: mycm
key: port
envFrom:
- configMapRef:
name: mycm
restartPolicy: Never
2. In the /root directory of the master node, write a YAML file to create a Pod that uses a ConfigMap, with the following requirements:
(1) Pod name: exam;
(2) Image: busybox;
(3) Consume the ConfigMap through a volume, with the variables "DB_HOST=localhost" and "DB_PORT=3306".
[root@k8s-master-node1 ~]# vim configmap-volume.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mycm
data:
host: 'localhost'
port: '3306'
---
apiVersion: v1
kind: Pod
metadata:
name: exam
spec:
containers:
- name: con
image: 172.100.0.22/library/busybox:latest
command: ["/bin/sh", "-c", "sleep 3600"]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: mycm
restartPolicy: Never
Secret
1. In the /root directory of the master node, write a YAML file secret.yaml to create a Secret and Pods, with the following requirements:
(1) Secret name: mysecret;
(2) It contains a password field (base64-encoded by hand);
(3) The first Pod, test1, references mysecret through env;
(4) The second Pod, test2, references mysecret through a volume.
[root@k8s-master-node1 ~]# echo -n admin | base64 # YWRtaW4=
[root@k8s-master-node1 ~]# echo -n password | base64 # cGFzc3dvcmQ=
[root@k8s-master-node1 ~]# echo -n YWRtaW4= | base64 -d # admin
[root@k8s-master-node1 ~]# echo -n cGFzc3dvcmQ= | base64 -d # password
[root@k8s-master-node1 ~]# vim secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
namespace: default
type: Opaque
data:
password: cGFzc3dvcmQ=
username: YWRtaW4=
stringData:
name: admin # stringData is encoded by the Secret automatically and decoded automatically when the Pod consumes it
---
apiVersion: v1
kind: Pod
metadata:
name: test1
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
volumeMounts:
- name: secrets
mountPath: /etc/secrets
readOnly: true
volumes:
- name: secrets
secret:
secretName: mysecret
---
apiVersion: v1
kind: Pod
metadata:
name: test2
spec:
containers:
- name: con
image: 172.100.0.22/library/nginx:latest
env:
- name: passwd
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: name
valueFrom:
secretKeyRef:
name: mysecret
key: name
[root@k8s-master-node1 ~]# kubectl exec -it test1 -- cat /etc/secrets/name # admin
[root@k8s-master-node1 ~]# kubectl exec -it test1 -- cat /etc/secrets/password # password
[root@k8s-master-node1 ~]# kubectl exec -it test2 -- /bin/sh
# echo $passwd
password
# echo $name
admin
PV
1. In the /root directory of the master node, write a YAML file pv.yaml to create a PV with the following requirements:
(1) PV name: app-pv;
(2) Capacity: 10Gi;
(3) Access mode: ReadWriteMany;
(4) Volume type: hostPath, at /src/app-config.
[root@k8s-master-node1 ~]# vim pv-hostpath.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: app-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
hostPath:
path: /src/app-config
2. In the /root directory of the master node, write a YAML file pv.yaml to create a PV with the following requirements:
(1) PV name: pv-local;
(2) Reclaim policy: Delete;
(3) Access mode: RWO;
(4) Mount path: /data/k8s/localpv on the node;
(5) Capacity: 5G.
[root@k8s-master-node1 ~]# vim pv-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-local
spec:
capacity:
storage: 5Gi
storageClassName: slow
persistentVolumeReclaimPolicy: Delete
accessModes:
- ReadWriteOnce
local:
path: /data/k8s/localpv
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-worker-node1
LimitRange
1. In the /root directory of the master node, write a YAML file limitrange.yaml to create a namespace with the following requirements:
(1) Namespace name: resource;
(2) Container resource limits capped at 800Mi memory and 3000m CPU;
(3) Container resource requests at least 100Mi memory and 300m CPU;
(4) Default container resource requests of 256Mi memory and 500m CPU;
(5) Memory and CPU overcommit (limit/request) ratio of 2.
[root@k8s-master-node1 ~]# vim limitrange.yaml
apiVersion: v1
kind: Namespace
metadata:
name: resource
---
apiVersion: v1
kind: LimitRange
metadata:
name: limitrange-pode
namespace: resource
spec:
limits:
- type: Pod
max:
cpu: 4
memory: 8Gi
min:
cpu: 250m
memory: 100Mi
maxLimitRequestRatio:
cpu: 2
memory: 2
---
apiVersion: v1
kind: LimitRange
metadata:
name: limitrange-con
namespace: resource
spec:
limits:
- type: Container
default:
cpu: 1
memory: 2Gi
defaultRequest:
cpu: 500m
memory: 1Gi
max:
cpu: 2
memory: 4Gi
min:
cpu: 300m
memory: 512Mi
maxLimitRequestRatio:
cpu: 2
memory: 2
Defaults and request ceilings follow the Container-level LimitRange.
defaultRequest and default (the default limit) are the fallback values; note that neither setting exists at the Pod level.
If max is set for containers, every container in a Pod must set a limit; if a container does not, the default limit is used, and if no default limit is set either, the Pod cannot be created.
If min is set for containers, every container must set a request; if it does not, defaultRequest is used; without defaultRequest the request falls back to the container's limit, and if there is no limit either, creation fails.
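Assuming the limitrange.yaml above has been applied, the container defaults can be observed by creating a Pod without a resources block in the resource namespace and describing it; a hedged check (image placeholder as used elsewhere in these notes):
[root@k8s-master-node1 ~]# kubectl run limit-test --image=172.100.0.22/library/nginx:latest -n resource
[root@k8s-master-node1 ~]# kubectl describe pod limit-test -n resource | grep -A3 -i limits # expect the defaults: 1 CPU / 2Gi limit, 500m / 1Gi request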
ResourceQuota:
1. Create the namespace quota-example, then in the /root directory of the master node write a YAML file quota.yaml to create a ResourceQuota with the following requirements:
(1) ResourceQuota name: compute-resources;
(2) No more than 4 Pods in the resource namespace;
(3) Total memory requests of all containers in the resource namespace must not exceed 1G;
(4) Total memory limits of all containers in the resource namespace must not exceed 2G;
(5) Total CPU requests of all containers in the resource namespace must not exceed 1;
(6) Total CPU limits of all containers in the resource namespace must not exceed 2;
(7) Limit the number of PVCs in the resource namespace to 10;
(8) Limit the total storage capacity in the resource namespace to 20Gi.
[root@k8s-master-node1 ~]# vim quota.yaml
apiVersion: v1
kind: Namespace
metadata:
name: resource
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
namespace: resource
spec:
hard:
pods: 4
requests.cpu: 1
requests.memory: 1Gi
limits.cpu: 2
limits.memory: 2Gi
persistentvolumeclaims: 10
requests.storage: 20Gi
HPA
1. In the /root directory of the master node, write a YAML file hpa.yaml to create a horizontal Pod autoscaler for the Deployment from the previous task, with the following requirements:
(1) HPA name: frontend-scaler;
(2) Replica range: 3-5;
(3) Pods scale dynamically based on a target CPU utilisation of 50%.
[root@k8s-master-node1 ~]# vim hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: frontend-scaler
spec:
maxReplicas: 5
minReplicas: 3
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: deployment-redis
targetCPUUtilizationPercentage: 50
# metrics:
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 50
Role与ClusterRole
1. In the /root directory of the master node, write a YAML file role.yaml to create a role with the following requirements:
(1) Role name: deployment-role;
(2) The role has permission to create Deployments, DaemonSets and StatefulSets.
[root@k8s-master-node1 ~]# vim role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: deployment-role
rules:
- apiGroups: ["apps"]
resources: ["deployments", "daemonsets", "statefulsets"]
verbs: ["create"]
Valid apiGroups values include:
"", "apps", "autoscaling", "batch"
Valid resources values include:
services, endpoints, pods, secrets, configmaps, crontabs, deployments, jobs, nodes, rolebindings, clusterroles, daemonsets, replicasets, statefulsets, horizontalpodautoscalers, replicationcontrollers, cronjobs
Valid verbs values include:
get, list, watch, create, update, patch, delete, exec
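A Role only takes effect once it is bound to a subject; a hedged sketch that binds deployment-role to a hypothetical ServiceAccount app-sa and then checks the permission (both names are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-role-binding   # illustrative name
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa                    # hypothetical ServiceAccount
  namespace: default
roleRef:
  kind: Role
  name: deployment-role
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master-node1 ~]# kubectl auth can-i create deployments --as=system:serviceaccount:default:app-sa -n default # yes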
2. In the /root directory of the master node, write a YAML file clusterrole.yaml to create a cluster role with the following requirements:
(1) ClusterRole name: exam-reader;
(2) get, watch, list, create and delete permissions on Pods in the default namespace;
(3) get and list permissions on Deployments in the default namespace.
[root@k8s-master-node1 ~]# vim clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: exam-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "create", "delete"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list"]
3. In the /root directory of the master node, write a YAML file to create a ServiceAccount with the following requirements:
(1) ServiceAccount name: exam-sa;
(2) Bind the ServiceAccount to the ClusterRole created in the previous task.
[root@k8s-master-node1 ~]# vim sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: exam-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: clusterrole-bind-sa
subjects:
- kind: ServiceAccount
namespace: default
name: exam-sa
roleRef:
kind: ClusterRole
name: exam-reader
apiGroup: rbac.authorization.k8s.io
PriorityClass
1. Kubernetes supports Pod priority and preemption, where a preemptive scheduling policy lets higher-priority Pods displace others on the same node. In the /root directory of the master node, write a YAML file schedule.yaml to create a preemptive scheduling policy with the following requirements:
(1) PriorityClass name: high-scheduling;
(2) Priority value: 1000000;
(3) Do not make it the default priority class.
[root@k8s-master-node1 ~]# vim priorityClass.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: high-scheduling
value: 1000000
globalDefault: false
PodSecurityPolicy
1. In the /root directory of the master node, write a YAML file policy.yaml with the following requirements:
(1) Policy name: pod-policy;
(2) Only forbid creating privileged Pods;
(3) All other fields are allowed.
[root@k8s-master-node1 ~]# vim podsecuritypolicy.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: pod-policy
spec:
privileged: false
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
runAsUser:
rule: RunAsAny
fsGroup:
rule: RunAsAny
volumes:
- '*'
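A PodSecurityPolicy is only enforced for workloads whose subjects are authorized to use it; a hedged sketch granting use of pod-policy to all authenticated users (the RBAC object names are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-pod-policy            # illustrative name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["pod-policy"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-pod-policy-binding    # illustrative name
roleRef:
  kind: ClusterRole
  name: psp-pod-policy
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io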
NetworkPolicy
1. In the /root directory of the master node, write a YAML file network.yaml to create a network policy with the following requirements:
(1) NetworkPolicy name: exam-nework;
(2) For Pods in the test namespace, only allow access from Pods in the same namespace, and only to port 9000 of the Pods.
[root@k8s-master-node1 ~]# vim network-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
name: test
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: exam-nework
namespace: test
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: test
ports:
- protocol: TCP
port: 9000
2. In the /root directory of the master node, write a YAML file network-deny.yaml with the following requirements:
(1) NetworkPolicy name: default-deny;
(2) Namespace: default;
(3) Deny all incoming Pod traffic by default.
[root@k8s-master-node1 ~]# vim network-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
[root@k8s-master-node1 ~]# kubectl create deployment nginx --image=172.100.0.22/library/nginx:latest
[root@k8s-master-node1 ~]# kubectl expose deployment nginx --port=80
[root@k8s-master-node1 ~]# kubectl run busybox --rm -it --image=172.100.0.22/library/busybox:latest
/ # wget --spider --timeout=1 nginx
NFS
1. Install an NFS file server on the master and node, sharing the directory /data/k8s/, then in the /root directory of the master node write a YAML file nfs-pv.yaml to create a PV with the following requirements:
(1) PV name: exma-pv;
(2) Backed by the NFS storage;
(3) Capacity: 1Gi;
(4) Access mode: ReadWriteOnce;
(5) Reclaim policy: Recycle.
On k8s-master-node1
[root@k8s-master-node1 ~]# yum install -y nfs-utils rpcbind
[root@k8s-master-node1 ~]# mkdir /data/k8s/
[root@k8s-master-node1 ~]# chmod 755 /data/k8s
[root@k8s-master-node1 ~]# vim /etc/exports
/data/k8s *(rw,sync,no_root_squash)
[root@master ~]# systemctl restart nfs && systemctl restart rpcbind
[root@master ~]# systemctl enable nfs && systemctl enable rpcbind
On k8s-worker-node1
[root@k8s-worker-node1 ~]# yum install -y nfs-utils rpcbind
[root@k8s-worker-node1 ~]# mkdir /data/k8s/
[root@k8s-worker-node1 ~]# chmod 755 /data/k8s
[root@k8s-worker-node1 ~]# mount -t nfs k8s-master-node1:/data/k8s/ /data/k8s/
[root@k8s-master-node1 ~]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: exma-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /data/k8s
server: 172.100.0.22
[root@k8s-master-node1 ~]# vim nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
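A hedged sketch of a Pod that mounts the claim above, useful for confirming the NFS share is actually usable (the Pod name is illustrative and the image follows the registry pattern used in these notes):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test                  # illustrative name
spec:
  containers:
  - name: con
    image: 172.100.0.22/library/nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc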
MySQL
1. In the /root directory of the master node, write a YAML file to deploy a MySQL service with the following requirements:
(1) Service name: mysql; Deployment name: mysql;
(2) Image: mysql:5.7;
(3) Database user: root; password: 123456;
(4) Mount a persistent volume mysql-pv with 2GB of storage at the host path /mnt/data;
(5) Expose port 3306 externally as NodePort 33306 (requires the enlarged NodePort range).
[root@k8s-master-node1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=20000-65535
[root@k8s-master-node1 ~]# kubectl describe pod $apiserver_pods -n kube-system |grep service
[root@k8s-master-node1 ~]# vim mysql.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
selector:
app: mysql
ports:
- port: 3306
nodePort: 33306
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: 172.100.0.22/library/mysql:5.7
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: mysql-volume
mountPath: /var/lib/mysql
volumes:
- name: mysql-volume
persistentVolumeClaim:
claimName: mysql-pvc
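Once everything is Running, a hedged way to confirm MySQL is reachable (the mysql client ships inside the mysql image; password as set above):
[root@k8s-master-node1 ~]# kubectl get pv,pvc,svc,pod | grep mysql
[root@k8s-master-node1 ~]# kubectl exec deploy/mysql -- mysql -uroot -p123456 -e "show databases;"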
Istio
BookInfo deployment
On k8s-worker-node1
[root@k8s-worker-node1 ~]# curl -O 172.100.0.11:/1-package/ServiceMesh.tar.gz
[root@k8s-worker-node1 ~]# tar -xf ServiceMesh.tar.gz
[root@k8s-worker-node1 ~]# docker load -i ServiceMesh/images/image.tar
On k8s-master-node1
[root@k8s-master-node1 ~]# curl -O 172.100.0.11:/1-package/ServiceMesh.tar.gz
[root@k8s-master-node1 ~]# tar -xf ServiceMesh.tar.gz
[root@k8s-master-node1 ~]# docker load -i ServiceMesh/images/image.tar
[root@k8s-master-node1 ~]# kubectl apply -f ServiceMesh/bookinfo/
[root@k8s-master-node1 ~]# kubectl apply -f bookinfo-gateway.yaml
[root@k8s-master-node1 ~]# kubectl apply -f dr-all.yaml
[root@k8s-master-node1 ~]# kubectl label namespaces default istio-injection=enabled
[root@k8s-master-node1 ~]# vim bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
[root@k8s-master-node1 ~]# vim dr-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
subsets:
- name: v1
labels:
version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ratings
spec:
host: ratings
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details
spec:
host: details
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
BookInfo operations:
1. After deploying the Bookinfo sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create request routing, with the following requirements:
(1) Route name: bookinfo-virtualservice;
(2) Route all traffic to the v1 version of every microservice.
[root@k8s-master-node1 ~]# vim vs-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
2. After deploying the Bookinfo sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create weight-based routing, with the following requirements:
(1) VirtualService name: reviews;
(2) Route 30% of the traffic to v1 of the reviews service;
(3) Route 70% of the traffic to v3 of the reviews service.
[root@k8s-master-node1 ~]# vim vs-30-70.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 30
- destination:
host: reviews
subset: v3
weight: 70
3. After deploying the Bookinfo sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to configure an HTTP request timeout, with the following requirements:
(1) Route name: reviews;
(2) Route requests to v2 of the reviews service;
(3) Add a half-second request timeout for calls to the reviews service.
[root@k8s-master-node1 ~]# vim vs-timeout.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
timeout: 0.5s
4. After deploying the Bookinfo sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to inject an HTTP delay fault for the ratings service, with the following requirements:
(1) Injection rule name: ratings;
(2) Inject a 7-second delay between reviews:v2 and the ratings service for the user jason.
[root@k8s-master-node1 ~]# vim vs-delay.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- fault:
delay:
percentage:
value: 100.0
fixedDelay: 7s
match:
- headers:
end-user:
exact: jason
route:
- destination:
host: ratings
subset: v1
- route:
- destination:
host: ratings
subset: v1
Httpbin operations:
1. After deploying the httpbin sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create a default routing policy, with the following requirements:
(1) Routing policy name: httpbin;
(2) Route 100% of the traffic to v1 of the service;
(3) Mirror 100% of the same traffic to v2 of the service.
[root@k8s-master-node1 ~]# kubectl apply -f httpbin-v1-v2.yaml
[root@k8s-master-node1 ~]# kubectl apply -f sleep.yaml
[root@k8s-master-node1 ~]# kubectl apply -f mirror.yaml
[root@k8s-master-node1 ~]# kubectl exec SLEEP_POD -c sleep -- curl -sS http://httpbin:8000/headers # send traffic
[root@k8s-master-node1 ~]# kubectl logs HTTPBIN_V1_POD -c httpbin # check the logs
[root@k8s-master-node1 ~]# kubectl logs HTTPBIN_V2_POD -c httpbin # check the logs
[root@k8s-master-node1 ~]# vim istio/httpbin/mirror.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: httpbin
spec:
hosts:
- httpbin
http:
- route:
- destination:
host: httpbin
subset: v1
weight: 100
mirror:
host: httpbin
subset: v2
mirrorPercent: 100
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: httpbin
spec:
host: httpbin
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
2. After deploying the httpbin sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to configure a circuit breaker, with the following requirements:
(1) DestinationRule name: httpbin;
(2) Add the TLS traffic policy mode: ISTIO_MUTUAL to the destination rule;
(3) When the number of concurrent connections and requests exceeds one, further requests and connections are blocked by istio-proxy;
(4) Maximum number of HTTP1/TCP connections to the destination host: 1;
(5) Maximum number of queued HTTP requests to a destination: 1;
(6) Maximum number of requests per connection to a backend: 1.
[root@k8s-master-node1 ~]# kubectl apply -f httpbin-cb.yaml
[root@k8s-master-node1 ~]# kubectl apply -f dr-cb.yaml
[root@k8s-master-node1 ~]# kubectl apply -f fortio-deploy.yaml
[root@k8s-master-node1 ~]# kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio curl -quiet http://httpbin:8000/get # a single request
[root@k8s-master-node1 ~]# kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get # send 2 concurrent connections (-c 2) with 20 requests (-n 20)
[root@k8s-master-node1 ~]# kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending # inspect the circuit-breaker details
[root@k8s-master-node1 ~]# vim dr-cb.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: httpbin
spec:
host: httpbin
trafficPolicy:
connectionPool:
tcp:
maxConnections: 1
http:
http1MaxPendingRequests: 1
maxRequestsPerConnection: 1
outlierDetection:
consecutive5xxErrors: 1
interval: 1s
baseEjectionTime: 3m
maxEjectionPercent: 100
tls:
mode: ISTIO_MUTUAL
3. After deploying the HTTPBin sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create an Ingress Gateway, with the following requirements:
(1) Configure the ingress ports as NodePort;
(2) Configure a Gateway named httpbin-gateway for HTTP traffic on port 80;
(3) Configure routes for the Gateway's inbound traffic, allowing traffic to the paths /status and /delay;
(4) External domain name: httpbin.example.com.
[root@k8s-master-node1 ~]# kubectl apply -f httpbin-nodeport.yaml
[root@k8s-master-node1 ~]# kubectl apply -f ingress-gateway.yaml
[root@k8s-master-node1 ~]# curl -s -I -HHost:httpbin.example.com "http://192.168.100.30:31083/status/200"
[root@k8s-master-node1 ~]# curl -s -I -HHost:httpbin.example.com "http://192.168.100.30:31083/headers"
[root@k8s-master-node1 ~]# vim ingress-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: httpbin-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: httpbin
spec:
hosts:
- "httpbin.example.com"
gateways:
- httpbin-gateway
http:
- match:
- uri:
prefix: /status
- uri:
prefix: /delay
route:
- destination:
port:
number: 8000
host: httpbin
4. After deploying the HTTPBin service in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create an egress control for the mesh's outbound traffic, with the following requirements:
(1) VirtualService name: httpbin-ext;
(2) Set the timeout for calls to the external service httpbin.org to 3 seconds;
(3) httpbin.org responses taking longer than 3 seconds are cut off.
[root@k8s-master-node1 ~]# kubectl apply -f sleep.yaml
[root@k8s-master-node1 ~]# kubectl exec -it SLEEP_POD -c sleep -- curl -I https://www.baidu.com | grep "HTTP/" # 200
[root@k8s-master-node1 ~]# istioctl install --set profile=demo --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
[root@k8s-master-node1 ~]# kubectl exec -it SLEEP_POD -c sleep -- curl -I https://www.baidu.com | grep "HTTP/"; # code35
[root@k8s-master-node1 ~]# kubectl apply -f se-http.yaml
[root@k8s-master-node1 ~]# kubectl exec SLEEP_POD -c sleep -- time curl -o /dev/null -sS -w "%{http_code}\n" http://httpbin.org/delay/5 # 200
[root@k8s-master-node1 ~]# kubectl apply -f egress.yaml
[root@k8s-master-node1 ~]# kubectl exec SLEEP_POD -c sleep -- time curl -o /dev/null -sS -w "%{http_code}\n" http://httpbin.org/delay/5 # 504
[root@k8s-master-node1 ~]# vim egress.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: httpbin-ext
spec:
hosts:
- httpbin.org
http:
- timeout: 3s
route:
- destination:
host: httpbin.org
weight: 100
[root@k8s-master-node1 ~]# vim se-http.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin-ext
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
5. After deploying the HTTPBin sample application in the Kubernetes cluster, write a YAML file istio.yaml in the /root directory of the master node to create an Ingress, with the following requirements:
(1) Configure the Ingress for HTTP traffic on port 80;
(2) Ingress name: httpbin-ingress;
(3) Allow traffic to the paths /status and /delay;
(4) External domain name: httpbin.example.com.
[root@k8s-master-node1 ~]# vi httpbin-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: httpbin-ingress
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- host: httpbin.example.com
http:
paths:
- path: /status
pathType: Prefix
backend:
service:
name: httpbin
port:
number: 8000
- path: /delay
pathType: Prefix
backend:
service:
name: httpbin
port:
number: 8000
kubevirt
Basic operations:
[root@k8s-master-node1 ~]# vim Dockerfile
FROM kubevirt/registry-disk-v1alpha
MAINTAINER chinaskill
ADD CentOS-7-x86_64-2009.qcow2 /home/centos7.qcow2
[root@k8s-master-node1 ~]# docker build -t centos7:latest .
[root@k8s-master-node1 ~]# docker tag centos7:latest 172.100.0.22/library/centos7:latest
[root@k8s-master-node1 ~]# docker push 172.100.0.22/library/centos7:latest
[root@k8s-master-node1 ~]# virtctl start VMI
[root@k8s-master-node1 ~]# virtctl console VMI
1. Using the provided OpenStack qcow2 image, write a YAML file vm.yaml in the /root directory of the master node to create a VM with the following requirements:
(1) VM name: exam;
(2) 2Gi of memory and 1 CPU core;
(3) Run strategy: Always.
[root@k8s-master-node1 ~]# vim vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: exam
spec:
# running: false
runStrategy: Always
template:
metadata:
name: testvm
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
resources:
requests:
cpu: 1
memory: 2Gi
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
2. Enable the live migration feature in KubeVirt so that a running virtual machine instance can be migrated from one node to another while the workload keeps running and stays reachable.
[root@k8s-master-node1 ~]# kubectl edit -n kubevirt kubevirt kubevirt
spec:
configuration:
developerConfiguration:
featureGates:
- LiveMigration
[root@k8s-master-node1 ~]# kubectl apply -f vmim-vm.yaml
[root@k8s-master-node1 ~]# virtctl start testvm
[root@k8s-master-node1 ~]# kubectl apply -f vmim.yaml
[root@k8s-master-node1 ~]# virtctl migrate testvm
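The migration can be followed through the migration object and the VMI status; a hedged check (field path per the KubeVirt VMI status):
[root@k8s-master-node1 ~]# kubectl get virtualmachineinstancemigrations
[root@k8s-master-node1 ~]# kubectl get vmi testvm -o jsonpath='{.status.migrationState.completed}'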
[root@k8s-master-node1 ~]# vim vmim-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: testvm
spec:
running: false
template:
metadata:
name: testvm
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
interfaces:
- name: default
masquerade: {}
resources:
requests:
memory: 64M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
[root@k8s-master-node1 ~]# vim vmim.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
name: migration-job
spec:
vmiName: testvm
3. Enable the snapshot/restore feature in KubeVirt, then write a YAML file snap.yaml in the /root directory of the master node to create a snapshot of the virtual machine instance exam, with the following requirements:
(1) Snapshot name: exam-snap;
(2) Snapshot creation timeout: 1min.
[root@k8s-master-node1 ~]# kubectl apply -f kubevirt/vm.yaml # run state: running: false
[root@k8s-master-node1 ~]# virtctl start exam
[root@k8s-master-node1 ~]# kubectl apply -f snap.yaml
[root@k8s-master-node1 ~]# virtctl stop exam
[root@k8s-master-node1 ~]# kubectl apply -f restore.yaml
[root@k8s-master-node1 ~]# virtctl start exam
[root@k8s-master-node1 ~]# kubectl edit -n kubevirt kubevirt kubevirt
spec:
configuration:
developerConfiguration:
featureGates:
- Snapshot
[root@k8s-master-node1 ~]# vim snap.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
name: exam-snap
spec:
failureDeadline: 1m
source:
apiGroup: kubevirt.io
kind: VirtualMachine
name: exam
[root@k8s-master-node1 ~]# vim restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
name: exam-restore
spec:
target:
apiGroup: kubevirt.io
kind: VirtualMachine
name: exam
virtualMachineSnapshotName: exam-snap
4. Using the provided OpenStack qcow2 image, write a YAML file in the /root directory of the master node to create a VMI with the following requirements:
(1) VMI name: chinaskill-vmi;
(2) Run strategy: Manual;
(3) Disk bus: virtio.
[root@k8s-master-node1 ~]# vim vmi.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: chinaskill-vmi
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
resources:
requests:
memory: 2Gi
cpu: 1
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
5. Using the provided OpenStack qcow2 image (which has a web application deployed inside), write a YAML file vmi.yaml in the /root directory of the master node to create a VMI with the following requirements:
(1) VMI name: exam;
(2) Allow overcommitting node resources;
(3) 8Gi of memory and 4 CPU cores;
(4) Run strategy: RerunOnFailure.
[root@k8s-master-node1 ~]# vim vmi-over.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: exam
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
resources:
overcommitGuestOverhead: true
requests:
memory: 8Gi
cpu: 4
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
6. Using the provided OpenStack qcow2 image, write a YAML file vmi.yaml in the /root directory of the master node to create a VMI with the following requirements:
(1) VMI name: exam;
(2) 2Gi of memory and 1 CPU core;
(3) Startup command: echo 123456 | passwd --stdin root;
(4) Add an extra macvlan network to the VMI using the Multus multi-network solution.
[root@k8s-master-node1 ~]# vim vmi-com-macvlan.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: exam
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: configdrive
disk:
bus: virtio
interfaces:
- name: default
bridge: {}
- name: test
bridge: {}
resources:
requests:
memory: 2Gi
cpu: 1
networks:
- name: test
multus:
networkName: macvlan
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
- name: configdrive
cloudInitConfigDrive:
userData: |
#!/bin/bash
echo 123456 | passwd --stdin root
7. Using the provided OpenStack qcow2 image (which has a web application deployed inside), write a YAML file vmi.yaml in the /root directory of the master node to create a VMI with the following requirements:
(1) VMI name: exam;
(2) Enable Istio sidecar injection;
(3) 2Gi of memory and 1 CPU core;
(4) Run strategy: Always.
[root@k8s-master-node1 ~]# kubectl label namespaces default istio-injection=enabled
[root@k8s-master-node1 ~]# kubectl get pod
[root@k8s-master-node1 ~]# kubectl get pods virt-launcher-vmi-istio-574hm -o jsonpath='{.spec.containers[*].name}'
[root@k8s-master-node1 ~]# istioctl proxy-status
[root@k8s-master-node1 ~]# vim vmi-istio.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
annotations:
sidecar.istio.io/inject: "true"
labels:
app: vmi-istio
name: vmi-istio
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
interfaces:
- name: default
masquerade: {}
resources:
requests:
memory: 2Gi
cpu: 1
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
8. Using the provided OpenStack qcow2 image, write a YAML file vm.yaml in the /root directory of the master node to create a VMI with the following requirements:
(1) VMI name: vmi-ssh-static;
(2) Store the SSH key in a Kubernetes Secret and inject it into the VMI;
(3) 2Gi of memory and 1000m of CPU.
[root@k8s-master-node1 ~]# kubectl create secret generic my-pub-key --from-file=key=.ssh/id_rsa.pub
[root@k8s-master-node1 ~]# vim vmi-static.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: vmi-ssh-static
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: configdrive
disk:
bus: virtio
resources:
requests:
memory: 2Gi
cpu: 1
accessCredentials:
- sshPublicKey:
source:
secret:
secretName: my-pub-key
propagationMethod:
configDrive: {}
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
- name: configdrive
cloudInitConfigDrive:
userData: |
#!/bin/bash
echo "test"
9. On the master node, write a YAML file vmi-sshkey.yaml to perform dynamic key injection for the virtual machine instance chinaskill-vmi, with the following requirement:
(1) Attach the access credential to chinaskill-vmi via qemuGuestAgent.
[root@k8s-master-node1 ~]# kubectl create secret generic my-pub-key --from-file=key=.ssh/id_rsa.pub
[root@k8s-master-node1 ~]# vim vmi-ssh.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: vmi-ssh-static
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: configdrive
disk:
bus: virtio
resources:
requests:
memory: 2Gi
cpu: 1
accessCredentials:
- sshPublicKey:
source:
secret:
secretName: my-pub-key
propagationMethod:
qemuGuestAgent:
users:
- "root"
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
- name: configdrive
cloudInitConfigDrive:
userData: |
#!/bin/bash
echo "test"
10. In the /root directory of the master node, write a YAML file vmi-service.yaml to create a Service for the application inside the VMI, with the following requirements:
(1) Service name: vmi-service;
(2) Access type: NodePort;
(3) Expose port 80 of the VMI externally as NodePort 30888.
[root@k8s-master-node1 ~]# vim vmi-server.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: exam
labels:
app: vmi
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
resources:
requests:
memory: 2Gi
cpu: 1
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
---
apiVersion: v1
kind: Service
metadata:
name: vmi-service
spec:
externalTrafficPolicy: Cluster
type: NodePort
selector:
app: vmi
ports:
- name: nodeport
nodePort: 30888
port: 80
protocol: TCP
11. In the /root directory of the master node, write a YAML file hpa.yaml to create a HorizontalPodAutoscaler with the following requirements:
(1) Name: exam-hpa;
(2) Associate it with the virtual machine instance exam;
(3) Minimum of 1 and maximum of 3 VMI replicas;
(4) Scale the VMIs dynamically based on a target CPU utilisation of 70%.
[root@k8s-master-node1 ~]# vim vmi-rs.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
name: exam
spec:
replicas: 2
selector:
matchLabels:
myvmi: myvmi
template:
metadata:
name: exam
labels:
myvmi: myvmi
spec:
domain:
devices:
disks:
- disk:
name: containerdisk
resources:
requests:
memory: 64M
volumes:
- name: containerdisk
containerDisk:
image: 172.100.0.22/library/centos7:latest
path: /home/centos7.qcow2
[root@k8s-master-node1 ~]# vim vmi-hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: exam-hpa
spec:
maxReplicas: 3
minReplicas: 1
scaleTargetRef:
kind: VirtualMachineInstanceReplicaSet
name: exam
apiVersion: kubevirt.io/v1
targetCPUUtilizationPercentage: 70
12. In the /root directory of the master node, write a YAML file vmi-role.yaml to create an RBAC role with the following requirements:
(1) Role name: vm-role;
(2) The role has get, delete, create, update, patch and list permissions on VMs.
[root@k8s-master-node1 ~]# vim role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: vm-role
rules:
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachines"]
verbs: ["get", "delete", "create", "update", "patch", "list"]
13. In the /root directory of the master node, write a YAML file vmi-network.yaml to create a network policy for VMs with the following requirements:
(1) Policy name: deny-by-default;
(2) VMs only accept HTTP and HTTPS requests from VMs in the same namespace.
[root@k8s-master-node1 ~]# vim vmi-np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-by-default
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
docker
CICD