一、Back up and restore a single namespace with Velero
- Deploy MinIO on the k8s deploy node
[root@k8s-deploy ~]# mkdir /data/minio
[root@k8s-deploy ~]# docker run --name minio -p 9000:9000 -p 9999:9999 -d --restart=always \
-e "MINIO_ROOT_USER=admin" -e "MINIO_ROOT_PASSWORD=12345678" \
-v /data/minio/data:/data minio/minio:RELEASE.2022-04-12T06-55-35Z server /data --console-address "0.0.0.0:9999"
Browse to 172.16.220.9:9999 and log in to the MinIO console (port 9000 serves the S3 API)
- Deploy Velero on master1
[root@master1 data]# cd /usr/local/src/
[root@master1 src]# wget https://github.com/vmware-tanzu/velero/releases/download/v1.10.1/velero-v1.10.1-linux-amd64.tar.gz
[root@master1 src]# tar -xf velero-v1.10.1-linux-amd64.tar.gz
[root@master1 src]# cp velero-v1.10.1-linux-amd64/velero /usr/local/bin/
# create the working directory
[root@master1 src]# mkdir /data/velero -p
[root@master1 src]# cd /data/velero/
# MinIO credentials file
[root@master1 velero]# cat > velero-auth.txt <<EOF
[default]
aws_access_key_id = admin
aws_secret_access_key = 12345678
EOF
# reuse the default kubeconfig
[root@k8s-deploy ~]# scp -r /root/.kube/config root@172.16.220.11:/data/velero
[root@master1 velero]# ls
config velero-auth.txt
# run the Velero install:
[root@master1 velero]# velero --kubeconfig ./config install --provider aws --plugins velero/velero-plugin-for-aws:v1.3.1 --bucket velerodata --secret-file ./velero-auth.txt --use-volume-snapshots=false --namespace velero-system --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://172.16.220.9:9000
Check the Velero logs for errors; the backup storage location should show as valid (Available).
- Back up a specified namespace
[root@master1 velero]# DATE=`date +%Y%m%d%H%M%S`
[root@master1 velero]# velero backup create kube-system-ns-backup-${DATE} --include-cluster-resources=true --include-namespaces kube-system --kubeconfig=/root/.kube/config
Backup request "kube-system-ns-backup-20230306162700" submitted successfully.
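The section title also covers restore, which is not demonstrated above. A hedged sketch using the Velero CLI (the backup name is taken from the output above; the restore name is illustrative):

```shell
# restore the kube-system backup created above into the cluster
velero restore create kube-system-ns-restore \
  --from-backup kube-system-ns-backup-20230306162700 \
  --namespace velero-system --kubeconfig=/root/.kube/config

# watch restore progress until STATUS is Completed
velero restore get --namespace velero-system --kubeconfig=/root/.kube/config
```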
二、Common Kubernetes resource objects and their use:
deployment :
- The mainstream controller today, one level above the ReplicaSet. Besides the ReplicaSet's capabilities it adds many higher-level features, most importantly rolling upgrades and rollbacks.
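A minimal Deployment manifest illustrating the controller described above (name, labels, and image are illustrative, not taken from the lab files):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80
```

Changing `image:` and re-applying triggers a rolling upgrade; `kubectl rollout undo deployment/nginx-deployment` rolls back to the previous revision.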
service overview :
- Because a pod gets a new IP whenever it is rebuilt, pods cannot reliably reach each other by pod IP. A Service decouples consumers from the application: it dynamically matches backend endpoints via label selectors.
- kube-proxy watches the k8s-apiserver; whenever a Service changes (through the k8s API), kube-proxy regenerates the corresponding load-balancing rules, keeping the Service state current.
Service types:
- ClusterIP: in-cluster access to a service by its service name.
- NodePort: lets clients outside the Kubernetes cluster reach services running inside it.
- LoadBalancer: service exposure in public-cloud environments.
- ExternalName: maps a service outside the cluster into the cluster, so in-cluster pods can reach the external service through a stable service name; sometimes also used for pods to reach services across namespaces.
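For instance, an ExternalName Service (a sketch; the name and external domain are placeholders) gives pods a stable in-cluster DNS name for an outside service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: default
spec:
  type: ExternalName
  externalName: db.example.com
```

Pods then resolve `external-db.default.svc` to a CNAME for `db.example.com`.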
[root@deploy case4-service]# pwd
/data/k8s-data/yaml/k8s-case-n70/case4-service
# create the deployment
[root@deploy case4-service]# kubectl apply -f 1-deploy_node.yml
# create the service
[root@deploy case4-service]# kubectl apply -f 2-svc_service.yml
[root@deploy case4-service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 67m
ng-deploy-80 ClusterIP 10.100.84.139 <none> 80/TCP 16s
# reachable from inside the cluster (ipvs rules exist), unreachable from outside (no route)
[root@node1 ~]# curl -I 10.100.84.139
HTTP/1.1 200 OK
Server: nginx/1.20.0
# add a NodePort mapping for external access
[root@deploy case4-service]# kubectl apply -f 3-svc_NodePort.yml
[root@deploy case4-service]# curl -I 172.16.201.211:30012
HTTP/1.1 200 OK
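The contents of 3-svc_NodePort.yml are not shown; a plausible sketch matching the port 30012 used above (the selector and labels are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  type: NodePort
  selector:
    app: ng-deploy-80          # assumed pod label from 1-deploy_node.yml
  ports:
  - port: 80                   # ClusterIP port
    targetPort: 80             # container port
    nodePort: 30012            # node port curled above
```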
Volume - storage volumes
- A Volume decouples specified data from the container and stores it in a designated location. Different volume types behave differently; network-backed volumes can additionally provide data sharing between containers and persistence.
- Static volumes require a PV and PVC to be created manually before use and bound to the pod.
Commonly used volume types:
- Secret: an object holding a small amount of sensitive data such as passwords, tokens, or keys
- configmap: configuration files
- emptyDir: local ephemeral volume
- hostPath: local storage volume
- nfs etc.: network storage volumes
configmap:
- A ConfigMap decouples configuration from the image: the configuration is stored in a ConfigMap object and mounted into the pod as a Volume, thereby importing the configuration.
Use cases:
- Define global environment variables for a pod via a ConfigMap
- Pass command-line arguments to a pod via a ConfigMap, e.g. the username and password in `mysql -u -p`
- Provide configuration files to containers in a pod via a ConfigMap, with the file mounted into the container
Notes
A ConfigMap must be created before a pod uses it.
A pod can only use a ConfigMap in the same namespace, i.e. a ConfigMap cannot be used across namespaces.
Typically used for non-sensitive, unencrypted configuration.
A ConfigMap usually holds less than 1MB of configuration.
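A sketch combining two of the injection styles above, an environment variable and a mounted file; all names and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  app.conf: |
    listen 80;
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: nginx:1.20.0
    env:
    - name: LOG_LEVEL          # injected as an environment variable
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: conf
      mountPath: /etc/app      # app.conf appears as /etc/app/app.conf
  volumes:
  - name: conf
    configMap:
      name: app-config
```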
secret
- Secret overview
- Secret types
- Secret type: Opaque
- Secret mount flow
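Since the figures are omitted, a minimal Opaque Secret sketch (values are illustrative; `data` fields must be base64-encoded, while `stringData` accepts plain text):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-data
  namespace: myserver
type: Opaque
data:
  user: YWRtaW4=          # base64 of "admin"
  password: MTIzNDU2      # base64 of "123456"
```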
三、Persist pod data with NFS; test emptyDir and hostPath
- Using NFS
[root@deploy case7-nfs]# yum -y install nfs-utils rpcbind
[root@deploy case7-nfs]# mkdir /data/k8sdata/pool{1..2}
[root@deploy case7-nfs]# cat /etc/exports
/data/k8sdata *(rw,no_root_squash)
[root@deploy case7-nfs]# systemctl restart nfs-server.service
[root@deploy case7-nfs]# systemctl enable nfs-server.service
[root@node1 ~]# showmount -e 172.16.201.210
Export list for 172.16.201.210:
/data/k8sdata *
[root@deploy case7-nfs]# pwd
/data/k8s-data/yaml/k8s-case-n70/case7-nfs
[root@deploy case7-nfs]# kubectl apply -f 2-deploy_nfs.yml
[root@deploy case7-nfs]# echo "pool1" >> /data/k8sdata/pool1/index.html
[root@deploy case7-nfs]# echo "pool2" >> /data/k8sdata/pool2/index.html
[root@deploy case7-nfs]# curl 172.16.201.213:30017/pool1/index.html
pool1
[root@deploy case7-nfs]# curl 172.16.201.213:30017/pool2/index.html
pool2
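2-deploy_nfs.yml is not listed; given the exports and the curl paths above, its volume section presumably resembles the following (the container mount paths are assumptions):

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        volumeMounts:
        - name: pool1
          mountPath: /usr/share/nginx/html/pool1
        - name: pool2
          mountPath: /usr/share/nginx/html/pool2
      volumes:
      - name: pool1
        nfs:
          server: 172.16.201.210      # NFS server from showmount above
          path: /data/k8sdata/pool1
      - name: pool2
        nfs:
          server: 172.16.201.210
          path: /data/k8sdata/pool2
```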
- Using emptyDir
[root@deploy case5-emptyDir]# pwd
/data/k8s-data/yaml/k8s-case-n70/case5-emptyDir
[root@deploy case5-emptyDir]# kubectl apply -f deploy_emptyDir.yml
[root@deploy case5-emptyDir]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-6bfd99db86-85wvl 1/1 Running 0 29s 10.200.166.135 172.16.201.213 <none> <none>
[root@deploy case5-emptyDir]# kubectl exec -it nginx-deployment-6bfd99db86-85wvl bash
root@nginx-deployment-6bfd99db86-85wvl:/# touch /cache/test.txt
root@nginx-deployment-6bfd99db86-85wvl:/# echo "1111111" > /cache/test.txt
[root@node1 ~]# find /var/lib/kubelet/ -name test.txt
/var/lib/kubelet/pods/31840071-7d8b-4e31-9539-ec4e1535a95b/volumes/kubernetes.io~empty-dir/cache-volume/test.txt
[root@node1 ~]# tail -f /var/lib/kubelet/pods/31840071-7d8b-4e31-9539-ec4e1535a95b/volumes/kubernetes.io~empty-dir/cache-volume/test.txt
1111111
[root@deploy case5-emptyDir]# kubectl delete -f deploy_emptyDir.yml
deployment.apps "nginx-deployment" deleted
[root@node1 ~]# find /var/lib/kubelet/ -name test.txt # the file is gone
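deploy_emptyDir.yml presumably declares the /cache mount seen above (the volume name `cache-volume` matches the kubelet path in the find output); a sketch of the relevant pod spec:

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        volumeMounts:
        - name: cache-volume
          mountPath: /cache
      volumes:
      - name: cache-volume
        emptyDir: {}          # deleted together with the pod, as shown above
```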
- Using hostPath
[root@deploy case6-hostPath]# pwd
/data/k8s-data/yaml/k8s-case-n70/case6-hostPath
[root@deploy case6-hostPath]# kubectl apply -f deploy_hostPath.yml
[root@deploy case6-hostPath]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-7d4bcfd7b8-8lzds 1/1 Running 0 27s 10.200.166.136 172.16.201.213 <none> <none>
[root@deploy case6-hostPath]# kubectl exec -it nginx-deployment-7d4bcfd7b8-8lzds bash
root@nginx-deployment-7d4bcfd7b8-8lzds:/# touch /cache/n70.txt
[root@node1 ~]# find / -name n70.txt
/data/kubernetes/n70.txt
root@nginx-deployment-7d4bcfd7b8-8lzds:/# echo "123" >> /cache/n70.txt
root@nginx-deployment-7d4bcfd7b8-8lzds:/# echo "456" >> /cache/n70.txt
[root@node1 ~]# tail -f /data/kubernetes/n70.txt
123
456
[root@deploy case6-hostPath]# kubectl delete -f deploy_hostPath.yml
[root@node1 ~]# cat /data/kubernetes/n70.txt
123
456
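The /cache mount maps to /data/kubernetes on the node and survives pod deletion, so deploy_hostPath.yml likely contains something like (the `type` field is an assumption):

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        volumeMounts:
        - name: cache-volume
          mountPath: /cache
      volumes:
      - name: cache-volume
        hostPath:
          path: /data/kubernetes     # node directory where n70.txt appeared
          type: DirectoryOrCreate
```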
四、Use Secrets for nginx TLS and for authenticated pulls from a private registry
- nginx TLS via a Secret
[root@deploy case11-secret]# pwd
/data/k8s-data/yaml/k8s-case-n70/case11-secret
[root@deploy case11-secret]# mkdir certs
[root@deploy case11-secret]# cd certs/
[root@deploy certs]# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
[root@deploy certs]# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.mysite.com'
[root@deploy certs]# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
[root@deploy certs]# kubectl create secret tls myserver-tls-key --cert=./server.crt --key=./server.key -n myserver
secret/myserver-tls-key created
[root@deploy certs]# kubectl get secrets -n myserver
NAME TYPE DATA AGE
mysecret-data Opaque 3 25m
mysecret-stringdata Opaque 2 28m
myserver-tls-key kubernetes.io/tls 2 36s
[root@deploy certs]# kubectl get secrets myserver-tls-key -n myserver -o yaml
[root@deploy case11-secret]# kubectl apply -f 4-secret-tls.yaml
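4-secret-tls.yaml is not reproduced; its TLS portion presumably mounts the Secret into the nginx pod roughly like this (the mount path is an assumption; a `kubernetes.io/tls` Secret always exposes `tls.crt` and `tls.key`):

```yaml
      containers:
      - name: nginx
        image: nginx:1.20.0
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs   # assumed path referenced by the vhost config
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: myserver-tls-key  # created above from server.crt/server.key
```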
[root@deploy case11-secret]# kubectl get pod -n myserver
NAME READY STATUS RESTARTS AGE
myserver-myapp-frontend-deployment-6d6f756f99-gqhlx 1/1 Running 0 20s
[root@deploy case11-secret]# kubectl exec -it myserver-myapp-frontend-deployment-6d6f756f99-gqhlx -n myserver bash
root@myserver-myapp-frontend-deployment-6d6f756f99-gqhlx:/# vim /etc/nginx/nginx.conf
include /etc/nginx/conf.d/myserver/*.conf;
root@myserver-myapp-frontend-deployment-6d6f756f99-gqhlx:/# nginx -s reload
root@myserver-myapp-frontend-deployment-6d6f756f99-gqhlx:/# lsof -i:443
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1 root 18u IPv4 4592618 0t0 TCP *:443 (LISTEN)
root@myserver-myapp-frontend-deployment-6d6f756f99-gqhlx:/# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1 root 7u IPv4 4558524 0t0 TCP *:80 (LISTEN)
nginx 1 root 8u IPv6 4558525 0t0 TCP *:80 (LISTEN)
[root@deploy case11-secret]# vim /etc/haproxy/haproxy.cfg
listen myserver-nginx-80
bind 172.16.201.210:80
mode tcp
server 172.16.201.213 172.16.201.213:30020 check inter 3s fall 3 rise 3
listen myserver-nginx-443
bind 172.16.201.210:443
mode tcp
server 172.16.201.213 172.16.201.213:30019 check inter 3s fall 3 rise 3
[root@deploy case11-secret]# systemctl restart haproxy.service
[root@deploy case11-secret]# curl -I mysite.com
HTTP/1.1 200 OK
Date: Thu, 16 Mar 2023 08:35:35 GMT
Server: .V12 Apache
Content-length: 20026
Content-Type: text/html
- Authenticated pulls from a private registry
[root@node1 ~]# vim /etc/containerd/config.toml  # comment out the node-level registry auth so pulls must rely on the imagePullSecrets Secret
163 #[plugins."io.containerd.grpc.v1.cri".registry.configs."qj.harbor.com".auth]
164 # username = "admin"
165 # password = "123456"
[root@node1 ~]# systemctl restart containerd.service
[root@deploy case11-secret]# docker login qj.harbor.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@deploy case11-secret]# kubectl create secret generic harbor-registry-image-pull-key --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson -n myserver
secret/harbor-registry-image-pull-key created
[root@deploy case11-secret]# kubectl get secrets -n myserver
NAME TYPE DATA AGE
harbor-registry-image-pull-key kubernetes.io/dockerconfigjson 1 11s
mysecret-data Opaque 3 100m
mysecret-stringdata Opaque 2 103m
myserver-tls-key kubernetes.io/tls 2 75m
[root@deploy case11-secret]# vim 5-secret-imagePull.yaml
image: qj.harbor.com/baseimages/nginx:1.20.0
[root@deploy case11-secret]# kubectl apply -f 5-secret-imagePull.yaml
[root@deploy case11-secret]# kubectl get pod -n myserver
NAME READY STATUS RESTARTS AGE
myserver-myapp-frontend-deployment-654ddbfdd-s998g 1/1 Running 0 15h
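The pull secret is consumed in 5-secret-imagePull.yaml via `imagePullSecrets`; the relevant pod-spec fragment presumably looks like this (the container name is an assumption; the secret name and image are taken from the steps above):

```yaml
    spec:
      imagePullSecrets:
      - name: harbor-registry-image-pull-key   # dockerconfigjson Secret created above
      containers:
      - name: myserver-myapp-frontend
        image: qj.harbor.com/baseimages/nginx:1.20.0
```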