1. configMap
1.1 Overview
Ways to pass parameters to an application running inside a container (a minimal sketch of the args/env approaches follows this list):
- Bake the configuration file directly into the image. This is not recommended, because changing the configuration afterwards is inflexible.
- Specify custom command-line arguments in the Pod manifest, i.e. set args: ["arguments"], so the application's configuration can be adjusted by passing arguments when the Pod starts.
- Use environment variables to pass configuration to the application in the Pod. This requires at least one of the following:
  - The application is a Cloud Native application, i.e. it can load its configuration directly from environment variables.
  - An entrypoint script pre-processes the variables and rewrites the application's configuration file; such scripts typically use tools like set, sed and grep, which must therefore be present in the container.
- Storage volumes: put the configuration into a volume (e.g. a PV) and mount it onto the configuration directory when the Pod starts, so different Pods can be given different configurations.
- ConfigMap or Secret.
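As a minimal sketch of the args/env approaches above (the Pod name, image and variable name are placeholders, not taken from the examples later in this post):
apiVersion: v1
kind: Pod
metadata:
  name: config-args-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    # override the arguments passed to the image entrypoint
    args: ["nginx", "-g", "daemon off;"]
    env:
    - name: APP_MODE            # a made-up variable the application could read at startup
      value: "production"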
1.2 Purpose
The point is to decouple images from configuration files so that images stay portable and reusable. A ConfigMap is essentially a collection of configuration data that can be injected into the containers of a Pod. There are two injection methods: mounting the ConfigMap as a volume, or referencing it from env via configMapKeyRef. A ConfigMap stores data as key/value pairs, e.g. name=zhangsan or nginx.conf="http{server{...}}". A single value can hold an entire configuration file; the only practical constraint is that the ConfigMap object as a whole cannot exceed about 1 MiB.
ConfigMap:
It is a standard Kubernetes object and can pass configuration parameters to Pods in two ways:
- Define environment variables directly in the ConfigMap and reference them through env when the Pod starts.
- Wrap a complete configuration file in the ConfigMap and mount it into the Pod as a shared volume.
Secret:
A Secret is a relatively safer ConfigMap: its data is base64-encoded rather than stored in plain text, which offers some protection, but decoding base64 is trivial for anyone, so it is only relatively safe. A minimal Secret sketch follows.
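The sketch below uses made-up credentials (the name and values are assumptions); the values are just the base64 encodings of "admin" and "12345678", which shows how easily they can be recovered with base64 -d:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-auth              # hypothetical name
type: Opaque
data:
  username: YWRtaW4=            # base64 of "admin"
  password: MTIzNDU2Nzg=        # base64 of "12345678"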
For the first approach, having a Pod reference environment variables defined in a ConfigMap:
kubectl explain pods.spec.containers.env    # env can also define environment variables passed directly to the container; worth remembering.
env.valueFrom
  configMapKeyRef: specifies which ConfigMap key the Pod references at startup.
  fieldRef: references a field of the Pod itself as the value of an environment variable, e.g.:
    metadata.name: the Pod's name
    metadata.namespace: the namespace the Pod runs in
    metadata.labels: the Pod's labels
    status.hostIP: the IP of the node the Pod runs on
    status.podIP: the Pod's IP
  resourceFieldRef: references a resource request or resource limit.
  secretKeyRef: references a Secret key to pass to the Pod.
When defining a ConfigMap you usually only need to fill in data or binaryData (for binary content). Both are maps: data is map[string]string and binaryData is map[string][]byte, i.e. plain key/value dictionaries where each key maps to exactly one value. A short injection sketch follows.
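A hedged sketch of this injection style (the Pod name, image and the listen_port key are assumptions; only the field names come from the API description above):
apiVersion: v1
kind: Pod
metadata:
  name: env-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: NGINX_PORT          # value read from a ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: nginx-config    # assumes this ConfigMap and key exist
          key: listen_port
    - name: MY_POD_NAME         # value taken from the Pod's own metadata
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_NODE_IP          # IP of the node the Pod is scheduled on
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP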
1.3 Using a ConfigMap to provide a configuration file for nginx
root@k8s-ansible-client:~/yaml/20211010/01# vim deploy_configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
default: |
server {
listen 80;
server_name www.pop.com;
index index.html;
location / {
root /data/nginx/html;
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /data/nginx/html
name: nginx-static-dir
- name: nginx-config
mountPath: /etc/nginx/conf.d
volumes:
- name: nginx-static-dir
hostPath:
path: /data/nginx/pop
- name: nginx-config
configMap:
name: nginx-config
items:
- key: default
path: mysite.conf
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-80
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 30019
protocol: TCP
type: NodePort
selector:
app: ng-deploy-80
root@k8s-ansible-client:~/yaml/20211010/01# kubectl apply -f deploy_configmap.yaml
configmap/nginx-config created
deployment.apps/nginx-deployment created
service/ng-deploy-80 created
root@k8s-ansible-client:~/yaml/20211010/01# kubectl get pods,deploy
NAME READY STATUS RESTARTS AGE
pod/alpine-test 1/1 Running 29 (3h49m ago) 16d
pod/kube100-site 2/2 Running 0 2d
pod/nginx-deployment-6b86dd48c8-bdgv7 1/1 Running 0 50s
pod/nginx-test-001 1/1 Running 5 (4h26m ago) 3d1h
pod/nginx-test1 1/1 Running 29 (3h59m ago) 16d
pod/nginx-test2 1/1 Running 29 (3h59m ago) 16d
pod/nginx-test3 1/1 Running 29 (3h59m ago) 16d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 1/1 1 1 50s
root@k8s-ansible-client:~/yaml/20211010/01# kubectl get configmap
NAME DATA AGE
kube-root-ca.crt 1 17d
nginx-config 1 57s
root@k8s-ansible-client:~/yaml/20211010/01# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 17d
ng-deploy-80 NodePort 10.68.114.213 <none> 81:30019/TCP 61s
Verify
root@k8s-ansible-client:~/yaml/20211010/01# kubectl exec -it pod/nginx-deployment-6b86dd48c8-bdgv7 bash
root@nginx-deployment-6b86dd48c8-bdgv7:/# cat /etc/nginx/conf.d/mysite.conf
server {
listen 80;
server_name www.pop.com;
index index.html;
location / {
root /data/nginx/html;
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
# Create an index.html file
root@nginx-deployment-6b86dd48c8-bdgv7:/# cd /data/nginx/html/
root@nginx-deployment-6b86dd48c8-bdgv7:/data/nginx/html# ls
root@nginx-deployment-6b86dd48c8-bdgv7:/data/nginx/html# echo "pop20211011" > index.html
Add a local hosts entry for the hostname and open the site in a browser, as shown in the screenshot.
1.4 Using a ConfigMap to provide a configuration file for MySQL
root@k8s-ansible-client:~/yaml/20211010/01# vim configmap_mysql.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
data:
mysqld.cnf: |-
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
symbolic-links = 0
max_allowed_packet = 50M
character_set_server = utf8
collation_server = utf8_general_ci
group_concat_max_len = 102400
[client]
default_character_set = utf8
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pop-mysql
labels:
app: mysql-dev
spec:
replicas: 1
selector:
matchLabels:
app: mysql-dev
template:
metadata:
labels:
app: mysql-dev
spec:
containers:
- image: mysql:5.7
imagePullPolicy: IfNotPresent
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "12345678"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: config-volume
mountPath: /etc/mysql/mysql.conf.d
volumes:
- name: config-volume
configMap:
name: mysql-config
root@k8s-ansible-client:~/yaml/20211010/01# kubectl apply -f configmap_mysql.yaml
configmap/mysql-config unchanged
deployment.apps/pop-mysql created
root@k8s-ansible-client:~/yaml/20211010/01# kubectl get pods,deploy
NAME READY STATUS RESTARTS AGE
pod/alpine-test 1/1 Running 34 (5h48m ago) 19d
pod/kube100-site 2/2 Running 0 5d
pod/nginx-test-001 1/1 Running 10 (6h25m ago) 6d1h
pod/nginx-test1 1/1 Running 34 (5h58m ago) 19d
pod/nginx-test2 1/1 Running 34 (5h58m ago) 19d
pod/nginx-test3 1/1 Running 34 (5h58m ago) 19d
pod/pop-mysql-9d9967bbb-g9mw7 1/1 Running 0 27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pop-mysql 1/1 1 1 27s
root@k8s-ansible-client:~/yaml/20211010/01# kubectl get configmap
NAME DATA AGE
kube-root-ca.crt 1 20d
mysql-config 1 6m55s
Verify
root@k8s-ansible-client:~/yaml/20211010/01# kubectl exec -it pod/pop-mysql-9d9967bbb-g9mw7 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@pop-mysql-9d9967bbb-g9mw7:/#
root@pop-mysql-9d9967bbb-g9mw7:/# cat /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
symbolic-links = 0
max_allowed_packet = 50M
character_set_server = utf8
collation_server = utf8_general_ci
group_concat_max_len = 102400
[client]
default_character_set = utf8
root@pop-mysql-9d9967bbb-g9mw7:/# mysql -u root -p12345678
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.35 MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
2. Persistent storage: PV and PVC
2.1 Introduction
PersistentVolume (PV): a piece of storage in the cluster that has been provisioned by an administrator in advance, or dynamically provisioned through a StorageClass. A persistent volume is a cluster resource, just like a node is. PVs are implemented with volume plugins, like ordinary Volumes, but they have a lifecycle independent of any Pod that uses them. The API object captures the details of the storage implementation, whether that is NFS, iSCSI or a cloud-provider-specific storage system.
PersistentVolumeClaim (PVC): a request for storage by a user, conceptually similar to a Pod. Pods consume node resources; PVCs consume PV resources. Pods can request specific amounts of resources (CPU and memory); likewise a PVC can request a specific size and access modes (for example, it can ask for a PV mountable as ReadWriteOnce, ReadOnlyMany or ReadWriteMany; see access modes).
PersistentVolume and PersistentVolumeClaim relate to each other much like Pods and Nodes: creating Pods consumes Node resources. PersistentVolumes provide the available storage resources, while a PersistentVolumeClaim states the storage requirements; the claim is then matched against existing resources or a new one is dynamically created, and the two are bound together.
2.2 Lifecycle of a PV and PVC
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks on them. The interaction between PVs and PVCs follows this lifecycle:
2.2.1 Provisioning
PVs can be provisioned in two ways: statically or dynamically.
Static provisioning
A cluster administrator creates a number of PVs. They carry the details of the real storage and are available to (visible to) cluster users. The PV objects exist in the Kubernetes API and can be consumed by users.
Dynamic provisioning
When none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for that PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to happen. A PVC that requests the class "" effectively disables dynamic provisioning for itself.
To enable dynamic provisioning based on storage classes, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server, for example by making sure DefaultStorageClass appears in the API server's --enable-admission-plugins flag (a comma-separated, ordered list). See the kube-apiserver documentation for more on API server flags. A rough StorageClass sketch is shown below.
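A rough StorageClass sketch, assuming an AWS EBS environment (the class name and parameters are illustrative; the NFS setup used later in this post would need an external provisioner instead):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                                         # hypothetical class name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # mark it as the default class
provisioner: kubernetes.io/aws-ebs                       # in-tree AWS EBS provisioner, as an example
parameters:
  type: gp2
reclaimPolicy: Delete                                    # dynamically provisioned PVs default to Delete
volumeBindingMode: WaitForFirstConsumer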
2.2.2 Binding
A user creates a PersistentVolumeClaim with a specific amount of storage requested and specific access modes; in the dynamic provisioning case this PVC may already have been created. A control loop in the control plane watches for new PVCs, finds a matching PV when possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to that PVC. Otherwise, users always get at least what they asked for, though the bound volume may exceed what was requested. Once established, a PersistentVolumeClaim binding is exclusive, regardless of how it was established. The binding between a PVC and a PV is a one-to-one mapping, implemented with a ClaimRef that records the bidirectional link between them.
If no matching PV exists, the PVC remains unbound indefinitely and is bound as soon as a matching PV becomes available. For example, a cluster with many 50 Gi PVs cannot satisfy a PVC requesting 100 Gi; that PVC can only be bound once a 100 Gi PV is added to the cluster.
2.2.3 Using
Pods use PVCs as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for the Pod. For volumes that support multiple access modes, the user specifies the desired mode when using the claim as a volume in a Pod.
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim section in the Pod's volumes block. See "Claims as volumes" for details.
Storage Object in Use Protection
The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) still in use by a Pod, and the PersistentVolumes (PVs) bound to them, are not removed from the system, since that could result in data loss.
Note: a PVC is considered in use by a Pod as long as a Pod object using that PVC exists.
If a user deletes a PVC that is in active use by a Pod, the PVC is not removed immediately; its removal is postponed until it is no longer used by any Pod. Likewise, if an administrator deletes a PV that is bound to a PVC, the PV is not removed immediately; its removal is postponed until it is no longer bound to a PVC.
A PVC is protected when its status is Terminating and its Finalizers list contains kubernetes.io/pvc-protection.
A PV is protected when its status is Terminating and its Finalizers list contains kubernetes.io/pv-protection.
2.2.4 Reclaiming
When a user is done with a volume, they can delete the PVC object from the API, which allows the resource to be reclaimed. The reclaim policy of a PersistentVolume tells the cluster what to do with the volume after it has been released from its claim. Currently, volumes can be Retained, Recycled, or Deleted.
Retain
The Retain policy allows manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". It is not yet available for another claim because the previous claimant's data remains on it. An administrator can manually reclaim the volume with the following steps:
- Delete the PersistentVolume object. The associated storage asset in external infrastructure (an AWS EBS, GCE PD, Azure Disk or Cinder volume, for example) still exists after the PV is deleted.
- Manually clean up the data on the associated storage asset as needed.
- Manually delete the associated storage asset.
If you want to reuse the storage asset, create a new PersistentVolume object with the same storage asset definition.
Delete
For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes and the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk or Cinder volume. Dynamically provisioned volumes inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise the PV must be edited or patched after creation. See "Change the reclaim policy of a PersistentVolume".
Recycle
Warning: the Recycle reclaim policy is deprecated. The recommended approach is to use dynamic provisioning instead.
If supported by the underlying volume plugin, the Recycle policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.
However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command-line arguments, as described in the reference. The custom recycler Pod template must contain a volumes specification, as in the example below:
apiVersion: v1
kind: Pod
metadata:
name: pv-recycler
namespace: default
spec:
restartPolicy: Never
volumes:
- name: vol
hostPath:
path: /any/path/it/will/be/replaced
containers:
- name: pv-recycler
image: "k8s.gcr.io/busybox"
command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
volumeMounts:
- name: vol
mountPath: /scrub
The particular path specified in the volumes part of the custom recycler Pod template is replaced with the path of the volume that is being recycled.
2.3 PV
2.3.1 Supported types
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
- awsElasticBlockStore - AWS Elastic Block Store (EBS)
- azureDisk - Azure Disk
- azureFile - Azure File
- cephfs - CephFS volume
- csi - Container Storage Interface (CSI)
- fc - Fibre Channel (FC) storage
- flexVolume - FlexVolume
- gcePersistentDisk - GCE Persistent Disk
- glusterfs - Glusterfs volume
- hostPath - HostPath volume (for single-node testing only; not suitable for multi-node clusters; consider using a local volume instead)
- iscsi - iSCSI (SCSI over IP) storage
- local - local storage devices mounted on a node
- nfs - Network File System (NFS) storage
- portworxVolume - Portworx volume
- rbd - Rados Block Device (RBD) volume
- vsphereVolume - vSphere VMDK volume
The following persistent volume types are deprecated. This means support is still available but will be removed in a future Kubernetes release:
- cinder - Cinder (OpenStack block storage) (deprecated in v1.18)
- flocker - Flocker storage (deprecated in v1.22)
- quobyte - Quobyte volume (deprecated in v1.22)
- storageos - StorageOS volume (deprecated in v1.22)
Older versions of Kubernetes also supported the following in-tree persistent volume types:
- photonPersistentDisk - Photon controller persistent disk (not available after v1.15)
- scaleIO - ScaleIO volume (not available after v1.21)
2.3.2 Access modes
ReadWriteOnce
The volume can be mounted as read-write by a single node. ReadWriteOnce still allows multiple Pods running on that same node to access the volume.
ReadOnlyMany
The volume can be mounted read-only by many nodes.
ReadWriteMany
The volume can be mounted as read-write by many nodes.
ReadWriteOncePod
The volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod if you want to guarantee that only one Pod across the whole cluster can read or write that PVC. This is only supported for CSI volumes and requires Kubernetes 1.22 or newer.
In the command-line interface (CLI), the access modes are abbreviated as:
- RWO - ReadWriteOnce
- ROX - ReadOnlyMany
- RWX - ReadWriteMany
- RWOP - ReadWriteOncePod
Important! A volume can only be mounted using one access mode at a time, even if it supports several. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or as ReadOnlyMany by many nodes, but not both at the same time.
Volume plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
---|---|---|---|---|
AWSElasticBlockStore | ✓ | - | - | - |
AzureFile | ✓ | ✓ | ✓ | - |
AzureDisk | ✓ | - | - | - |
CephFS | ✓ | ✓ | ✓ | - |
Cinder | ✓ | - | - | - |
CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
FC | ✓ | ✓ | - | - |
FlexVolume | ✓ | ✓ | depends on the driver | - |
Flocker | ✓ | - | - | - |
GCEPersistentDisk | ✓ | ✓ | - | - |
Glusterfs | ✓ | ✓ | ✓ | - |
HostPath | ✓ | - | - | - |
iSCSI | ✓ | ✓ | - | - |
Quobyte | ✓ | ✓ | ✓ | - |
NFS | ✓ | ✓ | ✓ | - |
RBD | ✓ | ✓ | - | - |
VsphereVolume | ✓ | - | - (works when Pods are collocated on the same node) | - |
PortworxVolume | ✓ | - | ✓ | - |
StorageOS | ✓ | - | - | - |
2.3.3 Class
A PV can belong to a class (Class), specified by setting its storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. The annotation still works, but it will be fully deprecated in a future Kubernetes release.
2.3.4 Reclaim policy
The current reclaim policies are:
- Retain -- manual reclamation
- Recycle -- basic scrub (rm -rf /thevolume/*)
- Delete -- the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk or OpenStack Cinder volume, is deleted as well
Currently, only NFS and HostPath support recycling (Recycle). AWS EBS, GCE PD, Azure Disk and Cinder volumes support deletion (Delete). A minimal sketch showing how the policy is set on a PV follows.
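The policy is set per PV through persistentVolumeReclaimPolicy; a minimal NFS-backed sketch (name, server and path are placeholders) that keeps the data after the claim is deleted:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo                    # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the volume and its data when released
  nfs:
    server: 10.10.0.26                    # placeholder NFS server
    path: /data/demo                      # placeholder export path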
2.3.5 Mount options
A Kubernetes administrator can specify additional mount options to be used when a persistent volume is mounted on a node.
Note: not all persistent volume types support mount options.
The following volume types support mount options:
- AWSElasticBlockStore
- AzureDisk
- AzureFile
- CephFS
- Cinder (OpenStack block storage)
- GCEPersistentDisk
- Glusterfs
- NFS
- Quobyte volumes
- RBD (Ceph Block Device)
- StorageOS
- VsphereVolume
- iSCSI
Kubernetes does not validate mount options; if a mount option is invalid, the mount simply fails.
In the past, the annotation volume.beta.kubernetes.io/mount-options was used instead of the mountOptions attribute. The annotation still works, but it will be fully deprecated in a future Kubernetes release. A sketch of mountOptions on an NFS PV follows.
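A hedged example of mountOptions on an NFS PV (the name, options, server and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-opts             # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard                        # retry NFS requests indefinitely
  - nfsvers=4.1                 # request NFS protocol version 4.1
  nfs:
    server: 10.10.0.26          # placeholder NFS server
    path: /data/demo            # placeholder export path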
2.3.6 Phase
A volume is in one of the following phases (Phase):
- Available -- a free resource that is not yet bound to a claim;
- Bound -- the volume is bound to a claim;
- Released -- the claim has been deleted, but the resource has not yet been reclaimed by the cluster;
- Failed -- the volume's automatic reclamation failed.
The CLI shows the name of the PVC bound to each PV.
2.4 PVC
Each PVC object has a spec and a status, which are the specification and status of the claim. The name of a PersistentVolumeClaim object must be a valid DNS subdomain name.
2.4.1 Access modes
Claims use the same access-mode conventions as volumes when requesting storage with a particular access mode.
2.4.2 Volume modes
Claims use the same convention as volumes to indicate whether the volume should be consumed as a filesystem or as a raw block device. See the sketch below.
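A minimal sketch of the volumeMode field on a claim (the default is Filesystem; Block hands the raw device to the Pod; the name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc-demo          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block             # request a raw block device instead of a mounted filesystem
  resources:
    requests:
      storage: 10Gi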
2.4.3 Resources
Claims, like Pods, can request specific amounts of resources; in this context the requested resource is storage. PVs and PVCs use the same resource model.
2.4.4 Selector
Claims can specify a label selector to further filter the set of volumes. Only volumes whose labels match the selector can be bound to the claim. The selector has two fields:
- matchLabels - the volume must have a label with this value
- matchExpressions - requirements built from a key, a list of values and an operator; valid operators are In, NotIn, Exists and DoesNotExist.
All requirements from both matchLabels and matchExpressions are ANDed together; they must all be satisfied for a volume to match. A short sketch follows.
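A short sketch of a claim with a selector (the labels are made up); only PVs carrying matching labels can be bound to it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: selective-pvc           # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      release: stable           # the PV must carry this label
    matchExpressions:
    - key: environment          # and its environment label must be one of these values
      operator: In
      values: ["dev", "test"]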
2.4.5 Class
A claim can request a particular storage class by setting the storageClassName attribute to the name of a StorageClass. Only PVs of the requested class, i.e. those whose storageClassName equals that of the PVC, can be bound to the claim.
A PVC does not have to request a class. A PVC with storageClassName set to "" is always interpreted as requesting a PV with no class, so it can only be bound to PVs that have no class set (no annotation, or an annotation set to ""). A PVC with no storageClassName at all is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission controller plugin is enabled.
- If the admission plugin is enabled, the administrator may specify a default StorageClass. All PVCs that have no storageClassName can then only be bound to PVs of that default class. A default StorageClass is designated by setting the annotation storageclass.kubernetes.io/is-default-class to true on the StorageClass object. If the administrator has not specified a default, the cluster handles PVC creation as if the admission plugin were turned off. If more than one default class is specified, the admission plugin forbids the creation of all PVCs.
- If the admission plugin is turned off, there is no notion of a default StorageClass. All PVCs without a storageClassName can only be bound to PVs that have no class. In this case these PVCs are treated the same way as PVCs whose storageClassName is set to "".
Depending on the installation method, a default StorageClass may be deployed to the cluster by the addon manager during installation.
When a PVC specifies a selector in addition to requesting a StorageClass, the requirements are ANDed together: only a PV of the requested class and carrying the requested labels may be bound to the claim.
Note: currently, a PVC with a non-empty selector cannot have a PV dynamically provisioned for it.
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. The annotation still works, but it will be fully deprecated in a future Kubernetes release. A claim sketch that requests a class explicitly follows.
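A hedged sketch of a claim that requests a class explicitly (the name and class are assumptions); setting storageClassName: "" instead would restrict it to PVs without a class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: classed-pvc             # hypothetical name
spec:
  storageClassName: standard    # must match an existing StorageClass (or be dynamically provisioned from it)
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi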
3. Deploying ZooKeeper and nginx services on PV/PVC
3.1 Deploying the ZooKeeper service
3.1.1 Building the ZooKeeper image
Download the JDK base image
root@k8s-ansible-client:~/yaml/20211010/02# docker pull elevy/slim_java:8
8: Pulling from elevy/slim_java
88286f41530e: Pull complete
7141511c4dad: Pull complete
fd529fe251b3: Pull complete
Digest: sha256:044e42fb89cda51e83701349a9b79e8117300f4841511ed853f73caf7fc98a51
Status: Downloaded newer image for elevy/slim_java:8
docker.io/elevy/slim_java:8
# Tag the image
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# docker tag elevy/slim_java:8 harbor.openscp.com/base/slim_java:8
# Push it to the private registry
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# docker push harbor.openscp.com/base/slim_java:8
The push refers to repository [harbor.openscp.com/base/slim_java]
e053edd72ca6: Pushed
aba783efb1a4: Pushed
5bef08742407: Pushed
8: digest: sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604 size: 952
Build the custom image
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# vim Dockerfile
FROM harbor.openscp.com/base/slim_java:8
ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
ca-certificates \
gnupg \
tar \
wget && \
#
# Install dependencies
apk add --no-cache \
bash && \
#
#
# Verify the signature
export GNUPGHOME="$(mktemp -d)" && \
gpg -q --batch --import /tmp/KEYS && \
gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
#
# Set up directories
#
mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
#
# Install
tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
#
# Slim down
cd /zookeeper && \
cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
rm -rf \
*.txt \
*.xml \
bin/README.txt \
bin/*.cmd \
conf/* \
contrib \
dist-maven \
docs \
lib/*.txt \
lib/cobertura \
lib/jdiff \
recipes \
src \
zookeeper-*.asc \
zookeeper-*.md5 \
zookeeper-*.sha1 && \
#
# Clean up
apk del .build-deps && \
rm -rf /tmp/* "$GNUPGHOME"
COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /
ENV PATH=/zookeeper/bin:${PATH} \
ZOO_LOG_DIR=/zookeeper/log \
ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
JMXPORT=9010
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "zkServer.sh", "start-foreground" ]
EXPOSE 2181 2888 3888 9010
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# chmod a+x *.sh
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# chmod a+x bin/*.sh
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# vim build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.openscp.com/base/zookeeper:${TAG} .
sleep 1
docker push harbor.openscp.com/base/zookeeper:${TAG}
root@k8s-ansible-client:~/yaml/20211010/02/dockerfiles/zk# bash build-command.sh 2021-10-15_232810
...
Successfully built f0c5a1156af1
Successfully tagged harbor.openscp.com/base/zookeeper:2021-10-15_232810
The push refers to repository [harbor.openscp.com/base/zookeeper]
0b68dce37282: Pushed
fcf64f572f68: Pushed
3e95117418a3: Pushed
0c993a4050f7: Pushed
086988057ab2: Pushed
4feee6756c68: Pushed
e05a4008c1c7: Pushed
b35e3e407570: Pushed
e053edd72ca6: Mounted from base/slim_java
aba783efb1a4: Mounted from base/slim_java
5bef08742407: Mounted from base/slim_java
2021-10-15_232810: digest: sha256:07589952c13868ee10598947c7b3864be064512cea3def7c463d390233d8f3bb size: 2621
3.1.2 Deploying ZooKeeper
Create the PVs
root@k8s-ansible-client:~/yaml/20211010/02# vim zk-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zk-pop-pv-1
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
nfs:
server: 10.10.0.26
path: /data/pop/zk-datadir-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zk-pop-pv-2
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
nfs:
server: 10.10.0.26
path: /data/pop/zk-datadir-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zk-pop-pv-3
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
nfs:
server: 10.10.0.26
path: /data/pop/zk-datadir-3
root@k8s-ansible-client:~/yaml/20211010/02# kubectl apply -f zk-pv.yaml
persistentvolume/zk-pop-pv-1 created
persistentvolume/zk-pop-pv-2 created
persistentvolume/zk-pop-pv-3 created
root@k8s-ansible-client:~/yaml/20211010/02# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
zk-pop-pv-1 3Gi RWO Retain Available 3s
zk-pop-pv-2 3Gi RWO Retain Available 3s
zk-pop-pv-3 3Gi RWO Retain Available 3s
Create the PVCs
root@k8s-ansible-client:~/yaml/20211010/02# vim zk-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zk-pop-pvc-1
spec:
accessModes:
- ReadWriteOnce
volumeName: zk-pop-pv-1
resources:
requests:
storage: 3Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zk-pop-pvc-2
spec:
accessModes:
- ReadWriteOnce
volumeName: zk-pop-pv-2
resources:
requests:
storage: 3Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zk-pop-pvc-3
spec:
accessModes:
- ReadWriteOnce
volumeName: zk-pop-pv-3
resources:
requests:
storage: 3Gi
root@k8s-ansible-client:~/yaml/20211010/02# kubectl apply -f zk-pvc.yaml
persistentvolumeclaim/zk-pop-pvc-1 created
persistentvolumeclaim/zk-pop-pvc-2 created
persistentvolumeclaim/zk-pop-pvc-3 created
root@k8s-ansible-client:~/yaml/20211010/02# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zk-pop-pvc-1 Bound zk-pop-pv-1 3Gi RWO 3s
zk-pop-pvc-2 Bound zk-pop-pv-2 3Gi RWO 3s
zk-pop-pvc-3 Bound zk-pop-pv-3 3Gi RWO 3s
Create the ZooKeeper cluster
root@k8s-ansible-client:~/yaml/20211010/02# vim zk.yaml
apiVersion: v1
kind: Service
metadata:
name: zookeeper
spec:
ports:
- name: client
port: 2181
selector:
app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper1
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32181
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper2
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32182
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper3
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32183
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper1
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "1"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: harbor.openscp.com/base/zookeeper:2021-10-15_232810
imagePullPolicy: Always
env:
- name: MYID
value: "1"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zk-pop-pvc-1
volumes:
- name: zk-pop-pvc-1
persistentVolumeClaim:
claimName: zk-pop-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper2
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "2"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: harbor.openscp.com/base/zookeeper:2021-10-15_232810
imagePullPolicy: Always
env:
- name: MYID
value: "2"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zk-pop-pvc-2
volumes:
- name: zk-pop-pvc-2
persistentVolumeClaim:
claimName: zk-pop-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper3
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "3"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: harbor.openscp.com/base/zookeeper:2021-10-15_232810
imagePullPolicy: Always
env:
- name: MYID
value: "3"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zk-pop-pvc-3
volumes:
- name: zk-pop-pvc-3
persistentVolumeClaim:
claimName: zk-pop-pvc-3
root@k8s-ansible-client:~/yaml/20211010/02# kubectl apply -f zk.yaml
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created
root@k8s-ansible-client:~/yaml/20211010/02# kubectl get pods,deploy
NAME READY STATUS RESTARTS AGE
pod/alpine-test 1/1 Running 39 (6h21m ago) 22d
pod/kube100-site 2/2 Running 0 7d22h
pod/nginx-test-001 1/1 Running 15 (6h58m ago) 8d
pod/nginx-test1 1/1 Running 39 (6h31m ago) 22d
pod/nginx-test2 1/1 Running 39 (6h31m ago) 22d
pod/nginx-test3 1/1 Running 39 (6h31m ago) 22d
pod/zookeeper1-cdbb7fbc-5pgdg 1/1 Running 1 (11s ago) 14s
pod/zookeeper2-f4944446d-2xnjd 1/1 Running 0 14s
pod/zookeeper3-589f6bc7-2mnz6 1/1 Running 0 13s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/zookeeper1 1/1 1 1 14s
deployment.apps/zookeeper2 1/1 1 1 14s
deployment.apps/zookeeper3 1/1 1 1 13s
Verify
# Check the cluster status
root@k8s-ansible-client:~/yaml/20211010/02# kubectl exec -it zookeeper1-cdbb7fbc-5pgdg bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
bash-4.3# exit
exit
root@k8s-ansible-client:~/yaml/20211010/02# kubectl exec -it zookeeper2-f4944446d-2xnjd bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
bash-4.3# exit
exit
root@k8s-ansible-client:~/yaml/20211010/02# kubectl exec -it zookeeper3-589f6bc7-2mnz6 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader
As shown above, zookeeper3 is the cluster leader, while zookeeper1 and zookeeper2 are followers.
4. Static/dynamic content separation with NGINX and Tomcat
Static pages, JS files, images and other static requests are served by NGINX directly, while dynamic requests are forwarded by an NGINX location block to Tomcat.
4.1 Building the nginx application image
nginx base image
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx-base# vim Dockerfile
#Nginx Base Image
FROM harbor.openscp.com/base/centos:centos7.9.2009
RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.14.2.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.14.2 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx &&useradd nginx -u 2001 &&rm -rf /usr/local/src/nginx-1.14.2.tar.gz
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx-base# cat build-command.sh
#!/bin/bash
docker build -t harbor.openscp.com/base/nginx-base:v1.14.2 .
sleep 1
docker push harbor.openscp.com/base/nginx-base:v1.14.2
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx-base# sh build-command.sh
nginx configuration
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx# vim nginx.conf
upstream tomcat_webserver {
server pop-tomcat-app1-service.default.svc.pop.local:80;
}
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
location /webapp {
root html;
index index.html index.htm;
}
location /myapp {
proxy_pass http://tomcat_webserver;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
}
nginx application image
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx# cat Dockerfile
#Nginx 1.14.2
FROM harbor.openscp.com/base/nginx-base:v1.14.2
ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app.tar.gz /usr/local/nginx/html/webapp/
ADD index.html /usr/local/nginx/html/index.html
#Mount paths for static assets
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images
EXPOSE 80 443
CMD ["nginx"]
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.openscp.com/base/nginx-web1:${TAG} .
echo "镜像构建完成,即将上传到harbor"
sleep 1
docker push harbor.openscp.com/base/nginx-web1:${TAG}
echo "镜像上传到harbor完成"
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/nginx# bash build-command.sh v1
...
Successfully tagged harbor.openscp.com/base/nginx-web1:v1
Image build complete, pushing to harbor
The push refers to repository [harbor.openscp.com/base/nginx-web1]
bb90d1fa3830: Pushed
bf323dbd224b: Pushed
edf25d730b57: Pushed
c7e8be9aa1f4: Pushed
b8f402932364: Mounted from base/nginx-base
3265817f225b: Mounted from base/nginx-base
7fc5345cbe01: Mounted from base/nginx-base
174f56854903: Mounted from base/nginx-base
v1: digest: sha256:b4c7a812324d668e4475d5e5134dbc8536cb9469a405575fee574f92a8989499 size: 1993
Image pushed to harbor
4.2 Building the Tomcat application image
JDK base image
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/jdk# cat Dockerfile
#JDK Base Image
FROM harbor.openscp.com/base/centos:centos7.9.2009
ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/jdk# cat build-command.sh
#!/bin/bash
docker build -t harbor.openscp.com/base/jdk-base:v8.212 .
sleep 1
docker push harbor.openscp.com/base/jdk-base:v8.212
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/jdk# sh build-command.sh
...
Successfully tagged harbor.openscp.com/base/jdk-base:v8.212
The push refers to repository [harbor.openscp.com/base/jdk-base]
38dbe7a8225d: Pushed
4cdbfe6aa3f6: Pushed
3aec209f0edd: Pushed
174f56854903: Pushed
v8.212: digest: sha256:2e51cb419ee7b103b33b442988142a28231e48bd576d28b89ec30c70fc0cea90 size: 1157
Tomcat base image
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-base# cat Dockerfile
#Tomcat 8.5.43 base image
FROM harbor.openscp.com/base/jdk-base:v8.212
RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz /apps
RUN useradd nginx -u 2001 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R nginx.nginx /apps /data -R
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-base# cat build-command.sh
#!/bin/bash
docker build -t harbor.openscp.com/base/tomcat-base:v8.5.43 .
sleep 3
docker push harbor.openscp.com/base/tomcat-base:v8.5.43
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-base# sh build-command.sh
...
Successfully tagged harbor.openscp.com/base/tomcat-base:v8.5.43
The push refers to repository [harbor.openscp.com/base/tomcat-base]
51003f7fe6e5: Pushed
afa3eb2a2173: Pushed
7136febc3401: Pushed
38dbe7a8225d: Mounted from base/jdk-base
4cdbfe6aa3f6: Mounted from base/jdk-base
3aec209f0edd: Mounted from base/jdk-base
174f56854903: Mounted from base/jdk-base
v8.5.43: digest: sha256:7f073d3b9accaf0f7904b6002daf773932c1d1b1045cabb8980c8809ee288e43 size: 1786
Tomcat application image
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# cat Dockerfile
#tomcat web1
FROM harbor.openscp.com/base/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
RUN chown -R nginx.nginx /data/ /apps/
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# vim build-command.sh
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.openscp.com/base/tomcat-app1:${TAG} .
sleep 3
docker push harbor.openscp.com/base/tomcat-app1:${TAG}
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# cat run_tomcat.sh
#!/bin/bash
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# chmod a+x *.sh
root@k8s-ansible-client:~/yaml/20211010/03/dockerfile/tomcat-app1# sh build-command.sh v1
...
Successfully tagged harbor.openscp.com/base/tomcat-app1:v1
The push refers to repository [harbor.openscp.com/base/tomcat-app1]
9ca3b03a0bf2: Pushed
c1448800f9cc: Pushed
948ac6b79a77: Pushed
2689f543d032: Pushed
d5bf884d7108: Pushed
d5123d987925: Mounted from base/tomcat-base
afa3eb2a2173: Mounted from base/tomcat-base
7136febc3401: Mounted from base/tomcat-base
38dbe7a8225d: Mounted from base/tomcat-base
4cdbfe6aa3f6: Mounted from base/tomcat-base
3aec209f0edd: Mounted from base/tomcat-base
174f56854903: Mounted from base/tomcat-base
v1: digest: sha256:915aa095e3cb8f6ad8c589b6af6695c581eacde00fa0193a1db7a5675abdd3ab size: 2827
4.3 Running NGINX and Tomcat on Kubernetes for static/dynamic separation
4.3.1 Starting Tomcat
root@k8s-ansible-client:~/yaml/20211010/03/tomcat-app1# vim tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: pop-tomcat-app1-deployment-label
name: pop-tomcat-app1-deployment
namespace: pop
spec:
replicas: 1
selector:
matchLabels:
app: pop-tomcat-app1-selector
template:
metadata:
labels:
app: pop-tomcat-app1-selector
spec:
containers:
- name: pop-tomcat-app1-container
image: harbor.openscp.com/base/tomcat-app1:v1
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
volumeMounts:
- name: pop-images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: pop-static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: pop-images
nfs:
server: 10.10.0.26
path: /data/pop/images
- name: pop-static
nfs:
server: 10.10.0.26
path: /data/pop/static
# nodeSelector:
# project: pop
# app: tomcat
---
kind: Service
apiVersion: v1
metadata:
labels:
app: pop-tomcat-app1-service-label
name: pop-tomcat-app1-service
namespace: pop
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 30003
selector:
app: pop-tomcat-app1-selector
root@k8s-ansible-client:~/yaml/20211010/03/tomcat-app1# kubectl apply -f tomcat-app1.yaml
deployment.apps/pop-tomcat-app1-deployment created
service/pop-tomcat-app1-service created
root@k8s-ansible-client:~/yaml/20211010/03/tomcat-app1# kubectl get pods,deploy
NAME READY STATUS RESTARTS AGE
pod/alpine-test 1/1 Running 41 (3h37m ago) 23d
pod/kube100-site 2/2 Running 0 8d
pod/nginx-test-001 1/1 Running 17 (4h14m ago) 10d
pod/nginx-test1 1/1 Running 41 (3h47m ago) 23d
pod/nginx-test2 1/1 Running 41 (3h47m ago) 23d
pod/nginx-test3 1/1 Running 41 (3h47m ago) 23d
pod/pop-tomcat-app1-deployment-54bb9d8f8c-bzwdr 1/1 Running 0 10s
pod/zookeeper1-cdbb7fbc-5pgdg 1/1 Running 1 (25h ago) 25h
pod/zookeeper2-f4944446d-2xnjd 1/1 Running 0 25h
pod/zookeeper3-589f6bc7-2mnz6 1/1 Running 0 25h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pop-tomcat-app1-deployment 1/1 1 1 10s
deployment.apps/zookeeper1 1/1 1 1 25h
deployment.apps/zookeeper2 1/1 1 1 25h
deployment.apps/zookeeper3 1/1 1 1 25h
root@k8s-ansible-client:~/yaml/20211010/03/tomcat-app1# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 23d
pop-tomcat-app1-service NodePort 10.68.248.96 <none> 80:30013/TCP 36s
zookeeper1 NodePort 10.68.42.189 <none> 2181:32181/TCP,2888:30923/TCP,3888:30168/TCP 25h
zookeeper2 NodePort 10.68.78.146 <none> 2181:32182/TCP,2888:31745/TCP,3888:30901/TCP 25h
zookeeper3 NodePort 10.68.199.44 <none> 2181:32183/TCP,2888:32488/TCP,3888:31621/TCP 25h
Verify by accessing it in a browser
4.3.2 Starting Nginx
root@k8s-ansible-client:~/yaml/20211010/03/nginx# vim nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: pop-nginx-deployment-label
name: pop-nginx-deployment
namespace: pop
spec:
replicas: 1
selector:
matchLabels:
app: pop-nginx-selector
template:
metadata:
labels:
app: pop-nginx-selector
spec:
containers:
- name: pop-nginx-container
image: harbor.openscp.com/base/nginx-web1:v1
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
- containerPort: 443
protocol: TCP
name: https
env:
- name: "password"
value: "123456"
- name: "age"
value: "20"
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: pop-images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: pop-static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: pop-images
nfs:
server: 10.10.0.26
path: /data/pop/images
- name: pop-static
nfs:
server: 10.10.0.26
path: /data/pop/static
#nodeSelector:
# group: pop
---
kind: Service
apiVersion: v1
metadata:
labels:
app: pop-nginx-service-label
name: pop-nginx-service
namespace: pop
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30002
- name: https
port: 443
protocol: TCP
targetPort: 443
nodePort: 30444
selector:
app: pop-nginx-selector
root@k8s-ansible-client:~/yaml/20211010/03/nginx# kubectl apply -f nginx.yaml
deployment.apps/pop-nginx-deployment created
service/pop-nginx-service created
root@k8s-ansible-client:~/yaml/20211010/03/nginx# kubectl get pods,deploy
NAME READY STATUS RESTARTS AGE
pod/alpine-test 1/1 Running 41 (3h44m ago) 23d
pod/kube100-site 2/2 Running 0 8d
pod/nginx-test-001 1/1 Running 17 (4h20m ago) 10d
pod/nginx-test1 1/1 Running 41 (3h53m ago) 23d
pod/nginx-test2 1/1 Running 41 (3h53m ago) 23d
pod/nginx-test3 1/1 Running 41 (3h53m ago) 23d
pod/pop-nginx-deployment-76cc49d6b4-vrghv 1/1 Running 0 4s
pod/pop-tomcat-app1-deployment-54bb9d8f8c-bzwdr 1/1 Running 0 6m51s
pod/zookeeper1-cdbb7fbc-5pgdg 1/1 Running 1 (25h ago) 25h
pod/zookeeper2-f4944446d-2xnjd 1/1 Running 0 25h
pod/zookeeper3-589f6bc7-2mnz6 1/1 Running 0 25h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pop-nginx-deployment 1/1 1 1 4s
deployment.apps/pop-tomcat-app1-deployment 1/1 1 1 6m51s
deployment.apps/zookeeper1 1/1 1 1 25h
deployment.apps/zookeeper2 1/1 1 1 25h
deployment.apps/zookeeper3 1/1 1 1 25h
root@k8s-ansible-client:~/yaml/20211010/03/nginx# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 23d
pop-nginx-service NodePort 10.68.18.244 <none> 80:30002/TCP,443:30444/TCP 11s
pop-tomcat-app1-service NodePort 10.68.248.96 <none> 80:30013/TCP 6m58s
zookeeper1 NodePort 10.68.42.189 <none> 2181:32181/TCP,2888:30923/TCP,3888:30168/TCP 25h
zookeeper2 NodePort 10.68.78.146 <none> 2181:32182/TCP,2888:31745/TCP,3888:30901/TCP 25h
zookeeper3 NodePort 10.68.199.44 <none> 2181:32183/TCP,2888:32488/TCP,3888:31621/TCP 25h
4.3.3 Verifying static/dynamic separation
First access http://192.168.20.236:30002/; the default page is served by NGINX.
Then access http://192.168.20.236:30002/myapp/; the location /myapp block proxies the request to the Tomcat application.