Kubernetes Ingress
Kubernetes offers three ways to expose a service: LoadBalancer Service, NodePort Service, and Ingress. The official documentation defines an Ingress as a collection of rules that govern how traffic from outside the cluster reaches Services inside it; put simply, it defines rules under which incoming requests are forwarded to the corresponding in-cluster Services, thereby exposing them. An Ingress can give in-cluster Services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and provide name-based virtual hosting. Compared with the Traefik ingress controller, the Nginx ingress controller is more powerful, performs better, and offers a richer feature set.
Deploying Nginx Ingress
cat ingress-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: ges.harbor.in/tools/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      # pin the nodePort so the controller listens on 80 directly
      nodePort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      # pin the nodePort so the controller listens on 443 directly
      nodePort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
Before applying this Service, note that the apiserver's default NodePort range is 30000-32767, and the nodePort values we need (80 and 443) fall outside it, so the apiserver's NodePort range has to be changed first. Edit the following file:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
and add the following line under the command section:
- --service-node-port-range=0-65535
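Because kube-apiserver runs as a static pod, kubelet restarts it automatically once the manifest is saved; the command section then contains the new flag (an excerpt, the other flags stay as they are). After the apiserver is back, the Nginx Ingress manifests above can be applied and checked:

spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=0-65535
    # ... other existing flags unchanged ...

kubectl apply -f ingress-nginx.yaml
kubectl get pods -n ingress-nginx -o wide
kubectl get svc -n ingress-nginx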
Testing Nginx Ingress
cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: es-backend-pre
  name: nginx-web
  annotations:
    # specify which Ingress controller should handle this Ingress
    kubernetes.io/ingress.class: "nginx"
    # allow regular expressions in the rule paths
    nginx.ingress.kubernetes.io/use-regex: "true"
    # connection timeout to the backend, default 5s
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    # timeout for sending data to the backend, default 60s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    # timeout for reading the backend response, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    # maximum client request body (upload) size
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    # URL rewrite
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # routing rules
  rules:
    # the host must be a domain name; change it to your own
    - host: k8s.test.com
      http:
        paths:
          - path: /es/api
            backend:
              # name of the backend Service
              serviceName: es-platform-portal-api-svc
              # port of the backend Service
              servicePort: 8080
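Save the manifest and apply it; the target namespace es-backend-pre is already set in metadata:

kubectl apply -f ingress.yaml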
Add a hosts entry so the test domain resolves to a node:
<your node IP>  k8s.test.com
[root@pre-k8s-app-master ~]# kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
es-backend-pre nginx-web <none> k8s.test.com 10.107.51.25 80 2d18h
[root@pre-k8s-app-master ~]# curl k8s.test.com/es/api
{"timestamp":1652664604167,"status":404,"error":"Not Found","message":"No message available","path":"/"}
The request reached the API successfully; the 404 comes from the rewrite rule, which rewrites the matched path to / before it hits the backend.
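The rewrite-target: / annotation rewrites every matched path to /, which is why the backend sees / instead of /es/api. If the goal is to strip only the /es/api prefix while preserving the rest of the path, the usual ingress-nginx pattern is a regex path with a capture group plus a numbered rewrite target; a sketch of the relevant parts (adjust to how the backend actually expects its paths):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 refers to the second capture group of the path below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: k8s.test.com
      http:
        paths:
          - path: /es/api(/|$)(.*)
            backend:
              serviceName: es-platform-portal-api-svc
              servicePort: 8080

With this rule a request for /es/api/health is forwarded to the backend as /health.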
Kubernetes Autoscaling
HPA stands for Horizontal Pod Autoscaler; the abbreviation HPA is used below. The HPA automatically scales the number of pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization (it can also scale on other, application-provided custom metrics). Horizontal pod autoscaling does not apply to objects that cannot be scaled, such as DaemonSets. The HPA is implemented as a Kubernetes API resource plus a controller: the resource defines the desired behavior, and the controller periodically fetches the average CPU utilization, compares it with the target value, and adjusts the replica count of the ReplicationController or Deployment accordingly.
The HPA is implemented as a control loop whose period is set by the controller manager's --horizontal-pod-autoscaler-sync-period flag (default 15 seconds). During each period, the controller manager queries resource utilization for the metrics specified in every HorizontalPodAutoscaler definition. It can fetch metrics from the resource metrics API (per-pod resource metrics) and from the custom metrics API (custom metrics).
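On a kubeadm-style cluster the period can be changed, if needed, by editing the controller manager's static pod manifest in the same way as the apiserver edit above (a sketch; the 15-second default is usually fine):

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# add under the command section:
- --horizontal-pod-autoscaler-sync-period=30s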
1) For per-pod resource metrics (such as CPU), the controller fetches the metric for every pod targeted by the HorizontalPodAutoscaler from the resource metrics API. If a target utilization is set, the controller reads the resource usage of each pod's containers and computes the utilization as a percentage of the resource request; if a raw target value is used, the raw metric values are used directly (no percentage is computed). The controller then derives a scaling ratio from the average utilization or raw value and from that computes the desired replica count (see the worked example after this list). Note that if some containers of a pod do not report the relevant resource metric, that pod's CPU utilization is not used by the controller.
2) For per-pod custom metrics, the controller works the same way as for resource metrics, except that raw values are used instead of utilization percentages.
3) For object metrics and external metrics (a single metric describing one object), the metric is compared directly with the target value to produce the scaling ratio mentioned above. In the autoscaling/v2beta2 API, that value can optionally be divided by the number of pods before the comparison. Normally the controller fetches metric data from a set of aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io); the metrics.k8s.io API is typically served by metrics-server, which has to be deployed separately.
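The core calculation the controller performs is a simple proportional one (stabilization windows and tolerance are omitted here):

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )

For example, with 2 replicas averaging 90% CPU utilization against a 45% target, the controller computes ceil(2 * 90 / 45) = 4 and scales the workload up to 4 replicas; if utilization later drops to 20%, ceil(4 * 20 / 45) = 2 and it scales back down.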
Metrics Server
[root@pre-k8s-app-master ~]# cat metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
    verbs:
      - get
      - list
      - update
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.6
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.6
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.6
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
        - name: metrics-server
          image: ges.harbor.in/tools/metrics-server-amd64:v0.3.6
          command:
            - /metrics-server
            - --metric-resolution=30s
            - --kubelet-preferred-address-types=InternalIP
            - --kubelet-insecure-tls
          ports:
            - containerPort: 443
              name: https
              protocol: TCP
        - name: metrics-server-nanny
          image: ges.harbor.in/tools/addon-resizer:1.8.4
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 5m
              memory: 50Mi
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: metrics-server-config-volume
              mountPath: /etc/config
          command:
            - /pod_nanny
            - --config-dir=/etc/config
            - --cpu=300m
            - --extra-cpu=20m
            - --memory=200Mi
            - --extra-memory=10Mi
            - --threshold=5
            - --deployment=metrics-server
            - --container=metrics-server
            - --poll-period=300000
            - --estimator=exponential
            - --minClusterSize=2
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: https
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
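After applying the manifest, give the Deployment a minute to become ready, then confirm that the aggregated metrics API is actually being served before relying on kubectl top:

kubectl apply -f metrics.yaml
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | head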
Verify that metrics-server is working:
[root@pre-k8s-app-master ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
pre-k8s-app-master 336m 8% 3039Mi 39%
pre-k8s-app-nd01 545m 6% 8739Mi 55%
pre-k8s-app-nd02 521m 6% 9646Mi 61%
pre-k8s-app-nd03 788m 9% 7980Mi 50%
[root@pre-k8s-app-master ~]# kubectl top pod -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
cattle-system cattle-cluster-agent-66dc65c4fd-rzjmn 4m 65Mi
cattle-system cattle-node-agent-7f26j 1m 41Mi
cattle-system cattle-node-agent-clnpr 1m 32Mi
cattle-system cattle-node-agent-gh8f5 1m 32Mi
cattle-system cattle-node-agent-vzql6 1m 28Mi
cephfs cephfs-provisioner-5cdfc6d89f-v6fbt 3m 10Mi
es-backend-pre es-dashboard-portal-api-5486dbfdd9-gmczg 26m 1581Mi
es-backend-pre es-education-portal-api-584f48c8c7-mrcdz 19m 513Mi
es-backend-pre es-gis-portal-api-6f6ccf4d49-54fcn 31m 579Mi
es-backend-pre es-hse-portal-api-79f98fddd5-hcnbj 42m 754Mi
es-backend-pre es-hse-svc-ds-alarm-67f9b566df-dmgt6 124m 667Mi
es-backend-pre es-hse-svc-opc-alarm-6bbdd8fdd9-zrtbr 76m 543Mi
es-backend-pre es-license-api-5fd9bbcb48-dk5r6 13m 456Mi
es-backend-pre es-platform-iot-portal-api-79547569bd-7qrfz 40m 603Mi
es-backend-pre es-platform-portal-api-8477dff74f-zlh89 65m 775Mi
es-backend-pre es-sso-portal-api-545b986d97-5tnvr 15m 492Mi
es-backend-pre es-svc-gis-alarm-c6454fd69-zqbv2 329m 1085Mi
es-backend-pre es-svc-gis-pos-68f758bf4f-qb9rh 103m 575Mi
es-backend-pre es-svc-iot-data-55d4d64b94-tsv6x 58m 580Mi
es-backend-pre es-svc-report-generate-6f747c5589-cks9q 50m 488Mi
es-backend-pre es-svc-sys-msg-broker-66d69dc445-kl8pk 80m 505Mi
es-backend-pre es-svc-system-tool-79dc8d64c7-tnbwx 101m 888Mi
es-backend-pre es-svc-workflow-daemon-6db955cf8f-b8qrc 16m 580Mi
es-backend-pre es-svc-workorder-trigger-fd98ccc6f-2lnd9 75m 568Mi
es-backend-pre es-terminal-portal-api-5cf6bd9f78-fcjk4 24m 591Mi
es-frontend-pre es-dashboard-portal-5c7b85588-v7bd2 0m 4Mi
es-frontend-pre es-education-web-portal-7d5cb97c56-pg7tb 0m 6Mi
es-frontend-pre es-gis-web-portal-b54475b6d-8mjfz 0m 5Mi
es-frontend-pre es-hse-web-portal-6cbd5ff67b-svcg6 0m 6Mi
es-frontend-pre es-main-web-portal-799ff4d7b8-lplsp 0m 6Mi
es-frontend-pre es-platform-web-portal-78989688c9-v72z6 0m 7Mi
es-frontend-pre es-sso-web-portal-7b4d67d54b-v4ttd 0m 5Mi
es-frontend-pre es-terminal-web-portal-6cd99d6b49-wn5xf 0m 4Mi
ingress-nginx nginx-ingress-controller-2hzg8 3m 135Mi
ingress-nginx nginx-ingress-controller-6vpcl 3m 88Mi
ingress-nginx nginx-ingress-controller-lh6qs 7m 169Mi
ingress-nginx nginx-ingress-controller-r2wf7 4m 133Mi
kube-system calico-kube-controllers-7f4f5bf95d-bldcp 3m 40Mi
kube-system calico-node-jgrsr 43m 154Mi
kube-system calico-node-ps2bv 49m 155Mi
kube-system calico-node-s4hx9 43m 131Mi
kube-system calico-node-wsd74 42m 153Mi
kube-system coredns-f9fd979d6-jhtrv 3m 24Mi
kube-system coredns-f9fd979d6-ndvjv 3m 22Mi
kube-system etcd-pre-k8s-app-master 29m 307Mi
kube-system kube-apiserver-pre-k8s-app-master 112m 455Mi
kube-system kube-controller-manager-pre-k8s-app-master 16m 68Mi
kube-system kube-proxy-4lxfn 8m 43Mi
kube-system kube-proxy-lntrq 1m 43Mi
kube-system kube-proxy-lr8vc 9m 28Mi
kube-system kube-proxy-nmdvv 1m 45Mi
kube-system kube-scheduler-pre-k8s-app-master 4m 33Mi
kube-system kube-sealyun-lvscare-pre-k8s-app-nd01 3m 19Mi
kube-system kube-sealyun-lvscare-pre-k8s-app-nd02 3m 19Mi
kube-system kube-sealyun-lvscare-pre-k8s-app-nd03 1m 18Mi
kube-system metrics-server-76f5687466-fdvbs 2m 41Mi
prometheus-exporter-app kube-state-metrics-6d4c97fdd9-f4sbk 2m 38Mi
prometheus-exporter-app node-exporter-6x9m7 4m 23Mi
prometheus-exporter-app node-exporter-8p2jw 0m 13Mi
prometheus-exporter-app node-exporter-tm2bz 3m 25Mi
prometheus-exporter-app node-exporter-wqlzc 6m 24Mi
prometheus-exporter-app prometheus-pre-app-869d859896-bj2zz 93m 780Mi
Creating an HPA
kubectl autoscale deployment es-platform-portal-api -n es-backend-pre --cpu-percent=50 --min=1 --max=10
[root@pre-k8s-app-master ~]# kubectl describe hpa es-platform-portal-api -n es-backend-pre
Name: es-platform-portal-api
Namespace: es-backend-pre
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 16 May 2022 09:44:12 +0800
Reference: Deployment/es-platform-portal-api
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 156% (78m) / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 24s horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target
[root@pre-k8s-app-master ~]# kubectl get hpa -A
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
es-backend-pre es-platform-portal-api Deployment/es-platform-portal-api 156%/50% 1 10 3 45s
You can see that the replica count of es-platform-portal-api has been scaled up to 3.
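To exercise the HPA deliberately, you can push CPU up with a throwaway load generator; a minimal sketch, assuming the backend Service es-platform-portal-api-svc on port 8080 from the Ingress example is reachable inside the cluster:

kubectl run load-generator -n es-backend-pre --rm -it --image=busybox -- \
  /bin/sh -c "while true; do wget -q -O- http://es-platform-portal-api-svc:8080/ >/dev/null; done"

# in another terminal, watch the HPA adjust the replica count
kubectl get hpa es-platform-portal-api -n es-backend-pre -w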
If you see an error like failed to get cpu utilization: missing request for cpu, it means the target Pods were created without a CPU request declaration, so the HPA cannot compute a utilization percentage. For the HPA to work, the corresponding Pod spec must declare requests; update the workload's YAML accordingly:
resources:
  requests:
    cpu: 0.01
    memory: 25Mi
  limits:
    cpu: 0.05
    memory: 60Mi
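For reference, this block sits under the container entry of the workload spec; a minimal sketch, assuming the Deployment's container is also named es-platform-portal-api (the image line is a placeholder):

spec:
  template:
    spec:
      containers:
        - name: es-platform-portal-api
          # placeholder image reference
          image: ges.harbor.in/tools/es-platform-portal-api:latest
          resources:
            requests:
              cpu: 0.01
              memory: 25Mi
            limits:
              cpu: 0.05
              memory: 60Mi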
cat hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: es-platform-portal-api
  namespace: es-backend-pre
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: es-platform-portal-api
  metrics:
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 60
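autoscaling/v2beta1 has since been deprecated and removed in newer Kubernetes releases. On a cluster that serves autoscaling/v2 (stable since v1.23), the same HPA would be written roughly as follows (a sketch, not applied to this cluster):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: es-platform-portal-api
  namespace: es-backend-pre
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: es-platform-portal-api
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 60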