1. Deploying MongoDB on k8s
1.1 Prerequisites
- MongoDB image: 172.18.231.30:5000/si-tech/mongo:latest. (The image can be pulled from Docker Hub.)
- Create the /home/csd/mongodb/db directory on the host. (A HostPath volume mounts it into the container as MongoDB's data directory; you can also use another volume type. See the sketch after this list.)
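One way to carry out both preparation steps, as a shell sketch (it assumes the private registry at 172.18.231.30:5000 accepts pushes from your machine):

```bash
# Pull the official image from Docker Hub, retag it for the private
# registry, and push it there.
docker pull mongo:latest
docker tag mongo:latest 172.18.231.30:5000/si-tech/mongo:latest
docker push 172.18.231.30:5000/si-tech/mongo:latest

# Create the HostPath directory on every node that may run a mongo Pod.
mkdir -p /home/csd/mongodb/db
```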
1.2 Writing the YAML File

Write a mongo.yaml file. Besides the StatefulSet it defines two Services: a headless Service for stable in-cluster DNS, and an externally exposed NodePort Service for access from outside the cluster.
```yaml
# Headless Service (clusterIP: None): gives each StatefulSet Pod a stable
# DNS name of the form mongo-<ordinal>.mongo.
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  ports:
  - name: mongo
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
---
# NodePort Service: exposes MongoDB outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    app: mongo
spec:
  ports:
  - name: mongo-http
    port: 27017
  selector:
    app: mongo
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: "mongo"
  replicas: 2
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      # Anti-affinity: never schedule two mongo Pods onto the same node.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - mongo
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: mongo
        image: 172.18.231.30:5000/si-tech/mongo:latest
        command:
        - mongod
        - "--bind_ip_all"
        - "--replSet"
        - rs0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      volumes:
      - name: mongo-data
        hostPath:
          path: /home/csd/mongodb/db
```
1.3 Deploying and Starting MongoDB
Start the deployment with kubectl create -f mongo.yaml:

```bash
kubectl create -f mongo.yaml --kubeconfig=/home/csd/csd.kubeconfig
```
Once mongo.yaml has been applied successfully, check the Service, StatefulSet, and Pod information to confirm that everything started normally:
```bash
kubectl get service
kubectl get statefulset
kubectl get pod
```
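Note that mongod is started with --replSet rs0, so the replica set still has to be initiated once before it will serve reads and writes. A minimal sketch, assuming the image ships the classic mongo shell (newer images ship mongosh instead):

```bash
# Hypothetical one-time initiation of replica set rs0.
# StatefulSet Pods get stable DNS names of the form <pod>.<headless-service>,
# i.e. mongo-0.mongo and mongo-1.mongo here.
kubectl exec -it mongo-0 --kubeconfig=/home/csd/csd.kubeconfig -- mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo-0.mongo:27017" },
      { _id: 1, host: "mongo-1.mongo:27017" }
    ]
  })'
```

Keep in mind that a two-member replica set cannot elect a primary if either member goes down; production setups usually run three members or add an arbiter.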
1.4 Testing the MongoDB Connection
The externally exposed mongo-service can be reached as host:NodePort. Test the connection to MongoDB with:

```bash
curl http://172.18.232.207:30741
```
Here 172.18.232.207 is any host in the k8s cluster, and 30741 is the NodePort assigned to mongo-service.
The response reads: It looks like you are trying to access MongoDB over HTTP on the native driver port. Nice, MongoDB has been deployed successfully!
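Beyond the curl smoke test, an actual client can connect through the same NodePort. A sketch, assuming the mongo shell is installed on your workstation (use mongosh with newer MongoDB versions):

```bash
# Connect through any cluster host and the NodePort of mongo-service.
mongo --host 172.18.232.207 --port 30741
```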
------------------------------ a gorgeous divider ----------------------------
hostPath mounts a directory of the host machine into the container, but if the Pod is recreated on a different host, the data is lost, so it cannot guarantee persistence. If you need real data persistence, use PVs and PVCs instead. Below is an example of Kafka data persistence:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: crm-mci
spec:
  type: NodePort
  ports:
  - port: 9092
    targetPort: 9092
    nodePort: 32129
    name: kafkaport
  selector:
    app: kafka-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-stateful
  namespace: crm-mci
spec:
  replicas: 1
  serviceName: kafka-service
  selector:
    matchLabels:
      app: kafka-server
  template:
    metadata:
      labels:
        app: kafka-server
        version: v1
    spec:
      containers:
      - name: k8skafka
        image: 172.18.231.30:5000/si-tech/crm-mci/kafka:2.0.0
        imagePullPolicy: Always
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: NODE_PORT
          value: "32129"
        - name: ZK_ADDRESS
          value: "172.18.238.26:35618"
        ports:
        - containerPort: 9092
        command:
        - /bin/sh
        - -c
        - "/kafka/bin/kafkaGenConfig.sh && /kafka/bin/kafka-server-start.sh /kafka/config/server.properties"
        volumeMounts:
        - name: datadir
          mountPath: /kafka/data
        - name: logdir
          mountPath: /kafka/logs
        livenessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
  # One PVC per template per replica is created automatically and re-bound
  # to the Pod wherever it is rescheduled.
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 975Mi
  - metadata:
      name: logdir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 975Mi
```
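After applying the manifest, each entry in volumeClaimTemplates yields one PVC per replica, named <template>-<pod> (here datadir-kafka-stateful-0 and logdir-kafka-stateful-0). A quick verification sketch, assuming the manifest above is saved as kafka.yaml:

```bash
kubectl apply -f kafka.yaml --kubeconfig=/home/csd/csd.kubeconfig

# Both PVCs should reach the Bound state against the nfs-storage class.
kubectl get pvc -n crm-mci --kubeconfig=/home/csd/csd.kubeconfig
```

Because the data now lives on NFS-backed PVCs rather than a node-local hostPath, it follows the Pod even when it is rescheduled onto a different host.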