Deploying DolphinScheduler with an external MySQL and ZooKeeper

1. Rebuild the images with the MySQL JDBC driver

FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler-<service>:<version>
# For example:
# FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler-tools:<version>

# Note: when building the dolphinscheduler-tools image,
# change the line below to: COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/tools/libs
# The other services stay as shown.
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/libs

Note: the dolphinscheduler-tools, dolphinscheduler-master, dolphinscheduler-worker, dolphinscheduler-api, and dolphinscheduler-alert-server images all need to be rebuilt this way. See: https://hub.docker.com/r/apache/dolphinscheduler

Build the image:
docker build -t registry_name:tag .
# For example:
docker build -t registry.cn-hangzhou.aliyuncs.com/3finfo/dolphinscheduler-worker:3.0.1 .
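Since all five images need the same change (see the note above), a small loop can rebuild and push them in one go. This is only a sketch: it assumes you keep one Dockerfile per service, named Dockerfile.<service>, next to the connector JAR, and that the registry prefix matches the example above.

#!/usr/bin/env bash
set -euo pipefail

REGISTRY="registry.cn-hangzhou.aliyuncs.com/3finfo"   # your own registry prefix
TAG="3.0.1"                                           # image tag to build and push

for svc in tools master worker api alert-server; do
  # Each Dockerfile.<service> follows the pattern shown above; only the tools
  # image copies the JAR into /opt/dolphinscheduler/tools/libs.
  docker build -t "${REGISTRY}/dolphinscheduler-${svc}:${TAG}" -f "Dockerfile.${svc}" .
  docker push "${REGISTRY}/dolphinscheduler-${svc}:${TAG}"
done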

2. Deploy an NFS StorageClass in Kubernetes

1) Create the namespace

kubectl create namespace dol

2) Deploy the NFS StorageClass

nfs-Storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-client
parameters:
  archiveOnDelete: "true"

nfs-provisioner.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: dol
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gmoney23/nfs-client-provisioner 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-client
            - name: NFS_SERVER
              value: 192.168.7.114
            - name: NFS_PATH
              value: /RaidDisk/nfs_k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.7.114
            path: /RaidDisk/nfs_k8s

nfs-rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  namespace: dol
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dol
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dol
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dol
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dol
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

dolphin-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dol-pvc
  namespace: dol
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi
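With the four manifests above saved to disk, apply them and check that the PVC binds; that confirms the provisioner and StorageClass work before anything else depends on them. A quick sketch using the file names from this section:

kubectl apply -f nfs-rbac.yaml -f nfs-provisioner.yaml -f nfs-Storageclass.yaml -f dolphin-pvc.yaml

# The PVC should go from Pending to Bound once the provisioner pod is Running.
kubectl get pods -n dol
kubectl get sc
kubectl get pvc dol-pvc -n dol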

3. Deploy MySQL

#mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: dol
data:
  mysqld.cnf: |-
    [client]
    default-character-set=utf8
    [mysqld]
    skip_ssl
    wait_timeout=31536000  
    interactive_timeout=31536000
    
---    
#mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: dol
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7.25
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "1234qwer"
        ports:
        - containerPort: 3306
          protocol: TCP
          name: 3306tcp01
        volumeMounts:
        - name: mysql-data
          mountPath: "/var/lib/mysql"
          subPath: mysql/data
        - name: mysql-conf
          mountPath: "/etc/mysql/mysql.conf.d/"
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: dol-pvc
        - name: mysql-conf
          configMap:
            name: mysql-config
---
# mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace: dol
  labels: 
    name: mysql-svc
spec:
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    name: mysql
    nodePort: 32766
  selector:
    app: mysql

Pitfalls:

Major pitfall: if you use the MySQL image from Docker Hub, you must add a skip_ssl line to the [mysqld] section of my.cnf to turn off MySQL's SSL authentication. Otherwise the init job cannot initialize the database and the other pods cannot connect to it; the MySQL log fills up with "[Note] Bad handshake" entries and the DolphinScheduler pods log database connection errors.
If you build the MySQL image yourself, skip_ssl is not needed before 5.7.27; starting with 5.7.28, SSL is enabled by default.

Minor pitfall: after MySQL is up, create a database named dolphinscheduler, otherwise the init job still cannot initialize the schema (the job log makes this obvious):

mysql> create database dolphinscheduler default character set utf8;
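One way to create that database without exposing MySQL outside the cluster is to run the statement inside the pod. A sketch, assuming the Deployment name and root password from the manifests above (alternatively, connect from a workstation through NodePort 32766):

# Run the CREATE DATABASE statement inside the MySQL pod
kubectl exec -n dol deploy/mysql -- \
  mysql -uroot -p1234qwer -e "CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8;"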

4. Deploy ZooKeeper in Kubernetes

ZooKeeper needs a PVC as well; you can simply reuse the dol-pvc created for MySQL.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
  namespace: dol
spec:
  selector:
    matchLabels:
     app: zookeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: zookeeper:3.5.9
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
        volumeMounts:
          - name: zookeeper-pvc
            mountPath: /var/lib/zookeeper
            subPath: zookeeper/
          - name: zookeeper-pvc
            mountPath: /data
            subPath: zookeeper/data/
          - name: zookeeper-pvc
            mountPath: /datalog
            subPath: zookeeper/datalog/
      volumes:
        - name: zookeeper-pvc
          persistentVolumeClaim:
            claimName: dol-pvc
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
  namespace: dol
spec:
  type: NodePort
  ports:
  - name: zookeeper-port
    port: 2181
    nodePort: 30181
    targetPort: 2181
  selector:
    app: zookeeper
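After applying the manifest, make sure ZooKeeper is reachable before pointing DolphinScheduler at it. A sketch (the file name zookeeper.yaml is just an example; srvr is in ZooKeeper 3.5's default four-letter-word whitelist):

kubectl apply -f zookeeper.yaml
kubectl get pods -n dol -l app=zookeeper

# Should print server stats; 30181 is the nodePort from the Service above.
echo srvr | nc <node-ip> 30181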

5. Deploy DolphinScheduler

mkdir dolphinscheduler && cd dolphinscheduler
wget https://www.apache.org/dyn/closer.lua/dolphinscheduler/3.0.0/apache-dolphinscheduler-3.0.0-src.tar.gz
# Other versions are available at: https://dolphinscheduler.apache.org/zh-cn/download/download.html

# Unpack and enter the Kubernetes chart directory:
tar zxf apache-dolphinscheduler-3.0.0-src.tar.gz
cd ./apache-dolphinscheduler-3.0.0-src/deploy/kubernetes/dolphinscheduler

1) Edit values.yaml

# Image settings
image:
  registry: "registry.cn-hangzhou.aliyuncs.com/3finfo"  # change to your own registry
  tag: "3.0.1"                   # image tag
  pullPolicy: "IfNotPresent"     # image pull policy
  pullSecret: "aliyun-registry"  # image pull secret, created with: kubectl create secret docker-registry aliyun-registry ......

# Database settings
postgresql:
  enabled: false  # change true to false to disable the bundled PostgreSQL
  postgresqlUsername: "root"
  postgresqlPassword: "root"
  postgresqlDatabase: "dolphinscheduler"
  persistence:
    enabled: false
    size: "20Gi"
    storageClass: "-"
 
# External MySQL settings
externalDatabase:
  type: "mysql"
  host: "192.168.xxx.xxx"       # MySQL host
  port: "32766"                 # MySQL port (the NodePort above)
  username: "root"              # MySQL user
  password: "1234qwer"          # MySQL password
  database: "dolphinscheduler"  # database name
  params: "useUnicode=true&characterEncoding=UTF-8"

# Bundled ZooKeeper settings
zookeeper:
  enabled: false  # change true to false, same as the database
  service:
    port: 2181
  fourlwCommandsWhitelist: "srvr,ruok,wchs,cons"
  persistence:
    enabled: false
    size: "20Gi"
    storageClass: "-"


externalRegistry:
  registryPluginName: "zookeeper"  # the name must be "zookeeper"
  registryServers: "192.168.xxx.xxx:30181"  # ZooKeeper connection address

# Master storage: master.persistentVolumeClaim
persistentVolumeClaim:
  enabled: true   # change false to true
  accessModes:
  - "ReadWriteOnce"
  storageClassName: "nfs-storage"  # get the name with: kubectl get sc
  storage: "20Gi"   # size as needed


# Worker storage: worker.persistentVolumeClaim
persistentVolumeClaim:
  enabled: true    # change false to true
  dataPersistentVolume:
    enabled: true  # change false to true
    accessModes:
    - "ReadWriteOnce"
    storageClassName: "nfs-storage"  # get the name with: kubectl get sc
    storage: "20Gi"   # size as needed

# Alert server storage: alert.persistentVolumeClaim
persistentVolumeClaim:
  enabled: true   # change false to true
  accessModes:
  - "ReadWriteOnce"
  storageClassName: "nfs-storage"  # get the name with: kubectl get sc
  storage: "20Gi"    # size as needed

# API server storage: api.persistentVolumeClaim
persistentVolumeClaim:
  enabled: true   # change false to true
  accessModes:
  - "ReadWriteOnce"
  storageClassName: "nfs-storage"  # get the name with: kubectl get sc
  storage: "20Gi"    # size as needed

2) Edit Chart.yaml in the same directory

Note: comment out or delete everything under the dependencies field. Since we use an external ZooKeeper and MySQL, leaving it in place makes helm install fail with errors that the postgresql and zookeeper charts cannot be found.

#dependencies:
#- name: postgresql
#  version: 10.3.18
#  # Due to a change in the Bitnami repo, https://charts.bitnami.com/bitnami was truncated to only
#  # contain entries for the latest 6 months (from January 2022 on).
#  # This URL: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
#  # contains the full 'index.yaml'.
#  # See details here: https://github.com/bitnami/charts/issues/10833
#  repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
#  condition: postgresql.enabled

#- name: mysql
#  version: 5.7.33
#  condition: mysql.enabled

#- name: zookeeper
#  version: 6.5.3
#  # Same as above.
#  repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
#  condition: zookeeper.enabled

3) Deploy DolphinScheduler

helm install dol . -n dol
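After the install, watch the schema-init job and the service pods come up; everything should reach Completed or Running before you move on:

kubectl get pods -n dol -w

# Helm release status and the Services created by the chart
helm status dol -n dol
kubectl get svc -n dol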

4) Deploy a NodePort Service for the API

apiVersion: v1
kind: Service
metadata:
  name: dol-api-nodeport
  namespace: dol
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: dol-api   # check this label against the chart's api Service selector and the api pod labels; it differs per install
  ports:
  - port: 12345
    targetPort: 12345
    nodePort: 30730
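Before applying it, check the labels the chart actually put on the API pods so the selector matches, then apply and verify (the file name dol-api-nodeport.yaml is just an example):

kubectl get pods -n dol --show-labels | grep api
kubectl apply -f dol-api-nodeport.yaml
kubectl get svc dol-api-nodeport -n dol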

At this point, the deployment is complete.

6. Log in

http://IP:30730/dolphinscheduler

Default username / password: admin / dolphinscheduler123
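If the page does not load, a quick check against the same URL from a shell helps separate network problems from application problems (replace <node-ip> with any node address):

# Expect an HTTP response once the api server is up
curl -sI http://<node-ip>:30730/dolphinscheduler | head -n 1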
