KubeSphere in Production, Part 2: Localizing kubekey

We made a set of localized modifications on top of kubekey 1.0.1. Why were these modifications necessary?

Stock kubekey 1.0.1 deploys KubeSphere in the following stages:

  1. Check that all dependencies are installed
  2. Deploy the Kubernetes cluster
  3. Deploy the storage plugin (OpenEBS LocalPV by default)
  4. Deploy KubeSphere

In code, this corresponds to the following task list:

createTasks := []manager.Task{
        {Task: preinstall.Precheck, ErrMsg: "Failed to precheck"},
        {Task: preinstall.DownloadBinaries, ErrMsg: "Failed to download kube binaries"},
        {Task: preinstall.InitOS, ErrMsg: "Failed to init OS"},
        {Task: docker.InstallerDocker, ErrMsg: "Failed to install docker"},
        {Task: preinstall.PrePullImages, ErrMsg: "Failed to pre-pull images"},
        {Task: etcd.GenerateEtcdCerts, ErrMsg: "Failed to generate etcd certs"},
        {Task: etcd.SyncEtcdCertsToMaster, ErrMsg: "Failed to sync etcd certs"},
        {Task: etcd.GenerateEtcdService, ErrMsg: "Failed to create etcd service"},
        {Task: etcd.SetupEtcdCluster, ErrMsg: "Failed to start etcd cluster"},
        {Task: etcd.RefreshEtcdConfig, ErrMsg: "Failed to refresh etcd configuration"},
        {Task: etcd.BackupEtcd, ErrMsg: "Failed to backup etcd data"},
        {Task: kubernetes.GetClusterStatus, ErrMsg: "Failed to get cluster status"},
        {Task: kubernetes.InstallKubeBinaries, ErrMsg: "Failed to install kube binaries"},
        {Task: kubernetes.InitKubernetesCluster, ErrMsg: "Failed to init kubernetes cluster"},
        {Task: network.DeployNetworkPlugin, ErrMsg: "Failed to deploy network plugin"},
        {Task: kubernetes.JoinNodesToCluster, ErrMsg: "Failed to join node"},
        {Task: addons.InstallAddons, ErrMsg: "Failed to deploy addons"},
        {Task: kubesphere.DeployLocalVolume, ErrMsg: "Failed to deploy localVolume"},
        {Task: kubesphere.DeployKubeSphere, ErrMsg: "Failed to deploy kubesphere"},
    }
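Each entry simply pairs a step function with the error message reported if that step fails. As a rough mental model (a minimal sketch assuming a Task struct with only Task and ErrMsg fields; the real manager.Task in kubekey carries more), the runner walks the list in order and stops at the first error:

package main

import (
    "fmt"
    "log"
)

// Manager stands in for kubekey's *manager.Manager (SSH runners, cluster
// spec, logger, and so on).
type Manager struct{}

// Task pairs a step with the error message used when the step fails.
type Task struct {
    Task   func(mgr *Manager) error
    ErrMsg string
}

// ExecTasks runs the steps in order and stops at the first failure,
// wrapping the error with the step's ErrMsg.
func ExecTasks(mgr *Manager, tasks []Task) error {
    for _, t := range tasks {
        if err := t.Task(mgr); err != nil {
            return fmt.Errorf("%s: %w", t.ErrMsg, err)
        }
    }
    return nil
}

func main() {
    tasks := []Task{
        {Task: func(mgr *Manager) error { fmt.Println("precheck ok"); return nil }, ErrMsg: "Failed to precheck"},
    }
    if err := ExecTasks(&Manager{}, tasks); err != nil {
        log.Fatal(err)
    }
}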

1. Decoupling the deployment steps

We modified part of pkg/install/install.go: first, the storage and KubeSphere deployment steps were removed, so at this point kubekey is only responsible for deploying the Kubernetes cluster:

createTasks := []manager.Task{
        {Task: preinstall.CheckOfflineBinaries, ErrMsg: "Failed to find kube offline binaries"},
        //{Task: preinstall.DetectKernel, ErrMsg: "Failed to check kernel"},
        {Task: preinstall.InitYum, ErrMsg: "Failed to config yum"},
        {Task: preinstall.InitTime, ErrMsg: "Failed to config time"},
        {Task: preinstall.Precheck, ErrMsg: "Failed to precheck"},
        {Task: preinstall.InitOS, ErrMsg: "Failed to init OS"},
        {Task: docker.InstallerDocker, ErrMsg: "Failed to install docker"},
        {Task: docker.ConfigDocker, ErrMsg: "Failed to config docker"},
        {Task: preinstall.PrePullImages, ErrMsg: "Failed to pre-pull images"},
        {Task: etcd.GenerateEtcdCerts, ErrMsg: "Failed to generate etcd certs"},
        {Task: etcd.SyncEtcdCertsToMaster, ErrMsg: "Failed to sync etcd certs"},
        {Task: etcd.GenerateEtcdService, ErrMsg: "Failed to create etcd service"},
        {Task: etcd.SetupEtcdCluster, ErrMsg: "Failed to start etcd cluster"},
        {Task: etcd.RefreshEtcdConfig, ErrMsg: "Failed to refresh etcd configuration"},
        {Task: etcd.BackupEtcd, ErrMsg: "Failed to backup etcd data"},
        {Task: kubernetes.GetClusterStatus, ErrMsg: "Failed to get cluster status"},
        {Task: kubernetes.InstallKubeBinaries, ErrMsg: "Failed to install kube binaries"},
        {Task: kubernetes.InitKubernetesCluster, ErrMsg: "Failed to init kubernetes cluster"},
        {Task: network.DeployNetworkPlugin, ErrMsg: "Failed to deploy network plugin"},
        {Task: kubernetes.JoinNodesToCluster, ErrMsg: "Failed to join node"},
        {Task: addons.InstallAddons, ErrMsg: "Failed to deploy addons"},
        // Overwrite kubelet configuration
        //{Task: kubernetes.OverwriteKubeletConfig, ErrMsg: "Failed to overwrite config"},
        //{Task: kubesphere.DeployCephVolume, ErrMsg: "Failed to deploy cephVolume"},
        //{Task: kubesphere.DeployLocalVolume, ErrMsg: "Failed to deploy localVolume"},
        //{Task: kubesphere.DeployKubeSphere, ErrMsg: "Failed to deploy kubesphere"},
    }
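The tasks at the top of the list (CheckOfflineBinaries, InitYum, InitTime) and ConfigDocker are our own additions; stock kubekey 1.0.1 does not ship them. As an illustration of their shape, a check for pre-staged offline binaries might look roughly like the following (the directory layout and binary list are assumptions for illustration, not our actual implementation):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// CheckOfflineBinaries verifies that the binaries needed for an air-gapped
// install have been staged locally before any node is touched.
func CheckOfflineBinaries(baseDir, kubeVersion, arch string) error {
    required := []string{"kubeadm", "kubelet", "kubectl", "helm"}
    for _, name := range required {
        p := filepath.Join(baseDir, kubeVersion, arch, name)
        if _, err := os.Stat(p); err != nil {
            return fmt.Errorf("offline binary %s not found: %w", p, err)
        }
    }
    return nil
}

func main() {
    if err := CheckOfflineBinaries("./kubekey", "v1.18.6", "amd64"); err != nil {
        fmt.Println(err)
    }
}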

2. Customizing the container runtime configuration

The runtime configuration here refers to /etc/docker/daemon.json; in actual use we made a number of changes to it:

{
  "log-opts": {
    "max-size": "500m",
    "max-file":"3"
  },
  "userland-proxy": false,
  "live-restore": true,
  "default-ulimits": {
    "nofile": {
      "Hard": 65535,
      "Name": "nofile",
      "Soft": 65535
    }
  },
  "default-address-pools": [
    {
      "base": "172.80.0.0/16",
      "size": 24
    },
    {
      "base": "172.90.0.0/16",
      "size": 24
    }
  ],
  "default-gateway": "",
  "default-gateway-v6": "",
  "default-runtime": "runc",
  "default-shm-size": "64M",
  {{- if .DataPath }}
  "data-root": "{{ .DataPath }}",
  {{- end}}
  {{- if .Mirrors }}
  "registry-mirrors": [{{ .Mirrors }}],
  {{- end}}
  {{- if .InsecureRegistries }}
  "insecure-registries": [{{ .InsecureRegistries }}],
  {{- end}}
  "exec-opts": ["native.cgroupdriver=systemd"]
}

This is mainly implemented through the ConfigDocker function, registered as a task in ExecTasks:

func ExecTasks(mgr *manager.Manager) error {
    createTasks := []manager.Task{
...
        {Task: docker.ConfigDocker, ErrMsg: "Failed to config docker"},
    }
...
}
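The daemon.json above is a Go text/template; ConfigDocker fills in DataPath, Mirrors and InsecureRegistries from the cluster configuration. A minimal sketch of the rendering step, assuming a struct whose field names match the template variables (only a fragment of the template is shown, and pushing the file to each node and reloading docker are omitted):

package main

import (
    "fmt"
    "os"
    "strings"
    "text/template"
)

// DockerConfig carries the values substituted into the daemon.json template.
// Mirrors and InsecureRegistries are assumed to be pre-quoted, comma-joined
// strings so they drop straight into a JSON array.
type DockerConfig struct {
    DataPath           string
    Mirrors            string
    InsecureRegistries string
}

// quoteJoin turns []string{"a", "b"} into the string `"a", "b"`.
func quoteJoin(items []string) string {
    quoted := make([]string, 0, len(items))
    for _, s := range items {
        quoted = append(quoted, fmt.Sprintf("%q", s))
    }
    return strings.Join(quoted, ", ")
}

func main() {
    // A fragment of the daemon.json template shown above.
    tmplText := `{
  {{- if .DataPath }}
  "data-root": "{{ .DataPath }}",
  {{- end}}
  {{- if .Mirrors }}
  "registry-mirrors": [{{ .Mirrors }}],
  {{- end}}
  {{- if .InsecureRegistries }}
  "insecure-registries": [{{ .InsecureRegistries }}],
  {{- end}}
  "exec-opts": ["native.cgroupdriver=systemd"]
}`

    cfg := DockerConfig{
        DataPath:           "/data",
        Mirrors:            quoteJoin([]string{"http://harbor.wl.io"}),
        InsecureRegistries: quoteJoin([]string{"harbor.wl.io"}),
    }
    t := template.Must(template.New("daemon.json").Parse(tmplText))
    // ConfigDocker would write the result to /etc/docker/daemon.json on every
    // node and reload docker; here it is just printed.
    if err := t.Execute(os.Stdout, cfg); err != nil {
        panic(err)
    }
}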

3. Adding a private registry domain to images

The image tags used by KubeSphere include the following (partial list):

kubesphere/kube-apiserver:v1.20.4
kubesphere/kube-scheduler:v1.20.4
kubesphere/kube-proxy:v1.20.4
kubesphere/kube-controller-manager:v1.20.4
kubesphere/kube-apiserver:v1.19.8
kubesphere/kube-scheduler:v1.19.8
kubesphere/kube-proxy:v1.19.8
kubesphere/kube-controller-manager:v1.19.8
kubesphere/kube-apiserver:v1.19.9
kubesphere/kube-scheduler:v1.19.9
kubesphere/kube-proxy:v1.19.9
kubesphere/kube-controller-manager:v1.19.9
kubesphere/kube-apiserver:v1.18.6
kubesphere/kube-scheduler:v1.18.6

Since the actual production environment is air-gapped, these images cannot be pulled there directly. We therefore prefixed them with a custom domain and manage them in a private registry (Harbor).

The modified image tags look like this:

harbor.wl.io/kubesphere/kube-apiserver:v1.20.4
harbor.wl.io/kubesphere/kube-scheduler:v1.20.4
harbor.wl.io/kubesphere/kube-proxy:v1.20.4
harbor.wl.io/kubesphere/kube-controller-manager:v1.20.4
harbor.wl.io/kubesphere/kube-apiserver:v1.19.8
harbor.wl.io/kubesphere/kube-scheduler:v1.19.8
harbor.wl.io/kubesphere/kube-proxy:v1.19.8
harbor.wl.io/kubesphere/kube-controller-manager:v1.19.8
harbor.wl.io/kubesphere/kube-apiserver:v1.19.9
harbor.wl.io/kubesphere/kube-scheduler:v1.19.9
harbor.wl.io/kubesphere/kube-proxy:v1.19.9
harbor.wl.io/kubesphere/kube-controller-manager:v1.19.9
harbor.wl.io/kubesphere/kube-apiserver:v1.18.6
harbor.wl.io/kubesphere/kube-scheduler:v1.18.6
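Conceptually the change is just a registry prefix on every image reference, applied wherever kubekey assembles image names. A sketch of the rewrite (an illustration of the idea, not kubekey's actual images package):

package main

import (
    "fmt"
    "strings"
)

// withPrivateRegistry rewrites "kubesphere/kube-apiserver:v1.20.4" into
// "harbor.wl.io/kubesphere/kube-apiserver:v1.20.4"; references that already
// carry the registry prefix are left untouched.
func withPrivateRegistry(registry, image string) string {
    if strings.HasPrefix(image, registry+"/") {
        return image
    }
    return registry + "/" + image
}

func main() {
    images := []string{
        "kubesphere/kube-apiserver:v1.20.4",
        "kubesphere/kube-proxy:v1.20.4",
    }
    for _, img := range images {
        fmt.Println(withPrivateRegistry("harbor.wl.io", img))
    }
}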

During deployment, kubekey also adds a matching entry for the registry domain to /etc/hosts on every node.
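The host entry itself is an idempotent append of "address domain", taken from the externalHarbor section of the sample config below. Roughly (a sketch of what runs on each node, not the actual task code):

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostEntry appends "ip domain" to the hosts file unless a line
// mentioning the domain already exists.
func ensureHostEntry(hostsPath, ip, domain string) error {
    data, err := os.ReadFile(hostsPath)
    if err != nil {
        return err
    }
    if strings.Contains(string(data), domain) {
        return nil // already resolvable, nothing to do
    }
    f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = fmt.Fprintf(f, "%s %s\n", ip, domain)
    return err
}

func main() {
    if err := ensureHostEntry("/etc/hosts", "192.168.1.114", "harbor.wl.io"); err != nil {
        fmt.Println(err)
    }
}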

A sample configuration for the localized kubekey looks like this:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: 123456}
  - {name: node2, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: 123456}
  - {name: node3, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: 123456}
  roleGroups:
    etcd:
    - node1-3
    master: 
    - node1-3
    worker:
    - node1-3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.1.111"
    port: "6443"
  externalHarbor:
    domain: harbor.wl.io
    address: 192.168.1.114
    user: admin
    password: 123456
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  docker:
    DataPath: /data
  storage:
    type: local
    ceph:
      id: 90140a86-58c9-41ce-8825-c4123bc52edd
      monitors:
        - 192.168.1.69:6789
        - 192.168.1.70:6789
        - 192.168.1.71:6789
      userID: kubernetes
      userKey: AQB1dFFgVJSnBhAAtJOOKOVU78aWN2iudY8QDw==
      adminKey: AQDlcFFges49BxAA0xAYat3tzyMHRZ4LNJqqIw==
      fsName: cephfs-paas
      pools:
        rbdDelete: rbd-paas-pool
        rbdRetain: rbd-paas-pool-retain
        fs: cephfs-paas-pool
  registry:
    registryMirrors:
    - http://harbor.wl.io
    insecureRegistries:
    - harbor.wl.io
    privateRegistry: harbor.wl.io
  addons: []
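For reference, the extra sections (externalHarbor, storage) could be modeled as Go structs along the following lines; the field names are inferred from the YAML keys above and are not necessarily the exact types used in our fork:

package config

// ExternalHarbor describes the private registry that hosts all images in
// the offline environment.
type ExternalHarbor struct {
    Domain   string `yaml:"domain"`
    Address  string `yaml:"address"`
    User     string `yaml:"user"`
    Password string `yaml:"password"`
}

// CephPools lists the pools backing the RBD and CephFS storage classes.
type CephPools struct {
    RBDDelete string `yaml:"rbdDelete"`
    RBDRetain string `yaml:"rbdRetain"`
    FS        string `yaml:"fs"`
}

// CephConfig carries the cluster ID, monitors and credentials needed to
// provision Ceph-backed volumes.
type CephConfig struct {
    ID       string    `yaml:"id"`
    Monitors []string  `yaml:"monitors"`
    UserID   string    `yaml:"userID"`
    UserKey  string    `yaml:"userKey"`
    AdminKey string    `yaml:"adminKey"`
    FSName   string    `yaml:"fsName"`
    Pools    CephPools `yaml:"pools"`
}

// Storage selects the storage backend: "local" or "ceph".
type Storage struct {
    Type string     `yaml:"type"`
    Ceph CephConfig `yaml:"ceph"`
}

The storage.type field selects between the default local volume and the Ceph backend described by storage.ceph.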