Installing the Helm Package Manager on a Kubernetes 1.13.1 Cluster

References

https://github.com/goharbor/harbor-helm
https://docs.helm.sh/using_helm/#installing-helm
https://github.com/goharbor/harbor/blob/master/docs/kubernetes_deployment.md
https://github.com/goharbor/harbor-helm/blob/master/docs/High%20Availability.md
https://li-sen.github.io/2018/10/08/k8s%E9%83%A8%E7%BD%B2%E9%AB%98%E5%8F%AF%E7%94%A8harbor/
https://www.cnblogs.com/ericnie/p/8463127.html
https://www.bountysource.com/issues/60265705-error-looks-like-https-kubernetes-charts-storage-googleapis-com-is-not-a-valid-chart-repository-or-cannot-be-reached-pipline-error-exit-status-1

I. Install the Helm client

[root@elasticsearch01 ~]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
[root@elasticsearch01 ~]# tar zxvf helm-v2.12.3-linux-amd64.tar.gz
[root@elasticsearch01 ~]# cd linux-amd64/
[root@elasticsearch01 linux-amd64]# mv helm tiller /usr/local/bin/
[root@elasticsearch01 linux-amd64]# helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: could not find tiller

II. Install Tiller

The "could not find tiller" error above is expected at this point: the client is installed, but the server-side component (Tiller) has not been deployed to the cluster yet.

Dependencies

Tiller relies on socat (used for port forwarding), which must be installed on every node:

yum install socat
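
A minimal sketch for installing socat on all nodes over SSH in one pass (the node list below is hypothetical; substitute your own IPs, and root SSH access is assumed):

# hypothetical node list; replace with your cluster's node IPs
for node in 10.2.8.34 10.2.8.44 10.2.8.65; do
  ssh root@${node} "yum install -y socat"
done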

1. Create the RBAC role

Tiller needs a ServiceAccount with sufficient cluster permissions; for simplicity, this binds the tiller account to the built-in cluster-admin ClusterRole.

[root@elasticsearch01 helm]# cat helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

[root@elasticsearch01 helm]# kubectl create -f helm-rbac.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
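
To sanity-check that both objects exist before initializing Tiller (a quick verification, not part of the original transcript):

kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller -o wide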

2. Initialize Tiller with helm init

[root@elasticsearch01 helm]# helm init
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: read tcp 10.2.8.44:49020->216.58.220.208:443: read: connection reset by peer

Error 1: the default stable chart repository on storage.googleapis.com is unreachable from this network. Fix: switch to a domestic mirror.

[root@elasticsearch01 helm]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories

[root@elasticsearch01 helm]# helm init
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
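
In hindsight, both this repository error and the RBAC error described later can be avoided in one step by passing the mirror URL and the service account to helm init up front. A minimal sketch using standard Helm v2 flags and the same Aliyun mirror as above:

helm init --service-account tiller \
  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts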

[root@elasticsearch01 helm]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
[root@elasticsearch01 helm]# kubectl get pods -n kube-system
NAME                                   READY   STATUS             RESTARTS   AGE
coredns-7748f7f6df-2c7ws               1/1     Running            0          21d
coredns-7748f7f6df-chhwx               1/1     Running            0          21d
kubernetes-dashboard-cb55bd5bd-p644x   1/1     Running            0          15d
kubernetes-dashboard-cb55bd5bd-vlmdh   1/1     Running            0          22d
metrics-server-788c48df64-cfnnx        1/1     Running            0          13d
metrics-server-788c48df64-v75gr        1/1     Running            0          13d
tiller-deploy-69ffbf64bc-rxcj8         0/1     ImagePullBackOff   0          6m13s
[root@elasticsearch01 helm]# 

Error 2: the Tiller image (gcr.io/kubernetes-helm/tiller:v2.12.3) cannot be pulled from gcr.io. Fix: switch the deployment to a domestic mirror of the image, or pull a mirrored copy and retag it, as shown below.

[root@elasticsearch01 helm]# kubectl describe pod/tiller-deploy-69ffbf64bc-rxcj8  -n kube-system
Name:               tiller-deploy-69ffbf64bc-rxcj8
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               10.2.8.34/10.2.8.34
Start Time:         Fri, 25 Jan 2019 09:46:25 +0800
Labels:             app=helm
                    name=tiller
                    pod-template-hash=69ffbf64bc
Annotations:        <none>
Status:             Pending
IP:                 10.254.73.7
Controlled By:      ReplicaSet/tiller-deploy-69ffbf64bc
Containers:
  tiller:
    Container ID:   
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f8hz7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-f8hz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-f8hz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Normal   Scheduled  10m                   default-scheduler   Successfully assigned kube-system/tiller-deploy-69ffbf64bc-rxcj8 to 10.2.8.34
  Normal   Pulling    8m36s (x4 over 10m)   kubelet, 10.2.8.34  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Warning  Failed     8m21s (x4 over 10m)   kubelet, 10.2.8.34  Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.12.3": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     8m21s (x4 over 10m)   kubelet, 10.2.8.34  Error: ErrImagePull
  Normal   BackOff    5m51s (x15 over 10m)  kubelet, 10.2.8.34  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Warning  Failed     49s (x36 over 10m)    kubelet, 10.2.8.34  Error: ImagePullBackOff

One option is to combine Alibaba Cloud Container Registry's build service with GitHub to build the overseas image into a registry reachable from China. Then, on each node, pull the mirrored image and retag it as the gcr.io name the deployment expects:

[root@elasticsearch02 ~]# docker pull registry.cn-beijing.aliyuncs.com/minminmsn/tiller:v2.12.3
v2.12.3: Pulling from minminmsn/tiller
407ea412d82c: Pull complete 
b384553aa9a9: Pull complete 
9015cc67398b: Pull complete 
b4d55549c9ed: Pull complete 
Digest: sha256:bbc6dbfc37b82de97da58ce9a99b17db8f474b3deb51130c36f463849c69bd3b
Status: Downloaded newer image for registry.cn-beijing.aliyuncs.com/minminmsn/tiller:v2.12.3
[root@elasticsearch02 ~]# docker tag registry.cn-beijing.aliyuncs.com/minminmsn/tiller:v2.12.3 gcr.io/kubernetes-helm/tiller:v2.12.3
[root@elasticsearch02 ~]# docker images |grep tiller
gcr.io/kubernetes-helm/tiller                                     v2.12.3             336eb7f809d0        5 minutes ago       81.4MB
registry.cn-beijing.aliyuncs.com/minminmsn/tiller                 v2.12.3             336eb7f809d0        5 minutes ago       81.4MB
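
If you would rather not pull and retag the image on every node, Helm v2's helm init also accepts a --tiller-image flag, so Tiller can be deployed directly from the mirrored image (a sketch assuming the same mirror repository as above):

helm init --service-account tiller \
  --tiller-image registry.cn-beijing.aliyuncs.com/minminmsn/tiller:v2.12.3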

After a short wait, the tiller pod is running normally:

[root@elasticsearch01 helm]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7748f7f6df-2c7ws               1/1     Running   0          21d
coredns-7748f7f6df-chhwx               1/1     Running   0          21d
kubernetes-dashboard-cb55bd5bd-p644x   1/1     Running   0          15d
kubernetes-dashboard-cb55bd5bd-vlmdh   1/1     Running   0          22d
metrics-server-788c48df64-cfnnx        1/1     Running   0          13d
metrics-server-788c48df64-v75gr        1/1     Running   0          13d
tiller-deploy-69ffbf64bc-rxcj8         1/1     Running   0          28m


[root@elasticsearch01 helm]# kubectl log pod/tiller-deploy-69ffbf64bc-rxcj8  -n kube-system
log is DEPRECATED and will be removed in a future version. Use logs instead.
[main] 2019/01/25 02:13:01 Starting Tiller v2.12.3 (tls=false)
[main] 2019/01/25 02:13:01 GRPC listening on :44134
[main] 2019/01/25 02:13:01 Probes listening on :44135
[main] 2019/01/25 02:13:01 Storage driver is ConfigMap
[main] 2019/01/25 02:13:01 Max history per release is 0

[root@elasticsearch01 helm]# helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
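
With client and server both reporting v2.12.3, a quick smoke test is to refresh the repositories and search for a chart. The keyword harbor is used here only as an example, since the references above cover deploying it on this cluster:

helm repo update
helm search harbor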

Other errors

[root@elasticsearch01 helm]# helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"

The workaround is described in https://github.com/helm/helm/issues/3130.
Root cause: an RBAC permissions problem. The tiller ServiceAccount was created earlier, but the Tiller deployment was never configured to use it, so Tiller runs under the default service account. Redeploy Tiller with the correct service account to make it take effect:

[root@elasticsearch01 helm]# helm init --upgrade --service-account tiller
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!

[root@elasticsearch01 helm]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7748f7f6df-2c7ws               1/1     Running   0          24d
coredns-7748f7f6df-chhwx               1/1     Running   0          24d
kubernetes-dashboard-cb55bd5bd-p644x   1/1     Running   0          18d
kubernetes-dashboard-cb55bd5bd-vlmdh   1/1     Running   0          25d
metrics-server-788c48df64-cfnnx        1/1     Running   0          16d
metrics-server-788c48df64-v75gr        1/1     Running   0          16d
tiller-deploy-dbb85cb99-9mhk8          1/1     Running   0          32s
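
To verify that Tiller is now actually running under the tiller service account (a sanity check, not from the original transcript):

kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

This should print tiller. The helm issue linked above also shows an equivalent patch for clusters where re-running helm init is undesirable:

kubectl patch deploy tiller-deploy -n kube-system \
  -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'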