Install Helm
Here we choose the simplest installation method: the official binary release. The steps are consolidated into a single sketch after the help output below.
- Download the Helm version you need
- Unpack it (tar -zxvf helm-v2.8.2-linux-amd64.tar.gz)
- Unpacking yields the helm binary for that version; copy it to a directory on your $PATH (cp linux-amd64/helm /usr/local/bin/helm)
- If all went well, running helm help prints the help info, and the Helm client installation is complete
[root@tcz-dev-adam ~]# helm help
The Kubernetes package manager
To begin working with Helm, run the 'helm init' command:
$ helm init
This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.
Common actions from this point include:
- helm search: search for charts
- helm fetch: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
...
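Putting the steps above together, a minimal install sketch (the download URL follows the official v2.x release layout on the kubernetes-helm bucket; adjust version and platform as needed):

# download, unpack, and copy the helm client binary onto $PATH
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm help    # should print the help info shown above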
Install Tiller
Here we install with an explicitly specified image, so prepare the image first.
1. Pull a suitable image from Docker Hub and push it to your own registry
[root@tcz-dev-adam ~]# docker pull jiang7865134/tiller:v2.8.2
[root@tcz-dev-adam ~]# docker tag jiang7865134/tiller:v2.8.2 hub.xxx.xxx/tiller:v2.8.2
[root@tcz-dev-adam ~]# docker push hub.xxx.xxx/tiller:v2.8.2
2. Install Tiller via helm init. Since the service account tiller is specified here, create the corresponding RBAC objects before installing Tiller.
# Create the tiller RBAC objects. This uses the broadest permission, cluster-admin, by default; if you don't need that much, change the ClusterRoleBinding or create a custom role (see the namespace-scoped sketch after the creation step below)
[root@tcz-dev-adam ~]# cat tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
# Create the tiller RBAC objects
[root@tcz-dev-adam ~]# kubectl create -f tiller-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
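If cluster-admin is more than you want to grant, a namespace-scoped Role/RoleBinding is one alternative. The sketch below is illustrative only: the name tiller-manager and the target namespace tiller-world are hypothetical, and it confines what the tiller service account may manage to that single namespace.

# tiller-role.yaml -- illustrative namespace-scoped alternative to cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager          # hypothetical name
  namespace: tiller-world       # hypothetical namespace Tiller may manage
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system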
3. Install Tiller, specifying the Tiller image and the service account
[root@tcz-dev-adam ~]# helm init --service-account tiller --tiller-image hub.xxx.xxx/tiller:v2.8.2 --debug
NOTE:
- If network problems prevent fetching the repositories file and you see the error below, copy a /root/.helm/repository/repositories.yaml from elsewhere, then rerun the Tiller install command above
[root@tcz-dev-adam ~]# helm repo list
Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
- After Tiller is installed successfully, run
kubectl get pods,deployment,service -o wide -n kube-system|grep tiller
to see the successfully installed resources.
- By default Tiller's deployment has neither a nodeSelector nor tolerations; edit the Tiller deployment to redeploy it onto the machines you want (a patch sketch follows below).
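For example, a sketch of pinning Tiller to labeled nodes with a patch (the role=infra label is hypothetical; tiller-deploy is the deployment name created by helm init):

kubectl -n kube-system patch deployment tiller-deploy \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"infra"}}}}}'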
4. Run helm version to confirm Helm is usable; when healthy, both the client and server versions are displayed
[root@tcz-dev-adam ~]# helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
NOTE:
- If you see the error below, there are two likely causes. The first is that socat needs to be installed:
yum install -y socat
The second is that the HELM_HOST environment variable has not been exported. Run kubectl get service -n kube-system | grep tiller
to get the Tiller service's ClusterIP and port, then export HELM_HOST=<tiller-service-ClusterIP>:44134 (a one-liner sketch follows the error output below).
E0327 10:36:41.258053 30270 portforward.go:331] an error occurred forwarding 39855 -> 44134: error forwarding port 44134 to pod
a8d76186f92eea818842492a803683cc91cc24f4bff60c3cf3f4a7cd2f34ad53,
uid : unable to do port forwarding: socat not found.
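For the HELM_HOST case, a one-liner sketch assuming the default service name tiller-deploy created by helm init:

export HELM_HOST=$(kubectl get service tiller-deploy -n kube-system \
  -o jsonpath='{.spec.clusterIP}'):44134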
- (Optional) Uninstall Tiller; set up command completion
# uninstall tiller
[root@tcz-dev-adam ~]# helm reset    # or: helm reset -f
# set up bash completion
[root@tcz-dev-adam ~]# helm completion bash > .helmrc
[root@tcz-dev-adam ~]# echo "source .helmrc" >> .bashrc
Installing Services with Helm
1. List the currently available repos
Helm comes preconfigured with two repositories, stable and local: stable is the official repository, and local is a local repository for charts you develop yourself
[root@tcz-dev-adam ~]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
2. Use helm search to list the charts currently available for installation
[root@tcz-dev-adam ~]# helm search
NAME CHART VERSION APP VERSION DESCRIPTION
stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools
stable/aerospike 0.2.3 v4.5.0.5 A Helm chart for Aerospike in Kubernetes
stable/airflow 2.3.0 1.10.0 Airflow is a platform to programmatically autho...
stable/ambassador 1.1.5 0.50.3 A Helm chart for Datawire Ambassador
stable/anchore-engine 0.12.0 0.3.3 Anchore container analysis and policy evaluatio...
stable/apm-server 0.1.0 6.2.4 The server receives data from the Elastic APM a
...
Or search directly for a specific chart; Helm lists every installable match from all repos
[root@tcz-dev-adam base]# helm search prometheus
NAME CHART VERSION APP VERSION DESCRIPTION
local/prometheus 0.1.2 1.0 A Helm Prometheus chart for Kubernetes
stable/prometheus 8.9.0 2.8.0 Prometheus is a monitoring system and time seri...
stable/prometheus-adapter v0.4.1 v0.4.1 A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter 0.2.0 0.12.0 Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter 0.4.2 0.5.0 A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-snmp-exporter 0.0.2 0.14.0 Prometheus SNMP Exporter
stable/prometheus-to-sd 0.1.1 0.2.2 Scrape metrics stored in prometheus format and ...
telemetry/prometheus 0.1.2 1.0 A Helm Prometheus chart for Kubernetes
stable/elasticsearch-exporter 1.1.3 1.0.2 Elasticsearch stats exporter for Prometheus
3. Install a chart with helm install, specifying the chart to install plus its name & namespace
[root@tcz-dev-adam ~]# helm install stable/influxdb -n influxdb --namespace kube-system
# The output has three parts
# (1) a description of this deployment, including the name set via the -n flag (randomly generated if omitted) and the namespace set via --namespace (default: default)
NAME: influxdb
LAST DEPLOYED: Wed Mar 27 15:17:13 2019
NAMESPACE: kube-system
STATUS: DEPLOYED # DEPLOYED means the release has been deployed to the cluster
# (2) the list of resources deployed to the cluster: configmap/service/deployment/pod
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
influxdb 1 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
influxdb ClusterIP 10.109.47.137 <none> 8086/TCP,8088/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
influxdb 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
influxdb-855769f97b-mbqff 0/1 Pending 0 0s
# (3) the NOTES section shows how to use the release
NOTES:
InfluxDB can be accessed via port 8086 on the following DNS name from within your cluster:
- http://influxdb.kube-system:8086
You can easily connect to the remote instance with your local influx cli. To forward the API port to localhost:8086 run the following:
- kubectl port-forward --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{ .items[0].metadata.name }') 8086:8086
You can also connect to the influx cli from inside the container. To open a shell session in the InfluxDB pod run the following:
- kubectl exec -i -t --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{.items[0].metadata.name}') /bin/sh
To tail the logs for the InfluxDB pod run the following:
- kubectl logs -f --namespace kube-system $(kubectl get pods --namespace kube-system -l app=influxdb -o jsonpath='{ .items[0].metadata.name }')
After a successful install, the resources can be seen:
[root@tcz-dev-adam ~]# kubectl get deployment -n kube-system |grep influxdb
influxdb 1 1 1 1 3m
[root@tcz-dev-adam ~]# kubectl get configmap -n kube-system |grep influxdb
influxdb 1 3m
influxdb.v1 1 3m
[root@tcz-dev-adam ~]# kubectl get service -n kube-system |grep influxdb
influxdb ClusterIP 10.109.47.137 <none> 8086/TCP,8088/TCP 3m
[root@tcz-dev-adam ~]# kubectl get pod -n kube-system |grep influxdb
influxdb-855769f97b-mbqff 1/1 Running 0 3m
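The release can also be checked from the Helm side, e.g.:

helm list influxdb      # expect STATUS DEPLOYED
helm status influxdb    # re-prints the resource list and NOTES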
Developing a Helm Chart
1. First create a custom chart locally
Kubernetes gives us a large number of official charts, but to deploy our own microservice applications we still need to develop our own chart.
[root@tcz-dev-adam helm-hub]# helm create kube-state-metrics
Creating kube-state-metrics
Helm creates the kube-state-metrics directory and generates the standard chart files; you can modify them to produce your own YAML. A newly created chart contains an nginx example application and its values.yaml by default (a sketch of the generated Chart.yaml follows the tree below)
[root@tcz-dev-adam helm-hub]# tree kube-state-metrics
kube-state-metrics
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ └── service.yaml
└── values.yaml
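For reference, the generated Chart.yaml carries just the chart metadata; a sketch of what it typically contains (give or take the description):

# Chart.yaml (as generated by helm create)
apiVersion: v1
name: kube-state-metrics
version: 0.1.0          # becomes kube-state-metrics-0.1.0.tgz when packaged
appVersion: "1.0"
description: A Helm chart for Kubernetes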
2. You can simply delete all files under the templates directory, replace them with your own YAML, and develop your chart in tandem with values.yaml
After the replacements, the tree looks as follows. templates.tpl is the former _helpers.tpl, renamed here because it is used purely as a template
[root@tcz-dev-adam helm-hub]# tree kube-state-metrics
kube-state-metrics
├── charts
├── Chart.yaml
├── templates
│ ├── configmap.yaml
│ ├── deployment.yaml
│ ├── rbac.yaml
│ └── templates.tpl
└── values.yaml
If the chart has to be deployed to several environments that each differ in a few parameters, you can define a template, put the conditional logic inside it, and pass an environment-specific parameter when deploying the chart to select the right values
[root@tcz-dev-adam templates]# cat templates.tpl
{{- define "kube-state-metrics.containers.args" -}}
args:
- --kubeconfig
- /etc/kube-state-metrics/kubeconfig
- --apiserver
{{- /* .Values.context is defined in values.yaml; ZONE-YYY stands in for a second, redacted zone name */}}
{{- if eq (.Values.context | upper) "ZONE-XXX" }}
- https://10.xx.xxx.xxx:443
{{- else if eq (.Values.context | upper) "ZONE-YYY" }}
- https://10.xx.xxx.xxx:443
{{- else }}
- https://127.0.0.1:6443
{{- end }}
- --collectors
- namespaces,nodes,pods
{{- end -}}
# the kube-state-metrics.containers.args template is then included from deployment.yaml
[root@tcz-dev-adam templates]# cat deployment.yaml
...
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: IfNotPresent
{{ include "kube-state-metrics.containers.args" . | indent 8 }}
...
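For reference, a minimal values.yaml consistent with the references above might look like this (the repository and tag values are placeholders):

# values.yaml (sketch; repository/tag are placeholders)
image:
  repository: hub.xxx.xxx/kube-state-metrics
  tag: latest
context: ZONE-XXX       # selects the --apiserver endpoint in templates.tpl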
3. Debug the chart
Use helm lint and helm install --dry-run --debug to check whether the freshly developed chart is well-formed
[root@tcz-dev-adam kube-state-metrics]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: parse error in "kube-state-metrics/templates/templates.tpl": template: kube-state-metrics/templates/templates.tpl:6: unexpected EOF
Error: 1 chart(s) linted, 1 chart(s) failed
Fix the chart according to the error hints; once the fixes are in, helm lint looks like this
[root@tcz-dev-adam kube-state-metrics]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
--dry-run simulates installing the chart and prints the YAML rendered from each template, so you can inspect the manifests that would be deployed and judge whether they match expectations.
[root@tcz-dev-adam helm-hub]# helm install ./kube-state-metrics --dry-run --set context=ZONE-xxx --debug
[debug] Created tunnel using local port: '40281'
[debug] SERVER: "127.0.0.1:40281"
[debug] Original chart version: ""
[debug] CHART PATH: /root/helm-hub/kube-state-metrics
NAME: silly-dragon
REVISION: 1
...
4. Deploy the chart you developed, for testing
Drop the --dry-run flag used for debugging
[root@tcz-dev-adam helm-hub]# helm install ./kube-state-metrics --name kube-state-metrics --namespace kube-system --set context=ZONE-xxx --debug
5. Package the chart, create an index, and upload both to a remote file server (a self-hosted repo)
[root@tcz-dev-adam helm-hub]# helm package ./kube-state-metrics -d charts-packages/
Successfully packaged chart and saved it to: charts-packages/kube-state-metrics-0.1.0.tgz
# generate the index for the remote repo
[root@tcz-dev-adam helm-hub]# helm repo index charts-packages/ --url http://xxxx.xxx.xxx.com/devops/kubernetes/charts
[root@tcz-dev-adam helm-hub]# cat charts-packages/index.yaml # the newly generated index.yaml records the details of every chart in the repository
apiVersion: v1
entries:
  kube-state-metrics:
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.422812821+08:00
    description: A Helm chart for Kubernetes
    digest: d7a8efac3149268df45411b50aa346f154f5aac1cc8cc63352a1e20159672fe5
    name: kube-state-metrics
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/kube-state-metrics-0.1.0.tgz
    version: 0.1.0
  prometheus:
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.456720188+08:00
    description: A Helm Prometheus chart for Kubernetes
    digest: 940d457c6cb9047869f4bccb3a7c49a3a6f97bc3cb39ebc2c743dc3dc1f138e2
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.2.tgz
    version: 0.1.2
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.448098372+08:00
    description: A Helm Prometheus chart for Kubernetes
    digest: 010925071ffa5350fb0e57f7c22e9dbc1857b3cdf2f764f49fbade6b13a020ee
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.1.tgz
    version: 0.1.1
  - apiVersion: v1
    appVersion: "1.0"
    created: 2019-03-28T19:00:14.438597594+08:00
    description: A Helm chart for Kubernetes
    digest: 42859453dbe55b790c86949947f609f8a23cac59a605be79910ecc17b511d5cc
    name: prometheus
    urls:
    - http://xxxx.xxx.xxx.com/devops/kubernetes/charts/prometheus-0.1.0.tgz
    version: 0.1.0
generated: 2019-03-28T19:00:14.421820865+08:00
Upload the chart packages and the index to the remote repo
[root@tcz-dev-adam helm-hub]# scp charts-packages/* root@xxxx.xxx.xxx.com:/var/www/dl/devops/kubernetes/charts
root@xxxx.xxx.xxx.com's password:
index.yaml 100% 1525 1.4MB/s 00:00
kube-state-metrics-0.1.0.tgz 100% 2030 1.5MB/s 00:00
prometheus-0.1.0.tgz 100% 2781 2.4MB/s 00:00
prometheus-0.1.1.tgz 100% 2021 1.2MB/s 00:00
prometheus-0.1.2.tgz
6. On another machine that has the Helm client installed, add the remote repo to Helm via helm repo add, naming it telemetry
[root@SVRxxxxxx ~]# helm repo add telemetry http://xxxx.xxx.xxx.com/devops/kubernetes/charts
"telemetry" has been added to your repositories
[root@tcz-dev-adam ~]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
telemetry http://xxxx.xxx.xxx.com/devops/kubernetes/charts
# update the repos
[root@SVRxxxxx ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "telemetry" chart repository
Update Complete. ⎈ Happy Helming!⎈
7. Install the chart from the newly added repo
# search for the chart just uploaded
[root@SVRxxxx ~]# helm search kube-state-metrics
WARNING: Repo "local" is corrupt or missing. Try 'helm repo update'.
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
telemetry/kube-state-metrics 0.1.0 1.0 A Helm chart for Kubernetes
# install the chart
[root@SVRxxxx ~]# helm install telemetry/kube-state-metrics --name kube-state-metrics --namespace kube-system --set context=ZONE-xxx --debug
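As before, the deployed resources and the release status can then be verified:

kubectl get pods -n kube-system | grep kube-state-metrics
helm list kube-state-metrics    # expect STATUS DEPLOYED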