Installing a Rancher v2.2.8 High-Availability Cluster on CentOS 7.6


  • This walkthrough uses GCP hosts as the installation example.
[rke@rancher-1 ~]$ cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Username  Hostname   Internal IP  Public IP  SSH Port
rke       rancher-1  10.128.0.2   xxx        22
rke       rancher-2  10.128.0.3   xxx        22
rke       rancher-3  10.128.0.4   xxx        22
  1. Temporarily disable SELinux
sudo setenforce 0
  2. Permanently disable SELinux
sudo  sed -i 's/SELINUX=enforcing/SELINUX=disabled/'  /etc/sysconfig/selinux
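A quick check that both the runtime state and the persisted config took effect (getenforce reports Permissive until the next reboot):

getenforce
grep ^SELINUX= /etc/sysconfig/selinux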
  3. Install Docker
sudo yum install -y yum-utils wget device-mapper-persistent-data lvm2

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install -y docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io

sudo usermod -aG docker ${USER}

sudo systemctl start docker

sudo systemctl enable docker

  4. Download RKE and pull the Rancher images
wget https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64
sudo chmod +x rke_linux-amd64 && sudo cp rke_linux-amd64 /usr/local/bin/rke
wget https://github.com/rancher/rancher/releases/download/v2.2.8/rancher-images.txt
wget https://github.com/rancher/rancher/releases/download/v2.2.8/rancher-save-images.sh
chmod 755 rancher-save-images.sh
# Skip the docker save step; we only need the images pulled locally
sed -i 's/docker save/#docker save/g' rancher-save-images.sh
./rancher-save-images.sh
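With the docker save calls commented out, the script only pulls every image listed in rancher-images.txt into the local Docker daemon; a rough spot-check:

docker images | wc -l      # should roughly match the image count in rancher-images.txt
wc -l rancher-images.txt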
  5. Configure /etc/hosts and verify passwordless SSH access to every host (when installing from rancher-1, this includes passwordless SSH to rancher-1 itself; a key-setup sketch follows after the test below)
10.128.0.2  rancher-1
10.128.0.3  rancher-2
10.128.0.4  rancher-3
ssh rancher-1
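If passwordless SSH is not configured yet, a minimal sketch (assuming the rke user and the default key path referenced in the RKE config below):

# Generate a key pair once on rancher-1 (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
# Push the public key to every node, including rancher-1 itself
for h in rancher-1 rancher-2 rancher-3; do ssh-copy-id "rke@${h}"; done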
  6. Download helm

    wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
    tar zxf helm-v2.14.3-linux-amd64.tar.gz
    chmod +x ./linux-amd64/helm && sudo cp ./linux-amd64/helm /usr/local/bin/
    
  7. Download kubectl

    # Note: blocked in mainland China, so a proxy may be needed; the pinned release URL is:
    # https://storage.googleapis.com/kubernetes-release/release/v1.14.5/bin/linux/amd64/kubectl
    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
    chmod +x ./kubectl && sudo cp ./kubectl /usr/local/bin/
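    # The stable.txt URL above resolves to the latest stable kubectl, which may
    # be newer than the v1.14.x cluster deployed below; verify what was installed:
    kubectl version --client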
    
  8. Create the RKE config file (save it as ./rancher-cluster.yml; it is consumed by rke up below)

nodes:
- address: 10.128.0.2
  #internal_address: 104.198.227.73
  user: rke
  role: [controlplane, etcd, worker]
  ssh_key_path: /home/rke/.ssh/id_rsa
- address: 10.128.0.3
  #internal_address: 35.232.38.212
  user: rke
  role: [controlplane, etcd, worker]
  ssh_key_path: /home/rke/.ssh/id_rsa
- address: 10.128.0.4
  #internal_address: 35.232.158.150
  user: rke
  role: [controlplane, etcd, worker]
  ssh_key_path: /home/rke/.ssh/id_rsa

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
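The etcd settings above enable a recurring snapshot every 6 hours, retained for 24 hours. RKE v0.2.x can also take and restore one-off snapshots on demand, e.g. before an upgrade; a sketch with a hypothetical snapshot name:

# One-off snapshot, stored under /opt/rke/etcd-snapshots on the nodes
rke etcd snapshot-save --config ./rancher-cluster.yml --name pre-upgrade-snapshot
# Restore that snapshot later if needed
rke etcd snapshot-restore --config ./rancher-cluster.yml --name pre-upgrade-snapshot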
  9. Create DNS records resolving to each node (if the nodes have public IPs, or you only plan internal access, this is sufficient; otherwise front the nodes with an nginx proxy as in the official docs, sketched after the example records). For example:
rancher.xxx.top A 104.177.227.73
rancher.xxx.top A 35.111.38.212
rancher.xxx.top A 35.111.158.150
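For the nginx option, the official docs front the nodes with a layer-4 (stream) load balancer on ports 80 and 443; a minimal /etc/nginx/nginx.conf sketch using the internal IPs from the host table above (assumes nginx is built with the stream module):

worker_processes 4;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_http {
        least_conn;
        server 10.128.0.2:80 max_fails=3 fail_timeout=5s;
        server 10.128.0.3:80 max_fails=3 fail_timeout=5s;
        server 10.128.0.4:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server 10.128.0.2:443 max_fails=3 fail_timeout=5s;
        server 10.128.0.3:443 max_fails=3 fail_timeout=5s;
        server 10.128.0.4:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}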
  10. Build the Kubernetes cluster with RKE
rke up --config ./rancher-cluster.yml
[rke@rancher-1 ~]$ rke up --config ./rancher-cluster.yml
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] [certificates] Generating etcd-10.128.0.2 certificate and key
INFO[0000] [certificates] Generating etcd-10.128.0.3 certificate and key
INFO[0000] [certificates] Generating etcd-10.128.0.4 certificate and key
INFO[0000] Successfully Deployed state file at [./rancher-cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.128.0.2]
INFO[0000] [dialer] Setup tunnel for host [10.128.0.4]
INFO[0000] [dialer] Setup tunnel for host [10.128.0.3]
INFO[0000] [network] Deploying port listener containers
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.4]
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.3]
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.2]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.2]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.3]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.4]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.4]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.2]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.3]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running etcd <-> etcd port checks
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
INFO[0002] [network] Running control plane -> etcd port checks
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
INFO[0002] [network] Running control plane -> worker port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
INFO[0003] [network] Running workers -> control plane port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
INFO[0003] [network] Checking KubeAPI port Control Plane hosts
INFO[0003] [network] Removing port listener containers
INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.4]
INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.2]
INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.3]
INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.4]
INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.2]
INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.3]
INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.4]
INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.2]
INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.3]
INFO[0004] [network] Port listener containers removed successfully
INFO[0004] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0009] [reconcile] Rebuilding and updating local kube config
INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
INFO[0009] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0009] [reconcile] Reconciling cluster state
INFO[0009] [reconcile] This is newly generated cluster
INFO[0009] Pre-pulling kubernetes images
INFO[0009] Kubernetes images pulled successfully
INFO[0009] [etcd] Building up etcd plane..
INFO[0010] Waiting for [etcd] container to exit on host [10.128.0.2]
INFO[0010] [etcd] Successfully updated [etcd] container on host [10.128.0.2]
INFO[0010] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.2]
INFO[0010] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.2]
INFO[0010] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.2]
INFO[0016] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.2]
INFO[0016] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.2]
INFO[0016] Container [rke-bundle-cert] is still running on host [10.128.0.2]
INFO[0017] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.2]
INFO[0017] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.2]
INFO[0017] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0017] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0018] Waiting for [etcd] container to exit on host [10.128.0.3]
INFO[0018] [etcd] Successfully updated [etcd] container on host [10.128.0.3]
INFO[0018] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.3]
INFO[0018] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.3]
INFO[0018] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.3]
INFO[0024] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.3]
INFO[0024] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.3]
INFO[0024] Container [rke-bundle-cert] is still running on host [10.128.0.3]
INFO[0025] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.3]
INFO[0025] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.3]
INFO[0025] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0025] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0026] Waiting for [etcd] container to exit on host [10.128.0.4]
INFO[0026] [etcd] Successfully updated [etcd] container on host [10.128.0.4]
INFO[0026] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.4]
INFO[0026] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.4]
INFO[0026] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.4]
INFO[0032] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.4]
INFO[0032] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.4]
INFO[0032] Container [rke-bundle-cert] is still running on host [10.128.0.4]
INFO[0033] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.4]
INFO[0033] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.4]
INFO[0033] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0033] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0033] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0034] [controlplane] Building up Controller Plane..
INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.3]
INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.3]
INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.4]
INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.4]
INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.2]
INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.2]
INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.3] is healthy
INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.4] is healthy
INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.2] is healthy
INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.2]
INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.2]
INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.4]
INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.4]
INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.3]
INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.3]
INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.2] is healthy
INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.4] is healthy
INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.3] is healthy
INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0051] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0051] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0052] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.2]
INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.2]
INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.4]
INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.4]
INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.3]
INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.3]
INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.2] is healthy
INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.4] is healthy
INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.3] is healthy
INFO[0057] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0058] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0058] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0058] [controlplane] Successfully started Controller Plane..
INFO[0058] [authz] Creating rke-job-deployer ServiceAccount
INFO[0058] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0058] [authz] Creating system:node ClusterRoleBinding
INFO[0058] [authz] system:node ClusterRoleBinding created successfully
INFO[0058] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0058] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0058] Successfully Deployed state file at [./rancher-cluster.rkestate]
INFO[0058] [state] Saving full cluster state to Kubernetes
INFO[0058] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state
INFO[0058] [worker] Building up Worker Plane..
INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.3]
INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.2]
INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.4]
INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.2]
INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.2]
INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.4]
INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.4]
INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.3]
INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.3]
INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.2] is healthy
INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.4] is healthy
INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.3] is healthy
INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0064] [worker] Successfully started [kube-proxy] container on host [10.128.0.2]
INFO[0064] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.2]
INFO[0065] [worker] Successfully started [kube-proxy] container on host [10.128.0.3]
INFO[0065] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.3]
INFO[0065] [worker] Successfully started [kube-proxy] container on host [10.128.0.4]
INFO[0065] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.4]
INFO[0065] [healthcheck] service [kube-proxy] on host [10.128.0.3] is healthy
INFO[0065] [healthcheck] service [kube-proxy] on host [10.128.0.4] is healthy
INFO[0065] [worker] Successfully started [rke-log-linker] container on host [10.128.0.4]
INFO[0065] [worker] Successfully started [rke-log-linker] container on host [10.128.0.3]
INFO[0065] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
INFO[0065] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
INFO[0070] [healthcheck] service [kube-proxy] on host [10.128.0.2] is healthy
INFO[0070] [worker] Successfully started [rke-log-linker] container on host [10.128.0.2]
INFO[0071] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
INFO[0071] [worker] Successfully started Worker Plane..
INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.2]
INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.4]
INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.3]
INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.2]
INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.4]
INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.3]
INFO[0071] [sync] Syncing nodes Labels and Taints
INFO[0071] [sync] Successfully synced nodes Labels and Taints
INFO[0071] [network] Setting up network plugin: canal
INFO[0071] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0071] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0071] [addons] Executing deploy job rke-network-plugin
INFO[0076] [addons] Setting up coredns
INFO[0076] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0076] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0076] [addons] Executing deploy job rke-coredns-addon
INFO[0081] [addons] CoreDNS deployed successfully..
INFO[0081] [dns] DNS provider coredns deployed successfully
INFO[0081] [addons] Setting up Metrics Server
INFO[0081] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0082] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0082] [addons] Executing deploy job rke-metrics-addon
INFO[0087] [addons] Metrics Server deployed successfully
INFO[0087] [ingress] Setting up nginx ingress controller
INFO[0087] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0087] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0087] [addons] Executing deploy job rke-ingress-controller
INFO[0092] [ingress] ingress controller nginx deployed successfully
INFO[0092] [addons] Setting up user addons
INFO[0092] [addons] no user addons defined
INFO[0092] Finished building Kubernetes cluster successfully
  11. Verify the new cluster
    export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
    kubectl get nodes
NAME         STATUS   ROLES                      AGE   VERSION
10.128.0.2   Ready    controlplane,etcd,worker   64s   v1.14.6
10.128.0.3   Ready    controlplane,etcd,worker   64s   v1.14.6
10.128.0.4   Ready    controlplane,etcd,worker   64s   v1.14.6
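Before moving on, it is worth confirming that the system workloads and control-plane components are healthy:

kubectl get pods -n kube-system
kubectl get componentstatuses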
  12. Install Helm Tiller

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

#Users in China: You will need to specify a specific tiller-image in order to initialize tiller. 

#The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085. 

#When initializing tiller, you'll need to pass in --tiller-image

#tag v2.14.3 20190824
helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3

[rke@rancher-1 ~]$ kubectl -n kube-system get pod
NAME                                      READY   STATUS      RESTARTS   AGE
canal-bp9hw                               2/2     Running     0          4m22s
canal-d84cx                               2/2     Running     0          4m22s
canal-kftk2                               2/2     Running     0          4m22s
coredns-autoscaler-5d5d49b8ff-4h8f5       1/1     Running     0          4m17s
coredns-bdffbc666-mlc44                   1/1     Running     0          4m18s
metrics-server-7f6bd4c888-cqw2k           1/1     Running     0          4m13s
rke-coredns-addon-deploy-job-s78h6        0/1     Completed   0          4m19s
rke-ingress-controller-deploy-job-f69k8   0/1     Completed   0          4m9s
rke-metrics-addon-deploy-job-8ncvb        0/1     Completed   0          4m14s
rke-network-plugin-deploy-job-sntgn       0/1     Completed   0          4m25s
tiller-deploy-7f4d76c4b6-qdckk            1/1     Running     0          79s
[rke@rancher-1 ~]$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
  13. Add the Rancher Helm chart repository
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
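Refresh the local chart cache and confirm the chart is visible:

helm repo update
helm search rancher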
  14. Install cert-manager
helm install stable/cert-manager \
--name cert-manager \
--namespace kube-system \
--version v0.5.2
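Wait for the cert-manager deployment to become available before installing Rancher (with --name cert-manager the deployment is also named cert-manager):

kubectl -n kube-system rollout status deploy/cert-manager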
  15. Install Rancher
helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=rancher.xxx.top
kubectl -n cattle-system get pod
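The rollout can be watched until all three replicas are ready:

kubectl -n cattle-system rollout status deploy/rancher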
  16. Installation complete: access the Rancher UI
Visit https://rancher.xxx.top/, set an initial password for the admin account, and log in. When prompted for the server-url, make sure the address is correct and confirm it, then wait a moment for the system to finish initializing.
If the local cluster stays stuck in a waiting state with the message "Waiting for server-url setting to be set", try Global -> local -> Upgrade -> add a member role (admin / Cluster Owner) -> Save. (Per reports found online, this issue also clears on its own once the admin user has been added.)
  17. Troubleshooting
  • A problem with the local cluster after startup (v2.2.8):

    
    Exit status 1, Error from server (Forbidden)... (a permissions issue)
    
    # Checking shows that only the rancher pods exist under cattle-system
    [rke@rancher-1 ~]$ kubectl get pod -n cattle-system
    NAME                       READY   STATUS    RESTARTS   AGE
    rancher-797f8646f6-4bcr9   1/1     Running   1          19m
    rancher-797f8646f6-fph2c   1/1     Running   2          19m
    rancher-797f8646f6-fvtnf   1/1     Running   2          19m
    
    # Try restarting the rancher pods... (the UI is unreachable while scaled down)
    kubectl -n cattle-system scale --replicas=0 deploy rancher
    kubectl -n cattle-system scale --replicas=3 deploy rancher
    