Prepare VM
Prepare OS: CentOS 7
Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/
Before you begin
> OS: CentOS 7
> 2 GB or more of RAM per machine (any less will leave little room for your apps) (set this before installation)
> 2 CPUs or more (set this before installation)
> Full network connectivity between all machines in the cluster (public or private network is fine)
> Unique hostname, MAC address, and product_uuid for every node
> Certain ports are open on your machines. See the section below for more details
> Swap disabled. You **MUST** disable swap in order for the kubelet to work properly
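> A quick way to check the CPU, memory, and swap requirements on each machine (standard CentOS 7 tools):
> nproc        # number of CPUs, should be 2 or more
> free -m      # total memory in MB, should be 2048 or more; the Swap line should read 0 once swap is disabled
> swapon -s    # prints nothing once swap is fully disabled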
### Each node needs a unique hostname, MAC address, and product_uuid
Virtual machines
> Master: with a desktop GUI
> Node
> Option 1: during CentOS installation, open the Network settings and set these properties there
> Option 2: change them afterwards in a shell (verification commands are listed below)
> >>hostname    queries the current hostname
> Enable ssh: sudo yum install openssh-server
If ifconfig is not found: sudo yum install net-tools
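To verify the uniqueness requirements on each node (standard commands, as in the kubeadm install guide):
hostname                                 # must differ on every node
ip link show                             # lists the MAC addresses of the network interfaces
sudo cat /sys/class/dmi/id/product_uuid  # must differ on every node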
Remove any previously installed Docker
1. List the installed Docker packages:
yum list installed | grep docker
docker-engine.x86_64
2. Remove the installed package:
yum -y remove docker-engine.x86_64
Install Docker on every node
> Run on all machines as root
> >>yum install -y docker
> If the machine cannot reach the network, try one of the following:
> Option 1
> 1. Edit vi /etc/sysconfig/network-scripts/ifcfg-eth0 (the file name may differ per machine, but it follows the pattern "ifcfg-eth<number>") and change ONBOOT=no to ONBOOT=yes
> 2. Restart the network: service network restart
> Option 2
> 1. Edit vi /etc/resolv.conf and add nameserver 8.8.8.8
> 2. Restart the network: service network restart
> Configure Docker's cgroup driver (optional? I skipped this in my own setup and hit no problems)
> Check the driver with docker info and look for Cgroup Driver: cgroupfs
> Docker and the kubelet must use the same cgroup driver; if Docker's is not cgroupfs, run:
> cat << EOF > /etc/docker/daemon.json
> {
> "exec-opts": ["native.cgroupdriver=cgroupfs"]
> }
> EOF
> systemctl enable docker && systemctl start docker
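> After writing /etc/docker/daemon.json, restart Docker and confirm the driver change took effect; a quick check (the grep pattern assumes English docker info output):
> systemctl daemon-reload
> systemctl restart docker
> docker info | grep -i "cgroup driver"   # should now report the configured driver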
Install kubeadm, kubelet, and kubectl
> **kubeadm: the command to bootstrap the cluster**
> **kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers**
> **kubectl: the command line util to talk to your cluster**
Before installing kubeadm, kubelet, and kubectl, do the following
> Configure the Kubernetes repo so that yum can install the Kubernetes packages
> cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
Pulling from Google usually fails from inside China; you can download the relevant packages and install them locally instead (a mirror-based repo is sketched below).
Local install details: https://www.jianshu.com/p/9c7e1c957752
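Alternatively, a domestic mirror of the Kubernetes yum repo can be used instead of packages.cloud.google.com; a sketch using Aliyun's mirror (the mirror URL is an assumption on my part, verify it before relying on it):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF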
Configuration and commands
> setenforce 0 turns off SELinux for the current boot -- to disable SELinux permanently you must also edit /etc/selinux/config, otherwise the services will not start correctly; after editing /etc/selinux/config, reboot the machine for the change to take effect (a sketch of the persistent settings follows this list)
> yum install -y kubelet kubeadm kubectl
> systemctl enable kubelet && systemctl start kubelet
> Disable swap: run swapoff -a on every machine
> systemctl stop firewalld.service
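> A sketch of making these settings survive a reboot (standard CentOS 7 paths); it also sets the bridge sysctl that Flannel needs, mentioned in the next section:
> sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # permanent SELinux off, takes effect after reboot
> swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab                     # comment out the swap entry so it stays off after reboot
> systemctl disable firewalld.service
> cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
> sysctl --system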
Run: create the cluster
> kubeadm init performs the initialization
> kubeadm reset: resets the kubeadm state
> Pick the flag that matches the pod network you chose; for Flannel use --pod-network-cidr=10.244.0.0/16 (this is a fixed value and does not need to be changed)
> kubeadm init --pod-network-cidr=10.244.0.0/16
> Before running kubeadm init again you must first tear down the cluster: kubeadm reset
> If the machine has two network cards, specify the network interface with --apiserver-advertise-address=<ip-address>
> Since Flannel is used, the command becomes kubeadm init --apiserver-advertise-address=192.168.56.102 --pod-network-cidr=10.244.0.0/16 --- this exact command only applies to my own dual-NIC machine
> Load the kubeconfig
> As root:
> export KUBECONFIG=/etc/kubernetes/admin.conf
> As a non-root user (see the sketch after this list)
> Pod network --- required so the DNS service can start
> Flannel is used here; it needs the iptables bridge setting: sysctl net.bridge.bridge-nf-call-iptables=1
> Apply it: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
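> For a non-root user, kubeadm init prints the standard kubeconfig setup steps, which look like this:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config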
Nodes:
> On each node, run the kubeadm join command printed by kubeadm init to join it to the master (its shape is shown below)
> Note that systemctl start kubelet must also be run on the nodes
> Everything up to the init step is the same as on the master
> Commands to check the cluster:
> kubectl get nodes
> kubectl get pods --all-namespaces
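> The join command printed by kubeadm init has this shape; if the token expired or the output was lost, a fresh one can be generated on the master (the --print-join-command flag exists in kubeadm 1.9+, as far as I know):
> kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
> kubeadm token create --print-join-command   # run on the master to print a valid join command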
Install the Dashboard service
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
> Run kubectl proxy
> On the master, open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
> You may need to keep only one network card active for the Dashboard: with two network cards I could not open it, while a single-card setup worked
> The dashboard login page asks for a token
> See https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
Create the admin-user: put the following into a yaml file and run kubectl create -f xxx.yaml
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   name: admin-user
>   namespace: kube-system
Create the ClusterRoleBinding: put the following into a yaml file and run kubectl create -f xxx.yaml
> apiVersion: rbac.authorization.k8s.io/v1beta1
> kind: ClusterRoleBinding
> metadata:
>   name: admin-user
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: cluster-admin
> subjects:
> - kind: ServiceAccount
>   name: admin-user
>   namespace: kube-system
> Get the login token: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Alternatively:
kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-http.yaml
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f admin-role.yaml
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created
Others:
> Go into heapster-1.5.0/deploy/kube-config/influxdb and edit grafana.yaml: for GF_SERVER_ROOT_URL, uncomment the apiserver-proxy setting
> Run sh heapster-1.5.0/deploy/kube.sh start
> kubectl get pods --all-namespaces shows the status of all running services
> When behind an ss proxy you need to pull the required images beforehand (a mirror-based alternative is sketched after this list):
> docker pull k8s.gcr.io/kube-apiserver-amd64:v1.9.1
> docker pull k8s.gcr.io/kube-controller-manager-amd64:v1.9.1
> docker pull k8s.gcr.io/kube-scheduler-amd64:v1.9.1
> docker pull k8s.gcr.io/kube-proxy-amd64:v1.9.1
> docker pull k8s.gcr.io/etcd-amd64:3.1.10
> docker pull k8s.gcr.io/pause-amd64:3.0
> docker pull k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.7
> docker pull k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.7
> docker pull k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.7
> On the worker nodes you also need to pull images, but only docker pull k8s.gcr.io/kube-proxy-amd64:v1.9.1
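> If k8s.gcr.io is unreachable, a common workaround is to pull the images from a mirror registry and retag them; a sketch (the mirror name below is an assumption on my part -- substitute one you can reach that actually hosts these tags):
> MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers   # hypothetical mirror, replace as needed
> for IMG in kube-apiserver-amd64:v1.9.1 kube-controller-manager-amd64:v1.9.1 kube-scheduler-amd64:v1.9.1 kube-proxy-amd64:v1.9.1 etcd-amd64:3.1.10 pause-amd64:3.0 k8s-dns-sidecar-amd64:1.14.7 k8s-dns-kube-dns-amd64:1.14.7 k8s-dns-dnsmasq-nanny-amd64:1.14.7; do
>   docker pull $MIRROR/$IMG                   # pull from the mirror
>   docker tag $MIRROR/$IMG k8s.gcr.io/$IMG    # retag under the name kubeadm expects
> done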
> For the services to run end to end, SELinux must be fully disabled, which requires editing /etc/selinux/config
> Quick troubleshooting:
> docker logs shows problems from a container's run
> https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
> Chinese-language reference:
> https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/#21%E5%AE%89%E8%A3%85%E5%8C%85%E4%BB%8E%E5%93%AA%E6%9D%A5
> Example output from a successful init run:
> Your Kubernetes master has initialized successfully!
> You should now deploy a pod network to the cluster.
> Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
> https://kubernetes.io/docs/concepts/cluster-administration/addons/
> You can now join any number of machines by running the following on each node
> as root:
> kubeadm join --token cb327c.f171b1e736a5184e 192.168.56.102:6443 --discovery-token-ca-cert-hash sha256:392545adad830f474ee0409691d1cb9a6d8f2499decdd903c788ccb60e2cb247