Environment
- Master server
Tencent Cloud, Guangzhou Zone 2, 1 core / 1 GB RAM, CentOS 7.3 64-bit
- Node server
Tencent Cloud, Guangzhou Zone 2, 1 core / 1 GB RAM, CentOS 7.3 64-bit
Master server installation and configuration
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux
setenforce 0
# set SELINUX=disabled in /etc/selinux/config
vi /etc/selinux/config
SELINUX=disabled
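The manual edit above can also be scripted. A minimal sketch, demonstrated against a scratch copy of the file; on a real host, point `cfg` at /etc/selinux/config instead:

```shell
# Flip SELINUX=... to disabled non-interactively, the same edit described above.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```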
Install Docker
yum install epel-release -y
yum install -y docker
Install the Kubernetes packages
# add a Kubernetes yum repo
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
yum install -y kubectl kubelet kubeadm
Edit the kubelet systemd drop-in
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Original line:
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Modified line (the cgroup driver passed to the kubelet must match the one Docker uses; docker info | grep -i cgroup shows it):
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --cgroup-driver=systemd"
Run systemctl daemon-reload afterwards so systemd picks up the edited drop-in.
Set the hostname (it must not contain underscores or other special characters)
hostnamectl --static set-hostname master
Start Docker
service docker start
Initialize the Kubernetes cluster
kubeadm init --kubernetes-version=v1.7.1 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=0.0.0.0
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Make sure to save the kubeadm join command printed by the init above; the nodes will need it later:
# from the kubeadm init output
kubeadm join --token b6c727.96eca5ce6ff26bb6 10.135.10.97:6443
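If the join command was saved to a file, the token can be pulled back out of it with sed. A hypothetical helper sketch; the join line below is just the example printed above:

```shell
# Recover the bootstrap token from a saved `kubeadm join` command line.
join_cmd='kubeadm join --token b6c727.96eca5ce6ff26bb6 10.135.10.97:6443'
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token *\([^ ]*\).*/\1/p')
echo "$token"   # b6c727.96eca5ce6ff26bb6
```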
Install the pod network add-ons
kubectl apply -f http://docs.projectcalico.org/v2.4/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: the required images can be pulled in advance; I have mirrored them to Tencent Cloud's registry:
docker pull ccr.ccs.tencentyun.com/kubenetes/kube-controller-manager-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kube-controller-manager-amd64:v1.7.1 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/kubenetes/etcd-amd64:3.0.17
docker tag ccr.ccs.tencentyun.com/kubenetes/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker pull ccr.ccs.tencentyun.com/kubenetes/kube-scheduler-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kube-scheduler-amd64:v1.7.1 gcr.io/google_containers/kube-scheduler-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/kubenetes/kube-apiserver-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kube-apiserver-amd64:v1.7.1 gcr.io/google_containers/kube-apiserver-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/kubenetes/pause-amd64:3.0
docker tag ccr.ccs.tencentyun.com/kubenetes/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull ccr.ccs.tencentyun.com/kubenetes/kube-proxy-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kube-proxy-amd64:v1.7.1 gcr.io/google_containers/kube-proxy-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/kubenetes/k8s-dns-kube-dns-amd64:1.14.4
docker tag ccr.ccs.tencentyun.com/kubenetes/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
docker pull ccr.ccs.tencentyun.com/kubenetes/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker tag ccr.ccs.tencentyun.com/kubenetes/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker pull ccr.ccs.tencentyun.com/kubenetes/k8s-dns-sidecar-amd64:1.14.4
docker tag ccr.ccs.tencentyun.com/kubenetes/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
docker pull ccr.ccs.tencentyun.com/calico/node:v2.4.1
docker tag ccr.ccs.tencentyun.com/calico/node:v2.4.1 quay.io/calico/node:v2.4.1
docker pull ccr.ccs.tencentyun.com/calico/cni:v1.10.0
docker tag ccr.ccs.tencentyun.com/calico/cni:v1.10.0 quay.io/calico/cni:v1.10.0
docker pull ccr.ccs.tencentyun.com/calico/kube-policy-controller:v0.7.0
docker tag ccr.ccs.tencentyun.com/calico/kube-policy-controller:v0.7.0 quay.io/calico/kube-policy-controller:v0.7.0
docker pull ccr.ccs.tencentyun.com/calico/etcd:v3.1.10
docker tag ccr.ccs.tencentyun.com/calico/etcd:v3.1.10 quay.io/calico/etcd:v3.1.10
docker pull ccr.ccs.tencentyun.com/kubenetes/kubernetes-dashboard-init-amd64:v1.0.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kubernetes-dashboard-init-amd64:v1.0.1 gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
docker pull ccr.ccs.tencentyun.com/kubenetes/kubernetes-dashboard-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kubernetes-dashboard-amd64:v1.7.1 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-grafana-amd64:v4.4.3
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-amd64:v1.4.0
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-amd64:v1.4.0 gcr.io/google_containers/heapster-amd64:v1.4.0
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-influxdb-amd64:v1.3.3
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
docker pull ccr.ccs.tencentyun.com/coreos/flannel:v0.9.0-amd64
docker tag ccr.ccs.tencentyun.com/coreos/flannel:v0.9.0-amd64 quay.io/coreos/flannel:v0.9.0-amd64
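The pull/tag pairs above are mechanical, so they can be driven from a list. A sketch that only prints the commands for the gcr.io-hosted images (abbreviated list; the calico/flannel images tag to quay.io instead and can be handled with a second list); pipe the output to sh to actually run it:

```shell
# Generate the docker pull/tag command pairs from an image list.
mirror=ccr.ccs.tencentyun.com/kubenetes      # the registry path really is spelled "kubenetes"
upstream=gcr.io/google_containers
for img in kube-proxy-amd64:v1.7.1 etcd-amd64:3.0.17 pause-amd64:3.0; do
  echo "docker pull $mirror/$img"
  echo "docker tag $mirror/$img $upstream/$img"
done
```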
Install the dashboard (kubernetes-dashboard)
To make the dashboard reachable over the public network, the Service definition in the dashboard manifest needs to be modified to expose a port on the node.
Original excerpt:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Modified excerpt (30000 is the exposed port):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
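As an alternative to editing the manifest, the already-deployed Service could be switched in place (a sketch I have not verified on this cluster) with kubectl -n kube-system patch svc kubernetes-dashboard -p followed by a patch that mirrors the edit above:

```
{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8443, "nodePort": 30000}]}}
```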
Visiting https://{server-ip}:30000 then asks for a login. The only method I have managed to get working is token login; none of the other login methods from the tutorials I read ever succeeded for me.
We need to create a token. Save the following as a.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
Apply it, then fetch the token:
kubectl apply -f a.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubectl describe -n kube-system secret kubernetes-dashboard-admin-token-{xxx}
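The token line can also be pulled out of the describe output mechanically. A sketch run against a canned sample of that output (an assumption standing in for the real command; in practice, pipe the kubectl describe output straight into the sed):

```shell
# `kubectl describe secret` prints a line like `token:      <jwt>`;
# this sed extracts just the value.
describe_out='Name:        kubernetes-dashboard-admin-token-abc12
Type:        kubernetes.io/service-account-token

token:       eyJhbGciOiJSUzI1NiJ9.sample-token-body'
printf '%s\n' "$describe_out" | sed -n 's/^token:[[:space:]]*//p'
```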
Set up HPA (Horizontal Pod Autoscaling)
wget https://github.com/kubernetes/heapster/archive/master.zip
unzip master.zip
cd heapster-master/
# In heapster-master/deploy/kube-config/influxdb/heapster.yaml, set the source to
# - --source=kubernetes
# this line alone is enough for Heapster to connect to the apiserver
/bin/bash deploy/kube.sh start
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=system:serviceaccount:kube-system:heapster
HPA requires at least one node server. Also check whether the started containers log anything like i/o timeout; whenever that appeared, the only fix I found was to reinstall the node server's OS and start over.
Node server installation and configuration:
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux
setenforce 0
# set SELINUX=disabled in /etc/selinux/config
vi /etc/selinux/config
SELINUX=disabled
Install Docker
yum install epel-release -y
yum install -y docker
Install the Kubernetes packages
# add a Kubernetes yum repo
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
yum install -y kubectl kubelet kubeadm
Edit the kubelet systemd drop-in (same change as on the master)
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Original line:
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Modified line:
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --cgroup-driver=systemd"
Run systemctl daemon-reload afterwards.
Start Docker
service docker start
Set the hostname (it must not contain underscores or other special characters)
hostnamectl --static set-hostname node1
Special handling; without these settings the join will fail:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
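These echo commands do not survive a reboot. A persistent variant (assuming standard CentOS 7 sysctl.d handling) is to put the settings in a file such as /etc/sysctl.d/k8s.conf and then run sysctl --system:

```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```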
Run the join command (use the token and address from your own kubeadm init output):
kubeadm join --token e7323f.8d87d7056b704900 10.135.10.97:6443
service docker restart
One more crucial point: do not run systemctl start kubelet on the node before joining. During my testing I ran it first, and kubeadm join then failed; after deleting the k8s directories the node did join the cluster, but HPA kept erroring with dial tcp xxx : i/o timeout, and I ended up reinstalling the system.
Note: as on the master, the images below can be pulled in advance:
docker pull ccr.ccs.tencentyun.com/kubenetes/pause-amd64:3.0
docker tag ccr.ccs.tencentyun.com/kubenetes/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull ccr.ccs.tencentyun.com/kubenetes/kube-proxy-amd64:v1.7.1
docker tag ccr.ccs.tencentyun.com/kubenetes/kube-proxy-amd64:v1.7.1 gcr.io/google_containers/kube-proxy-amd64:v1.7.1
docker pull ccr.ccs.tencentyun.com/calico/cni:v1.10.0
docker tag ccr.ccs.tencentyun.com/calico/cni:v1.10.0 quay.io/calico/cni:v1.10.0
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-grafana-amd64:v4.4.3
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-amd64:v1.4.0
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-amd64:v1.4.0 gcr.io/google_containers/heapster-amd64:v1.4.0
docker pull ccr.ccs.tencentyun.com/kubenetes/heapster-influxdb-amd64:v1.3.3
docker tag ccr.ccs.tencentyun.com/kubenetes/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
docker pull ccr.ccs.tencentyun.com/calico/node:v2.4.1
docker tag ccr.ccs.tencentyun.com/calico/node:v2.4.1 quay.io/calico/node:v2.4.1
docker pull ccr.ccs.tencentyun.com/coreos/flannel:v0.9.0-amd64
docker tag ccr.ccs.tencentyun.com/coreos/flannel:v0.9.0-amd64 quay.io/coreos/flannel:v0.9.0-amd64
This article was first published at http://blog.yubangweb.com/centos7-an-zhuang-kubernetes-1-7-1/ ; please credit the source when reposting.