Cluster architecture
hostname | OS version | IP | CPU | mem |
---|---|---|---|---|
master | CentOS 7.8 | 192.168.11.186 | 6 | 8 GB |
node1 | CentOS 7.8 | 192.168.11.184 | 4 | 4 GB |
node2 | CentOS 7.8 | 192.168.11.183 | 4 | 4 GB |
Pre-deployment node preparation
1. Set up passwordless SSH authentication (ssh-copy-id takes one destination at a time)
$ ssh-keygen
$ ssh-copy-id node1
$ ssh-copy-id node2
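If you want this step scripted rather than interactive, ssh-keygen accepts an empty passphrase and an explicit output path. A minimal sketch, using a temporary path for illustration (on the real master you would keep the default `~/.ssh/id_rsa` and run the real `ssh-copy-id` instead of the echo):

```shell
# Create a key pair non-interactively: -N '' sets an empty passphrase,
# -f picks the output path, -q silences the banner.
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q

# Loop over the workers instead of listing them one by one.
for n in node1 node2; do
  echo "would run: ssh-copy-id $n"   # replace echo with the real command
done
```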
2. Edit /etc/hosts and distribute it to the worker nodes
192.168.11.186 master
192.168.11.184 node1
192.168.11.183 node2
$ scp /etc/hosts node1:/etc/
$ scp /etc/hosts node2:/etc/
3. Edit /etc/sysctl.d/k8s.conf so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
$ modprobe br_netfilter   # the bridge-nf settings require this kernel module
$ sysctl --system
$ scp /etc/sysctl.d/k8s.conf node1:/etc/sysctl.d/
$ scp /etc/sysctl.d/k8s.conf node2:/etc/sysctl.d/
$ ssh node1 "modprobe br_netfilter && sysctl --system"
$ ssh node2 "modprobe br_netfilter && sysctl --system"
4. Stop and disable the firewall (the remote command must be quoted, otherwise the second half runs locally)
for i in `cat /etc/hosts|awk '{print $2}'`;
do
ssh $i "systemctl stop firewalld; systemctl disable firewalld"
done
5. Disable SELinux
for i in `cat /etc/hosts|awk '{print $2}'`;
do
ssh $i setenforce 0
ssh $i sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
done
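Both loops above pull their target hostnames out of /etc/hosts with awk. A self-contained sketch of that extraction, using a stand-in file (note that a real /etc/hosts usually also contains localhost lines, which you would want to filter out before ssh-ing to each entry):

```shell
# Stand-in for /etc/hosts containing only the three cluster entries.
cat > /tmp/hosts.example <<'EOF'
192.168.11.186 master
192.168.11.184 node1
192.168.11.183 node2
EOF

# The second whitespace-separated column is the hostname.
awk '{print $2}' /tmp/hosts.example
# → master, node1, node2 (one per line)
```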
Deploy Docker
# The following steps are required on all three nodes
1. Install prerequisite tools
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker repository
yum-config-manager --add-repo "http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
3. Enable the repository (the stable channel is already enabled by default; nightly pulls pre-release builds)
yum-config-manager --enable docker-ce-nightly
4. Install Docker
yum install docker-ce docker-ce-cli containerd.io -y
5. Enable and start Docker
systemctl enable docker && systemctl start docker
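The kubelet installed later defaults to the systemd cgroup driver (see the troubleshooting section at the end), so it is worth checking which driver Docker came up with. A sketch, written out as a helper script here because the command needs a running Docker daemon to produce output:

```shell
# Helper (run on a node where docker is up): print Docker's cgroup driver.
# An answer of "cgroupfs" will clash with the kubelet's default "systemd".
cat > /tmp/check-cgroup-driver.sh <<'EOF'
#!/bin/sh
docker info --format '{{.CgroupDriver}}'
EOF
chmod +x /tmp/check-cgroup-driver.sh
```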
Deploy Kubernetes
# The following steps are required on all three nodes
1. Add the repository
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2. Install the tools (pin the versions if you want them to match the v1.23.1 init below, e.g. kubeadm-1.23.1)
yum install kubeadm kubelet kubectl -y
3. Enable kubelet at boot
systemctl enable kubelet
Initialize Kubernetes on the master
$ kubeadm init --apiserver-advertise-address=192.168.11.186 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.23.1 \
    --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
# The output ends with the following
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.186:6443 --token 0fq2s6********* \
--discovery-token-ca-cert-hash sha256:aa861aabecf********************
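The token printed by kubeadm init expires after 24 hours by default; if it has lapsed by the time you join a node, a fresh join command can be generated on the master. Written out as a helper script here because the command needs a live control plane:

```shell
# Helper (run on the master): print a ready-to-paste kubeadm join command
# with a newly created token and the current CA cert hash.
cat > /tmp/print-join.sh <<'EOF'
#!/bin/sh
kubeadm token create --print-join-command
EOF
chmod +x /tmp/print-join.sh
```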
2. Deploy the flannel network plugin
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: network problems sometimes make this address unreachable;
# add the entry below to /etc/hosts, or download the manifest in advance
185.199.110.133 raw.githubusercontent.com
Join the worker nodes
$ kubeadm join 192.168.11.186:6443 --token 0fq2s6********* \
    --discovery-token-ca-cert-hash sha256:aa861aabecf********************
Check the cluster
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d8c4cb4d-6q2l8 1/1 Running 0 30m
kube-system coredns-6d8c4cb4d-vgtr4 1/1 Running 0 30m
kube-system etcd-master 1/1 Running 0 31m
kube-system kube-apiserver-master 1/1 Running 0 31m
kube-system kube-controller-manager-master 1/1 Running 0 31m
kube-system kube-flannel-ds-6s4mn 1/1 Running 0 27m
kube-system kube-flannel-ds-nbc8f 1/1 Running 0 27m
kube-system kube-flannel-ds-rpp6h 1/1 Running 0 27m
kube-system kube-proxy-4cfh2 1/1 Running 0 30m
kube-system kube-proxy-nfbgk 1/1 Running 0 30m
kube-system kube-proxy-vj8m4 1/1 Running 0 30m
kube-system kube-scheduler-master 1/1 Running 0 31m
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 31m v1.23.1
node1 Ready <none> 31m v1.23.1
node2 Ready <none> 32m v1.23.1
$ kubectl get namespace
NAME STATUS AGE
default Active 38m
kube-node-lease Active 38m
kube-public Active 38m
kube-system Active 38m
Testing
1. Create a pod
$ kubectl create deployment nginx --image=nginx
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-85b98978db-njf42 1/1 Running 0 32s
2. Scale the deployment
$ kubectl scale deployment nginx --replicas=3
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-85b98978db-88pmq 1/1 Running 0 27s
nginx-85b98978db-ksdzb 1/1 Running 0 27s
nginx-85b98978db-njf42 1/1 Running 0 8m49s
3. Expose the service
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48m
nginx NodePort 10.96.72.198 <none> 80:32335/TCP 5m2s
$ curl 10.96.72.198   # ClusterIP, reachable from cluster nodes; from outside use http://192.168.11.186:32335
<h1>Welcome to nginx!</h1>
4. Common commands
kubectl cluster-info              # show cluster info
kubectl api-resources             # list API resource types
kubectl get nodes                 # list nodes
kubectl get pods                  # list pods in the default namespace
kubectl get pods -n kube-system   # list pods in the kube-system namespace
Problems encountered
Error message
"Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Cause
Docker and the kubelet are configured with different cgroup drivers. Fix it by setting Docker's driver to systemd in /etc/docker/daemon.json:
# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://yours.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
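A typo in daemon.json will stop the Docker daemon from starting at all, so it is worth validating the file as JSON before restarting. A sketch against a temporary copy, checked with python3 (the registry mirror URL is the placeholder from the original config):

```shell
# Write the intended daemon.json to a temp path and check that it parses.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://yours.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```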
Restart Docker, run kubeadm reset to clear the previous state, then repeat kubeadm init (master) and kubeadm join (workers).