
## Hands-On Tutorial: Deploying a Containerized Kubernetes Cluster

### 1. Core Concepts of Containerized Kubernetes Deployment

Containerized deployment is the core pattern of modern application delivery, and Kubernetes (K8s), the de facto standard for container orchestration, has become the foundation of cloud-native applications. According to the CNCF 2023 annual report, **78%** of organizations run Kubernetes in production, a 320% increase over five years. A Kubernetes cluster manages the lifecycle of containerized applications through a declarative API, providing:

- Auto-scaling

- Service discovery

- Rolling updates

- Self-healing

The deployment architecture consists of a control plane and worker nodes. The control plane components are listed below; once the cluster is up, kubeadm runs them as static pods, as shown in the sketch after this list:

1. kube-apiserver: the API entry point of the cluster

2. etcd: distributed key-value store

3. kube-scheduler: makes Pod scheduling decisions

4. kube-controller-manager: runs the controller processes
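
As a quick orientation, the sketch below lists these control-plane components once the cluster built in this tutorial is running; with kubeadm they are deployed as static pods in the kube-system namespace.

```bash
# List the control-plane static pods on the master node (run after Section 3.2)
kubectl get pods -n kube-system -o wide | grep -E 'kube-apiserver|etcd|kube-scheduler|kube-controller-manager'
```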

### 2. Environment Preparation and System Configuration

#### 2.1 Hardware and Operating System Requirements

A production-grade Kubernetes cluster requires:

- Control plane nodes: 2 CPU cores / 4 GB RAM / 20 GB storage (3 nodes recommended for HA)

- Worker nodes: scale out according to workload

- Operating system: Ubuntu 22.04 LTS or CentOS 7+ (the commands in this tutorial assume Ubuntu/apt)

- Network: full connectivity between all nodes; swap disabled

```bash
# Run on every node: turn off swap and disable the firewall
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)/#\1/g' /etc/fstab
sudo systemctl stop firewalld && sudo systemctl disable firewalld  # on Ubuntu use: sudo ufw disable

# Load the br_netfilter module and set the required kernel parameters
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
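
To confirm the settings took effect, a quick check (expected results are noted in the comments):

```bash
lsmod | grep br_netfilter                   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # should print: ... = 1
sysctl net.ipv4.ip_forward                  # should print: ... = 1
swapon --show                               # should print nothing (swap is off)
```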

#### 2.2 Installing the Container Runtime

Kubernetes supports several container runtimes, including containerd and CRI-O (Docker Engine requires the cri-dockerd shim since dockershim was removed in Kubernetes 1.24). containerd is the most common choice thanks to its low overhead:

```bash
# Install containerd
sudo apt-get update
sudo apt-get install -y containerd

# Generate the default configuration
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

# Switch to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```
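
A quick sanity check that the runtime is up before moving on:

```bash
sudo systemctl is-active containerd   # should print: active
sudo ctr version                      # client and server versions should both be reported
```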

### 3. Installing the Core Kubernetes Components

#### 3.1 Configuring the Kubernetes Package Repository

```bash
# Add the Kubernetes APT repository
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings  # present by default on Ubuntu 22.04; required on older releases
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet/kubeadm/kubectl and pin their versions
sudo apt-get update
sudo apt-get install -y kubelet=1.29.2-1.1 kubeadm=1.29.2-1.1 kubectl=1.29.2-1.1
sudo apt-mark hold kubelet kubeadm kubectl
```
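
Verify that all three tools were installed at the pinned version:

```bash
kubeadm version -o short   # expected: v1.29.2
kubectl version --client   # expected client version: v1.29.2
kubelet --version          # expected: Kubernetes v1.29.2
```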

#### 3.2 Initializing the Control Plane

```bash
# Initialize the control plane (run on the master node only)
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.100 \
  --kubernetes-version=v1.29.2
# 10.244.0.0/16 matches Flannel's default Pod CIDR; 192.168.1.100 is this tutorial's master IP

# Sample output
Your Kubernetes control-plane has initialized successfully!
...
kubeadm join 192.168.1.100:6443 --token xyz123 \
    --discovery-token-ca-cert-hash sha256:abcd1234
```

Configure kubectl access:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
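
Confirm that kubectl can reach the new control plane. The lone master will show NotReady until the CNI plugin is installed in Section 4.1, which is expected at this point:

```bash
kubectl cluster-info   # prints the API server and CoreDNS endpoints
kubectl get nodes      # master01 appears, NotReady until the network plugin is deployed
```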

### 4. Network and Storage Configuration

#### 4.1 Installing a CNI Network Plugin

Flannel is a lightweight CNI plugin and a good choice for getting started:

```bash
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```

Verify that the Flannel pods are running:

```bash
kubectl get pods -n kube-flannel   # recent manifests use the kube-flannel namespace (older ones used kube-system)
# Expected output
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-abc12   1/1     Running   0          2m
```

#### 4.2 Configuring Persistent Storage

Use NFS for dynamic storage provisioning (this assumes an NFS server at 192.168.1.200 already exports /data/nfs):

```bash
# Install the NFS client on every node
sudo apt-get install -y nfs-common

# Deploy the NFS provisioner (requires Helm to be installed)
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm repo update
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.200 \
  --set nfs.path=/data/nfs
```

Create a StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: cluster.local/nfs-provisioner
reclaimPolicy: Retain
```
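
To confirm dynamic provisioning works end to end, a minimal sketch: create a throwaway PVC against the nfs-sc class and check that it binds (the name nfs-test-pvc is illustrative):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-sc
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc nfs-test-pvc   # STATUS should change to Bound within a few seconds
```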

### 5. Joining Worker Nodes and Validating the Cluster

#### 5.1 Joining Nodes to the Cluster

On each worker node, run the kubeadm join command printed by kubeadm init:

```bash
sudo kubeadm join 192.168.1.100:6443 \
  --token xyz123 \
  --discovery-token-ca-cert-hash sha256:abcd1234
```
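
Join tokens expire after 24 hours by default. If the original token is no longer valid, regenerate a complete join command on the control-plane node:

```bash
# Creates a new token and prints the full `kubeadm join ...` command to run on workers
kubeadm token create --print-join-command
```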

Verify the node status:

```bash
kubectl get nodes
# Healthy nodes report a Ready status
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   10m   v1.29.2
worker01   Ready    <none>          2m    v1.29.2
```

#### 5.2 Testing Cluster Functionality

Deploy an Nginx test application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
```
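
Save the manifest above to a file (the name nginx-test.yaml is illustrative), apply it, and hit the NodePort from any machine that can reach the nodes:

```bash
kubectl apply -f nginx-test.yaml
kubectl get pods -l app=nginx        # all 3 replicas should reach Running
curl -I http://192.168.1.100:32567   # substitute a real node IP and the NodePort reported by the Service
```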

Verify the Service:

```bash
kubectl get svc nginx-service
# Sample output
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.108.27.142   <none>        80:32567/TCP   1m
```

### 6. Cluster Operations and Optimization

#### 6.1 Key Monitoring Metrics

Use kubectl top for real-time monitoring. Note that it depends on the metrics-server add-on, which kubeadm does not install by default (see the install sketch after the output below):

```bash
kubectl top nodes
# Sample output
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker01   110m         5%     1456Mi          38%
```
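
A minimal install sketch for metrics-server, which also feeds the HPA configured in Section 6.2:

```bash
# Deploy metrics-server from its official release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# In lab clusters whose kubelets use self-signed certificates, you may additionally need to add
# the --kubelet-insecure-tls flag to the metrics-server container args.
```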

Key metrics to watch:

- Node CPU/memory utilization (alert threshold: 80%)

- Pod restart count (abnormal: more than 5 restarts per hour)

- Network packet loss rate (threshold: below 0.1%)

#### 6.2 Configuring Autoscaling

Configure an HPA (HorizontalPodAutoscaler) for automatic scaling. Utilization-based scaling requires the target Deployment to declare CPU resource requests and metrics-server to be running:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-test
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```
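
Apply the manifest and watch the autoscaler (the file name nginx-hpa.yaml is illustrative):

```bash
kubectl apply -f nginx-hpa.yaml
kubectl get hpa nginx-hpa   # TARGETS shows current vs. target CPU utilization once metrics arrive
```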

### 7. Security Hardening Best Practices

1. **RBAC access control**

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
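
A Role has no effect until it is bound to a subject. A minimal sketch that binds pod-reader to a hypothetical service account named app-sa in the default namespace:

```bash
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --serviceaccount=default:app-sa \
  --namespace=default
```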

2. **Pod security admission**

```bash
# PodSecurityPolicy (PSP) was removed in Kubernetes 1.25; on 1.29 use the built-in
# Pod Security Admission controller by labelling namespaces instead
kubectl label namespace default pod-security.kubernetes.io/enforce=baseline
```

3. **Network policy isolation** (note: Flannel alone does not enforce NetworkPolicy; use a policy-capable CNI such as Calico if you rely on this)

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-isolation
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api
```

### 8. Troubleshooting Guide

#### 8.1 Diagnosing Common Problems

- **Node NotReady**:

```bash
journalctl -u kubelet -f            # follow the kubelet logs
kubectl describe node <node-name>   # check the node's events
```

- **Pod fails to start**:

```bash
kubectl logs <pod-name> --previous   # logs from the previous (crashed) container
kubectl describe pod <pod-name>      # events: image pull errors, scheduling failures, probe issues
```

- **Network connectivity problems**:

```bash
kubectl run -it --rm debug-tool --image=nicolaka/netshoot -- /bin/bash
# Inside the debug pod:
nslookup kubernetes.default
curl -I http://nginx-service
```

#### 8.2 Key Log Locations

| Component | Log location (kubeadm cluster) |
|----------------|-------------------------------------------------------------|
| kubelet | journalctl -u kubelet |
| kube-apiserver | kubectl logs -n kube-system kube-apiserver-\<node\> (files under /var/log/pods/) |
| kube-proxy | kubectl logs -n kube-system kube-proxy-\<id\> |
| containerd | journalctl -u containerd |

> Note: for production environments, deploy Loki + Promtail + Grafana for centralized log management.

This tutorial covered the full workflow of deploying a containerized Kubernetes cluster. For real production deployments, also consider:

1. Generating a customized configuration file with kubeadm config

2. Backing up etcd for disaster recovery (see the sketch after this list)

3. Integrating a Prometheus + Alertmanager monitoring stack

4. Adopting a GitOps workflow (e.g. Argo CD) for continuous deployment
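
For item 2, a minimal backup sketch assuming the kubeadm-default etcd static pod and its standard certificate paths (etcdctl must be installed separately; the /backup path is illustrative):

```bash
sudo ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```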

---

**Tags**:

Kubernetes deployment, containerized cluster, kubeadm tutorial, container orchestration, cloud-native architecture, cluster operations, DevOps practice, container networking, persistent storage
