A lightweight IoT development approach: deploying Shifu on K3s for a cloud-edge-device loop

Shifu is a Kubernetes-native IoT device virtualization framework. It aims to let IoT application developers virtualize, monitor, manage, and automate IoT devices in a plug-and-play way. In this article we deploy Shifu in a local cluster, connect IoT devices over MQTT, HTTP, and other protocols, and experience a container-based path for IoT application development.

Background

  • K3s is an open-source, lightweight Kubernetes distribution from SUSE. It can run in compute-constrained edge environments, making it a good fit for edge sites with IoT devices.
  • Shifu is a Kubernetes-native, open-source IoT development framework. It takes a distributed approach, virtualizing each IoT device as a structured digital twin and exposing the device's capabilities to upper-layer applications as Kubernetes services.

Overall architecture

(architecture diagram)

Implementation guide

Prerequisites:

Software:

(software requirements table)

Hardware:

(hardware requirements table)

Steps:

1. Deploy the WireGuard server on the server

a. Use the one-click install script:

https://github.com/angristan/wireguard-install

b. Run the following commands:

curl -O https://raw.githubusercontent.com/angristan/wireguard-install/master/wireguard-install.sh
chmod +x wireguard-install.sh
./wireguard-install.sh 

c. When prompted, enter the server's public IP and add clients as needed. Actual output is shown below; adjust the values for your environment:

root@localhost:~# ./wireguard-install.sh 
Welcome to the WireGuard installer!
The git repository is available at: https://github.com/angristan/wireguard-install
 
I need to ask you a few questions before starting the setup.
You can leave the default options and just press enter if you are ok with them.
 
IPv4 or IPv6 public address: 192.168.0.1 # change this to your public IP; you can get it with "curl ip.sb"
Public interface: ens5
WireGuard interface name: wg0
Server's WireGuard IPv4: 10.66.66.1 # IPv4 address of the server's WireGuard interface; the default is fine unless you have other needs
Server's WireGuard IPv6: fd42:42:42::1 # IPv6 address of the server's WireGuard interface; the default is fine unless you have other needs
Server's WireGuard port [1-65535]: 64191 # change this to your port; after choosing it, allow this UDP port in the host's firewall
First DNS resolver to use for the clients: 114.114.114.114
Second DNS resolver to use for the clients (optional): 119.29.29.29
 
Okay, that was all I needed. We are ready to setup your WireGuard server now.
.................................
(output omitted)
.................................
Tell me a name for the client.
The name must consist of alphanumeric character. It may also include an underscore or a dash and can't exceed 15 chars.
Client name: client1 # after installation you are prompted for a client name; pick any
Client's WireGuard IPv4: 10.66.66.2 # IPv4 address of the client's WireGuard interface; the default is fine unless you have other needs
Client's WireGuard IPv6: fd42:42:42::2 # IPv6 address of the client's WireGuard interface; the default is fine unless you have other needs
.................................
(output omitted)
.................................
It is also available in /home/ubuntu/wg0-client-client1.conf # configuration file generated for the worker node

d. Save the configuration file the script generates at the end, /home/ubuntu/wg0-client-client1.conf; it will be placed on the worker node later.
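One way to move the file over, assuming the worker is reachable via SSH (the address ubuntu@worker.example.com is a placeholder; substitute your worker's user and address):

```shell
# Copy the generated client config to the worker node.
scp /home/ubuntu/wg0-client-client1.conf ubuntu@worker.example.com:/tmp/

# On the worker, move it into place (step 3 expects /etc/wireguard/wg0.conf):
# sudo mv /tmp/wg0-client-client1.conf /etc/wireguard/wg0.conf
```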

e. Once the script finishes, the interface is up. You can check its status with wg show all:

root@localhost:~# wg show all
interface: wg0
  public key: adsdadhkaskdhadkjhs12312kl3j1l2o
  private key: (hidden)
  listening port: 64191
 
peer: adsdadhkaskdhadkjhs12312kl3j1l2odsada2
  preshared key: (hidden)
  allowed ips: 10.66.66.2/32, fd42:42:42::2/128

f. The server side is now configured. To generate more clients, just run ./wireguard-install.sh again and add them as needed.
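If the host runs a local firewall, the WireGuard UDP port chosen above must also be opened. A sketch using ufw with the example port 64191 (use firewall-cmd or your cloud provider's security group rules instead where applicable):

```shell
# Allow inbound WireGuard traffic (UDP) through ufw.
ufw allow 64191/udp

# Verify the rule was added.
ufw status | grep 64191
```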

2. Deploy the K3s server on the server

a. With step 1 complete, we can deploy K3s on the server over the WireGuard interface, with the following command:

curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_TOKEN=token INSTALL_K3S_EXEC="--advertise-address=10.66.66.1 --flannel-iface=wg0"  sh -

b. The configuration options:

i. K3S_TOKEN=token

Change this token as you see fit, but worker nodes must use the same token when they join.

ii. INSTALL_K3S_EXEC="--advertise-address=10.66.66.1 --flannel-iface=wg0"

Here we set two flags:

a. --advertise-address=10.66.66.1

i. Use the WireGuard interface's IP for API connections instead of the server's own IP.

b. --flannel-iface=wg0

i. Tell K3s's flannel component to use the wg0 interface.

c. Reference output of the command:

[INFO]  Finding release for channel stable
[INFO]  Using v1.24.4+k3s1 as release
[INFO]  Downloading hash rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
root@localhost:~#

d. If everything went well, run kubectl get pods -A; all pods should show Running (or Completed, for one-shot jobs):

~# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-b96499967-hs6bn                   1/1     Running     0          4m14s
kube-system   local-path-provisioner-7b7dc8d6f5-8szzd   1/1     Running     0          4m14s
kube-system   helm-install-traefik-crd-9bhdp            0/1     Completed   0          4m14s
kube-system   helm-install-traefik-h5q4h                0/1     Completed   1          4m14s
kube-system   metrics-server-668d979685-tlvzc           1/1     Running     0          4m14s
kube-system   svclb-traefik-99c87d41-cqcnb              2/2     Running     0          3m49s
kube-system   traefik-7cd4fcff68-b6cjj                  1/1     Running     0          3m49s

e. Check the master node's status with kubectl get nodes:

# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-37-138   Ready    control-plane,master   8m35s   v1.24.4+k3s1

f. The server-side K3s deployment is now complete.
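To confirm that --advertise-address took effect, you can check which address the API server endpoint advertises; it should be the WireGuard address rather than the server's LAN IP (exact output depends on your cluster):

```shell
# The kubernetes Service endpoint should point at 10.66.66.1:6443,
# confirming the API server is advertised on the WireGuard interface.
kubectl get endpoints kubernetes -n default
```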

3. Configure WireGuard on the worker node

Note: this tutorial uses a server running Ubuntu 20.04.5 LTS on ARM64 for the demonstration.

a. Update the package list and install resolvconf and wireguard:

apt-get update && apt-get install resolvconf wireguard -y

b. Put the following configuration in /etc/wireguard/wg0.conf

  1. Note: the last line of the generated file, AllowedIPs, defaults to 0.0.0.0/0,::/0; change it to the server's WireGuard subnet, i.e. 10.66.66.0/24
[Interface]
PrivateKey = casasdlaijo()(hjdsasdasdihasddad
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 114.114.114.114,119.29.29.29
 
[Peer]
PublicKey = asdasd21edawd3resaedserw3rawd
PresharedKey = dasda23e134e3edwadw3reqwda
Endpoint = 192.168.0.1:64191 # this should be the server's public IP and the open UDP port
AllowedIPs = 10.66.66.0/24 # note: this defaults to 0.0.0.0/0 and must be changed

c. Bring up the wg0 interface with the following command:

wg-quick up /etc/wireguard/wg0.conf 

d. Test connectivity with ping 10.66.66.1; if the ping succeeds, the tunnel works:

root@k3s:~# ping 10.66.66.1
PING 10.66.66.1 (10.66.66.1) 56(84) bytes of data.
64 bytes from 10.66.66.1: icmp_seq=1 ttl=64 time=12.9 ms
64 bytes from 10.66.66.1: icmp_seq=2 ttl=64 time=13.1 ms
64 bytes from 10.66.66.1: icmp_seq=3 ttl=64 time=18.9 ms
64 bytes from 10.66.66.1: icmp_seq=4 ttl=64 time=8.21 ms
64 bytes from 10.66.66.1: icmp_seq=5 ttl=64 time=13.3 ms
64 bytes from 10.66.66.1: icmp_seq=6 ttl=64 time=7.66 ms
^C
--- 10.66.66.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5316ms
rtt min/avg/max/mdev = 7.659/12.345/18.863/3.729 ms
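Note that wg-quick up only lasts until the next reboot. To bring wg0 up automatically at boot, the interface can be enabled as a systemd unit (standard with the wg-quick packaging on Ubuntu):

```shell
# Enable and start the wg0 tunnel as a persistent systemd service.
systemctl enable --now wg-quick@wg0

# Confirm the unit is active.
systemctl status wg-quick@wg0 --no-pager
```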

4. Configure the K3s agent on the worker node

a. Install k3s and join the cluster:

curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_TOKEN=token K3S_URL=https://10.66.66.1:6443 INSTALL_K3S_EXEC="--node-ip=10.66.66.3 --flannel-iface=wg0" sh -

b. The configuration options:

  1. K3S_TOKEN=token
    i. Change this to the token you chose when setting up the server.

  2. K3S_URL=https://10.66.66.1:6443
    i. The master node's address; here it is 10.66.66.1.

  3. INSTALL_K3S_EXEC="--node-ip=10.66.66.3 --flannel-iface=wg0"
    i. --node-ip=10.66.66.3 registers the node with its WireGuard address rather than the worker's own IP.
    ii. --flannel-iface=wg0 tells K3s's flannel component to use the wg0 interface.

c. Output of a successful run:

[INFO]  Finding release for channel stable
[INFO]  Using v1.24.4+k3s1 as release
[INFO]  Downloading hash rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/sha256sum-arm64.txt
[INFO]  Downloading binary rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/k3s-arm64
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
root@k3s:~# 

d. On the server, verify the join with kubectl get nodes; the node should show Ready:

# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-37-138   Ready    control-plane,master   24m     v1.24.4+k3s1
k3s                Ready    <none>                 2m52s   v1.24.4+k3s1
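You can additionally confirm that the worker registered with its WireGuard address rather than its LAN address:

```shell
# The INTERNAL-IP column for the k3s node should show the WireGuard
# address (10.66.66.3), matching the --node-ip flag used at install time.
kubectl get nodes -o wide
```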

5. Deploy Shifu with cloud-edge collaboration

a. Clone Shifu:

git clone https://gitee.com/edgenesis/shifu.git

i. Modify the image used by the controller (it may fail to pull inside mainland China):

  1. vim shifu/pkg/k8s/crd/install/shifu_install.yml

Change line 428 to:

image: bitnami/kube-rbac-proxy:latest
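Instead of editing the file by hand, the image can be swapped with sed. A sketch, assuming the original image on line 428 is gcr.io/kubebuilder/kube-rbac-proxy (verify against your copy of the manifest first, since the upstream image may differ):

```shell
# Rewrite the kube-rbac-proxy image reference in the install manifest.
sed -i 's|image: gcr.io/kubebuilder/kube-rbac-proxy.*|image: bitnami/kube-rbac-proxy:latest|' \
  shifu/pkg/k8s/crd/install/shifu_install.yml
```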

b. Install Shifu:

i. kubectl apply -f shifu/pkg/k8s/crd/install/shifu_install.yml

c. Label the k3s worker node:

i. kubectl label nodes k3s type=worker

d. Try running a Pod on the designated node, for example an nginx Pod:

kubectl  run nginx --image=nginx -n deviceshifu --overrides='{"spec": { "nodeSelector": {"type": "worker"}}}'
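The --overrides flag is convenient for a one-off test; for anything longer-lived, the same scheduling constraint is usually written as a manifest. A sketch (the name nginx-worker is made up for illustration):

```yaml
# Pod pinned to nodes labeled type=worker, in the deviceshifu namespace.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-worker
  namespace: deviceshifu
spec:
  nodeSelector:
    type: worker          # matches the label applied in step 5c
  containers:
    - name: nginx
      image: nginx
```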

e. Then with kubectl get pods -n deviceshifu -owide we can see the Pod is successfully running on the edge node k3s:

# kubectl get pods -n deviceshifu -owide
NAME    READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          42s   10.42.1.3   k3s    <none>           <none>