Problem Description
After getting an overview of the KubeEdge framework, a few details about the mapper are still unclear:
- For multiple devices of the same type, is a single mapper shared, or is a separate mapper instance started per device?
- How does the cloud deliver the device profile to the mapper, and how are profile updates pushed down?
What the Official Documentation Says
The idea behind using config map to store device properties and visitors is that these metadata are only required by the mapper applications running on the edge node in order to connect to the device and collect data. Mappers, if run as containers, can load these properties as config maps. Any additions, deletions or updates to properties, visitors etc. in the cloud are watched upon by the downstream controller and config maps are updated in etcd. The existing edge controller already has the mechanism to watch on config map updates and push them to the edge node. A mapper application can get these updates and then adjust the data collection process. A separate design proposal can be prepared to illustrate the details of how mappers can leverage these config maps.
If the mapper wants to discover what properties a device supports, it can get the model information from the device instance. Also, it can get the protocol information to connect to the device from the device instance. Once it has access to the device model, it can get the properties supported by the device. In order to access the property, the mapper needs to get the corresponding visitor information. This can be retrieved from the propertyVisitors list. Finally, using the visitorConfig, the mapper can read/write the data associated with the property.
Below is the ConfigMap example given in the official documentation (both the ConfigMap name and the protocol instance name modbus-rtu-01 are generated by the device controller):
apiVersion: v1
kind: ConfigMap
metadata:
  name: device-profile-config-01   # generated by the device controller
  namespace: foo
data:
  deviceProfile.json: |-
    {
      "deviceInstances": [
        {
          "id": "1",
          "name": "device1",
          "protocol": "modbus-rtu-01",
          "model": "SensorTagModel"
        }
      ],
      "deviceModels": [
        {
          "name": "SensorTagModel",
          "description": "TI Simplelink SensorTag Device Attributes Model",
          "properties": [
            {
              "name": "temperature",
              "datatype": "int",
              "accessMode": "r",
              "unit": "Degree Celsius",
              "maximum": "100"
            },
            {
              "name": "temperature-enable",
              "datatype": "string",
              "accessMode": "rw",
              "defaultValue": "OFF"
            }
          ]
        }
      ],
      "protocols": [
        {
          "name": "modbus-rtu-01",
          "protocol": "modbus-rtu",
          "protocolConfig": {
            "serialPort": "1",
            "baudRate": "115200",
            "dataBits": "8",
            "parity": "even",
            "stopBits": "1",
            "slaveID": "1"
          }
        }
      ],
      "propertyVisitors": [
        {
          "name": "temperature",
          "propertyName": "temperature",
          "modelName": "SensorTagModel",
          "protocol": "modbus-rtu",
          "visitorConfig": {
            "register": "CoilRegister",
            "offset": "2",
            "limit": "1",
            "scale": "1.0",
            "isSwap": "true",
            "isRegisterSwap": "true"
          }
        },
        {
          "name": "temperatureEnable",
          "propertyName": "temperature-enable",
          "modelName": "SensorTagModel",
          "protocol": "modbus-rtu",
          "visitorConfig": {
            "register": "DiscreteInputRegister",
            "offset": "3",
            "limit": "1",
            "scale": "1.0",
            "isSwap": "true",
            "isRegisterSwap": "true"
          }
        }
      ]
    }
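To make the lookup chain described in the quoted passage concrete, here is a minimal sketch of resolving a property's visitor from the profile above. The function name and shape are mine, not the actual mapper code; it just walks instance -> model -> propertyVisitors as the documentation describes:

// Hypothetical helper: resolve everything needed to access one property.
function resolveVisitor(profile, instanceId, propertyName) {
    // The device instance points at its model and its protocol instance.
    const instance = profile.deviceInstances.find((ins) => ins.id === instanceId);
    const model = profile.deviceModels.find((mod) => mod.name === instance.model);
    // The model lists the supported properties.
    const property = model.properties.find((p) => p.name === propertyName);
    // The visitor is matched by model name plus property name.
    const visitor = profile.propertyVisitors.find(
        (v) => v.modelName === model.name && v.propertyName === propertyName
    );
    // The protocol instance carries the connection parameters.
    const protocol = profile.protocols.find((p) => p.name === instance.protocol);
    return { property, protocol, visitorConfig: visitor.visitorConfig };
}

// For the example profile, resolveVisitor(profile, '1', 'temperature') yields
// the modbus-rtu protocolConfig plus the CoilRegister visitorConfig.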
From the official description, the cloud passes the device profile to the mapper through a ConfigMap.
In the official ConfigMap example, deviceInstances, deviceModels, and the other sections are all lists, which means one ConfigMap can describe multiple device profiles. This is also mentioned in the official Scalability discussion:
Currently, we have only one config map per node which stores all the device instances, device models, protocols and visitors for all the devices connected to the edge node. Mappers running on an edge node managing different devices now need to access one global configmap in order to extract information about the device properties and visitors. What should be the best way to partition a monolithic config map into smaller config maps? Should the partitioning be based on the protocol type or based on the device model?
Twin
Note that the twin section of the device instance is not put into the ConfigMap. As a result, when the mapper reports a property value, it cannot consider whether that value belongs to a twin; a filtering step is missing.
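For illustration, if the twin list were available in the profile, the missing filtering step could look like the sketch below. This is entirely hypothetical code for a step that does not exist today; it assumes a twins field of the form [{propertyName: 'temperature'}, ...] on the instance:

// Hypothetical: only report properties that are declared as twins.
// The `twins` field is assumed; it is NOT present in the real ConfigMap.
function filterTwinProperties(instance, reportedValues) {
    const twinNames = new Set((instance.twins || []).map((t) => t.propertyName));
    return Object.keys(reportedValues)
        .filter((name) => twinNames.has(name))
        .reduce((acc, name) => {
            acc[name] = reportedValues[name];
            return acc;
        }, {});
}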
Mapper Code Analysis
Dockerfile
# Copy the mapper source, its config, and helper scripts into the image
COPY src/ /opt/src
COPY conf/ /opt/src/conf
COPY scripts/ /opt/scripts
Here we can see that the src directory is placed under /opt/src.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: modbus-device-mapper-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: modbus-mapper
  template:
    metadata:
      labels:
        app: modbus-mapper
    spec:
      hostNetwork: true
      containers:
        - name: modbus-mapper-container
          image: <your_dockerhub_username>/modbus_mapper:v1.0
          env:
            - name: CONNECTOR_MQTT_PORT
              value: "1883"
            - name: CONNECTOR_MQTT_IP
              value: 127.0.0.1
            - name: CONNECTOR_DPL_NAME
              value: dpl/deviceProfile.json
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          volumeMounts:
            - name: dpl-config-volume
              mountPath: /opt/src/dpl
      nodeSelector:
        modbus: "true"
      volumes:
        - name: dpl-config-volume
          configMap:
            name: device-profile-config-<edge_node_name>
      restartPolicy: Always
Note the volumes and volumeMounts configuration: the dpl-config-volume ConfigMap is mounted at /opt/src/dpl. Combined with the key deviceProfile.json in the ConfigMap, this means EdgeCore generates and keeps updating the deviceProfile.json file under the dpl directory.
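A quick way to see this wiring from inside the container (a sketch; the paths come from the deployment above, nothing else is assumed):

// Kubernetes projects the ConfigMap key deviceProfile.json as a file
// under the volume mountPath, so the mapper can read it directly.
const fs = require('fs');
const path = require('path');

const dplPath = path.join('/opt/src', 'dpl', 'deviceProfile.json'); // mountPath + key
const profile = JSON.parse(fs.readFileSync(dplPath, 'utf8'));
console.log(profile.deviceInstances.map((ins) => ins.name)); // e.g. [ 'device1' ]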
src/index.js
function(callback) {
    WatchFiles.loadDpl(options.dpl_name, (devInsMap, devModMap, devProMap, modVistrMap)=>{
        devIns = devInsMap;
        devMod = devModMap;
        devPro = devProMap;
        modVistr = modVistrMap;
        callback();
    });
},
options.dpl_name is set to dpl/deviceProfile.json, so modbus_mapper loads deviceProfile.json when it initializes.
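loadDpl itself is essentially a parse-and-index step. Below is a simplified sketch of what it plausibly does, inferred only from the callback signature; the exact indexing keys are an assumption, not copied from the source:

// Sketch: parse the profile and index each section for later lookups.
// Keying by id/name is an assumption based on the callback signature.
const fs = require('fs');

function loadDpl(dplName, callback) {
    const profile = JSON.parse(fs.readFileSync(dplName, 'utf8'));
    const devInsMap = new Map(profile.deviceInstances.map((i) => [i.id, i]));
    const devModMap = new Map(profile.deviceModels.map((m) => [m.name, m]));
    const devProMap = new Map(profile.protocols.map((p) => [p.name, p]));
    const modVistrMap = new Map(profile.propertyVisitors.map((v) => [v.name, v]));
    callback(devInsMap, devModMap, devProMap, modVistrMap);
}

The next snippet from src/index.js wires this same loading logic to a file watcher: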
WatchFiles.watchChange(path.join(__dirname, 'dpl'), ()=>{
    async.series([
        function(callback) {
            WatchFiles.loadDpl(options.dpl_name, (devInsMap, devModMap, devProMap, modVistrMap)=>{
                devIns = devInsMap;
                devMod = devModMap;
                devPro = devProMap;
                modVistr = modVistrMap;
                callback();
            });
        },
        ...
Besides the one-off read of deviceProfile.json at startup, a file watcher is also created to detect updates to deviceProfile.json (the file is deleted and re-added on update). When an update is detected, the local cache is refreshed with the new ConfigMap contents, and a new connection to the MQTT broker is established to listen for and handle messages.
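Such a watchChange helper can be built on Node's fs.watch. A hedged sketch follows; the debounce is my addition, motivated by the fact that Kubernetes updates a mounted ConfigMap by atomically swapping a symlink, which surfaces as a burst of rename/change events on the directory:

// Sketch of a watchChange-style helper (not the actual mapper code).
const fs = require('fs');

function watchChange(dir, onChange) {
    let timer = null;
    fs.watch(dir, () => {
        // Collapse the burst of events from the symlink swap into one reload.
        clearTimeout(timer);
        timer = setTimeout(onChange, 500);
    });
}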
How messages are handled is covered in 《KubeEdge分析-mapper与deviceTwin交互流程》 (KubeEdge analysis: the mapper-DeviceTwin interaction flow).