Starting tritonserver

Triton Inference Server is a machine-learning inference server from NVIDIA, used to deploy AI model services. The setup steps are as follows:

1 Install Docker

Install Docker Engine on CentOS

On an Amazon Linux 2 machine, Docker must be installed via amazon-linux-extras:

How to install Docker on Amazon Linux 2 - nixCraft (cyberciti.biz)

You may hit an error where the amazon_linux_extras Python module cannot be found by the current default Python.

This can be fixed by linking the module from Python 2's site-packages into the default Python; see:

「amazon-linux-extras」→「No module named amazon_linux_extras」になる場合の対処法 #AWS - Qiita

2 Install the NVIDIA container toolkit

This is what allows Docker containers to access GPU resources:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#configuring-docker

3 Pull the official NVIDIA tritonserver Docker image

The xx.yy-py3 tags support TensorFlow, PyTorch, TensorRT, ONNX, and OpenVINO models.
NVIDIA's official image catalog: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver/tags
Pull the image with Docker, e.g. the current latest version:
docker pull nvcr.io/nvidia/tritonserver:24.07-py3
After pulling, run
docker image ls
to list the downloaded images.

Usage

1 Prepare the model repository to deploy
Directory layout:
model_repository
|
+-- model_1_name
|   |
|   +-- config.pbtxt        # configuration file
|   +-- 1                   # one version directory of the model
|       |
|       +-- model.pt        # .pt, .onnx, TensorRT plans, ... are supported
|
+-- model_2_name
|   |
|   +-- config.pbtxt
|   +-- 1
|       |
|       +-- model.py        # Python scripts also work (python backend)
|       +-- other code that model.py depends on
|       ...
...
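The layout above can be scripted. A minimal sketch, assuming the model name simple_web used later in this post; the helper name make_model_repo is illustrative, not part of Triton:

```python
import os

def make_model_repo(root: str, model_name: str, version: str = "1") -> str:
    """Create the directory skeleton Triton expects for one model."""
    model_dir = os.path.join(root, model_name)
    version_dir = os.path.join(model_dir, version)
    # makedirs creates model_dir and version_dir in one call
    os.makedirs(version_dir, exist_ok=True)
    # config.pbtxt sits next to the version directories, not inside them
    config_path = os.path.join(model_dir, "config.pbtxt")
    if not os.path.exists(config_path):
        open(config_path, "w").close()
    return version_dir

# usage: the model file (model.pt / model.onnx / model.py) then goes into
# the returned version directory, e.g.
#   version_dir = make_model_repo("model_repository", "simple_web")
```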
model.py

# -*- coding: utf-8 -*-
import numpy as np
import triton_python_backend_utils as pb_utils
class TritonPythonModel:
    def __init__(self):
        self.app_name = 'TritonPythonModel'
        self.descript = 'test'

    async def execute(self, requests):
        responses = []
        for request in requests:
            text = pb_utils.get_input_tensor_by_name(request, "text").as_numpy()[0]
            text = text.decode("utf-8")
            output_wav_path = pb_utils.get_input_tensor_by_name(request, "output_wav_path").as_numpy()[0]
            output_wav_path = output_wav_path.decode("utf-8")
            output_wav_path = "/host" + output_wav_path
            speed = pb_utils.get_input_tensor_by_name(request, "speed").as_numpy()[0]
            emo_id = pb_utils.get_input_tensor_by_name(request, "emot").as_numpy()[0]
            voice_tone = pb_utils.get_input_tensor_by_name(request, "voice_tone").as_numpy()[0]
            responses.append(
                pb_utils.InferenceResponse(
                    output_tensors=[
                        pb_utils.Tensor(
                            "text",
                            np.array([bytes(text, encoding='utf-8')], dtype=np.bytes_)
                        ),
                        pb_utils.Tensor(
                            "output_wav_path",
                            np.array([bytes(output_wav_path, encoding='utf-8')], dtype=np.bytes_)
                        ),
                        pb_utils.Tensor(
                            "speed",
                            np.array([speed], dtype=np.float32)  # TYPE_FP32; np.float was removed from NumPy
                        ),
                        pb_utils.Tensor(
                            "emot",
                            np.array([emo_id], dtype=np.int8)  # match TYPE_INT8 in config.pbtxt
                        ),
                        pb_utils.Tensor(
                            "voice_tone",
                            np.array([voice_tone], dtype=np.bytes_)
                        ),
                    ],
                )
            )
        assert len(requests) == len(responses)
        return responses

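The string handling in execute (bytes in via as_numpy, np.bytes_ arrays out) can be checked outside Triton. A small sketch of the same round-trip using plain numpy, with no pb_utils dependency; the two helper names are illustrative:

```python
import numpy as np

def encode_string(text: str) -> np.ndarray:
    # what the model does when building a TYPE_STRING output tensor from a str
    return np.array([bytes(text, encoding="utf-8")], dtype=np.bytes_)

def decode_string(arr: np.ndarray) -> str:
    # what the model does with a TYPE_STRING input: take element 0, decode utf-8
    return arr[0].decode("utf-8")

round_trip = decode_string(encode_string("hello"))
```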
config.pbtxt

name: "simple_web"
backend: "python"

max_batch_size: 1

input [
  {
    name: "text"
    data_type: TYPE_STRING
    dims: [1]
    reshape: { shape: [] }
  },
  {
    name: "output_wav_path"
    data_type: TYPE_STRING
    dims: [1]
    reshape: { shape: [] }
  },
  {
    name: "speed"
    data_type: TYPE_FP32
    dims: [1]
  },
  {
    name: "emot"
    data_type: TYPE_INT8
    dims: [1]
  },
  {
    name: "voice_tone"
    data_type: TYPE_STRING
    dims: [1]
    reshape: { shape: [] }
  }
]
output [
  {
    name: "text"
    data_type: TYPE_STRING
    dims: [-1]
  },
  {
    name: "output_wav_path"
    data_type: TYPE_STRING
    dims: [-1]
  },
  {
    name: "speed"
    data_type: TYPE_FP32
    dims: [-1]
  },
  {
    name: "emot"
    data_type: TYPE_INT8
    dims: [-1]
  },
  {
    name: "voice_tone"
    data_type: TYPE_STRING
    dims: [-1]
  }
]
instance_group [
    {
      count: 2
      kind: KIND_CPU
    }
]
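Each data_type in config.pbtxt must line up with the numpy dtype the Python backend uses when building the corresponding pb_utils.Tensor, or the server rejects the response. A sketch of the mapping used in this example (TYPE_STRING pairs with the bytes arrays shown in model.py):

```python
import numpy as np

# config.pbtxt data_type -> numpy dtype used when building pb_utils.Tensor
TRITON_TO_NUMPY = {
    "TYPE_STRING": np.bytes_,
    "TYPE_FP32": np.float32,
    "TYPE_INT8": np.int8,
    "TYPE_INT16": np.int16,
}

speed = np.array([1.0], dtype=TRITON_TO_NUMPY["TYPE_FP32"])
emot = np.array([2], dtype=TRITON_TO_NUMPY["TYPE_INT8"])
```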

Startup commands:

docker run --gpus all -itd -v /data/lsk/:/root/data --shm-size "5g" --memory "5g" --net host --name simple_web b89e3ec43674
Here b89e3ec43674 is the image ID of the pulled tritonserver image; --name sets the container name (here it happens to match the name field in config.pbtxt, but it does not have to).
docker attach simple_web
tritonserver --model-repository=/root/data/ --http-port 8892 --grpc-port 8893 --metrics-port 8894
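Once the server is up, its readiness can be polled over the HTTP port. A minimal sketch using the /v2/health/ready endpoint that tritonserver serves on --http-port; the host and port are the ones from the command above and may differ on your machine:

```python
import requests

def ready_url(host: str, port: int) -> str:
    # KServe v2 health endpoint served on tritonserver's --http-port
    return f"http://{host}:{port}/v2/health/ready"

def is_ready(host: str = "localhost", port: int = 8892) -> bool:
    # the server answers HTTP 200 with an empty body once all models are loaded
    try:
        return requests.get(ready_url(host, port), timeout=5).status_code == 200
    except requests.ConnectionError:
        return False
```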


Testing

request.py

import requests
if __name__ == '__main__':
    text = 'fasdsfd'
    output_wav_path = 'fasdf'
    speed = 32
    emot = 2
    voice_tone = 'fsdf'
    
    body = {
        "inputs": [
            {
                "name": "text",
                "data": [
                    text],
                "datatype": "BYTES",
                "shape": [1, 1]
            },
            {
                "name": "output_wav_path",
                "data": [output_wav_path],
                "datatype": "BYTES",
                "shape": [1, 1]
            },
            {
                "name": "speed",
                "data": [speed],
                "datatype": "FP32",
                "shape": [1, 1]
            },
            {
                "name": "emot",
                "data": [emot],
                "datatype": "INT8",
                "shape": [1, 1]
            },
            {
                "name": "voice_tone",
                "data": [voice_tone],
                "datatype": "BYTES",
                "shape": [1, 1]
            }
        ],
        "outputs": [
            {
                "name": "text"
            },
            {
                "name": "output_wav_path"
            },
            {
                "name": "speed"
            },
            {
                "name": "emot"
            },
            {
                "name": "voice_tone"
            }
        ]
    }
    url = 'http://10.20.0.66:8892/v2/models/simple_web/versions/1/infer'
    response = requests.post(url=url, json=body, timeout=(30, 30))
    res = response.json()
    print(res)
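The JSON that comes back follows the KServe v2 format: an "outputs" list of {name, datatype, shape, data} objects. A small helper to pull one output out by name; the sample response below is hand-written to match this model's config, not captured from a real server:

```python
def get_output(response_json: dict, name: str):
    # KServe v2 infer responses carry results in an "outputs" list
    for out in response_json.get("outputs", []):
        if out["name"] == name:
            return out["data"]
    raise KeyError(f"no output named {name!r}")

sample = {
    "model_name": "simple_web",
    "outputs": [
        {"name": "speed", "datatype": "FP32", "shape": [1], "data": [32.0]},
        {"name": "emot", "datatype": "INT8", "shape": [1], "data": [2]},
    ],
}
```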