I recently tried building and installing PaddlePaddle from source on the Jetson TX2/AGX Xavier/NX/Nano.
The main steps follow this guide:
https://www.jianshu.com/p/09a0fd569247
However, since I am using JetPack 4.3, some details differ.
After flashing the system image, first switch the Python environment to Python 3; see:
https://www.jianshu.com/p/4d28325889e6
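A minimal sketch of one common way to do this with update-alternatives (the linked guide may use a different method; the priority value 1 is arbitrary):
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
python --version   # should now report Python 3.x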
Switch apt to a domestic (mainland China) mirror. For the Xavier (Ubuntu 18.04 "bionic" on arm64), the Tsinghua TUNA ubuntu-ports entries are:
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main universe restricted
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main universe restricted
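A short sketch of applying these entries (back up the original list first; /etc/apt/sources.list is the standard apt location):
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo nano /etc/apt/sources.list   # replace the contents with the lines above
sudo apt-get update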
If `pip` does not point at pip3, you can create a symlink, for example:
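A sketch, assuming pip3 is already on the PATH (confirm with `which pip3` first; the /usr/local/bin target is an assumption):
sudo ln -s $(which pip3) /usr/local/bin/pip
pip --version   # should now report the Python 3 pip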
Also switch pip3 to a domestic index by editing ~/.pip/pip.conf (create the file if it does not exist):
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host=pypi.tuna.tsinghua.edu.cn
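To check that the new index is picked up (the `config` subcommand requires pip >= 10):
pip3 config list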
- Before starting the PaddlePaddle build, install the dependency packages:
sudo apt-get install -y gcc g++ make wget unzip cmake libgflags-dev libgoogle-glog-dev python3-pip python3-dev libfreetype6-dev patchelf libjpeg-dev zlib1g-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev libblas-dev liblapack-dev gfortran cython
- Upgrade pip, then install Cython and Pillow:
python -m pip install --upgrade pip
pip install cython
pip install Pillow
- Then build and install by following the official build-from-source steps; a sketch of the build commands is below.
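A minimal sketch of the build, assuming an inference-oriented configuration; the CMake flags below (WITH_GPU, WITH_MKL, WITH_TESTING, ON_INFER, PY_VERSION) should be checked against the Paddle version you check out:
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle && mkdir build && cd build
cmake .. -DWITH_GPU=ON -DWITH_MKL=OFF -DWITH_TESTING=OFF -DON_INFER=ON -DPY_VERSION=3.6
make -j4   # lower -j if the board runs out of memory
# Some versions only produce build/fluid_inference_install_dir via a separate
# target (e.g. `make inference_lib_dist`); check the docs for your version.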
Fixes for problems you may hit along the way:
- error: class nvinfer1::IPluginFactory has accessible non-virtual destructor
Because I am using a newer TensorRT release that differs from earlier ones, a virtual destructor has to be added to the class in NvInferRuntime.h:
virtual ~IPluginFactory() {}
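On JetPack the TensorRT headers typically live under /usr/include/aarch64-linux-gnu (verify on your image); you can locate the class to patch like this:
grep -n "class IPluginFactory" /usr/include/aarch64-linux-gnu/NvInferRuntime.h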
Running the sample
When running the sample from the official inference docs at https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/native_infer.html
a few changes are likewise needed:
run.sh
# Toggle MKL / GPU / TensorRT; using TensorRT requires GPU to be ON
WITH_MKL=OFF
WITH_GPU=ON
USE_TENSORRT=ON
# Set the inference library, CUDA, cuDNN, and model paths to match your environment
LIB_DIR=~/Paddle/build/fluid_inference_install_dir
CUDA_LIB_DIR=/usr/local/cuda/lib64
CUDNN_LIB_DIR=/usr/lib/aarch64-linux-gnu
MODEL_DIR=~/Paddle/sample/paddle-TRT/mobilenetv1
sh run_impl.sh ${LIB_DIR} mobilenet_test ${MODEL_DIR} ${WITH_MKL} ${WITH_GPU} ${CUDNN_LIB_DIR} ${CUDA_LIB_DIR} ${USE_TENSORRT}
The TensorRT paths in the demo's CMakeLists.txt also need to be updated accordingly; see the sketch below.
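On JetPack, TensorRT is installed into the system paths rather than under a single TENSORRT_ROOT, so point the demo's TensorRT include/lib settings there (a sketch; the exact variable names in CMakeLists.txt depend on the demo version):
# headers:   /usr/include/aarch64-linux-gnu   (NvInfer.h, NvInferRuntime.h, ...)
# libraries: /usr/lib/aarch64-linux-gnu       (libnvinfer.so, ...)
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so*   # confirm the library location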