Notes on the Twists and Turns of Building TensorFlow from Source
Preface
Deep learning models are usually trained with TensorFlow from Python, because installing TensorFlow in a Python environment is easy and its Python API is very friendly. Sometimes, however, development has to happen in C++. The idea, then, is to train the network in Python, freeze it into a .pb file after training, and then call it from the C++ version of TensorFlow.
Building the C++ version of TensorFlow is more troublesome than the Python one, so this post is mainly a build log for future reference. There are plenty of write-ups online, but I kept hitting problems during my own build, so the record here is more detailed.
First, the environment:
OS: Ubuntu 16.04
Software: TensorFlow 1.9 (GPU) + CUDA 9.0 + cuDNN v7.5.1
Python: Anaconda Python 3.6
nvidia-smi shows the GPU status correctly
nvcc -V prints the CUDA version correctly
I originally wanted to build the tensorflow-gpu 1.12 that matched the Python TensorFlow installed on the server, but near the end of the build I hit an error like this:
ERROR: tensorflow 1.12.0 has requirement tensorboard<1.13.0,>=1.12.0, but you'll have tensorboard 1.10.0 which is incompatible.
The tensorboard that 1.12 requires did not match the one already installed; fixing that would have meant reinstalling the Python TensorFlow, and what a hassle that would be. So I simply went with tensorflow-gpu 1.9 instead.
1. Install Protobuf
Getting Protobuf right is crucial: the version must match! Otherwise all kinds of baffling errors appear later and make a real mess. After stepping into many pits I settled on protobuf 3.5 (which matches TensorFlow 1.9). Download it from the official protobuf releases page.
Choose the source tarball (tar.gz) at the bottom of the page, extract it, and enter the extracted folder. Before building protobuf, a couple of tools (automake, libtool) need to be installed.
Run the following in a terminal:
sudo apt-get install automake libtool
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
# sudo make uninstall  # run this to uninstall if you built the wrong version
protoc --version # check the protobuf version
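If the build and install went well, the version check should print something like this (assuming the 3.5.1 tarball):
libprotoc 3.5.1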
The TensorFlow docs list the JDK as optional, so I skipped it; this later proved to work fine.
2. Install Bazel
The Bazel version must also match TensorFlow; I used Bazel 0.15.2.
According to the official table, tensorflow-gpu 1.9 should pair with Bazel 0.11, but I still had Bazel 0.15 around from the earlier TensorFlow 1.12 attempt and did not bother switching. I gave it a try, and Bazel 0.15 turns out to work too.
From the Bazel download page, get the binary installer bazel-0.15.2-installer-linux-x86_64.sh.
Run the installer in a terminal:
chmod +x bazel-0.15.2-installer-linux-x86_64.sh
./bazel-0.15.2-installer-linux-x86_64.sh --user
Open ~/.bashrc with vim and append the following line at the end:
export PATH="$PATH:$HOME/bin"
# after saving and closing the file, reload it
source ~/.bashrc
Typing bazel in a terminal should now print Bazel's usage information, confirming that the installation succeeded.
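You can also check the version explicitly; with the 0.15.2 installer the output should include a line like the one below:
bazel version
# Build label: 0.15.2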
3. Download the TensorFlow Source and Build
Clone the TensorFlow source; depending on your network speed this can take a while:
git clone https://github.com/tensorflow/tensorflow
Configure the build and enable GPU support. Most prompts can be answered no, but be careful to enter the correct CUDA and cuDNN versions.
./configure
Please specify the location of python. [Default is /data2/wcl1/tensorflow/venv/bin/python]
## answer N or n to the rest
Do you wish to build TensorFlow with CUDA support? [y/N]: y # choose y for the GPU build
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.0 # CUDA version, same as the server: 9.0
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.5.1 # cuDNN version, same as the server: 7.5.1
## answer N or accept the default for the rest
Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.
## answer N or accept the default for the rest
Configuration finished
Build the pip package
The bazel build command creates an executable named build_pip_package, which is then used to build the pip package.
# build
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package # CPU version
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package # GPU version
The build runs for quite a long time and may abort with errors; find the cause and search for the corresponding fix.
When Build completed successfully finally appears, the hard part is done!
Build the wheel
Run the executable as shown below; the corresponding .whl package is generated in the /tmp/tensorflow_pkg directory:
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Install the generated wheel with pip:
pip install tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl
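As a quick sanity check after installing (the wheel name above assumes Python 3.6; use whatever was actually generated):
python -c "import tensorflow as tf; print(tf.__version__)" # should print 1.9.0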
Note: you can build both the CUDA and non-CUDA configurations from the same source tree, but run bazel clean before switching between them, as sketched below.
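For example, switching from the CPU package build to the GPU one would look like this:
bazel clean
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package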
# Build the C++ shared library
bazel build --config=opt //tensorflow:libtensorflow_cc.so # CPU version
bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so # GPU version
After running this command I ran into some baffling errors.
Searching around, one article blamed network problems, so I set up a proxychains proxy; that didn't help either. In the end I configured a cache and re-ran the Bazel build, and after many repetitions it finally went through...
Note: check whether the directory ./tensorflow/tensorflow/contrib/makefile contains a downloads folder. If not, open a terminal in ./tensorflow/tensorflow/contrib/makefile and run the shell script:
./download_dependencies.sh
The script downloads a number of dependency files; once it finishes, the downloads folder will be there.
4. Install the Eigen Library
The Eigen library also has to be set up from source; I used Eigen 3.3.4 from the official download page.
Run these commands:
# extract to the target directory
sudo tar -xzvf eigen-eigen-5a0156e40feb.tar.gz -C /usr/local/include
# rename the directory
sudo mv /usr/local/include/eigen-eigen-5a0156e40feb /usr/local/include/eigen3
sudo cp -r /usr/local/include/eigen3/Eigen /usr/local/include
Note: eigen3 gets installed to /usr/local/include by default (or /usr/include; both are standard system include paths). Many programs write #include <Eigen/Dense> rather than #include <eigen3/Eigen/Dense>, so the extra copy above is needed; otherwise those programs will fail to compile because Eigen/Dense cannot be found.
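To verify that Eigen is now visible on the default include path, a throwaway test like the following should compile and run (eigen_test.cpp is just a scratch file name):
cat > eigen_test.cpp <<'EOF'
#include <iostream>
#include <Eigen/Dense>
int main() {
    Eigen::Matrix2d m;   // 2x2 matrix of doubles
    m << 1, 2, 3, 4;     // comma initializer fills the matrix row by row
    std::cout << m << std::endl;
    return 0;
}
EOF
g++ eigen_test.cpp -o eigen_test && ./eigen_test # should print the 2x2 matrix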
5. Reading the Trained .pb File from a CMake C++ Project
Create a Python project and add the following Python code:
import tensorflow as tf
import numpy as np
import os
tf.app.flags.DEFINE_integer('training_iteration', 1000,
'number of training iterations.')
tf.app.flags.DEFINE_integer('model_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_string('work_dir', 'model/', 'Working directory.')
FLAGS = tf.app.flags.FLAGS
sess = tf.InteractiveSession()
x = tf.placeholder('float', shape=[None, 5],name="inputs")
y_ = tf.placeholder('float', shape=[None, 1])
w = tf.get_variable('w', shape=[5, 1], initializer=tf.truncated_normal_initializer)
b = tf.get_variable('b', shape=[1], initializer=tf.zeros_initializer)
sess.run(tf.global_variables_initializer())
y = tf.add(tf.matmul(x, w) , b,name="outputs")
ms_loss = tf.reduce_mean((y - y_) ** 2)
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(ms_loss)
train_x = np.random.randn(1000, 5)
# let the model learn the equation y = x1*1 + x2*2 + x3*3 + x4*4 + x5*5
train_y = np.sum(train_x * np.array([1, 2, 3, 4, 5]) + np.random.randn(1000, 5) / 100, axis=1).reshape(-1, 1)
for i in range(FLAGS.training_iteration):
    loss, _ = sess.run([ms_loss, train_step], feed_dict={x: train_x, y_: train_y})
    if i % 100 == 0:
        print("loss is:", loss)
graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def,
["inputs", "outputs"])
tf.train.write_graph(graph, ".", FLAGS.work_dir + "liner.pb",
as_text=False)
print('Done exporting!')
print('Done training!')
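Running the script (export_liner.py is a hypothetical name for the file above) trains the toy model and freezes it:
python export_liner.py
# loss is: ...   (printed every 100 iterations, steadily decreasing)
# Done exporting!
# Done training!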
Running it generates liner.pb under the model folder. Before calling the .pb file from C++, upgrade CMake to 3.10 or later.
I downloaded CMake 3.11 from the official download page.
After downloading and extracting it, add the extracted bin directory to ~/.bashrc:
vim ~/.bashrc
export PATH=/home/path/cmake-3.11.0/bin:$PATH
source ~/.bashrc
# check the cmake version
cmake --version
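With the 3.11.0 tarball, the first line of output should read:
cmake version 3.11.0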
Next, create the C++ project with the following files:
vim ann_model_loader.cpp
#include <iostream>
#include <vector>
#include <map>
#include "ann_model_loader.h"
//#include <tensor_shape.h>
using namespace tensorflow;
namespace tf_model {
/**
* ANNFeatureAdapter Implementation
* */
ANNFeatureAdapter::ANNFeatureAdapter() {
}
ANNFeatureAdapter::~ANNFeatureAdapter() {
}
/*
* @brief: Feature Adapter: convert 1-D double vector to Tensor, shape [1, ndim]
* @param: std::string tname, tensor name;
* @param: std::vector<double>*, input vector;
* */
void ANNFeatureAdapter::assign(std::string tname, std::vector<double>* vec) {
//Convert input 1-D double vector to Tensor
int ndim = vec->size();
if (ndim == 0) {
std::cout << "WARNING: Input Vec size is 0 ..." << std::endl;
return;
}
// Create New tensor and set value
Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, ndim})); // New Tensor shape [1, ndim]
auto x_map = x.tensor<float, 2>();
for (int j = 0; j < ndim; j++) {
x_map(0, j) = (*vec)[j];
}
// Append <tname, Tensor> to input
input.push_back(std::pair<std::string, tensorflow::Tensor>(tname, x));
}
/**
* ANN Model Loader Implementation
* */
ANNModelLoader::ANNModelLoader() {
}
ANNModelLoader::~ANNModelLoader() {
}
/**
* @brief: load the graph and add to Session
* @param: Session* session, add the graph to the session
* @param: model_path absolute path to exported protobuf file *.pb
* */
int ANNModelLoader::load(tensorflow::Session* session, const std::string model_path) {
//Read the pb file into the graphdef member
tensorflow::Status status_load = ReadBinaryProto(Env::Default(), model_path, &graphdef);
if (!status_load.ok()) {
std::cout << "ERROR: Loading model failed..." << model_path << std::endl;
std::cout << status_load.ToString() << "\n";
return -1;
}
// Add the graph to the session
tensorflow::Status status_create = session->Create(graphdef);
if (!status_create.ok()) {
std::cout << "ERROR: Creating graph in session failed..." << status_create.ToString() << std::endl;
return -1;
}
return 0;
}
/**
* @brief: Making new prediction
* @param: Session* session
* @param: FeatureAdapterBase, common interface of input feature
* @param: std::string, output_node, tensorname of output node
* @param: double, prediction values
* */
int ANNModelLoader::predict(tensorflow::Session* session, const FeatureAdapterBase& input_feature,
const std::string output_node, double* prediction) {
// The session will initialize the outputs
std::vector<tensorflow::Tensor> outputs; //shape [batch_size]
// @input: vector<pair<string, tensor> >, feed_dict
// @output_node: std::string, name of the output node op, defined in the protobuf file
tensorflow::Status status = session->Run(input_feature.input, {output_node}, {}, &outputs);
if (!status.ok()) {
std::cout << "ERROR: prediction failed..." << status.ToString() << std::endl;
return -1;
}
//Fetch output value
std::cout << "Output tensor size:" << outputs.size() << std::endl;
for (std::size_t i = 0; i < outputs.size(); i++) {
std::cout << outputs[i].DebugString();
}
std::cout << std::endl;
Tensor t = outputs[0]; // Fetch the first tensor
int ndim = t.shape().dims(); // Get the dimension of the tensor
auto tmap = t.tensor<float, 2>(); // Tensor Shape: [batch_size, target_class_num]
int output_dim = t.shape().dim_size(1); // Get target_class_num from dimension index 1 of shape [batch_size, target_class_num]
// Argmax: Get Final Prediction Label and Probability
int output_class_id = -1;
double output_prob = 0.0;
for (int j = 0; j < output_dim; j++) {
std::cout << "Class " << j << " prob:" << tmap(0, j) << "," << std::endl;
if (tmap(0, j) >= output_prob) {
output_class_id = j;
output_prob = tmap(0, j);
}
}
// Log
std::cout << "Final class id: " << output_class_id << std::endl;
std::cout << "Final value is: " << output_prob << std::endl;
(*prediction) = output_prob; // Assign the probability to prediction
return 0;
}
}
vim ann_model_loader.h
#ifndef CPPTENSORFLOW_ANN_MODEL_LOADER_H
#define CPPTENSORFLOW_ANN_MODEL_LOADER_H
#include "model_loader_base.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
using namespace tensorflow;
namespace tf_model {
/**
* @brief: Model Loader for Feed Forward Neural Network
* */
class ANNFeatureAdapter: public FeatureAdapterBase {
public:
ANNFeatureAdapter();
~ANNFeatureAdapter();
void assign(std::string tname, std::vector<double>*) override; // (tensor_name, tensor)
};
class ANNModelLoader: public ModelLoaderBase {
public:
ANNModelLoader();
~ANNModelLoader();
int load(tensorflow::Session*, const std::string) override; //Load graph file and new session
int predict(tensorflow::Session*, const FeatureAdapterBase&, const std::string, double*) override;
};
}
#endif //CPPTENSORFLOW_ANN_MODEL_LOADER_H
vim main.cpp
#include <iostream>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "ann_model_loader.h"
using namespace tensorflow;
int main(int argc, char* argv[]) {
if (argc != 2) {
std::cout << "WARNING: Input Args missing" << std::endl;
return 0;
}
std::string model_path = argv[1]; // Model_path *.pb file
// TensorName pre-defined in python file, Need to extract values from tensors
std::string input_tensor_name = "inputs";
std::string output_tensor_name = "outputs";
// Create New Session
Session* session;
Status status = NewSession(SessionOptions(), &session);
if (!status.ok()) {
std::cout << status.ToString() << "\n";
return 0;
}
// Create prediction demo
tf_model::ANNModelLoader model; //Create demo for prediction
if (0 != model.load(session, model_path)) {
std::cout << "Error: Model Loading failed..." << std::endl;
return 0;
}
// Define Input tensor and Feature Adapter
// Demo input: [1.0, 1.0, 1.0, 1.0, 1.0], matching the 5-dim input of the linear model above
int ndim = 5;
std::vector<double> input;
for (int i = 0; i < ndim; i++) {
input.push_back(1.0);
}
// New Feature Adapter to convert vector to tensors dictionary
tf_model::ANNFeatureAdapter input_feat;
input_feat.assign(input_tensor_name, &input); //Assign vec<double> to tensor
// Make New Prediction
double prediction = 0.0;
if (0 != model.predict(session, input_feat, output_tensor_name, &prediction)) {
std::cout << "WARNING: Prediction failed..." << std::endl;
}
std::cout << "Output Prediction Value:" << prediction << std::endl;
return 0;
}
vim model_loader_base.h
#ifndef CPPTENSORFLOW_MODEL_LOADER_BASE_H
#define CPPTENSORFLOW_MODEL_LOADER_BASE_H
#include <iostream>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
using namespace tensorflow;
namespace tf_model {
/**
* Base Class for feature adapter, common interface convert input format to tensors
* */
class FeatureAdapterBase{
public:
FeatureAdapterBase() {};
virtual ~FeatureAdapterBase() {};
virtual void assign(std::string, std::vector<double>*) = 0; // tensor_name, tensor_double_vector
std::vector<std::pair<std::string, tensorflow::Tensor> > input;
};
class ModelLoaderBase {
public:
ModelLoaderBase() {};
virtual ~ModelLoaderBase() {};
virtual int load(tensorflow::Session*, const std::string) = 0; //pure virtual load method
virtual int predict(tensorflow::Session*, const FeatureAdapterBase&, const std::string, double*) = 0;
tensorflow::GraphDef graphdef; //Graph Definition for current model
};
}
#endif //CPPTENSORFLOW_MODEL_LOADER_BASE_H
Create a CMakeLists.txt file in the project folder:
cmake_minimum_required(VERSION 3.10)
project(cpptensorflow)
set(CMAKE_CXX_STANDARD 11)
link_directories(/home/local/I/xintingkai/workspace/ccc/aa/tensorflow/bazel-bin/tensorflow)
include_directories(
/home/local/I/xintingkai/workspace/ccc/aa/tensorflow
/home/local/I/xintingkai/workspace/ccc/aa/tensorflow/bazel-genfiles
/home/local/I/xintingkai/workspace/ccc/aa/tensorflow/bazel-bin/tensorflow
/usr/local/include/eigen3
)
add_executable(cpptensorflow main.cpp ann_model_loader.h model_loader_base.h ann_model_loader.cpp)
target_link_libraries(cpptensorflow tensorflow_cc tensorflow_framework)
Change the paths above to your own working directory.
Then run the following commands to build the project:
mkdir build
cd build
cmake ..
make
After the build, a cpptensorflow executable appears in the newly created build directory. Run it with the path to the exported liner.pb:
./cpptensorflow /home/local/I/xintingkai/workspace/ccc/aa/workspace/model/model/liner.pb
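Since the model learns y = x1*1 + x2*2 + x3*3 + x4*4 + x5*5 and the demo feeds all ones, the printed prediction should be close to 15. The console output looks roughly like this (values are illustrative and depend on training; a DebugString dump of the output tensor is also printed):
Output tensor size:1
Class 0 prob:15.0023,
Final class id: 0
Final value is: 15.0023
Output Prediction Value:15.0023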
If you see that, the whole pipeline works.
Notes
After building TensorFlow I once found that nvidia-smi would hang and could not run; it returned to normal after rebooting the server. The cause is unclear; perhaps the GPU resources had not been fully released before being loaded a second time, triggering the anomaly.
This post only records the overall build flow; plenty of other baffling errors will show up along the way and call for patient Googling.
In short, after many pitfalls (a mountain road with "eighteen bends", as the saying goes), the build finally made it through.
References
TensorFlow C++ interface: training the model in Python and calling it from C++
Installing/updating/upgrading CMake to 3.9.1 on Ubuntu 16.04
Bonus
Verified: after TensorFlow has been built and installed successfully under CUDA 9.0, you can switch the system's CUDA/cuDNN to other versions. As long as the CUDA and cuDNN versions used at build time remain on PATH and LD_LIBRARY_PATH, everything keeps working. This resolves the problem of different projects demanding different CUDA versions.
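A minimal sketch of what that means in practice (the install paths below are hypothetical; adjust them to your machine):
# keep the build-time CUDA 9.0 toolchain and libraries visible,
# even if a different CUDA version is the system default
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH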