This article was first published on my personal blog at https://kezunlin.me/post/54e7a3d8/. Welcome to read the latest version there!

tutorial to compile and use pytorch on ubuntu 16.04

PyTorch for Python

install pytorch from anaconda

conda info --envs
conda activate py35

# newest version
# 1.1.0 pytorch/0.3.0 torchvision
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

# old version [NOT]
# 0.4.1 pytorch/0.2.1 torchvision
conda install pytorch=0.4.1 cuda90 -c pytorch

output

The following NEW packages will be INSTALLED:

  pytorch            pytorch/linux-64::pytorch-1.1.0-py3.5_cuda9.0.176_cudnn7.5.1_0
  torchvision        pytorch/linux-64::torchvision-0.3.0-py35_cu9.0.176_1

Downloading from the official pytorch channel takes a very long time. Fetching pytorch/linux-64::pytorch-1.1.0-py3.5_cuda9.0.176_cudnn7.5.1_0 is extremely slow!

install pytorch from tsinghua

add tsinghua pytorch channels

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
# for legacy win-64
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/peterjc123/
conda config --set show_channel_urls yes

The official Anaconda pytorch channel is very slow, so use the Tsinghua mirror instead.

see tsinghua anaconda

cat ~/.condarc

channels:
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
- defaults

install pytorch from tsinghua

conda create --name torch python==3.7
conda activate torch
conda install -y pytorch torchvision
conda install -y scikit-learn scikit-image pandas matplotlib pillow opencv

The following NEW packages will be INSTALLED:

  pytorch            anaconda/cloud/pytorch/linux-64::pytorch-1.1.0-py3.5_cuda9.0.176_cudnn7.5.1_0
  torchvision        anaconda/cloud/pytorch/linux-64::torchvision-0.3.0-py35_cu9.0.176_1

test pytorch

>>> import torch
>>> torch.__version__
'1.1.0'

or

python -c 'import torch; print(torch.cuda.is_available())'
True
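
To double check that the GPU path actually works, a small computation can be run on the device. This is only a sketch and assumes CUDA and cuDNN are installed correctly; it prints the CUDA/cuDNN versions the build was compiled against and multiplies two tensors on the GPU.

import torch

# CUDA / cuDNN versions this pytorch build was compiled against
print(torch.version.cuda)
print(torch.backends.cudnn.version())

# allocate two tensors on the GPU and multiply them
x = torch.randn(2, 3, device="cuda")
y = torch.randn(3, 2, device="cuda")
print(torch.mm(x, y).cpu())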

pre-trained models

Pre-trained models are saved to /home/kezunlin/.cache/torch/checkpoints/:

Downloading: "https://download.pytorch.org/models/shufflenetv2_x0.5-f707e7126e.pth" to /home/kezunlin/.cache/torch/checkpoints/shufflenetv2_x0.5-f707e7126e.pth
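
The download above is triggered the first time a pre-trained model is loaded through torchvision. A minimal sketch, assuming torchvision 0.3.0 (which provides the shufflenet_v2_x0_5 checkpoint shown above):

import torchvision.models as models

# the first call downloads the weights into ~/.cache/torch/checkpoints/
model = models.shufflenet_v2_x0_5(pretrained=True)
model.eval()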

PyTorch for C++

download LibTorch

download from LibTorch

compile from source

compile pytorch

# method 1
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# method 2, if you are updating an existing checkout
git clone https://github.com/pytorch/pytorch
cd pytorch
git submodule sync
git submodule update --init --recursive

check tags

git tag -l 

v0.4.0
v0.4.1
v1.0.0
v1.0.1
v1.0rc0
v1.0rc1
v1.1.0

now compile

git checkout v1.1.0

# method 1: the official python build generates lots of errors
#python setup.py install

# method 2: normal cmake build
mkdir build && cd build && cmake-gui ..

with configs

BUILD_PYTHON OFF

Be sure to use the stable version 1.1.0 from here instead of the latest master from 20190724 (the unstable 1.2.0),

because errors will occur when loading models; the torch::jit::load API differs between the two versions:

  • for 1.1.0:

    std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("./model.pt");

  • for latest 1.2.0:

    torch::jit::script::Module module = torch::jit::load("./model.pt");

configure output

******** Summary ********
General:
CMake version : 3.5.1
CMake command : /usr/bin/cmake
System : Linux
C++ compiler : /usr/bin/c++
C++ compiler id : GNU
C++ compiler version : 5.4.0
BLAS : MKL
CXX flags : -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math
Build type : Release
Compile definitions : ONNX_ML=1;ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
CMAKE_PREFIX_PATH :
CMAKE_INSTALL_PREFIX : /usr/local
TORCH_VERSION : 1.1.0
CAFFE2_VERSION : 1.1.0
BUILD_CAFFE2_MOBILE : ON
BUILD_ATEN_ONLY : OFF
BUILD_BINARY : OFF
BUILD_CUSTOM_PROTOBUF : ON
Link local protobuf : ON
BUILD_DOCS : OFF
BUILD_PYTHON : OFF
BUILD_CAFFE2_OPS : ON
BUILD_SHARED_LIBS : ON
BUILD_TEST : OFF
INTERN_BUILD_MOBILE :
USE_ASAN : OFF
USE_CUDA : ON
CUDA static link : OFF
USE_CUDNN : ON
CUDA version : 9.2
cuDNN version : 7.1.4
CUDA root directory : /usr/local/cuda
CUDA library : /usr/local/cuda/lib64/stubs/libcuda.so
cudart library : /usr/local/cuda/lib64/libcudart.so
cublas library : /usr/local/cuda/lib64/libcublas.so
cufft library : /usr/local/cuda/lib64/libcufft.so
curand library : /usr/local/cuda/lib64/libcurand.so
cuDNN library : /usr/local/cuda/lib64/libcudnn.so
nvrtc : /usr/local/cuda/lib64/libnvrtc.so
CUDA include path : /usr/local/cuda/include
NVCC executable : /usr/local/cuda/bin/nvcc
CUDA host compiler : /usr/bin/cc
USE_TENSORRT : OFF
USE_ROCM : OFF
USE_EIGEN_FOR_BLAS : ON
USE_FBGEMM : OFF
USE_FFMPEG : OFF
USE_GFLAGS : OFF
USE_GLOG : OFF
USE_LEVELDB : OFF
USE_LITE_PROTO : OFF
USE_LMDB : OFF
USE_METAL : OFF
USE_MKL : OFF
USE_MKLDNN : OFF
USE_NCCL : ON
USE_SYSTEM_NCCL : OFF
USE_NNPACK : ON
USE_NUMPY : ON
USE_OBSERVERS : ON
USE_OPENCL : OFF
USE_OPENCV : OFF
USE_OPENMP : ON
USE_TBB : OFF
USE_PROF : OFF
USE_QNNPACK : ON
USE_REDIS : OFF
USE_ROCKSDB : OFF
USE_ZMQ : OFF
USE_DISTRIBUTED : ON
USE_MPI : ON
USE_GLOO : ON
USE_GLOO_IBVERBS : OFF
NAMEDTENSOR_ENABLED : OFF
Public Dependencies : Threads::Threads
Private Dependencies : qnnpack;nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/libnuma.so;fp16;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;aten_op_header_gen;foxi_loader;rt;gcc_s;gcc;dl
Configuring done

install pytorch

now compile and install

make -j8
sudo make install

output

Install the project...
-- Install configuration: "Release"
-- Old export file "/usr/local/share/cmake/Caffe2/Caffe2Targets.cmake" will be replaced. Removing files [/usr/local/share/cmake/Caffe2/Caffe2Targets-release.cmake].
-- Set runtime path of "/usr/local/bin/protoc" to "$ORIGIN"
-- Old export file "/usr/local/share/cmake/Gloo/GlooTargets.cmake" will be replaced. Removing files [/usr/local/share/cmake/Gloo/GlooTargets-release.cmake].
-- Set runtime path of "/usr/local/lib/libonnxifi_dummy.so" to "$ORIGIN"
-- Set runtime path of "/usr/local/lib/libonnxifi.so" to "$ORIGIN"
-- Set runtime path of "/usr/local/lib/libfoxi_dummy.so" to "$ORIGIN"
-- Set runtime path of "/usr/local/lib/libfoxi.so" to "$ORIGIN"
-- Set runtime path of "/usr/local/lib/libc10.so" to "$ORIGIN"
-- Set runtime path of "/usr/local/lib/libc10_cuda.so" to "$ORIGIN:/usr/local/cuda/lib64"
-- Set runtime path of "/usr/local/lib/libthnvrtc.so" to "$ORIGIN:/usr/local/cuda/lib64/stubs:/usr/local/cuda/lib64"
-- Set runtime path of "/usr/local/lib/libtorch.so" to "$ORIGIN:/usr/local/cuda/lib64:/usr/lib/openmpi/lib"
-- Set runtime path of "/usr/local/lib/libcaffe2_detectron_ops_gpu.so" to "$ORIGIN:/usr/local/cuda/lib64"
-- Set runtime path of "/usr/local/lib/libcaffe2_observers.so" to "$ORIGIN:/usr/local/cuda/lib64"

Notes for pytorch 1.1.0:

  • compiling and installing takes more than 2 hours
  • the library is installed to /usr/local/lib/libtorch.so
  • the CMake config files are installed to /usr/local/share/cmake/Torch

C++ example

load pytorch model in c++

see load pytorch model in c++

cpp

#include <torch/script.h> // One-stop header.

#include <cassert>
#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // Deserialize the ScriptModule from a file using torch::jit::load().
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
  assert(module != nullptr);
  std::cout << "ok\n";

  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Execute the model and turn its output into a tensor.
  at::Tensor output = module->forward(inputs).toTensor();
  std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)

# Torch_DIR: /usr/local/share/cmake/Torch
find_package(Torch REQUIRED)

MESSAGE( [Main] " TORCH_INCLUDE_DIRS = ${TORCH_INCLUDE_DIRS}")
MESSAGE( [Main] " TORCH_LIBRARIES = ${TORCH_LIBRARIES}")

include_directories(${TORCH_INCLUDE_DIRS})

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 11)

output

Found torch: /usr/local/lib/libtorch.so
[Main] TORCH_INCLUDE_DIRS = /usr/local/include;/usr/local/include/torch/csrc/api/include
[Main] TORCH_LIBRARIES = torch;torch_library;/usr/local/lib/libc10.so;/usr/local/cuda/lib64/stubs/libcuda.so;/usr/local/cuda/lib64/libnvrtc.so;/usr/local/cuda/lib64/libnvToolsExt.so;/usr/local/cuda/lib64/libcudart.so;/usr/local/lib/libc10_cuda.so
[TOLOWER] ALGORITHM_TARGET = algorithm

make

mkdir build
cd build && cmake-gui ..
make -j8

If you use the prebuilt LibTorch, set Torch_DIR to /home/kezunlin/program/libtorch/share/cmake/Torch.

If you compiled and installed from source, Torch_DIR is found automatically at /usr/local/share/cmake/Torch.

run

./example-app model.pt
-0.2698 -0.0381 0.4023 -0.3010 -0.0448
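
The model.pt consumed by example-app is exported from Python beforehand with TorchScript tracing. A minimal sketch, assuming torchvision's resnet18 (the actual model behind the output above may differ):

import torch
import torchvision

# trace the model with an example input and serialize it for libtorch
model = torchvision.models.resnet18(pretrained=True)
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")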

errors and solutions

compile errors with libtorch

@soumith

You might be building libtorch with a compiler that is incompatible with the compiler building your final app.

For example, you built libtorch with gcc 4.9.2 and your final app with gcc 5.1, and the C++ ABI between both of them is not the same, so you are seeing linker errors like these

@christianperone

if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")
endif()

which forces GCC to use the old (pre-C++11) ABI.

@smth

we have that flag set because we build with gcc 4.9.x, which only has the old ABI.

In GCC 5.1, the ABI for std::string was changed, and binaries compiled with gcc >= 5.1 are not ABI-compatible with binaries built with gcc < 5.1 (like pytorch) unless you set that flag.

Reasons and solutions

  • Reason: the prebuilt LibTorch is compiled with GCC 4.9.x (which only has the old ABI), and binaries compiled with gcc >= 5.1 are not ABI-compatible with it.

  • Solution: compile pytorch from source instead of using the LibTorch downloaded from the website.
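
From Python you can also check which ABI a given pytorch build was compiled with via torch.compiled_with_cxx11_abi(). A small sketch, assuming a reasonably recent pytorch:

import torch

# True  -> built with the new C++11 ABI (_GLIBCXX_USE_CXX11_ABI=1)
# False -> built with the old ABI, like the prebuilt LibTorch discussed above
print(torch.compiled_with_cxx11_abi())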

runtime errors with pytorch

errors

/usr/local/lib/libopencv_imgcodecs.so.3.1.0: undefined reference to `TIFFReadRGBAStrip@LIBTIFF_4.0'

which means OpenCV was linked against libtiff 4.0.6.

ldd check

ldd /usr/local/lib/libopencv_imgcodecs.so.3.1.0
linux-vdso.so.1 => (0x00007ffc92ffc000)
libopencv_imgproc.so.3.1 => /usr/local/lib/libopencv_imgproc.so.3.1 (0x00007f32afbca000)
libjpeg.so.8 => /usr/local/lib/libjpeg.so.8 (0x00007f32af948000)
libpng12.so.0 => /lib/x86_64-linux-gnu/libpng12.so.0 (0x00007f32af723000)
libtiff.so.5 => /usr/lib/x86_64-linux-gnu/libtiff.so.5 (0x00007f32af4ae000)

When compiling opencv-3.1.0, cmake found /usr/lib/x86_64-linux-gnu/libtiff.so.5.

locate libtiff

locate libtiff.so

/home/kezunlin/anaconda3/envs/py35/lib/libtiff.so
/home/kezunlin/anaconda3/envs/py35/lib/libtiff.so.5
/home/kezunlin/anaconda3/envs/py35/lib/libtiff.so.5.4.0
/home/kezunlin/anaconda3/lib/libtiff.so
/home/kezunlin/anaconda3/lib/libtiff.so.5
/home/kezunlin/anaconda3/lib/libtiff.so.5.4.0
/home/kezunlin/anaconda3/pkgs/libtiff-4.0.10-h2733197_2/lib/libtiff.so
/home/kezunlin/anaconda3/pkgs/libtiff-4.0.10-h2733197_2/lib/libtiff.so.5
/home/kezunlin/anaconda3/pkgs/libtiff-4.0.10-h2733197_2/lib/libtiff.so.5.4.0
/opt/MATLAB/R2016b/bin/glnxa64/libtiff.so.5
/opt/MATLAB/R2016b/bin/glnxa64/libtiff.so.5.0.5
/usr/lib/x86_64-linux-gnu/libtiff.so
/usr/lib/x86_64-linux-gnu/libtiff.so.5
/usr/lib/x86_64-linux-gnu/libtiff.so.5.2.4

It seems that my OpenCV was compiled against libtiff 4, but I have libtiff 5. How can this be solved?

Re-compiling opencv-3.1.0, new errors occur:

see here

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppi_LIBRARY (ADVANCED)
linked by target "opencv_cudev" in directory /home/kezunlin/program/opencv-3.1.0/modules/cudev
linked by target "opencv_cudev" in directory /home/kezunlin/program/opencv-3.1.0/modules/cudev
linked by target "opencv_test_cudev" in directory /home/kezunlin/program/opencv-3.1.0/modules/cudev/test

solutions:

WITH_CUDA OFF
WITH_VTK OFF
WITH_TIFF OFF
BUILD_PERF_TESTS OFF

for python2, use the default /usr/bin/python2.7

for python3, do NOT use the anaconda version

During compilation, avoid using libraries from the anaconda directory as much as possible.

install libwebp

sudo apt-get -y install libwebp-dev

Reference

History

  • 20190626: created.

Copyright
