Training a Caffe model with DIGITS under Python 3.5 in a Docker container
*********
The base image used here is nvcr.io/nvidia/digits:18.06 (about 6.04 GB), which can be pulled from NVIDIA's official registry.
Container configuration:
CUDA: 9.0
cuDNN: 7.0
Note: this document assumes you are already comfortable using the Python 2.7 version of DIGITS.
CUDA 9 is used because tensorflow_hub is needed and the versions have to be compatible:
tensorflow-gpu==1.12.0
tensorflow-hub==0.5.0
The image ships with both Python 3.5 and Python 2.7; Python 3.5 is used directly.
Change the system default so that python launches Python 3:
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 100
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 150
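A quick way to confirm the switch took effect, for example:
python --version                       # should now report Python 3.5.x
update-alternatives --display python   # lists the registered alternatives and the current choice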
*********
I. Compile and install Caffe
Download the Caffe source from GitHub: https://github.com/BVLC/caffe.git
[For CUDA and cuDNN, follow the corresponding installation guides; they are not covered here.]
Enter the caffe directory.
1. Install the dependencies:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libopenblas-dev liblapack-dev libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
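A quick sanity check that the key build dependencies landed (package names as installed above):
protoc --version                                                  # the protobuf compiler should print its version
dpkg -s libboost-all-dev libhdf5-serial-dev | grep -E '^Package|^Status'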
2. Edit Makefile.config:
sudo cp Makefile.config.example Makefile.config
Adjust the contents of Makefile.config to your needs; my modified version is as follows:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
        -gencode arch=compute_35,code=sm_35 \
        -gencode arch=compute_50,code=sm_50 \
        -gencode arch=compute_52,code=sm_52 \
        -gencode arch=compute_60,code=sm_60 \
        -gencode arch=compute_61,code=sm_61 \
        -gencode arch=compute_61,code=compute_61
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
# /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.5m
PYTHON_INCLUDE := /usr/include/python3.5 \
        /usr/include/ \
        /usr/lib/python3.5/dist-packages/numpy/core/include
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib \
        /usr/lib/python3.5 \
        /usr/local/lib/python3.5
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= @
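If your Python or numpy headers live somewhere other than the paths listed in PYTHON_INCLUDE above, the correct locations can be printed from the interpreter itself instead of guessed (the exact output will vary per environment):
python3 -c "import sysconfig; print(sysconfig.get_paths()['include'])"   # directory containing Python.h
python3 -c "import numpy; print(numpy.get_include())"                    # directory containing numpy/arrayobject.h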
3. Edit the Makefile:
① Around line 427, change:
NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
to:
NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
② Around line 182, change:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
to:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
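If you prefer to script edit ②, a sed one-liner works as long as the LIBRARIES line matches exactly what is shown above; it keeps a backup so you can check the diff afterwards:
sed -i.bak 's/ hdf5_hl hdf5$/ hdf5_serial_hl hdf5_serial/' Makefile
diff Makefile.bak Makefile   # should show only the hdf5 library rename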
4. Build:
① make all
② make runtest
③ make pycaffe
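On a multi-core machine the same steps usually go much faster with parallel jobs, for example:
make all -j"$(nproc)"
make runtest
make pycaffe -j"$(nproc)"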
Note: if no errors are thrown along the way, Caffe for Python 3.5 has been built and installed successfully; if an exception does come up, search for a fix based on the error message.
Here are a few errors I hit, along with the fixes that worked for me:
Error: fatal error: pyconfig.h: No such file or directory
Fix: export CPLUS_INCLUDE_PATH=/usr/include/python2.7
Error: /usr/bin/ld: cannot find -lboost_python3
Fix: cd /usr/lib/x86_64-linux-gnu
sudo ln -s libboost_python-py35.so libboost_python3.so
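Two quick checks related to the errors above: CPLUS_INCLUDE_PATH should point at a directory that actually contains pyconfig.h, and the new boost_python3 symlink should resolve:
find /usr/include -name pyconfig.h                   # directories that provide pyconfig.h on this system
ls -l /usr/lib/x86_64-linux-gnu/libboost_python*     # libboost_python3.so should now appear as a symlink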
5. Use Caffe:
Start the Python interpreter: python
import caffe
Error: ImportError: dynamic module does not define module export function (PyInit__caffe)
Fix: add the path of the compiled caffe to the environment: export PYTHONPATH=/opt/caffe/python/:$PYTHONPATH
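With the path exported, a one-liner is enough to confirm that the Python 3.5 interpreter picks up the freshly built module (the /opt/caffe location matches the example above; adjust if your checkout lives elsewhere):
export PYTHONPATH=/opt/caffe/python/:$PYTHONPATH
python -c "import caffe; print(caffe.__file__)"   # should point into /opt/caffe/python/caffe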
II. Install DIGITS, repository: https://github.com/NVIDIA/DIGITS.git
1. NVIDIA has not released a Python 3 version of DIGITS, so you have to port the Python 2 code to Python 3 yourself.
Syntax-level changes can be handled with the conversion script that ships with Python 3 (it is easy to find, e.g. Python 3.5/Tools/scripts/2to3.py).
2. After running the 2to3 script, the code still contains many places that are not Python 3 compatible; fix them one by one (see the sketch after this list).
3. Install the Python dependencies for DIGITS: pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
4. Once the code changes are done, the last setup step is to symlink the caffe executable into /usr/local/bin/:
ln -s <source> <target>
Example: ln -s /opt/caffe/build/tools/caffe /usr/local/bin/caffe
(This puts the caffe binary you built under /opt onto the system PATH.)
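As referenced in steps 2 and 4 above, a rough sketch of the port-and-verify round trip, assuming the DIGITS checkout lives at /opt/digits (a hypothetical path) and the standalone 2to3 command is available:
cd /opt/digits        # hypothetical checkout location; adjust to your path
2to3 -w -n .          # -w rewrites files in place, -n skips the .bak backups; remaining issues still need manual fixes
which caffe           # should print /usr/local/bin/caffe after the symlink from step 4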
III. Start DIGITS and train a model with Caffe
1. Enter the DIGITS directory and start the service:
python -m digits
2. Open the service in a browser, create a dataset, then create a model.
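Once the server is up, a quick command-line check confirms the web service is reachable; this sketch assumes the devserver is listening on its default port 5000 (adjust if you start it differently) and uses the /index.json REST endpoint, which lists existing datasets and models:
curl http://localhost:5000/index.json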
PS: if you have experience extending DIGITS, feel free to get in touch; I am currently working mainly on tensorflow_hub, tensorflow_pb, tensorflow and tensorRT related features.