Training Caffe models with DIGITS under Python 3.5 in a Docker container
*********
The base image used here is nvcr.io/nvidia/digits:18.06, about 6.04 GB, which can be pulled from NVIDIA's official registry.
Container configuration:
CUDA: 9.0
cuDNN: 7.0
Note: this document assumes you are already familiar with using the Python 2.7 version of DIGITS.
CUDA 9 is used because tensorflow_hub is also needed, and the versions must be mutually compatible (a quick check is sketched after the pins below):
tensorflow-gpu==1.12.0
tensorflow-hub==0.5.0
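The following minimal sketch verifies that the pinned TensorFlow build can see the GPU inside the container; it only assumes the two packages pinned above are installed and is not specific to DIGITS.
# Sanity check for tensorflow-gpu==1.12.0 and tensorflow-hub==0.5.0.
import tensorflow as tf
import tensorflow_hub as hub
print("TensorFlow:", tf.__version__)        # expect 1.12.0
print("tensorflow_hub:", hub.__version__)   # expect 0.5.0
# TF 1.x API: True only if a CUDA-capable GPU is visible to the runtime.
print("GPU available:", tf.test.is_gpu_available())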
The image ships with both Python 3.5 and Python 2.7; we use Python 3.5 directly.
Change the system default python so that python3 is launched by default:
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 100
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 150
*********
I. Build and install Caffe
Download the Caffe source code from GitHub and prepare to build it: https://github.com/BVLC/caffe.git
[For CUDA and cuDNN, please look up the corresponding installation tutorials; they are skipped here.]
Enter the caffe directory.
1. Install the dependencies:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libopenblas-dev liblapack-dev libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
2. Edit Makefile.config:
sudo cp Makefile.config.example Makefile.config
Adjust the contents of Makefile.config to your needs; my modified file is shown below:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_60,code=sm_60 \
             -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_61,code=compute_61
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
# /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.5m
PYTHON_INCLUDE := /usr/include/python3.5 \
                  /usr/include/ \
                  /usr/lib/python3.5/dist-packages/numpy/core/include
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib \
              /usr/lib/python3.5 \
              /usr/local/lib/python3.5
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= @
3. Edit the Makefile:
① Around line 427, change:
NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
to:
NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
② Around line 182, change:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
to:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
4. Build:
① make all
② make runtest
③ make pycaffe
Note: if no errors occur along the way, Caffe has been built and installed successfully for Python 3.5. If an exception is thrown, search for a solution based on the error message.
Here are a few errors I ran into, together with the fixes that worked for me:
Error: fatal error: pyconfig.h: No such file or directory
Fix: export CPLUS_INCLUDE_PATH=/usr/include/python2.7
Error: /usr/bin/ld: cannot find -lboost_python3
Fix: cd /usr/lib/x86_64-linux-gnu
sudo ln -s libboost_python-py35.so libboost_python3.so
5. Use Caffe:
Start the Python interpreter: python
import caffe
Error: ImportError: dynamic module does not define module export function (PyInit__caffe)
Fix: add the freshly built Caffe to the Python path: export PYTHONPATH=/opt/caffe/python/:$PYTHONPATH (a quick verification sketch follows).
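To confirm that python3 now picks up the newly built pycaffe rather than the Python 2.7 copy pre-installed in the image, the minimal sketch below can be run; the /opt/caffe path simply follows the example above and should be adjusted to wherever you cloned Caffe.
# Verify that the Python 3.5 build of pycaffe is the one being imported.
import sys
sys.path.insert(0, "/opt/caffe/python")  # same path as the PYTHONPATH export above
import caffe
print("caffe loaded from:", caffe.__file__)  # should point under /opt/caffe/python
caffe.set_mode_gpu()   # switch to GPU mode; use caffe.set_mode_cpu() on a CPU-only box
caffe.set_device(0)    # select GPU 0, matching TEST_GPUID in Makefile.config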
II. Install DIGITS: https://github.com/NVIDIA/DIGITS.git
1. DIGITS has no official Python 3 release, so you have to port the Python 2 code to Python 3 yourself.
Syntax-level issues can be handled with the conversion script that ships with Python 3 (the script is easy to locate; search for it if you cannot find it): Python 3.5's Tools/scripts/2to3.py.
2. After running the 2to3 script, the code still contains many Python 3 incompatibilities; fix them one by one (typical cases are sketched below).
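As a purely illustrative sketch of what 2to3 converts automatically versus what has to be patched by hand, consider the generic snippet below; none of the names in it come from the actual DIGITS source.
# 2to3 rewrites syntax such as:
#   print "msg"                  ->  print("msg")
#   for k, v in d.iteritems():   ->  for k, v in d.items():
#   except Exception, e:         ->  except Exception as e:
# It does NOT fix bytes/str mixing, which must be patched manually:
data = "label,count\n".encode("utf-8")    # bytes, as returned by sockets and subprocess
text = data.decode("utf-8")               # decode explicitly before any string handling
# Nor does it fix integer-division semantics:
total_images, batch_size = 1000, 64
num_batches = total_images // batch_size  # "/" now yields a float in Python 3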
3. Install DIGITS's Python dependencies: pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
4. Once the code changes are done, there is one last step: symlink the caffe executable into /usr/local/bin/:
ln -s <source path> <link path>
Example: ln -s /opt/caffe/build/tools/caffe /usr/local/bin/caffe
(This links the caffe binary you built under /opt into a directory on the system PATH.)
III. Start DIGITS and train a model with Caffe
1. Enter the DIGITS directory and start the server:
python -m digits
2. Open the service in a browser, create a dataset, and then create a model. (A small scripted check of the running server is sketched below.)
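If you prefer to confirm the server from the command line rather than the browser, the following hedged sketch polls it over HTTP; it assumes the default port 5000 and the /index.json endpoint of DIGITS's REST API, either of which may differ in a locally ported version.
# Poll the local DIGITS server and list its datasets and models.
import json
import urllib.request
with urllib.request.urlopen("http://localhost:5000/index.json") as resp:
    info = json.loads(resp.read().decode("utf-8"))
print("datasets:", [d.get("name") for d in info.get("datasets", [])])
print("models:", [m.get("name") for m in info.get("models", [])])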
PS: If you have experience extending DIGITS, feel free to get in touch; I am currently working mainly on features related to tensorflow_hub, tensorflow_pb, tensorflow, and TensorRT.