sea@sea-X550JK:/media/sea/wsWin10/wsUbuntu16.04/DlFrames/caffe$ cat Makefile.config
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

BUILD_PYTHON:=1
BUILD_MATLAB:=1
BUILD_docs:=1
BUILD_SHARELIB:=1

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
USE_LEVELDB := 1
USE_LMDB := 1

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas

# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /usr/include
BLAS_LIB := /usr/lib
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
MATLAB_DIR := /usr/local/MATLAB/R2017b
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# PYTHON_LIB := /usr/lib/x86_64-linux-gnu/libpython2.7.so

# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#		$(ANACONDA_HOME)/include/python2.7 \
#		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#		/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib /usr/local/lib /usr/lib/x86_64-linux-gnu/
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

# Ubuntu 16.04 HDF5 paths
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
INCLUDE_DIRS := $(INCLUDE_DIRS) /usr/local/include /usr/include/hdf5/serial/
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
LIBRARY_DIRS := $(LIBRARY_DIRS) /usr/lib/x86_64-linux-gnu/hdf5/serial
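With this Makefile.config in place, a typical build and verification sequence looks like the sketch below; the -j8 job count is an assumption (adjust it to your CPU), and the pycaffe/matcaffe steps only apply because the Python settings and MATLAB_DIR are enabled above.

cd /media/sea/wsWin10/wsUbuntu16.04/DlFrames/caffe
make clean
make all -j8       # libcaffe plus the command-line tools
make test -j8      # build the unit tests
make runtest       # run them on the GPU given by TEST_GPUID
make pycaffe       # Python interface (PYTHON_INCLUDE / WITH_PYTHON_LAYER above)
make matcaffe      # MATLAB interface (MATLAB_DIR above)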

Makefile.config for Anaconda2 (Python 2.7):

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

BUILD_PYTHON:=1
BUILD_MATLAB:=0
BUILD_docs:=1
BUILD_SHARELIB:=1

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
USE_LEVELDB := 1
USE_LMDB := 1

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas

# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /usr/include
BLAS_LIB := /usr/lib
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local/MATLAB/R2016b
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
#		/usr/lib/python2.7/dist-packages/numpy/core/include
# PYTHON_INCLUDE := /home/whale/anaconda2/include \
#		/home/whale/anaconda2/include/python2.7 \
#		/home/whale/anaconda2/lib/python2.7/site-packages/numpy/core/include
# PYTHON_LIB := /usr/lib/x86_64-linux-gnu/libpython2.7.so

# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
HOME:=/home/whale
ANACONDA_HOME := $(HOME)/anaconda2
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		$(ANACONDA_HOME)/include/python2.7 \
		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#		/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
## PYTHON_LIB := /usr/lib /usr/local/lib /usr/lib/x86_64-linux-gnu/
## PYTHON_LIB := /home/sea/anaconda2/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

# Ubuntu 16.04 HDF5 paths
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
INCLUDE_DIRS := $(PYTHON_INCLUDE) $(INCLUDE_DIRS) /usr/local/include /usr/include/hdf5/serial/
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
LIBRARY_DIRS := $(LIBRARY_DIRS) /usr/lib/x86_64-linux-gnu/hdf5/serial
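After make pycaffe succeeds with this Anaconda2 config, the module can be sanity-checked from the Anaconda2 interpreter. A minimal sketch, assuming the caffe checkout from the prompt above; the exported paths simply mirror ANACONDA_HOME in the config:

export PYTHONPATH=/media/sea/wsWin10/wsUbuntu16.04/DlFrames/caffe/python:$PYTHONPATH
export LD_LIBRARY_PATH=/home/whale/anaconda2/lib:$LD_LIBRARY_PATH
/home/whale/anaconda2/bin/python -c "import caffe; print(caffe.__file__)"   # should point into the caffe/python directory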

As of now, Python 3.6 does not work with the 2017 Caffe source; for lack of time, I am not going to test the latest code.
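Before building against Anaconda3 it is therefore worth confirming that the interpreter really is 3.5 and that the headers the config points at exist. A quick check, using the ANACONDA_HOME path from the config below:

/media/sea/wsWin10/Ubun/env/py35/anaconda3/bin/python --version    # expect Python 3.5.x, not 3.6
ls /media/sea/wsWin10/Ubun/env/py35/anaconda3/include/python3.5m/Python.h
ls /media/sea/wsWin10/Ubun/env/py35/anaconda3/lib/python3.5/site-packages/numpy/core/include/numpy/arrayobject.h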

Below is the configuration file for Python 3.5:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

BUILD_PYTHON:=1
BUILD_MATLAB:=0
BUILD_docs:=1
BUILD_SHARELIB:=1

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
USE_LEVELDB := 1
USE_LMDB := 1

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas

# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /usr/include
BLAS_LIB := /usr/lib
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local/MATLAB/R2016b
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
#		/usr/lib/python2.7/dist-packages/numpy/core/include
# PYTHON_INCLUDE := /home/whale/anaconda3/include \
#		/home/whale/anaconda3/include/python2.7 \
#		/home/whale/anaconda3/lib/python2.7/site-packages/numpy/core/include
# PYTHON_LIB := /usr/lib/x86_64-linux-gnu/libpython2.7.so

# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
HOME:=/home/sea
ANACONDA_HOME := /media/sea/wsWin10/Ubun/env/py35/anaconda3
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		$(ANACONDA_HOME)/include/python3.5m \
		$(ANACONDA_HOME)/lib/python3.5/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#		/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
## PYTHON_LIB := /usr/lib /usr/local/lib /usr/lib/x86_64-linux-gnu/
## PYTHON_LIB := /home/sea/anaconda3/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

# Ubuntu 16.04 HDF5 paths
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
INCLUDE_DIRS := $(PYTHON_INCLUDE) $(INCLUDE_DIRS) /usr/local/include /usr/include/hdf5/serial/
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
LIBRARY_DIRS := $(LIBRARY_DIRS) /usr/lib/x86_64-linux-gnu/hdf5/serial
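A rebuild and import check for the Python 3.5 interface, as a sketch along the same lines as the Python 2 case; the paths mirror ANACONDA_HOME above and the checkout from the prompt at the top. Note that for a Python 3 build the commented-out "PYTHON_LIBRARIES := boost_python3 python3.5m" line usually has to be enabled as well (with a library name matching your Boost installation), since otherwise the Makefile falls back to linking boost_python and python2.7.

make clean
make all -j8 && make pycaffe
export PYTHONPATH=/media/sea/wsWin10/wsUbuntu16.04/DlFrames/caffe/python:$PYTHONPATH
export LD_LIBRARY_PATH=/media/sea/wsWin10/Ubun/env/py35/anaconda3/lib:$LD_LIBRARY_PATH
/media/sea/wsWin10/Ubun/env/py35/anaconda3/bin/python -c "import caffe; print(caffe.__file__)"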
