Reference: https://haoyu.love/blog404.html

Getting and modifying the code

First, we need to fetch the source code:

git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git

Eliminating a compile error

Find the following two files:

$FRCN_ROOT/caffe-fast-rcnn/src/caffe/test/test_smooth_L1_loss_layer.cpp
$FRCN_ROOT/caffe-fast-rcnn/src/caffe/test/test_roi_pooling_layer.cpp

Remove the line at the very top:

typedef ::testing::Types<GPUDevice<float>, GPUDevice<double> > TestDtypesGPU;

and change every occurrence of TestDtypesGPU in those files to TestDtypesAndDevices.

In addition, in $FRCN_ROOT/caffe-fast-rcnn/src/caffe/test/test_smooth_L1_loss_layer.cpp we also need to remove the following line before it will compile:

#include "caffe/vision_layers.hpp"

Fixing a few typos

  1. In $FRCN_ROOT/lib/fast_rcnn/train.py, add import google.protobuf.text_format (although I suspect it isn't strictly needed). The remaining items all follow the same "cast to int" pattern, sketched after this list.
  2. $FRCN_ROOT/lib/roi_data_layer/minibatch.py, around line 25: fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image).astype(np.int)
  3. $FRCN_ROOT/lib/datasets/ds_utils.py, around line 12: hashes = np.round(boxes * scale).dot(v).astype(np.int)
  4. $FRCN_ROOT/lib/fast_rcnn/test.py, around line 129: hashes = np.round(blobs['rois'] * cfg.DEDUP_BOXES).dot(v).astype(np.int)
  5. In $FRCN_ROOT/lib/rpn/proposal_target_layer.py:
    1. Around line 60: fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image).astype(np.int)
    2. Around line 124: cls = int(clss[ind])
    3. Around line 166: size=int(fg_rois_per_this_image)
    4. Around line 177: size=int(bg_rois_per_this_image)
    5. Around line 184: labels[int(fg_rois_per_this_image):] = 0
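
Items 2-5 all address the same NumPy pitfall: np.round returns floats, and recent NumPy releases refuse to use floats as array sizes or indices, so the values have to be cast to integers first. A minimal illustration of the pattern, using made-up numbers rather than code from the repository:

import numpy as np

rois_per_image = 128
fg_fraction = 0.25        # plays the role of cfg.TRAIN.FG_FRACTION

# np.round gives a float (32.0); sizing or indexing with it raises an error
fg_rois_per_image = int(np.round(fg_fraction * rois_per_image))

labels = np.ones(rois_per_image)
labels[fg_rois_per_image:] = 0   # works because the count is now an int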

Adding CPU support

Because the network is fairly large, rbg never really intended for you to run it on a CPU. Still, for completeness, let's add CPU support.

Several CPU implementations can be found among the repository's pull requests. After testing, this version can be used directly. Several other versions, such as this one, require setting base_lr extremely low and are particularly hard to train.

If you want to use pure CPU

This is an odd requirement... yes... and a particularly troublesome one. In other words, we have to strip out some of the GPU code. In $FRCN_ROOT/lib/setup.py, comment out:

CUDA = locate_cuda()

self.set_executable('compiler_so', CUDA['nvcc'])

Extension('nms.gpu_nms',
        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
        library_dirs=[CUDA['lib64']],
        libraries=['cudart'],
        language='c++',
        runtime_library_dirs=[CUDA['lib64']],
        # this syntax is specific to this build system
        # we're only going to use certain compiler args with nvcc and not with
        # gcc the implementation of this trick is in customize_compiler() below
        extra_compile_args={'gcc': ["-Wno-unused-function"],
                            'nvcc': ['-arch=sm_35',
                                     '--ptxas-options=-v',
                                     '-c',
                                     '--compiler-options',
                                     "'-fPIC'"]},
        include_dirs = [numpy_include, CUDA['include']]
),
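
For orientation, here is a hedged sketch of roughly what the CPU-only ext_modules list in setup.py ends up looking like once the GPU entry is gone (Extension and numpy_include are defined earlier in the stock setup.py; the exact flags may differ in your copy):

# sketch: ext_modules with only the CPU extensions left
ext_modules = [
    Extension(
        "utils.cython_bbox",
        ["utils/bbox.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs=[numpy_include],
    ),
    Extension(
        "nms.cpu_nms",
        ["nms/cpu_nms.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs=[numpy_include],
    ),
]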

In $FRCN_ROOT/lib/fast_rcnn/config.py, change __C.USE_GPU_NMS = True to False.

Replace $FRCN_ROOT/lib/fast_rcnn/nms_wrapper.py with the following code:

# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------

from fast_rcnn.config import cfg


def nms(dets, thresh, force_cpu=False):
    """Dispatch to either CPU or GPU NMS implementations."""
    if dets.shape[0] == 0:
        return []
    if cfg.USE_GPU_NMS and not force_cpu:
        from nms.gpu_nms import gpu_nms
        return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)
    else:
        import pyximport
        pyximport.install()

        from nms.cpu_nms import cpu_nms
        return cpu_nms(dets, thresh)
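
Once lib has been rebuilt (cd $FRCN_ROOT/lib && make), the dispatcher can be sanity-checked on a tiny example. This is my own snippet, not part of the original post; boxes are in [x1, y1, x2, y2, score] format and must be float32 for the Cython CPU kernel:

import numpy as np
from fast_rcnn.nms_wrapper import nms

# two heavily overlapping boxes plus one separate box: [x1, y1, x2, y2, score]
dets = np.array([[10, 10, 50, 50, 0.9],
                 [12, 12, 52, 52, 0.8],
                 [100, 100, 140, 140, 0.7]], dtype=np.float32)

keep = nms(dets, 0.3, force_cpu=True)
print(keep)   # indices kept after NMS: the higher-scoring of the overlapping pair, plus the lone box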

One more small trick here: open the following files:

$FRCN_ROOT/tools/test_net.py
$FRCN_ROOT/tools/train_net.py

Find:

caffe.set_mode_gpu()
caffe.set_device(args.gpu_id)

and change it to:

if args.gpu_id >= 0:
    caffe.set_mode_gpu()
    caffe.set_device(args.gpu_id)
else:
    caffe.set_mode_cpu()

Open:

$FRCN_ROOT/tools/train_faster_rcnn_alt_opt.py

Find:

caffe.set_mode_gpu()
caffe.set_device(cfg.GPU_ID)

and change it to:

if cfg.GPU_ID >= 0:
    caffe.set_mode_gpu()
    caffe.set_device(cfg.GPU_ID)
else:
    caffe.set_mode_cpu()

Since GPU_ID is a required argument, with this change we only need to pass a negative GPU_ID to run on pure CPU, which keeps the amount of modified code small. (I do know that in some conventions "-1 means all".)
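
As a side note, if you also want CPU to be the default when --gpu is omitted altogether, the argparse option in tools/train_net.py and tools/test_net.py can simply be given a negative default. The snippet below is my own hedged sketch; double-check the actual option names in your copy of the scripts:

import argparse

parser = argparse.ArgumentParser(description='Train a Fast R-CNN network')
# a negative gpu_id now means "run on CPU" with the dispatch code shown above
parser.add_argument('--gpu', dest='gpu_id',
                    help='GPU device id to use (negative value = CPU)',
                    default=-1, type=int)

args = parser.parse_args([])   # no --gpu given
print(args.gpu_id)             # -1, so the modified scripts call caffe.set_mode_cpu()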

Let's compile it!

The pit that is compiling Caffe has already been climbed out of; just follow those notes to build it. A few things to watch out for here:

  1. You must enable WITH_PYTHON_LAYER := 1; several of py-faster-rcnn's layers are written in Python, and without it things will definitely break.
  2. Use Python 2, not Python 3.
  3. If you haven't upgraded Caffe, do not use CUDA 8.0.
  4. If you do build with GPU support, you must enable USE_CUDNN := 1; otherwise, no matter how much GPU memory you have, you'll hit an "out of memory" error.

In $FRCN_ROOT/caffe-fast-rcnn/Makefile.config, make the following changes:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /usr/include/atlas
BLAS_LIB := /usr/lib/atlas-base
LIBRARIES += glog gflags protobuf leveldb snappy lmdb boost_system hdf5_hl hdf5 m opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#		$(ANACONDA_HOME)/include/python2.7 \
#		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#		/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

The key point: use Python 2.7. With 3.5 you will get all kinds of errors.

Next comes make. Run:

make all -j4
make test
make runtest -j4
make pycaffe

This tests the network configuration (as long as it runs to completion, everything is fine).

If you run into problems, see these notes on generating the Caffe network:

Configuring Caffe (CPU only) on Ubuntu 16.04 (Ubuntu16.04下配置caffe(仅CPU))

as well as

NOT_IMPLEMENTED

Next, add the following to .bashrc:

export rcnn_path=/home/pis/py-faster-rcnn/caffe-fast-rcnn
export PYTHONPATH=$rcnn_path/python:$PYTHONPATH

And of course, run source .bashrc afterwards.
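
A quick way to confirm that PYTHONPATH now points at the freshly built pycaffe (my own check, not part of the original steps):

# run in a Python 2.7 interpreter, from any directory, after `source ~/.bashrc`
import caffe
print(caffe.__file__)    # should be under .../py-faster-rcnn/caffe-fast-rcnn/python/caffe/
caffe.set_mode_cpu()     # should not raise on a CPU-only build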

Running the test demo

This step is a must! It verifies everything we did above. First, download the pre-trained models:

./data/scripts/fetch_faster_rcnn_models.sh

That script probably won't be able to download them directly, so I've put the models on Baidu Cloud:

Link: https://pan.baidu.com/s/1Vab0mSvSCyxWFeSjkFW1_A  Password: r4ty

Then:

./tools/demo.py --cpu

On pure CPU (--cpu), results should come out in under five minutes... well...
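
To run detection on your own image rather than the bundled demo pictures, the core of tools/demo.py can be reused directly. A hedged sketch, assuming the VGG16 alt-opt model downloaded above and the default repository paths; adjust them to your setup:

import cv2
import caffe
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect

cfg.TEST.HAS_RPN = True    # the demo uses RPN-generated proposals
caffe.set_mode_cpu()

prototxt = 'models/pascal_voc/VGG16/faster_rcnn_alt_opt/faster_rcnn_test.pt'
caffemodel = 'data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel'
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

im = cv2.imread('data/demo/000456.jpg')   # any BGR image works here
scores, boxes = im_detect(net, im)
print(scores.shape)   # (R, 21) class scores for the 21 PASCAL VOC classes
print(boxes.shape)    # (R, 84) per-class box coordinates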

That's this stage done. See you in the next one!
