py-faster-rcnn: modifying the training parameters (repost)
Faster R-CNN ships with three default network models: ZF (small), VGG_CNN_M_1024 (medium), and VGG16 (large).
The training images here are 500*500 with a single object class.
I. Modifying the VGG_CNN_M_1024 model configuration files
4) The file to change for testing is faster_rcnn_test.pt (see the sketch after these two items):
change num_output of the cls_score layer from 21 to 2;
change num_output of the bbox_pred layer from 84 to 8;
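These two values follow directly from the class count. Below is a minimal Python sketch (not part of the repository; the variable names are made up for illustration) showing why 21 becomes 2 and 84 becomes 8 when there is a single object class:

# Hypothetical helper values: derive the two num_output settings from the class count.
# Faster R-CNN counts the background as an extra class, so 1 object class -> 2 classes.
num_object_classes = 1
num_classes = num_object_classes + 1        # + background
cls_score_num_output = num_classes          # 21 -> 2
bbox_pred_num_output = 4 * num_classes      # 84 -> 8 (4 box coordinates per class)
print(cls_score_num_output, bbox_pred_num_output)  # 2 8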
II. Walking through the training/testing configuration file config.py
import os
import os.path as osp
import numpy as np
# `pip install easydict` if you don't have it
from easydict import EasyDict as edict

__C = edict()
# Consumers can get config by:
#   from fast_rcnn_config import cfg
# (this is the import you add in other files; see train_net.py for an example)
cfg = __C

#
# Training options
#

__C.TRAIN = edict()

# Scales to use during training (can list multiple scales)
# Each scale is the pixel size of an image's shortest side
__C.TRAIN.SCALES = (600,)

# Max pixel size of the longest side of a scaled input image
__C.TRAIN.MAX_SIZE = 1000

# Images to use per minibatch (two images per minibatch)
__C.TRAIN.IMS_PER_BATCH = 2

# Minibatch size (number of regions of interest [ROIs])
__C.TRAIN.BATCH_SIZE = 128

# Fraction of minibatch that is labeled foreground (i.e. class > 0)
__C.TRAIN.FG_FRACTION = 0.25

# Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)
__C.TRAIN.FG_THRESH = 0.5

# Overlap threshold for a ROI to be considered background (class = 0 if
# overlap in [LO, HI)), i.e. an overlap with ground truth in [0.1, 0.5)
# marks the ROI as a background sample
__C.TRAIN.BG_THRESH_HI = 0.5
__C.TRAIN.BG_THRESH_LO = 0.1

# Use horizontally-flipped images during training? (data augmentation)
__C.TRAIN.USE_FLIPPED = True

# Train bounding-box regressors
__C.TRAIN.BBOX_REG = True

# Overlap required between a ROI and ground-truth box in order for that ROI to
# be used as a bounding-box regression training example
__C.TRAIN.BBOX_THRESH = 0.5

# Iterations between snapshots
__C.TRAIN.SNAPSHOT_ITERS = 10000

# solver.prototxt specifies the snapshot path prefix, this adds an optional
# infix to yield the path: <prefix>[_<infix>]_iters_XYZ.caffemodel
__C.TRAIN.SNAPSHOT_INFIX = ''

# Use a prefetch thread in roi_data_layer.layer
# So far I haven't found this useful; likely more engineering work is required
__C.TRAIN.USE_PREFETCH = False

# Normalize the targets (subtract empirical mean, divide by empirical stddev)
__C.TRAIN.BBOX_NORMALIZE_TARGETS = True
# Deprecated (inside weights)
__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
# Normalize the targets using "precomputed" (or made up) means and stdevs
# (BBOX_NORMALIZE_TARGETS must also be True)
__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = False
__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)
__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)

# Train using these proposals
# (note: 'selective_search' comes from Fast R-CNN; the RPN options appear below)
__C.TRAIN.PROPOSAL_METHOD = 'selective_search'

# Make minibatches from images that have similar aspect ratios (i.e. both
# tall and thin or both short and wide) in order to avoid wasting computation
# on zero-padding.
__C.TRAIN.ASPECT_GROUPING = True

# Use RPN to detect objects
__C.TRAIN.HAS_RPN = False
# IOU >= thresh: positive example (RPN positive-sample threshold)
__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7
# IOU < thresh: negative example (RPN negative-sample threshold)
__C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3
# If an anchor satisfies both the positive and negative conditions, set it to
# negative (this case should rarely occur)
__C.TRAIN.RPN_CLOBBER_POSITIVES = False
# Max number of foreground examples (fraction of the RPN batch)
__C.TRAIN.RPN_FG_FRACTION = 0.5
# Total number of examples (RPN batch size)
__C.TRAIN.RPN_BATCHSIZE = 256
# NMS threshold used on RPN proposals
__C.TRAIN.RPN_NMS_THRESH = 0.7
# Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TRAIN.RPN_PRE_NMS_TOP_N = 12000
# Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TRAIN.RPN_POST_NMS_TOP_N = 2000
# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig
# image scale); anything smaller maps to less than one pixel on conv5
__C.TRAIN.RPN_MIN_SIZE = 16
# Deprecated (outside weights)
__C.TRAIN.RPN_BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
# Give the positive RPN examples weight of p * 1 / {num positives}
# and give negatives a weight of (1 - p)
# Set to -1.0 to use uniform example weighting (positives and negatives weighted equally)
__C.TRAIN.RPN_POSITIVE_WEIGHT = -1.0
#
# Testing options (analogous to the training options above)
#

__C.TEST = edict()

# Scales to use during testing (can list multiple scales)
# Each scale is the pixel size of an image's shortest side
__C.TEST.SCALES = (600,)

# Max pixel size of the longest side of a scaled input image
__C.TEST.MAX_SIZE = 1000

# Overlap threshold used for non-maximum suppression (suppress boxes with
# IoU >= this threshold)
__C.TEST.NMS = 0.3

# Experimental: treat the (K+1) units in the cls_score layer as linear
# predictors (trained, e.g., with one-vs-rest SVMs).
# Classification no longer uses SVMs, so this stays False
__C.TEST.SVM = False

# Test using bounding-box regressors
__C.TEST.BBOX_REG = True

# Propose boxes (False = do not use RPN to generate proposals)
__C.TEST.HAS_RPN = False

# Test using these proposals (selective search generates the proposals)
__C.TEST.PROPOSAL_METHOD = 'selective_search'

## NMS threshold used on RPN proposals
__C.TEST.RPN_NMS_THRESH = 0.7
## Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TEST.RPN_PRE_NMS_TOP_N = 6000
## Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TEST.RPN_POST_NMS_TOP_N = 300
# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)
__C.TEST.RPN_MIN_SIZE = 16

#
# MISC
#

# The mapping from image coordinates to feature map coordinates might cause
# some boxes that are distinct in image space to become identical in feature
# coordinates. If DEDUP_BOXES > 0, then DEDUP_BOXES is used as the scale factor
# for identifying duplicate boxes.
# 1/16 is correct for {Alex,Caffe}Net, VGG_CNN_M_1024, and VGG16
__C.DEDUP_BOXES = 1./16.

# Pixel mean values (BGR order) as a (1, 1, 3) array
# We use the same pixel mean for all networks even though it's not exactly what
# they were trained with
__C.PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])

# For reproducibility
__C.RNG_SEED = 3

# A small number that's used many times
__C.EPS = 1e-14

# Root directory of project
__C.ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..'))

# Data directory
__C.DATA_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'data'))

# Model directory
__C.MODELS_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'models', 'pascal_voc'))

# Name (or path to) the matlab executable
__C.MATLAB = 'matlab'

# Place outputs under an experiments directory
__C.EXP_DIR = 'default'

# Use GPU implementation of non-maximum suppression
__C.USE_GPU_NMS = True

# Default GPU device id
__C.GPU_ID = 0


def get_output_dir(imdb, net=None):
    """Return the directory where experimental artifacts are placed
    (under the 'output' experiments directory).

    If the directory does not exist, it is created.

    A canonical path is built using the name from an imdb and a network
    (if not None).
    """
    outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'output', __C.EXP_DIR, imdb.name))
    if net is not None:
        outdir = osp.join(outdir, net.name)
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    return outdir


def _merge_a_into_b(a, b):
    """Merge config dictionary a into config dictionary b, clobbering the
    options in b whenever they are also specified in a.
    """
    if type(a) is not edict:
        return

    for k, v in a.iteritems():
        # a must specify keys that are in b
        if not b.has_key(k):
            raise KeyError('{} is not a valid config key'.format(k))

        # the types must match, too
        old_type = type(b[k])
        if old_type is not type(v):
            if isinstance(b[k], np.ndarray):
                v = np.array(v, dtype=b[k].dtype)
            else:
                raise ValueError(('Type mismatch ({} vs. {}) '
                                  'for config key: {}').format(type(b[k]),
                                                               type(v), k))

        # recursively merge dicts
        if type(v) is edict:
            try:
                _merge_a_into_b(a[k], b[k])
            except:
                print('Error under config key: {}'.format(k))
                raise
        else:
            # overwrite the entry in b with the value from a
            b[k] = v


def cfg_from_file(filename):
"""Load a config file and merge it into the default options."""
# 导入配置文件并与默认选项融合
import yaml
with open(filename, 'r') as f:
yaml_cfg = edict(yaml.load(f)) _merge_a_into_b(yaml_cfg, __C) def cfg_from_list(cfg_list):
# 命令行设置config
"""Set config keys via list (e.g., from command line)."""
from ast import literal_eval
assert len(cfg_list) % 2 == 0
for k, v in zip(cfg_list[0::2], cfg_list[1::2]):
key_list = k.split('.')
d = __C
for subkey in key_list[:-1]:
assert d.has_key(subkey)
d = d[subkey]
subkey = key_list[-1]
assert d.has_key(subkey)
try:
value = literal_eval(v)
except:
# handle the case when v is a string literal
value = v
assert type(value) == type(d[subkey]), \
'type {} does not match original type {}'.format(
type(value), type(d[subkey]))
d[subkey] = value
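To make the two override helpers above concrete, here is a minimal usage sketch. The import path and the YAML file name are assumptions based on the usual py-faster-rcnn layout, not something prescribed by this file:

# Hypothetical usage of the config helpers above (the paths and values are assumptions).
from fast_rcnn.config import cfg, cfg_from_file, cfg_from_list

cfg_from_file('experiments/cfgs/faster_rcnn_end2end.yml')                  # merge a YAML override file
cfg_from_list(['TRAIN.SCALES', '(600,)', 'TRAIN.SNAPSHOT_ITERS', '5000'])  # command-line style overrides
print(cfg.TRAIN.SCALES, cfg.TRAIN.SNAPSHOT_ITERS)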
III. The cache issue
Before retraining on new data, delete the following cached directories (a small removal sketch follows the list):
1) py-faster-rcnn/output
2) py-faster-rcnn/data/cache
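A minimal removal sketch, assuming it is run from the py-faster-rcnn root directory:

# Assumption: the current directory is the py-faster-rcnn root.
# Deletes stale training output and the cached imdb/roidb files before retraining.
import os
import shutil

for cache_dir in ('output', os.path.join('data', 'cache')):
    if os.path.isdir(cache_dir):
        shutil.rmtree(cache_dir)
        print('removed {}'.format(cache_dir))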
IV. Hyperparameters
py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage_fast_rcnn_solver*.pt
base_lr: 0.001
lr_policy: 'step'
stepsize: 30000
display: 20
...
A summary of what each parameter in the solver file means:
iteration: one forward-backward pass of training on a batch of data
batchsize: the number of images trained in each iteration
epoch: one epoch means every training image has passed through the network once
For example, with 1,280,000 images and batchsize = 256, one epoch takes 1280000/256 = 5000 iterations;
with max_iter = 450000, training runs for 450000/5000 = 90 epochs.
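The same arithmetic as a tiny sketch, using the numbers from the example above:

# Worked example: how many epochs the solver settings above imply.
num_images = 1280000
batchsize = 256
max_iter = 450000
iters_per_epoch = num_images // batchsize   # 5000
num_epochs = max_iter // iters_per_epoch    # 90
print(iters_per_epoch, num_epochs)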
When the learning rate decays is controlled by stepsize, and by how much it decays is controlled by gamma. For example, with stepsize = 500, base_lr = 0.01 and gamma = 0.1, the learning rate decays for the first time at iteration 500, becoming lr = lr * gamma = 0.01 * 0.1 = 0.001; the same decay then repeats every stepsize iterations. In short, stepsize is the decay interval of the learning rate and gamma is its decay factor.
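A minimal sketch of this 'step' policy, i.e. lr = base_lr * gamma^floor(iter / stepsize); the helper function name is made up for illustration:

# Hypothetical helper illustrating the 'step' learning-rate policy described above.
def step_lr(iteration, base_lr=0.01, gamma=0.1, stepsize=500):
    return base_lr * gamma ** (iteration // stepsize)

print(step_lr(0))     # 0.01
print(step_lr(500))   # 0.001 (first decay)
print(step_lr(1000))  # 0.0001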
During training the network is also tested at regular intervals; the interval is set by test_interval, e.g. test_interval = 1000 means the network is evaluated once every 1000 training iterations. How the test runs is determined by the test batch size, test_iter, and the number of test images: the test batch size is the number of images fed per test iteration, and test_iter is the number of iterations needed to cover all test images. For example, with 500 test images and test_iter = 100, the test batch size is 5. In the solver file you only need to set test_iter according to the total number of test images, plus test_interval as needed.
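The relationship as a one-line check, using the numbers from the example above:

# test_iter * test_batch_size should cover the whole test set.
num_test_images = 500
test_iter = 100
test_batch_size = num_test_images // test_iter   # 5
print(test_batch_size)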
The per-stage iteration counts are changed in py-faster-rcnn/tools/train_faster_rcnn_alt_opt.py:
max_iters = [80000, 40000, 80000, 40000]
These are, in order, the iteration counts for RPN stage 1, Fast R-CNN stage 1, RPN stage 2, and Fast R-CNN stage 2 (see the pairing sketch below).
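A tiny sketch that pairs each alternating-optimization stage with its iteration budget; the stage labels are spelled out here only for readability:

# Pair each alt-opt training stage with its max_iters entry.
max_iters = [80000, 40000, 80000, 40000]
stages = ['stage 1 RPN', 'stage 1 Fast R-CNN', 'stage 2 RPN', 'stage 2 Fast R-CNN']
for stage, iters in zip(stages, max_iters):
    print('{}: {} iterations'.format(stage, iters))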
The ImageNet-pretrained model is placed in the folder below; mine is VGG_CNN_M_1024.v2.caffemodel.

- The two algorithms for training on the data
(1) Train and test Faster R-CNN with the alternating optimization algorithm:
cd $FRCN_ROOT
./experiments/scripts/faster_rcnn_alt_opt.sh [GPU_ID] [NET] [--set ...]
# GPU_ID is the id of the GPU you want to train on
# NET is the network to train, one of: ZF, VGG_CNN_M_1024, VGG16
# --set ... lets you override fast_rcnn config options, e.g.
#   --set EXP_DIR seed_rng1701 RNG_SEED 1701
# Example command:
./experiments/scripts/faster_rcnn_alt_opt.sh 0 ZF pascal_voc
The results are written under $FRCN_ROOT/output.
(2) Approximate joint training:
cd $FRCN_ROOT
./experiments/scripts/faster_rcnn_end2end.sh [GPU_ID] [NET] [--set ...]
This method trains the RPN and the Fast R-CNN network jointly instead of alternating between them. It runs about 1.5x faster than alternating optimization with comparable accuracy, so it is the recommended approach.
To start training:
cd py-faster-rcnn
./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc
The arguments select the first GPU (id 0), the VGG_CNN_M_1024 model, and the pascal_voc (VOC2007) training data.
The trained Fast R-CNN networks are saved under:
output/<experiment directory>/<dataset name>/
Test results are saved under:
output/<experiment directory>/<dataset name>/<network snapshot name>/
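As an illustration of how these two paths are composed (all three names below are placeholders, not values produced by the scripts):

# Hypothetical example of the output layout described above.
import os.path as osp
experiment_dir = 'faster_rcnn_end2end'                   # <experiment directory> (placeholder)
dataset_name = 'voc_2007_trainval'                       # <dataset name> (placeholder)
snapshot_name = 'vgg_cnn_m_1024_faster_rcnn_iter_70000'  # <network snapshot name> (placeholder)
print(osp.join('output', experiment_dir, dataset_name))                 # trained networks
print(osp.join('output', experiment_dir, dataset_name, snapshot_name))  # test results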