VGG_19 train_vali.prototxt file
name: "VGG_ILSVRC_19_layer"
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  image_data_param {
    batch_size: 12
    source: "../../fine_tuning_data/HAT_fineTuning_data/train_data_fineTuning.txt"
    root_folder: "../../fine_tuning_data/HAT_fineTuning_data/train_data/"
  }
}
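# Validation data: same layout, read from test_data_fineTuning.txt (TEST phase, batch size 10, no mirroring).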
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
  }
  image_data_param {
    batch_size: 10
    source: "../../fine_tuning_data/HAT_fineTuning_data/test_data_fineTuning.txt"
    root_folder: "../../fine_tuning_data/HAT_fineTuning_data/test_data/"
  }
}
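# Block 1: two 3x3 (pad 1) convolutions with 64 outputs, each followed by ReLU, then 2x2 max pooling.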
layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
layer {
  bottom: "conv1_1"
  top: "conv1_2"
  name: "conv1_2"
  type: "Convolution"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_2"
  top: "conv1_2"
  name: "relu1_2"
  type: "ReLU"
}
layer {
  bottom: "conv1_2"
  top: "pool1"
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
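# Block 2: two 3x3 convolutions with 128 outputs, then 2x2 max pooling.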
layer {
  bottom: "pool1"
  top: "conv2_1"
  name: "conv2_1"
  type: "Convolution"
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv2_1"
  top: "conv2_1"
  name: "relu2_1"
  type: "ReLU"
}
layer {
  bottom: "conv2_1"
  top: "conv2_2"
  name: "conv2_2"
  type: "Convolution"
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv2_2"
  top: "conv2_2"
  name: "relu2_2"
  type: "ReLU"
}
layer {
  bottom: "conv2_2"
  top: "pool2"
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
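# Block 3: four 3x3 convolutions with 256 outputs, then 2x2 max pooling.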
layer {
  bottom: "pool2"
  top: "conv3_1"
  name: "conv3_1"
  type: "Convolution"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_1"
  top: "conv3_1"
  name: "relu3_1"
  type: "ReLU"
}
layer {
  bottom: "conv3_1"
  top: "conv3_2"
  name: "conv3_2"
  type: "Convolution"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_2"
  top: "conv3_2"
  name: "relu3_2"
  type: "ReLU"
}
layer {
  bottom: "conv3_2"
  top: "conv3_3"
  name: "conv3_3"
  type: "Convolution"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_3"
  top: "conv3_3"
  name: "relu3_3"
  type: "ReLU"
}
layer {
  bottom: "conv3_3"
  top: "conv3_4"
  name: "conv3_4"
  type: "Convolution"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_4"
  top: "conv3_4"
  name: "relu3_4"
  type: "ReLU"
}
layer {
  bottom: "conv3_4"
  top: "pool3"
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
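# Block 4: four 3x3 convolutions with 512 outputs, then 2x2 max pooling.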
layer {
  bottom: "pool3"
  top: "conv4_1"
  name: "conv4_1"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_1"
  top: "conv4_1"
  name: "relu4_1"
  type: "ReLU"
}
layer {
  bottom: "conv4_1"
  top: "conv4_2"
  name: "conv4_2"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_2"
  top: "conv4_2"
  name: "relu4_2"
  type: "ReLU"
}
layer {
  bottom: "conv4_2"
  top: "conv4_3"
  name: "conv4_3"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_3"
  top: "conv4_3"
  name: "relu4_3"
  type: "ReLU"
}
layer {
  bottom: "conv4_3"
  top: "conv4_4"
  name: "conv4_4"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_4"
  top: "conv4_4"
  name: "relu4_4"
  type: "ReLU"
}
layer {
  bottom: "conv4_4"
  top: "pool4"
  name: "pool4"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
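# Block 5: four 3x3 convolutions with 512 outputs, then 2x2 max pooling.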
layer {
  bottom: "pool4"
  top: "conv5_1"
  name: "conv5_1"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_1"
  top: "conv5_1"
  name: "relu5_1"
  type: "ReLU"
}
layer {
  bottom: "conv5_1"
  top: "conv5_2"
  name: "conv5_2"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_2"
  top: "conv5_2"
  name: "relu5_2"
  type: "ReLU"
}
layer {
  bottom: "conv5_2"
  top: "conv5_3"
  name: "conv5_3"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_3"
  top: "conv5_3"
  name: "relu5_3"
  type: "ReLU"
}
layer {
  bottom: "conv5_3"
  top: "conv5_4"
  name: "conv5_4"
  type: "Convolution"
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_4"
  top: "conv5_4"
  name: "relu5_4"
  type: "ReLU"
}
layer {
  bottom: "conv5_4"
  top: "pool5"
  name: "pool5"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
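# Classifier: fc6_ (4096) -> ReLU -> dropout, fc7 (4096) -> ReLU -> dropout, fc8_ (27 outputs).
# The trailing underscores in fc6_ and fc8_ presumably keep Caffe, which copies weights by layer
# name, from loading the pretrained VGG parameters into these layers, so they are learned from
# scratch during fine-tuning.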
layer {
  bottom: "pool5"
  top: "fc6_"
  name: "fc6_"
  type: "InnerProduct"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  bottom: "fc6_"
  top: "fc6_"
  name: "relu6"
  type: "ReLU"
}
layer {
  bottom: "fc6_"
  top: "fc6_"
  name: "drop6"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc6_"
  top: "fc7"
  name: "fc7"
  type: "InnerProduct"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "relu7"
  type: "ReLU"
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "drop7"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc7"
  top: "fc8_"
  name: "fc8_"
  type: "InnerProduct"
  inner_product_param {
    num_output: 27
  }
}
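# Output and loss: a sigmoid squashes the 27 fc8_ scores; training minimizes a Euclidean (L2)
# loss against the labels, and an Accuracy layer reports performance in the TEST phase only.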
layer {
  name: "sigmoid"
  type: "Sigmoid"
  bottom: "fc8_"
  top: "fc8_"
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8_"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "fc8_"
  bottom: "label"
  top: "loss"
}
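
This train/val definition is only half of what Caffe needs to run the fine-tuning; a solver prototxt supplies the optimization settings. The sketch below is not part of the original file: the names (train_vali.prototxt, vgg19_hat_finetune) and the learning-rate schedule and iteration counts are placeholder assumptions to adapt to your own data and hardware.

# solver.prototxt (sketch; adjust paths and hyper-parameters as needed)
net: "train_vali.prototxt"
test_iter: 100            # TEST batches per validation pass
test_interval: 500        # validate every 500 training iterations
base_lr: 0.001            # small learning rate, typical for fine-tuning
lr_policy: "step"
gamma: 0.1
stepsize: 5000
momentum: 0.9
weight_decay: 0.0005
display: 20
max_iter: 20000
snapshot: 5000
snapshot_prefix: "vgg19_hat_finetune"
solver_mode: GPU

With both files in place, training would typically be launched from the released VGG-19 weights, e.g. caffe train --solver=solver.prototxt --weights=VGG_ILSVRC_19_layers.caffemodel, where the .caffemodel argument points at your local copy of the pretrained model.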