A brief look at the difference between train_val.prototxt and deploy.prototxt in Caffe

Both files describe the same CaffeNet architecture: train_val.prototxt is the definition the solver reads during training and validation, while deploy.prototxt is the trimmed-down version used for inference. First, the train_val.prototxt:
| name: "CaffeNet" layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "data/ilsvrc12/imagenet_mean.binaryproto" } # mean pixel / channel-wise mean instead of mean image # transform_param { # crop_size: 227 # mean_value: 104 # mean_value: 117 # mean_value: 123 # mirror: true # } data_param { source: "examples/imagenet/ilsvrc12_train_lmdb" batch_size: 256 backend: LMDB } } layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TEST } transform_param { mirror: false crop_size: 227 mean_file: "data/ilsvrc12/imagenet_mean.binaryproto" } # mean pixel / channel-wise mean instead of mean image # transform_param { # crop_size: 227 # mean_value: 104 # mean_value: 117 # mean_value: 123 # mirror: false # } data_param { source: "examples/imagenet/ilsvrc12_val_lmdb" batch_size: 50 backend: LMDB } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { 
type: "constant" value: 1 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 1000 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "accuracy" type: "Accuracy" bottom: "fc8" bottom: "label" top: "accuracy" include { phase: TEST } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" } |
name: "CaffeNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
convolution_param {
num_output: 96
kernel_size: 11
stride: 4
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "norm1"
type: "LRN"
bottom: "pool1"
top: "norm1"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "norm1"
top: "conv2"
convolution_param {
num_output: 256
pad: 2
kernel_size: 5
group: 2
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "norm2"
type: "LRN"
bottom: "pool2"
top: "norm2"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "norm2"
top: "conv3"
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "conv4"
type: "Convolution"
bottom: "conv3"
top: "conv4"
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
group: 2
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "conv5"
type: "Convolution"
bottom: "conv4"
top: "conv5"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
group: 2
}
}
layer {
name: "relu5"
type: "ReLU"
bottom: "conv5"
top: "conv5"
}
layer {
name: "pool5"
type: "Pooling"
bottom: "conv5"
top: "pool5"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "pool5"
top: "fc6"
inner_product_param {
num_output: 4096
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
inner_product_param {
num_output: 4096
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7"
top: "fc7"
}
layer {
name: "drop7"
type: "Dropout"
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc8"
type: "InnerProduct"
bottom: "fc7"
top: "fc8"
inner_product_param {
num_output: 1000
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "fc8"
top: "prob"
}
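Comparing the two files, the differences come down to three points. First, the TRAIN/TEST Data layers (LMDB sources, mean subtraction, cropping, mirroring) are replaced in deploy.prototxt by a single Input layer that only declares the input blob shape (10 x 3 x 227 x 227). Second, everything that only matters for learning is dropped: the param blocks with lr_mult/decay_mult and the weight_filler/bias_filler initializers, since deployment only runs the forward pass on already-trained weights. Third, the label blob, the Accuracy layer and the SoftmaxWithLoss layer disappear, and the network instead ends with a plain Softmax layer whose output is the prob blob of class probabilities.

For completeness, here is a minimal pycaffe sketch of how deploy.prototxt is typically used together with a trained weights file. The snapshot name, the .npy mean file and cat.jpg are placeholders, not files produced by this post.

    # classify.py -- minimal inference sketch; the paths below are hypothetical.
    import numpy as np
    import caffe

    caffe.set_mode_cpu()
    net = caffe.Net('deploy.prototxt',                          # the network shown above
                    'caffenet_train_iter_450000.caffemodel',    # hypothetical training snapshot
                    caffe.TEST)

    # Reproduce the preprocessing that transform_param performed during training:
    # channel-first layout, per-channel mean subtraction, 0-255 scale, BGR order.
    mu = np.load('ilsvrc12_mean.npy').mean(1).mean(1)   # hypothetical .npy version of the mean file
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_mean('data', mu)
    transformer.set_raw_scale('data', 255)
    transformer.set_channel_swap('data', (2, 1, 0))

    image = caffe.io.load_image('cat.jpg')               # hypothetical test image
    net.blobs['data'].reshape(1, 3, 227, 227)            # batch of 1 instead of the default 10
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    output = net.forward()
    print('predicted class:', output['prob'][0].argmax())

Note that the reshape call overrides the dim: 10 in the Input layer; the shape written in deploy.prototxt is only a default and can be changed at run time.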