TensorFlow conv/deconv padding
1. Padding test
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim

# assumed setup (not shown in the original): a TF 1.x interactive session and a handle to the default graph
sess = tf.InteractiveSession()
graph = tf.get_default_graph()

input = tf.placeholder(tf.float32, shape=(1,2, 2,1))
simpleconv=slim.conv2d(input,1,[3,3],stride = 1,activation_fn = None,scope = 'simpleconv3')
sess.run(tf.global_variables_initializer())
weights=graph.get_tensor_by_name("simpleconv3/weights:0")
sess.run(tf.assign(weights,tf.constant(1.0,shape=weights.shape)))
a=np.ndarray(shape=(1,2,2,1),dtype='float',buffer=np.array([1.0,2,3,4]))
simpleconvout=sess.run(simpleconv,feed_dict={input:a.astype('float32')})
print(simpleconvout)
[[[[ 10.000000]
   [ 10.000000]]
  [[ 10.000000]
   [ 10.000000]]]]

input1 = tf.placeholder(tf.float32, shape=(1,4, 4,1))
simpleconv=slim.conv2d(input1,1,[3,3],stride = 2,activation_fn = None,reuse=True,scope = 'simpleconv3')  # reuse=True: the 'simpleconv3' weights already exist from the stride-1 test above
sess.run(tf.global_variables_initializer())
weights=graph.get_tensor_by_name("simpleconv3/weights:0")
sess.run(tf.assign(weights,tf.constant(1.0,shape=weights.shape)))
a=np.ndarray(shape=(1,4,4,1),dtype='float',buffer=np.array([1.0,2,3,4,2,3,4,5,3,4,5,6,4,5,6,7]))
simpleconvout=sess.run(simpleconv,feed_dict={input1:a.astype('float32')})
print(simpleconvout)

[[[[ 27.]
   [ 27.]]
  [[ 27.]
   [ 24.]]]]

simpledeconv=slim.conv2d_transpose(input,1,[3,3],stride = 2,activation_fn = None,scope = 'simpledeconv')
sess.run(tf.global_variables_initializer())
weights=graph.get_tensor_by_name("simpledeconv/weights:0")
sess.run(tf.assign(weights,tf.constant(1.0,shape=weights.shape)))
a=np.ndarray(shape=(1,2,2,1),dtype='float',buffer=np.array([1.0,2,3,4]))
simpleconvout=sess.run(simpledeconv,feed_dict={input:a.astype('float32')})
print(simpleconvout)

[[[[  1.000000]
   [  1.000000]
   [  3.000000]
   [  2.000000]]
  [[  1.000000]
   [  1.000000]
   [  3.000000]
   [  2.000000]]
  [[  4.000000]
   [  4.000000]
   [ 10.000000]
   [  6.000000]]
  [[  3.000000]
   [  3.000000]
   [  7.000000]
   [  4.000000]]]]

With 'SAME' padding, conv with stride=1 pads zeros on all four sides, while stride=2 pads zeros only on the bottom and right. The deconv (conv2d_transpose) behaves as if two rows/columns of zeros were inserted at the top and left, whereas the deconv in Torch pads a ring of zeros around all four sides.
Reference: http://blog.csdn.net/lujiandong1/article/details/53728053
With 'SAME' padding, if the amount of padding is odd, the extra padding goes on the right (bottom).
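To make this rule concrete, here is a minimal sketch in plain Python (the helper name same_padding_1d is my own, not a TensorFlow API) of how 'SAME' padding is computed per spatial dimension; it reproduces both cases above: stride=1 on the 2x2 input pads one zero on every side, while stride=2 on the 4x4 input pads a single zero on the bottom/right only.

def same_padding_1d(in_size, kernel, stride):
    # output size for 'SAME' is ceil(in_size / stride)
    out_size = -(-in_size // stride)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_before = pad_total // 2          # top / left
    pad_after = pad_total - pad_before   # bottom / right gets the extra unit when pad_total is odd
    return pad_before, pad_after

print(same_padding_1d(2, 3, 1))  # (1, 1): the stride=1 test above
print(same_padding_1d(4, 3, 2))  # (0, 1): the stride=2 test above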
2. Implementing custom padding
https://stackoverflow.com/questions/37659538/custom-padding-for-convolutions-in-tensorflow
Implementing a custom conv and deconv:
def conv(input,num_outputs,kernel_size,stride=1,padW=0,padH=0,activation_fn=None,scope=None):
    # pad height/width explicitly, then run a VALID conv so TF adds no padding of its own
    padded_input = tf.pad(input, [[0, 0], [padH, padH], [padW, padW], [0, 0]], "CONSTANT")
    return slim.conv2d(padded_input,num_outputs,kernel_size,stride = stride,padding="VALID",activation_fn = activation_fn ,scope = scope)
input1 = tf.placeholder(tf.float32, shape=(1,4, 4,1))
a=np.ndarray(shape=(1,4,4,1),dtype='float',buffer=np.array([1.0,2,3,4,2,3,4,5,3,4,5,6,4,5,6,7]))
simpleconv=conv(input1,1,[3,3],stride = 2,padW=1,padH=1,activation_fn = None,scope = 'conv')
sess.run(tf.global_variables_initializer())
weights=graph.get_tensor_by_name("conv/weights:0")
sess.run(tf.assign(weights,tf.constant(1.0,shape=weights.shape)))
simpleconvout=sess.run(simpleconv,feed_dict={input1:a.astype('float32')})
print(simpleconvout)

[[[[  8.]
   [ 21.]]
  [[ 21.]
   [ 45.]]]]
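As a quick cross-check of that result, here is a NumPy-only sketch of my own (not part of the original code): with padW=padH=1 the input gets one ring of zeros on all four sides, which is the Torch-style padding, and summing all-ones 3x3 windows with stride 2 over the padded array reproduces the values printed above.

import numpy as np

x = np.array([[1., 2., 3., 4.],
              [2., 3., 4., 5.],
              [3., 4., 5., 6.],
              [4., 5., 6., 7.]])
xp = np.pad(x, 1, mode='constant')             # one ring of zeros, like tf.pad with padW=padH=1
out = np.array([[xp[r:r + 3, c:c + 3].sum()    # all-ones 3x3 kernel, stride 2, VALID
                 for c in range(0, xp.shape[1] - 2, 2)]
                for r in range(0, xp.shape[0] - 2, 2)])
print(out)  # [[ 8. 21.] [21. 45.]] -- same as the TF output above, unlike the 'SAME' result (27/27/27/24)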
And the custom deconv:

def deconv(input,num_outputs,kernel_size,stride=2,activation_fn=None,scope=None):
    N,H,W,C = [i.value for i in input.get_shape()]
    # run a VALID transposed conv, then crop: start at kernel//2 and keep an H*stride x W*stride window
    out = slim.conv2d_transpose(input,num_outputs,kernel_size,stride = stride,padding="VALID",activation_fn = activation_fn ,scope = scope)
    return tf.slice(out, [0, kernel_size[0]//2, kernel_size[1]//2, 0], [N, H*stride, W*stride, num_outputs])

input = tf.placeholder(tf.float32, shape=(1,2, 2,1))
a=np.ndarray(shape=(1,2,2,1),dtype='float',buffer=np.array([1.0,2,3,4]))
simpledeconv=deconv(input,1,[3,3],stride = 2,activation_fn = None,scope = 'simpledeconv1')
sess.run(tf.global_variables_initializer())
weights=graph.get_tensor_by_name("simpledeconv1/weights:0")
sess.run(tf.assign(weights,tf.constant(1.0,shape=weights.shape)))
out=sess.run(simpledeconv,feed_dict={input:a.astype('float32')})
print(out)

[[[[  1.]
   [  3.]
   [  2.]
   [  2.]]
  [[  4.]
   [ 10.]
   [  6.]
   [  6.]]
  [[  3.]
   [  7.]
   [  4.]
   [  4.]]
  [[  3.]
   [  7.]
   [  4.]
   [  4.]]]]
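To see exactly what this cropping does, here is a small NumPy sketch (my own illustration; the all-ones kernel matches the weights assigned above): a VALID conv2d_transpose with an all-ones 3x3 kernel scatters each input value into a 3x3 block starting at (row*stride, col*stride), and the tf.slice keeps the centered 4x4 window of that 5x5 result.

import numpy as np

x = np.array([[1., 2.],
              [3., 4.]])
kernel, stride = 3, 2
full = np.zeros(((x.shape[0] - 1) * stride + kernel,
                 (x.shape[1] - 1) * stride + kernel))
for r in range(x.shape[0]):
    for c in range(x.shape[1]):
        # each input pixel adds its value to a kernel-sized block (all-ones weights)
        full[r * stride:r * stride + kernel, c * stride:c * stride + kernel] += x[r, c]

print(full)            # the full 5x5 VALID transposed-conv output
print(full[1:5, 1:5])  # the kernel//2-offset crop taken by tf.slice -- matches the printed result

So the custom deconv keeps the centered portion of the full transposed-conv output, while TF's 'SAME' conv2d_transpose in the first section keeps the top-left 4x4 of the same 5x5 result.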