PyTorch: implementing upsampling without transposed convolution
Upsampling generally comes in one of two flavors:
- Resize: direct scaling such as bilinear interpolation, analogous to ordinary image resizing; for the underlying concepts see the post 最邻近插值算法和双线性插值算法——图像缩放
- Deconvolution, also called transposed convolution; for a detailed explanation see the post 逆卷积的详细解释ConvTranspose2d(fractionally-strided convolutions)
How to implement the second method in PyTorch is covered in the link above; a minimal sketch is shown below.
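As a quick illustration (not from the linked post -- the channel counts here are arbitrary), a transposed convolution with kernel_size=4, stride=2, padding=1 doubles the spatial resolution:

import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # (N, C, H, W)
# output size = (H - 1) * stride - 2 * padding + kernel_size = 31 * 2 - 2 + 4 = 64
deconv = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)
print(deconv(x).shape)  # torch.Size([1, 8, 64, 64])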
What this post introduces is how to implement the first method in PyTorch:
- The torch.nn module supports it via nn.Upsample; for details see pytorch torch.nn 实现上采样——nn.Upsample (but this module is now deprecated, so prefer the method below)
- The torch.nn.functional module supports it via interpolate; for details see pytorch torch.nn.functional实现插值和上采样 (a quick equivalence check of the two follows this list)
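The two are interchangeable: nn.Upsample is essentially a module wrapper around the functional call. A quick check on a tiny tensor (shapes chosen arbitrarily):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
y1 = up(x)                                                                   # module form
y2 = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)  # functional form
print(torch.equal(y1, y2))  # True -- same underlying operation
print(y1.shape)             # torch.Size([1, 1, 8, 8])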
Examples:
1) A generator implemented with the torch.nn module:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        # reflection-pad by half the kernel so the convolution itself does not crop
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out

class Generator(nn.Module):
    # NOTE: the channel/kernel/stride numbers below are illustrative choices,
    # not canonical values -- any consistent set works the same way
    def __init__(self, in_channels):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        # three stride-2 convolutions: spatial size shrinks by a factor of 8
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            ConvLayer(64, 128, 3, 2),
        )
        # three bilinear x2 upsamplings: spatial size grows back by a factor of 8
        upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.decoder = nn.Sequential(
            upsample,
            nn.Conv2d(128, 64, 1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            upsample,
            nn.Conv2d(64, 32, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            upsample,
            nn.Conv2d(32, 3, 1),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        out = self.decoder(x)
        return out

def test():
    net = Generator(3)  # 3 input channels (RGB)
    for module in net.children():
        print(module)
    x = torch.randn(1, 3, 128, 128)  # Variable is deprecated; a plain tensor works
    output = net(x)
    print('output :', output.size())
    print(type(output))

if __name__ == '__main__':
    test()
Output:
Sequential(
  (0): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2))
  )
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
  )
  (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): ReLU()
  (6): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2))
  )
)
Sequential(
  (0): Upsample(scale_factor=2.0, mode=bilinear)
  (1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
  (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): ReLU()
  (4): Upsample(scale_factor=2.0, mode=bilinear)
  (5): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
  (6): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (7): ReLU()
  (8): Upsample(scale_factor=2.0, mode=bilinear)
  (9): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
  (10): Tanh()
)
output : torch.Size([1, 3, 128, 128])
<class 'torch.Tensor'>
However, this version emits a warning:
UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
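The fix the warning asks for is mechanical: stop storing an nn.Upsample module and call F.interpolate inside forward() with the same arguments. A toy sketch of the pattern (this block and its channel sizes are made up for illustration, not taken from the generator above):

import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    """Toy block: upsample by 2, then a 1x1 conv."""
    def __init__(self):
        super(UpBlock, self).__init__()
        # no nn.Upsample stored here -- the resize happens in forward()
        self.conv = nn.Conv2d(8, 4, 1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        return self.conv(x)

print(UpBlock()(torch.randn(1, 8, 5, 5)).shape)  # torch.Size([1, 4, 10, 10])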
Applying the same change with the torch.nn.functional module, the generator can be rewritten as:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        # reflection-pad by half the kernel so the convolution itself does not crop
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out

class Generator(nn.Module):
    # NOTE: same illustrative channel/kernel/stride numbers as the first version
    def __init__(self, in_channels):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            ConvLayer(64, 128, 3, 2),
        )
        # the upsampling module is gone; F.interpolate is called in forward()
        self.decoder1 = nn.Sequential(
            nn.Conv2d(128, 64, 1),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        self.decoder2 = nn.Sequential(
            nn.Conv2d(64, 32, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.decoder3 = nn.Sequential(
            nn.Conv2d(32, 3, 1),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder1(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder2(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        out = self.decoder3(x)
        return out

def test():
    net = Generator(3)  # 3 input channels (RGB)
    for module in net.children():
        print(module)
    x = torch.randn(1, 3, 128, 128)  # Variable is deprecated; a plain tensor works
    output = net(x)
    print('output :', output.size())
    print(type(output))

if __name__ == '__main__':
    test()
Output:
Sequential(
  (0): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2))
  )
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
  )
  (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): ReLU()
  (6): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2))
  )
)
Sequential(
  (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
  (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
)
Sequential(
  (0): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
)
Sequential(
  (0): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
  (1): Tanh()
)
output : torch.Size([1, 3, 128, 128])
<class 'torch.Tensor'>
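One detail worth noting: both versions pass align_corners=True. That flag changes how source pixels map onto the output grid, so the interpolated values near the borders differ between True and False. A tiny comparison (the input is just a 2x2 ramp chosen for illustration):

import torch
import torch.nn.functional as F

x = torch.tensor([[[[0., 1.],
                    [2., 3.]]]])  # shape (1, 1, 2, 2)
a = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
b = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
print(torch.equal(a, b))  # False -- the corner-mapping convention differs
print(a[0, 0, 0])  # tensor([0.0000, 0.3333, 0.6667, 1.0000])
print(b[0, 0, 0])  # tensor([0.0000, 0.2500, 0.7500, 1.0000])

Whichever you pick, use the same setting consistently throughout a network; mixing the two shifts features slightly at every upsampling stage.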