Upsampling generally includes two approaches:

1) Interpolation-based resizing (nearest-neighbor, bilinear, etc.), much like ordinary image scaling;
2) Transposed convolution (also called deconvolution), which learns the upsampling weights.

For how to implement the second method (transposed convolution) in PyTorch, see the link above.
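For orientation only, here is a minimal sketch of what that second method looks like; the tensor shape and layer hyper-parameters below are illustrative assumptions, not values from this post:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)  # (N, C, H, W); an arbitrary example shape
# a learned 2x upsampling step: kernel_size=4, stride=2, padding=1 doubles H and W
up = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
print(up(x).shape)              # torch.Size([1, 32, 32, 32])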

What this post wants to show is how to implement the first method (interpolation-based upsampling) in PyTorch.
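Before the full generator example, a minimal sketch of the core call, F.interpolate (the input shape is again just an illustrative assumption): bilinear interpolation changes only the spatial dimensions, never the channel count.

import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 16, 16)  # (N, C, H, W); an arbitrary example shape
y = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(y.shape)                  # torch.Size([1, 64, 32, 32]): H and W doubled, C unchanged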

Example:

1) Implementing the generator with the torch.nn module (nn.Upsample):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class ConvLayer(nn.Module):
    """Conv2d preceded by reflection padding of kernel_size // 2 (no zero padding in the conv itself)."""
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out


class Generator(nn.Module):
    # NOTE: the channel counts, kernel sizes and strides below are illustrative values
    def __init__(self, in_channels):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        # encoder: three stride-2 ConvLayers, downsampling 8x in total
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            ConvLayer(64, 128, 3, 2),
        )
        # one parameter-free Upsample module, reused three times below (upsampling 8x in total)
        upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.decoder = nn.Sequential(
            upsample,
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            upsample,
            nn.Conv2d(64, 32, 3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            upsample,
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        out = self.decoder(x)
        return out


def test():
    net = Generator(3)  # assumes a 3-channel RGB input
    for module in net.children():
        print(module)
    x = Variable(torch.randn(1, 3, 64, 64))  # Variable is legacy; a plain tensor also works
    output = net(x)
    print('output :', output.size())
    print(type(output))


if __name__ == '__main__':
    test()

Output: test() prints the two top-level children of the Generator. The first is the encoder Sequential, with the three ConvLayer blocks (each a ReflectionPad2d followed by a Conv2d) interleaved with BatchNorm2d and ReLU. The second is the decoder Sequential, in which the bilinear Upsample module appears three times, each time followed by Conv2d, BatchNorm2d and ReLU, with Tanh after the last Conv2d. Finally the output tensor's size and its type, <class 'torch.Tensor'>, are printed.

However, this version produces a warning:

 UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.

It can be rewritten with the torch.nn.functional module as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class ConvLayer(nn.Module):
    """Conv2d preceded by reflection padding of kernel_size // 2 (no zero padding in the conv itself)."""
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out


class Generator(nn.Module):
    # NOTE: the channel counts, kernel sizes and strides below are illustrative values
    def __init__(self, in_channels):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        # encoder: three stride-2 ConvLayers, downsampling 8x in total
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            ConvLayer(64, 128, 3, 2),
        )
        # no Upsample module here: the upsampling is done with F.interpolate in forward()
        self.decoder1 = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        self.decoder2 = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.decoder3 = nn.Sequential(
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder1(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder2(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        out = self.decoder3(x)
        return out


def test():
    net = Generator(3)  # assumes a 3-channel RGB input
    for module in net.children():
        print(module)
    x = Variable(torch.randn(1, 3, 64, 64))  # Variable is legacy; a plain tensor also works
    output = net(x)
    print('output :', output.size())
    print(type(output))


if __name__ == '__main__':
    test()

Output: test() now prints four children: the same encoder Sequential as before, followed by decoder1, decoder2 and decoder3. No Upsample module appears in the printout, because the upsampling is performed functionally by F.interpolate inside forward(). The output tensor's size and its type, <class 'torch.Tensor'>, are printed as before.
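The two versions are numerically equivalent: nn.Upsample is only a module wrapper whose forward() calls F.interpolate, so switching to the functional form silences the deprecation warning without changing the network's behaviour. A quick check (the input shape is an illustrative assumption):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)     # illustrative input
a = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)(x)
b = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(torch.allclose(a, b))     # True: both paths compute the same result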
