Upsampling generally falls into two approaches:

1) interpolation (e.g. nearest-neighbor or bilinear resizing, usually followed by an ordinary convolution);
2) transposed convolution (also called deconvolution).
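As a quick orientation, here is a minimal sketch (input shape and layer parameters are arbitrary, chosen only for illustration) showing that both approaches double the spatial size of a feature map:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 64, 16, 16)   # an N x C x H x W feature map

# approach 1: interpolation, no learnable parameters
y1 = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(y1.size())                 # torch.Size([1, 64, 32, 32])

# approach 2: transposed convolution, with learnable parameters
deconv = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)
y2 = deconv(x)
print(y2.size())                 # torch.Size([1, 64, 32, 32])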

How to implement the second approach in PyTorch can be found at the link above.

This post shows how to implement the first approach in PyTorch.

Example:

1) Implement a generator with the torch.nn module (the concrete channel and kernel sizes below are illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        # reflection padding of kernel_size // 2 keeps the "same" spatial
        # size for odd kernels (before the stride is applied)
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out

class Generator(nn.Module):
    def __init__(self, in_channels=3):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        # encoder: three strided convolutions, each halving H and W
        # (the channel/kernel/stride numbers here are illustrative)
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 16, 3, 2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            ConvLayer(16, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
        )
        # decoder: bilinear upsampling doubles H and W, then a plain
        # convolution refines the upsampled feature map
        upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.decoder = nn.Sequential(
            upsample,
            nn.Conv2d(64, 32, 3),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            upsample,
            nn.Conv2d(32, 16, 3),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            upsample,
            nn.Conv2d(16, 3, 3),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        out = self.decoder(x)
        return out

def test():
    net = Generator()
    for module in net.children():
        print(module)
    x = Variable(torch.randn(1, 3, 224, 224))
    output = net(x)
    print('output :', output.size())
    print(type(output))

if __name__ == '__main__':
    test()

This prints (the values correspond to the illustrative sizes above):

Sequential(
  (0): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2))
  )
  (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2))
  )
  (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): ReLU()
  (6): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
  )
)
Sequential(
  (0): Upsample(scale_factor=2, mode=bilinear)
  (1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
  (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): ReLU()
  (4): Upsample(scale_factor=2, mode=bilinear)
  (5): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1))
  (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (7): ReLU()
  (8): Upsample(scale_factor=2, mode=bilinear)
  (9): Conv2d(16, 3, kernel_size=(3, 3), stride=(1, 1))
  (10): Tanh()
)
output : torch.Size([1, 3, 210, 210])
<class 'torch.Tensor'>

However, this emits a warning:

 UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
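The warning does not affect behavior: nn.Upsample simply forwards to F.interpolate internally, so both produce identical results. A quick sanity check (input shape arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
y1 = up(x)   # triggers the UserWarning
y2 = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(torch.allclose(y1, y2))   # True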

2) The generator can be rewritten with the torch.nn.functional module instead:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        padding = kernel_size // 2
        self.reflection_pad = nn.ReflectionPad2d(padding)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv(out)
        return out

class Generator(nn.Module):
    def __init__(self, in_channels=3):
        super(Generator, self).__init__()
        self.in_channels = in_channels
        # encoder identical to the first version (numbers illustrative)
        self.encoder = nn.Sequential(
            ConvLayer(self.in_channels, 16, 3, 2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            ConvLayer(16, 32, 3, 2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            ConvLayer(32, 64, 3, 2),
        )
        # the decoder is split into three blocks; the upsampling itself
        # moves into forward() as F.interpolate calls
        self.decoder1 = nn.Sequential(
            nn.Conv2d(64, 32, 3),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.decoder2 = nn.Sequential(
            nn.Conv2d(32, 16, 3),
            nn.BatchNorm2d(16),
            nn.ReLU()
        )
        self.decoder3 = nn.Sequential(
            nn.Conv2d(16, 3, 3),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder1(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        x = self.decoder2(x)
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
        out = self.decoder3(x)
        return out

def test():
    net = Generator()
    for module in net.children():
        print(module)
    x = Variable(torch.randn(1, 3, 224, 224))
    output = net(x)
    print('output :', output.size())
    print(type(output))

if __name__ == '__main__':
    test()

This prints:

Sequential(
  (0): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2))
  )
  (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2))
  )
  (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): ReLU()
  (6): ConvLayer(
    (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
  )
)
Sequential(
  (0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
)
Sequential(
  (0): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1))
  (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
)
Sequential(
  (0): Conv2d(16, 3, kernel_size=(3, 3), stride=(1, 1))
  (1): Tanh()
)
output : torch.Size([1, 3, 210, 210])
<class 'torch.Tensor'>
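The upsampling no longer shows up in the printed modules, since it now lives in forward(). If you prefer to keep the whole decoder in a single nn.Sequential, as in the first version, while still calling F.interpolate under the hood, a common pattern (a sketch, not part of the original code) is to wrap the function in a small module:

import torch.nn as nn
import torch.nn.functional as F

class Interpolate(nn.Module):
    # thin wrapper so F.interpolate can be used inside nn.Sequential
    def __init__(self, scale_factor, mode='bilinear', align_corners=True):
        super(Interpolate, self).__init__()
        self.scale_factor = scale_factor
        self.mode = mode
        self.align_corners = align_corners

    def forward(self, x):
        return F.interpolate(x, scale_factor=self.scale_factor,
                             mode=self.mode, align_corners=self.align_corners)

An Interpolate(2) instance can then replace each nn.Upsample entry in the decoder above without triggering the deprecation warning.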
