Pretrained models for PyTorch (work in progress)
The goal of this repo is:
- to help reproduce research paper results (e.g., transfer-learning setups),
- to provide access to pretrained ConvNets through a unified interface/API inspired by torchvision.
News:
- 04/06/2018: PolyNet and PNASNet-5-Large thanks to Alex Parinov
- 16/04/2018: SE-ResNet* and SE-ResNeXt* thanks to Alex Parinov
- 09/04/2018: SENet154 thanks to Alex Parinov
- 22/03/2018: CaffeResNet101 (good for localization with FasterRCNN)
- 21/03/2018: NASNet Mobile thanks to Veronika Yurchuk and Anastasiia
- 25/01/2018: DualPathNetworks thanks to Ross Wightman, Xception thanks to T Standley, improved TransformImage API
- 13/01/2018: pip install pretrainedmodels, pretrainedmodels.model_names, pretrainedmodels.pretrained_settings
- 12/01/2018: python setup.py install
- 08/12/2017: update data url (/!\ git pull is needed)
- 30/11/2017: improve API (model.features(input), model.logits(features), model.forward(input), model.last_linear)
- 16/11/2017: nasnet-a-large pretrained model ported by T. Durand and R. Cadene
- 22/07/2017: torchvision pretrained models
- 22/07/2017: momentum in inceptionv4 and inceptionresnetv2 to 0.1
- 17/07/2017: model.input_range attribute
- 17/07/2017: BNInception pretrained on Imagenet
Summary
- Installation
- Quick examples
- A few use cases
- Evaluation on ImageNet
- Documentation
- Available models
- AlexNet
- BNInception
- CaffeResNet101
- DenseNet121
- DenseNet161
- DenseNet169
- DenseNet201
- DualPathNet68
- DualPathNet92
- DualPathNet98
- DualPathNet107
- DualPathNet131
- FBResNet152
- InceptionResNetV2
- InceptionV3
- InceptionV4
- NASNet-A-Large
- NASNet-A-Mobile
- PNASNet-5-Large
- PolyNet
- ResNeXt101_32x4d
- ResNeXt101_64x4d
- ResNet101
- ResNet152
- ResNet18
- ResNet34
- ResNet50
- SENet154
- SE-ResNet50
- SE-ResNet101
- SE-ResNet152
- SE-ResNeXt50_32x4d
- SE-ResNeXt101_32x4d
- SqueezeNet1_0
- SqueezeNet1_1
- VGG11
- VGG13
- VGG16
- VGG19
- VGG11_BN
- VGG13_BN
- VGG16_BN
- VGG19_BN
- Xception
- Model API
- Available models
- Reproducing porting
Installation
Install from pip
pip install pretrainedmodels
Install from repo
git clone https://github.com/Cadene/pretrained-models.pytorch.git
cd pretrained-models.pytorch
python setup.py install
Quick examples
- To import pretrainedmodels:
import pretrainedmodels
- To print the available pretrained models:
print(pretrainedmodels.model_names)
> ['fbresnet152', 'bninception', 'resnext101_32x4d', 'resnext101_64x4d', 'inceptionv4', 'inceptionresnetv2', 'alexnet', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'inceptionv3', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19_bn', 'vgg19', 'nasnetalarge', 'nasnetamobile', 'cafferesnet101', 'senet154', 'se_resnet50', 'se_resnet101', 'se_resnet152', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'polynet', 'pnasnet5large']
- To print the available pretrained settings for a chosen model:
print(pretrainedmodels.pretrained_settings['nasnetalarge'])
> {'imagenet': {'url': 'http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1000}, 'imagenet+background': {'url': 'http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1001}}
- To load a pretrained model from ImageNet (a variant using the 'imagenet+background' settings is sketched after the note below):
model_name = 'nasnetalarge' # could be fbresnet152 or inceptionresnetv2
model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
model.eval()
Note: By default, models will be downloaded to your $HOME/.torch folder. You can modify this behavior using the $TORCH_MODEL_ZOO variable as follows: export TORCH_MODEL_ZOO="/local/pretrainedmodels"
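The same constructor also accepts the other settings keys printed earlier; a minimal sketch loading the 1001-class 'imagenet+background' variant, with the values taken straight from pretrained_settings:

```python
import pretrainedmodels

# 'imagenet+background' is a checkpoint with an extra background class (num_classes=1001)
settings = pretrainedmodels.pretrained_settings['nasnetalarge']['imagenet+background']
model = pretrainedmodels.__dict__['nasnetalarge'](
    num_classes=settings['num_classes'],  # 1001
    pretrained='imagenet+background')
model.eval()
```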
- To load an image and do a complete forward pass:
import torch
import pretrainedmodels.utils as utils
load_img = utils.LoadImage()
# transformations depending on the model
# rescale, center crop, normalize, and others (ex: ToBGR, ToRange255)
tf_img = utils.TransformImage(model)
path_img = 'data/cat.jpg'
input_img = load_img(path_img)
input_tensor = tf_img(input_img) # 3x400x225 -> 3x299x299 size may differ
input_tensor = input_tensor.unsqueeze(0) # 3x299x299 -> 1x3x299x299
input = torch.autograd.Variable(input_tensor, requires_grad=False)
output_logits = model(input) # 1x1000
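Note: torch.autograd.Variable reflects the pre-0.4 PyTorch API used throughout this README. On PyTorch >= 0.4 a tensor can be fed to the model directly; a minimal no-grad equivalent of the forward pass above:

```python
with torch.no_grad():  # skip autograd bookkeeping during inference
    output_logits = model(input_tensor)  # input_tensor is already 1x3x299x299
```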
- To extract features (beware: this API is not available for all networks; a pooled-feature variant is sketched below):
output_features = model.features(input) # 1x14x14x2048 size may differ
output_logits = model.logits(output_features) # 1x1000
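For networks exposing model.last_linear, a common way to obtain the pooled feature vector instead of logits is to swap the classifier for a pass-through layer. A minimal sketch, with the Identity module defined inline rather than taken from the library:

```python
import torch.nn as nn

class Identity(nn.Module):
    """Pass-through layer that replaces the final classifier."""
    def forward(self, x):
        return x

dim_feats = model.last_linear.in_features  # e.g. 2048, depends on the model
model.last_linear = Identity()
pooled_features = model(input)  # 1 x dim_feats instead of 1x1000
```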
A few use cases
Compute ImageNet logits
- See examples/imagenet_logits.py to compute class logits for a single image with a model pretrained on ImageNet (the final label lookup is sketched after the example below).
$ python examples/imagenet_logits.py -h
> nasnetalarge, resnet152, inceptionresnetv2, inceptionv4, ...
$ python examples/imagenet_logits.py -a nasnetalarge --path_img data/cat.png
> 'nasnetalarge': 'data/cat.png' is a 'tiger cat'
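Under the hood, such a script turns the 1x1000 logits into a human-readable label roughly as sketched below; class_names (the list of 1000 ImageNet label strings) is assumed to be loaded separately:

```python
import torch.nn.functional as F

probs = F.softmax(output_logits, dim=1)  # logits -> class probabilities
top_prob, top_idx = probs.topk(1)        # best-scoring class
# class_names: 1000 ImageNet label strings, loaded elsewhere (assumed)
print(class_names[top_idx.item()], top_prob.item())
```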
Compute ImageNet evaluation metrics
- See examples/imagenet_eval.py to evaluate pretrained models on the ImageNet validation set (the Acc@k computation is sketched below).
$ python examples/imagenet_eval.py /local/common-data/imagenet_2012/images -a nasnetalarge -b 20 -e
> * Acc@1 92.693, Acc@5 96.13
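Acc@1 and Acc@5 are standard top-k accuracies. A minimal sketch of the usual computation, mirroring the canonical PyTorch ImageNet example rather than the exact code in examples/imagenet_eval.py:

```python
import torch

def accuracy(output, target, topk=(1, 5)):
    """Fraction (in %) of samples whose true label is among the k highest scores."""
    maxk = max(topk)
    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()                                       # maxk x batch_size
    correct = pred.eq(target.view(1, -1).expand_as(pred))
    return [correct[:k].reshape(-1).float().sum().item() * 100.0 / target.size(0)
            for k in topk]
```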
Evaluation on ImageNet
Accuracy on validation set (single model)
Results were obtained using center-cropped images of the same size as those used during training.
| Model | Version | Acc@1 | Acc@5 |
|---|---|---|---|
| PNASNet-5-Large | Tensorflow | 82.858 | 96.182 |
| PNASNet-5-Large | Our porting | 82.736 | 95.992 |
| NASNet-A-Large | Tensorflow | 82.693 | 96.163 |
| NASNet-A-Large | Our porting | 82.566 | 96.086 |
| SENet154 | Caffe | 81.32 | 95.53 |
| SENet154 | Our porting | 81.304 | 95.498 |
| PolyNet | Caffe | 81.29 | 95.75 |
| PolyNet | Our porting | 81.002 | 95.624 |
| InceptionResNetV2 | Tensorflow | 80.4 | 95.3 |
| InceptionV4 | Tensorflow | 80.2 | 95.3 |
| SE-ResNeXt101_32x4d | Our porting | 80.236 | 95.028 |
| SE-ResNeXt101_32x4d | Caffe | 80.19 | 95.04 |
| InceptionResNetV2 | Our porting | 80.170 | 95.234 |
| InceptionV4 | Our porting | 80.062 | 94.926 |
| DualPathNet107_5k | Our porting | 79.746 | 94.684 |
| ResNeXt101_64x4d | Torch7 | 79.6 | 94.7 |
| DualPathNet131 | Our porting | 79.432 | 94.574 |
| DualPathNet92_5k | Our porting | 79.400 | 94.620 |
| DualPathNet98 | Our porting | 79.224 | 94.488 |
| SE-ResNeXt50_32x4d | Our porting | 79.076 | 94.434 |
| SE-ResNeXt50_32x4d | Caffe | 79.03 | 94.46 |
| Xception | Keras | 79.000 | 94.500 |
| ResNeXt101_64x4d | Our porting | 78.956 | 94.252 |
| Xception | Our porting | 78.888 | 94.292 |
| ResNeXt101_32x4d | Torch7 | 78.8 | 94.4 |
| SE-ResNet152 | Caffe | 78.66 | 94.46 |
| SE-ResNet152 | Our porting | 78.658 | 94.374 |
| ResNet152 | Pytorch | 78.428 | 94.110 |
| SE-ResNet101 | Our porting | 78.396 | 94.258 |
| SE-ResNet101 | Caffe | 78.25 | 94.28 |
| ResNeXt101_32x4d | Our porting | 78.188 | 93.886 |
| FBResNet152 | Torch7 | 77.84 | 93.84 |
| SE-ResNet50 | Caffe | 77.63 | 93.64 |
| SE-ResNet50 | Our porting | 77.636 | 93.752 |
| DenseNet161 | Pytorch | 77.560 | 93.798 |
| ResNet101 | Pytorch | 77.438 | 93.672 |
| FBResNet152 | Our porting | 77.386 | 93.594 |
| InceptionV3 | Pytorch | 77.294 | 93.454 |
| DenseNet201 | Pytorch | 77.152 | 93.548 |
| DualPathNet68b_5k | Our porting | 77.034 | 93.590 |
| CaffeResnet101 | Caffe | 76.400 | 92.900 |
| CaffeResnet101 | Our porting | 76.200 | 92.766 |
| DenseNet169 | Pytorch | 76.026 | 92.992 |
| ResNet50 | Pytorch | 76.002 | 92.980 |
| DualPathNet68 | Our porting | 75.868 | 92.774 |
| DenseNet121 | Pytorch | 74.646 | 92.136 |
| VGG19_BN | Pytorch | 74.266 | 92.066 |
| NASNet-A-Mobile | Tensorflow | 74.0 | 91.6 |
| NASNet-A-Mobile | Our porting | 74.080 | 91.740 |
| ResNet34 | Pytorch | 73.554 | 91.456 |
| BNInception | Our porting | 73.522 | 91.560 |
| VGG16_BN | Pytorch | 73.518 | 91.608 |
| VGG19 | Pytorch | 72.080 | 90.822 |
| VGG16 | Pytorch | 71.636 | 90.354 |
| VGG13_BN | Pytorch | 71.508 | 90.494 |
| VGG11_BN | Pytorch | 70.452 | 89.818 |
| ResNet18 | Pytorch | 70.142 | 89.274 |
| VGG13 | Pytorch | 69.662 | 89.264 |
| VGG11 | Pytorch | 68.970 | 88.746 |
| SqueezeNet1_1 | Pytorch | 58.250 | 80.800 |
| SqueezeNet1_0 | Pytorch | 58.108 | 80.428 |
| AlexNet | Pytorch | 56.432 | 79.194 |
Notes:
- The PyTorch version of ResNet152 is not a port of the Torch7 model; it was retrained by Facebook.
- For the PolyNet evaluation, each image was resized to 378x378 without preserving the aspect ratio, and the central 331x331 patch of the resulting image was used.
Beware: the accuracies reported here are not always representative of how well a network transfers to other tasks and datasets. You must try them all!
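A typical transfer-learning setup replaces last_linear and fine-tunes on the new dataset; a minimal sketch (the 10-class target and the optimizer hyperparameters are placeholder choices):

```python
import torch
import torch.nn as nn
import pretrainedmodels

model = pretrainedmodels.__dict__['resnet50'](num_classes=1000, pretrained='imagenet')
dim_feats = model.last_linear.in_features
model.last_linear = nn.Linear(dim_feats, 10)  # placeholder: 10 target classes
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# ...then run a standard training loop on the new dataset...
```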