Table of contents

  1. Introduction

  2. Survey papers

  3. Benchmark datasets

  4. Fine-grained image recognition

    1. Fine-grained recognition by localization-classification subnetworks

    2. Fine-grained recognition by end-to-end feature encoding

    3. Fine-grained recognition with external information

      1. Fine-grained recognition with web data / auxiliary data

      2. Fine-grained recognition with multi-modality data

      3. Fine-grained recognition with humans in the loop

  5. Fine-grained image retrieval

    1. Unsupervised with pre-trained models

    2. Supervised with metric learning

  6. Fine-grained image generation

    1. Generating from fine-grained image distributions

    2. Generating from text descriptions

  7. Future directions of FGIA

    1. Automatic fine-grained models

    2. Fine-grained few-shot learning

    3. Fine-grained hashing

    4. FGIA within more realistic settings

  8. Leaderboard

1. Introduction


This homepage lists representative papers, codes, and datasets for deep learning based fine-grained image analysis (FGIA), including fine-grained image recognition, fine-grained image retrieval, and fine-grained image generation. If you have any questions, please feel free to leave a message.

2. Survey papers


3. Benchmark datasets


Summary of popular fine-grained image datasets. "BBox" indicates whether the dataset provides object bounding box supervision, "Part anno." whether it provides key part localizations, "HRCHY" whether it has hierarchical labels, "ATR" whether it has attribute labels (e.g., wing color, male, female), and "Texts" whether fine-grained text descriptions of the images are supplied. A minimal loading sketch for CUB200-2011 follows the table.

| Dataset name | Year | Meta-class | # images | # categories | BBox | Part anno. | HRCHY | ATR | Texts |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Oxford flower | 2008 | Flowers | 8,189 | 102 | | | | | |
| CUB200 | 2011 | Birds | 11,788 | 200 | | | | | |
| Stanford Dog | 2011 | Dogs | 20,580 | 120 | | | | | |
| Stanford Car | 2013 | Cars | 16,185 | 196 | | | | | |
| FGVC Aircraft | 2013 | Aircrafts | 10,000 | 100 | | | | | |
| Birdsnap | 2014 | Birds | 49,829 | 500 | | | | | |
| NABirds | 2015 | Birds | 48,562 | 555 | | | | | |
| DeepFashion | 2016 | Clothes | 800,000 | 1,050 | | | | | |
| Fru92 | 2017 | Fruits | 69,614 | 92 | | | | | |
| Veg200 | 2017 | Vegetable | 91,117 | 200 | | | | | |
| iNat2017 | 2017 | Plants & Animals | 859,000 | 5,089 | | | | | |
| RPC | 2019 | Retail products | 83,739 | 200 | | | | | |
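As a concrete illustration of how the CUB200-2011 annotation files are typically consumed, here is a minimal PyTorch loading sketch. It assumes the dataset's standard extracted layout (images.txt, image_class_labels.txt, train_test_split.txt and the images/ folder under one root); it is a generic sketch, not the reference loader of any listed paper.

```python
# Minimal CUB200-2011 Dataset sketch (assumes the standard extracted layout under `root`:
# images.txt, image_class_labels.txt, train_test_split.txt and the images/ directory).
import os
from PIL import Image
from torch.utils.data import Dataset

class CUB200(Dataset):
    def __init__(self, root, train=True, transform=None):
        self.root, self.transform = root, transform
        paths = dict(self._read("images.txt"))                                      # id -> relative path
        labels = {i: int(c) - 1 for i, c in self._read("image_class_labels.txt")}   # 0-based class ids
        is_train = {i: flag == "1" for i, flag in self._read("train_test_split.txt")}
        self.samples = [(paths[i], labels[i]) for i in paths if is_train[i] == train]

    def _read(self, name):
        with open(os.path.join(self.root, name)) as f:
            return [line.split() for line in f]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(os.path.join(self.root, "images", path)).convert("RGB")
        return (self.transform(img) if self.transform else img), label
```

The bounding boxes and part locations referenced in the "BBox" and "Part anno." columns sit in the same root (bounding_boxes.txt and parts/part_locs.txt) and can be parsed with the same helper.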

4. Fine-grained image recognition


Fine-grained recognition by localization-classification subnetworks

Fine-grained recognition by end-to-end feature encoding
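As a reference point for this family, below is a minimal sketch of second-order (bilinear) pooling, the operation at the core of encoding methods such as Bilinear CNN in the Section 8 leaderboard; it is a generic sketch, not the authors' exact implementation.

```python
# Sketch of bilinear (second-order) pooling over a CNN feature map.
import torch

def bilinear_pool(features):
    """features: (B, C, H, W) conv activations -> (B, C*C) bilinear descriptor."""
    b, c, h, w = features.shape
    x = features.reshape(b, c, h * w)                      # flatten spatial positions
    pooled = torch.bmm(x, x.transpose(1, 2)) / (h * w)     # average of outer products
    pooled = pooled.reshape(b, c * c)
    pooled = torch.sign(pooled) * torch.sqrt(pooled.abs() + 1e-10)   # signed square root
    return torch.nn.functional.normalize(pooled)                     # L2 normalization
```

With a VGG-16 conv feature map of size 512x28x28 from a 448x448 input, this yields a 262,144-dimensional descriptor that is then fed to a linear classifier.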

Fine-grained recognition with external information

Fine-grained recognition with web data / auxiliary data

Fine-grained recognition with multi-modality data

Fine-grained recognition with humans in the loop

5. Fine-grained image retrieval


Unsupervised with pre-trained models
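A rough sketch of this paradigm follows: off-the-shelf features from an ImageNet-pretrained backbone compared by cosine similarity. This is a generic baseline, not the descriptor-selection step of any specific method, and the torchvision model choice is only illustrative.

```python
# Sketch: retrieval with off-the-shelf pre-trained features and cosine similarity.
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()      # keep the 2048-d globally pooled feature
backbone.eval()

@torch.no_grad()
def extract(images):                   # images: (N, 3, 224, 224), already normalized
    feats = backbone(images)
    return torch.nn.functional.normalize(feats, dim=1)

def retrieve(query, gallery, topk=5):
    sims = extract(query) @ extract(gallery).T   # cosine similarities
    return sims.topk(topk, dim=1).indices        # indices of the top matches per query
```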

Supervised with metric learning
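For the supervised setting, a minimal triplet-loss training step might look like the sketch below; the embedding network, margin, and dummy triplets are placeholders rather than a particular paper's recipe.

```python
# Sketch of one triplet-loss step for supervised fine-grained retrieval.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 128))   # placeholder embedding net
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.SGD(embed.parameters(), lr=0.01)

def embed_norm(x):
    return nn.functional.normalize(embed(x), dim=1)   # L2-normalized embeddings

# dummy anchor/positive/negative batches standing in for sampled triplets
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = criterion(embed_norm(anchor), embed_norm(positive), embed_norm(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```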

6. Fine-grained image generation


Generating from fine-grained image distributions

Generating from text descriptions

7. Future directions of FGIA


Automatic fine-grained models

Fine-grained few-shot learning

Fine-grained hashing

FGIA within more realistic settings

8. Leaderboard


This section is continually updated. Since CUB200-2011 is the most widely used fine-grained dataset, the fine-grained recognition leaderboard below treats it as the test bed. In the table, "BBox?" and "Part?" indicate whether ground-truth bounding boxes or part annotations are used during training, and "External information?" names any auxiliary supervision. A generic evaluation sketch follows the table.

| Method | Publication | BBox? | Part? | External information? | Base model | Image resolution | Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PB R-CNN | ECCV 2014 | | | | Alex-Net | 224x224 | 73.9% |
| MaxEnt | NIPS 2018 | | | | GoogLeNet | TBD | 74.4% |
| PB R-CNN | ECCV 2014 | | | | Alex-Net | 224x224 | 76.4% |
| PS-CNN | CVPR 2016 | | | | CaffeNet | 454x454 | 76.6% |
| MaxEnt | NIPS 2018 | | | | VGG-16 | TBD | 77.0% |
| Mask-CNN | PR 2018 | | | | Alex-Net | 448x448 | 78.6% |
| PC | ECCV 2018 | | | | ResNet-50 | TBD | 80.2% |
| DeepLAC | CVPR 2015 | | | | Alex-Net | 227x227 | 80.3% |
| MaxEnt | NIPS 2018 | | | | ResNet-50 | TBD | 80.4% |
| Triplet-A | CVPR 2016 | | | Manual labour | GoogLeNet | TBD | 80.7% |
| Multi-grained | ICCV 2015 | | | WordNet etc. | VGG-19 | 224x224 | 81.7% |
| Krause et al. | CVPR 2015 | | | | CaffeNet | TBD | 82.0% |
| Multi-grained | ICCV 2015 | | | WordNet etc. | VGG-19 | 224x224 | 83.0% |
| TS | CVPR 2016 | | | | VGGD+VGGM | 448x448 | 84.0% |
| Bilinear CNN | ICCV 2015 | | | | VGGD+VGGM | 448x448 | 84.1% |
| STN | NIPS 2015 | | | | GoogLeNet+BN | 448x448 | 84.1% |
| LRBP | CVPR 2017 | | | | VGG-16 | 224x224 | 84.2% |
| PDFS | CVPR 2016 | | | | VGG-16 | TBD | 84.5% |
| Xu et al. | ICCV 2015 | | | Web data | CaffeNet | 224x224 | 84.6% |
| Cai et al. | ICCV 2017 | | | | VGG-16 | 448x448 | 85.3% |
| RA-CNN | CVPR 2017 | | | | VGG-19 | 448x448 | 85.3% |
| MaxEnt | NIPS 2018 | | | | Bilinear CNN | TBD | 85.3% |
| PC | ECCV 2018 | | | | Bilinear CNN | TBD | 85.6% |
| CVL | CVPR 2017 | | | Texts | VGG | TBD | 85.6% |
| Mask-CNN | PR 2018 | | | | VGG-16 | 448x448 | 85.7% |
| GP-256 | ECCV 2018 | | | | VGG-16 | 448x448 | 85.8% |
| KP | CVPR 2017 | | | | VGG-16 | 224x224 | 86.2% |
| T-CNN | IJCAI 2018 | | | | ResNet | 224x224 | 86.2% |
| MA-CNN | ICCV 2017 | | | | VGG-19 | 448x448 | 86.5% |
| MaxEnt | NIPS 2018 | | | | DenseNet-161 | TBD | 86.5% |
| DeepKSPD | ECCV 2018 | | | | VGG-19 | 448x448 | 86.5% |
| OSME+MAMC | ECCV 2018 | | | | ResNet-101 | 448x448 | 86.5% |
| StackDRL | IJCAI 2018 | | | | VGG-19 | 224x224 | 86.6% |
| DFL-CNN | CVPR 2018 | | | | VGG-16 | 448x448 | 86.7% |
| PC | ECCV 2018 | | | | DenseNet-161 | TBD | 86.9% |
| KERL | IJCAI 2018 | | | Attributes | VGG-16 | 224x224 | 87.0% |
| HBP | ECCV 2018 | | | | VGG-16 | 448x448 | 87.1% |
| Mask-CNN | PR 2018 | | | | ResNet-50 | 448x448 | 87.3% |
| DFL-CNN | CVPR 2018 | | | | ResNet-50 | 448x448 | 87.4% |
| NTS-Net | ECCV 2018 | | | | ResNet-50 | 448x448 | 87.5% |
| HSnet | CVPR 2017 | | | | GoogLeNet+BN | TBD | 87.5% |
| MetaFGNet | ECCV 2018 | | | Auxiliary data | ResNet-34 | TBD | 87.6% |
| DCL | CVPR 2019 | | | | ResNet-50 | 448x448 | 87.8% |
| TASN | CVPR 2019 | | | | ResNet-50 | 448x448 | 87.9% |
| Ge et al. | CVPR 2019 | | | | GoogLeNet+BN | Shorter side is 800 px | 90.4% |
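The accuracies above are top-1 classification accuracy on the standard CUB200-2011 test split. For reference, a generic evaluation sketch at the common 448x448 resolution is given below; the model and the CUB200 loader (from the dataset section above) are placeholders rather than any listed method's pipeline.

```python
# Generic top-1 accuracy evaluation sketch at 448x448 (model and dataset are placeholders).
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

eval_tf = transforms.Compose([
    transforms.Resize(512),                     # resize the shorter side
    transforms.CenterCrop(448),                 # crop to the common 448x448 input
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def top1_accuracy(model, dataset, device="cuda"):
    model.eval().to(device)
    loader = DataLoader(dataset, batch_size=32, num_workers=4)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# e.g. test_set = CUB200(root="CUB_200_2011", train=False, transform=eval_tf)
#      print(top1_accuracy(trained_model, test_set))
```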
