Contents

Attention

  • Recurrent Models of Visual Attention [NIPS 2014, DeepMind]
  • Neural Machine Translation by Jointly Learning to Align and Translate [ICLR 2015]
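
The Bahdanau et al. paper computes, for each decoder step, a softmax over alignment scores and uses the resulting weights to form a context vector over the encoder annotations. A minimal sketch in plain Python (toy hand-set scores, no learned score function):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_context(scores, encoder_states):
    # scores[i]: alignment score between the decoder state and encoder state i
    # encoder_states[i]: annotation vector h_i
    weights = softmax(scores)                       # attention weights alpha_i
    dim = len(encoder_states[0])
    # context c = sum_i alpha_i * h_i (weighted sum of annotations)
    return [sum(w * h[d] for w, h in zip(weights, encoder_states))
            for d in range(dim)]

# Three encoder states of dimension 2; the second score dominates,
# so the context vector leans toward the second annotation.
ctx = attention_context([0.1, 2.0, 0.1],
                        [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

In the real model the scores come from a small learned network over the decoder state and each annotation; here they are fixed numbers purely to show the weighting step.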

Overall Surveys

  • Efficient Transformers: A Survey [paper]
  • A Survey on Visual Transformer [paper]
  • Transformers in Vision: A Survey [paper]

NLP

Language

  • Sequence to Sequence Learning with Neural Networks [NIPS 2014] [paper] [code]
  • End-To-End Memory Networks [NIPS 2015] [paper] [code]
  • Attention is all you need [NIPS 2017] [paper] [code]
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [NAACL 2019] [paper] [code] [pretrained-models]
  • Reformer: The Efficient Transformer [ICLR 2020] [paper] [code]
  • Linformer: Self-Attention with Linear Complexity [arxiv 2020] [paper] [code]
  • GPT-3: Language Models are Few-Shot Learners [NIPS 2020] [paper] [code]
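
"Attention is all you need" replaces recurrence entirely with scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A plain-Python sketch of a single head (toy 2-D queries/keys/values, no learned projection matrices):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: lists of row vectors; returns one output row per query.
    d_k = len(K[0])
    out = []
    for q in Q:
        # score = q . k / sqrt(d_k) for every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # output row = attention-weighted sum of value rows
        out.append([sum(w * v[d] for w, v in zip(weights, V))
                    for d in range(len(V[0]))])
    return out

# One query attending over two key/value pairs; it matches the first key,
# so the output is pulled toward the first value row.
O = scaled_dot_product_attention(
    Q=[[1.0, 0.0]],
    K=[[1.0, 0.0], [0.0, 1.0]],
    V=[[10.0, 0.0], [0.0, 10.0]])
```

The full model adds learned Q/K/V projections, multiple heads, and masking; this strips all of that away to show the core computation.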

Speech

  • Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation [INTERSPEECH 2020] [paper] [code]

CV

Backbone_Classification

Papers and Codes

  • CoaT: Co-Scale Conv-Attentional Image Transformers [arxiv 2021] [paper] [code]
  • SiT: Self-supervised vIsion Transformer [arxiv 2021] [paper] [code]
  • ViT: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [ICLR 2021] [paper] [code]
    • Trained with extra private data; does not generalize well when trained on insufficient amounts of data
  • DeiT: Data-efficient Image Transformers [arxiv 2021] [paper] [code]
    • Distillation-token strategy that builds on ViT and learns from a convolutional teacher
  • Transformer in Transformer [arxiv 2021] [paper] [code1] [code-official]
  • OmniNet: Omnidirectional Representations from Transformers [arxiv 2021] [paper]
  • Gaussian Context Transformer [CVPR 2021] [paper]
  • General Multi-Label Image Classification With Transformers [CVPR 2021] [paper] [code]
  • Scaling Local Self-Attention for Parameter Efficient Visual Backbones [CVPR 2021] [paper]
  • T2T-ViT: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [ICCV 2021] [paper] [code]
  • Swin Transformer: Hierarchical Vision Transformer using Shifted Windows [ICCV 2021] [paper] [code]
  • Bias Loss for Mobile Neural Networks [ICCV 2021] [paper]
  • Vision Transformer with Progressive Sampling [ICCV 2021] [paper] [code: https://github.com/yuexy/PS-ViT]
  • Rethinking Spatial Dimensions of Vision Transformers [ICCV 2021] [paper] [code]
  • Rethinking and Improving Relative Position Encoding for Vision Transformer [ICCV 2021] [paper] [code]
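
ViT's "an image is worth 16x16 words" idea is essentially a reshape: split the image into fixed-size patches, flatten each patch into a vector, and treat the result as a token sequence for a standard transformer. A minimal sketch in plain Python (nested lists instead of tensors; the 4x4 image and 2x2 patch size are toy values):

```python
def image_to_patches(img, patch):
    # img: H x W x C nested list; returns a list of flattened patch tokens,
    # each of length patch * patch * C, in row-major patch order.
    H, W, C = len(img), len(img[0]), len(img[0][0])
    assert H % patch == 0 and W % patch == 0
    tokens = []
    for py in range(0, H, patch):
        for px in range(0, W, patch):
            flat = [img[py + dy][px + dx][c]
                    for dy in range(patch)
                    for dx in range(patch)
                    for c in range(C)]
            tokens.append(flat)
    return tokens

# A 4x4 single-channel "image" with pixel value r*4 + c,
# cut into 2x2 patches -> 4 tokens of length 4 each.
img = [[[r * 4 + c] for c in range(4)] for r in range(4)]
tokens = image_to_patches(img, 2)
```

ViT then linearly projects each flattened patch, prepends a class token, and adds position embeddings; the snippet covers only the patchify step.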

Interesting Repos

Self-Supervised

  • Emerging Properties in Self-Supervised Vision Transformers [ICCV 2021] [paper] [code]
  • An Empirical Study of Training Self-Supervised Vision Transformers [ICCV 2021] [paper] [code]

Interpretability and Robustness

  • Transformer Interpretability Beyond Attention Visualization [CVPR 2021] [paper] [code]
  • On the Adversarial Robustness of Visual Transformers [arxiv 2021] [paper]
  • Robustness Verification for Transformers [ICLR 2020] [paper] [code]
  • Pretrained Transformers Improve Out-of-Distribution Robustness [ACL 2020] [paper] [code]

Detection

  • DETR: End-to-End Object Detection with Transformers [ECCV 2020] [paper] [code]
  • Deformable DETR: Deformable Transformers for End-to-End Object Detection [ICLR 2021] [paper] [code]
  • End-to-End Object Detection with Adaptive Clustering Transformer [arxiv 2020] [paper]
  • UP-DETR: Unsupervised Pre-training for Object Detection with Transformers [arxiv 2020] [paper]
  • Rethinking Transformer-based Set Prediction for Object Detection [arxiv 2020] [paper] [zhihu]
  • End-to-end Lane Shape Prediction with Transformers [WACV 2021] [paper] [code]
  • ViT-FRCNN: Toward Transformer-Based Object Detection [arxiv 2020] [paper]
  • Line Segment Detection Using Transformers [CVPR 2021] [paper] [code]
  • Facial Action Unit Detection With Transformers [CVPR 2021] [paper] [code]
  • Adaptive Image Transformer for One-Shot Object Detection [CVPR 2021] [paper] [code]
  • Self-attention based Text Knowledge Mining for Text Detection [CVPR 2021] [paper] [code]
  • Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions [ICCV 2021] [paper] [code]
  • Group-Free 3D Object Detection via Transformers [ICCV 2021] [paper] [code]
  • Fast Convergence of DETR with Spatially Modulated Co-Attention [ICCV 2021] [paper] [code]
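
The DETR family above frames detection as set prediction: a fixed set of predictions is matched one-to-one against ground-truth boxes by minimizing total matching cost. The papers use the Hungarian algorithm; this sketch brute-forces permutations instead, which is only viable for tiny toy sets but shows what the matching optimizes:

```python
from itertools import permutations

def bipartite_match(cost):
    # cost[i][j]: cost of assigning prediction i to ground-truth j
    # (square toy case: as many predictions as ground truths).
    # Returns (assignment, total) where assignment[j] is the prediction
    # matched to ground-truth j under the minimum total cost.
    n = len(cost)
    best, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[perm[j]][j] for j in range(n))
        if total < best_total:
            best, best_total = perm, total
    return best, best_total

# Toy 3x3 cost matrix (e.g. classification + box costs combined):
# the optimum matches prediction 1 -> gt 0, 0 -> gt 1, 2 -> gt 2.
cost = [[0.9, 0.1, 0.8],
        [0.2, 0.7, 0.9],
        [0.8, 0.8, 0.1]]
assignment, total = bipartite_match(cost)
```

In DETR the cost combines class probability and box-overlap terms, predictions outnumber ground truths (extras match a "no object" slot), and `scipy.optimize.linear_sum_assignment` does the matching in polynomial time.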

HOI

  • End-to-End Human Object Interaction Detection with HOI Transformer [CVPR 2021] [paper] [code]
  • HOTR: End-to-End Human-Object Interaction Detection with Transformers [CVPR 2021] [paper] [code]

Tracking

  • Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking [CVPR 2021] [paper] [code]
  • TransTrack: Multiple-Object Tracking with Transformer [CVPR 2021] [paper] [code]
  • Transformer Tracking [CVPR 2021] [paper] [code]
  • Learning Spatio-Temporal Transformer for Visual Tracking [ICCV 2021] [paper] [code]

Segmentation

  • SETR: Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [CVPR 2021] [paper] [code]
  • Trans2Seg: Transparent Object Segmentation with Transformer [arxiv 2021] [paper] [code]
  • End-to-End Video Instance Segmentation with Transformers [arxiv 2020] [paper] [zhihu]
  • MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers [CVPR 2021] [paper] [official-code] [unofficial-code]
  • Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [arxiv 2021] [paper] [code]
  • SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation [CVPR 2021] [paper] [code]

Reid

  • Diverse Part Discovery: Occluded Person Re-Identification With Part-Aware Transformer [CVPR 2021] [paper] [code]

Localization

  • LoFTR: Detector-Free Local Feature Matching with Transformers [CVPR 2021] [paper] [code]
  • MIST: Multiple Instance Spatial Transformer [CVPR 2021] [paper] [code]

Generation

Inpainting

  • STTN: Learning Joint Spatial-Temporal Transformations for Video Inpainting [ECCV 2020] [paper] [code]

Image enhancement

  • Pre-Trained Image Processing Transformer [CVPR 2021] [paper]
  • TTSR: Learning Texture Transformer Network for Image Super-Resolution [CVPR 2020] [paper] [code]

Pose Estimation

  • Pose Recognition with Cascade Transformers [CVPR 2021] [paper] [code]
  • TransPose: Towards Explainable Human Pose Estimation by Transformer [arxiv 2020] [paper] [code]
  • Hand-Transformer: Non-Autoregressive Structured Modeling for 3D Hand Pose Estimation [ECCV 2020] [paper]
  • HOT-Net: Non-Autoregressive Transformer for 3D Hand-Object Pose Estimation [ACMMM 2020] [paper]
  • End-to-End Human Pose and Mesh Reconstruction with Transformers [CVPR 2021] [paper] [code]
  • 3D Human Pose Estimation with Spatial and Temporal Transformers [arxiv 2020] [paper] [code]
  • End-to-End Trainable Multi-Instance Pose Estimation with Transformers [arxiv 2020] [paper]

Face

  • Robust Facial Expression Recognition with Convolutional Visual Transformers [arxiv 2020] [paper]
  • Clusformer: A Transformer Based Clustering Approach to Unsupervised Large-Scale Face and Visual Landmark Recognition [CVPR 2021] [paper] [code]

Video Understanding

  • Is Space-Time Attention All You Need for Video Understanding? [arxiv 2021] [paper] [code]
  • Temporal-Relational CrossTransformers for Few-Shot Action Recognition [CVPR 2021] [paper] [code]
  • Self-Supervised Video Hashing via Bidirectional Transformers [CVPR 2021] [paper]
  • SSAN: Separable Self-Attention Network for Video Representation Learning [CVPR 2021] [paper]

Depth-Estimation

  • AdaBins: Depth Estimation Using Adaptive Bins [CVPR 2021] [paper] [code]

Prediction

  • Multimodal Motion Prediction with Stacked Transformers [CVPR 2021] [paper] [code]
  • Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case [paper]
  • Transformer networks for trajectory forecasting [ICPR 2020] [paper] [code]
  • Spatial-Channel Transformer Network for Trajectory Prediction on the Traffic Scenes [arxiv 2021] [paper] [code]
  • Pedestrian Trajectory Prediction using Context-Augmented Transformer Networks [ICRA 2020] [paper] [code]
  • Spatio-Temporal Graph Transformer Networks for Pedestrian Trajectory Prediction [ECCV 2020] [paper] [code]
  • Hierarchical Multi-Scale Gaussian Transformer for Stock Movement Prediction [paper]
  • Single-Shot Motion Completion with Transformer [arxiv 2021] [paper] [code]
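
Many of the forecasting models above reuse the original Transformer's sinusoidal positional encoding to mark time steps: PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d)). A minimal sketch (plain Python, toy sizes):

```python
import math

def positional_encoding(n_positions, d_model):
    # Returns an n_positions x d_model table of sin/cos position codes:
    # even dimensions carry sin, the following odd dimension carries cos
    # at the same frequency, which falls with dimension index.
    pe = []
    for pos in range(n_positions):
        row = []
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            row.append(math.sin(angle))
            if i + 1 < d_model:
                row.append(math.cos(angle))
        pe.append(row)
    return pe

# 8 time steps, model dimension 4; row 0 is [0, 1, 0, 1] since sin(0)=0, cos(0)=1.
pe = positional_encoding(8, 4)
```

The table is simply added to the input embeddings, giving the (otherwise permutation-invariant) attention layers access to step order.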

NAS

PointCloud

  • Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [CVPR 2021] [paper] [code]
  • Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos [CVPR 2021] [paper]

Fashion

  • Kaleido-BERT: Vision-Language Pre-training on Fashion Domain [CVPR 2021] [paper] [code]

Medical

  • Lesion-Aware Transformers for Diabetic Retinopathy Grading [CVPR 2021] [paper]

Cross-Modal

  • Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers [CVPR 2021] [paper]
  • Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning [CVPR 2021] [paper] [code]
  • Topological Planning With Transformers for Vision-and-Language Navigation [CVPR 2021] [paper]
  • Multi-Stage Aggregated Transformer Network for Temporal Language Localization in Videos [CVPR 2021] [paper]
  • VLN BERT: A Recurrent Vision-and-Language BERT for Navigation [CVPR 2021] [paper] [code]
  • Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling [CVPR 2021] [paper] [code]

Reference
