Learning with Few Labeled Data (University of Pennsylvania)

Original post: the WeChat public account 小样本学习与智能前沿. Reply "200527" to the account to receive the slides.

Few-shot image classification
Three regimes of image classification

Problem formulation
The training set consists of labeled samples from lots of "tasks", e.g., classifying cars, cats, dogs, planes, ...
Data from the new task, e.g., classifying strawberries, has only a few labeled samples, say s samples per class, along with the test data to be classified.
Few-shot setting considers the case when s is small.
A flavor of current few-shot algorithms
Meta-learning forms the basis for almost all current algorithms. Here’s one successful instantiation.
Prototypical Networks [Snell et al., 2017]
- Collect a meta-training set; this consists of a large number of related tasks
- Train one model on all these tasks so that clustering the features of this model correctly classifies each task
- If the test task comes from the same distribution as the meta-training tasks, we can use the same feature clustering on the new task to classify its new classes (a minimal sketch follows)
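A minimal sketch of this procedure at test time, assuming a feature extractor has already been meta-trained (function and variable names here are our own illustration):

```python
import torch

def prototypical_logits(support, support_labels, query, n_way):
    # support: (n_support, d) embeddings of the labeled support samples
    # support_labels: (n_support,) integer class labels in [0, n_way)
    # query: (n_query, d) embeddings of the samples to classify
    # Prototype of a class = mean embedding of its support samples.
    prototypes = torch.stack(
        [support[support_labels == k].mean(dim=0) for k in range(n_way)]
    )
    # Negative squared Euclidean distance to each prototype acts as a logit,
    # as in Snell et al. (2017); a softmax over these gives class probabilities.
    return -torch.cdist(query, prototypes) ** 2
```

No gradient steps are needed on the new task: the few labeled samples only determine the prototypes.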

How well does few-shot learning work today?

The key idea
A classifier trained on a dataset \(D_s\) is a function F that classifies data x using
\[ \hat{y} = F(x;\, \theta^*). \]
The parameters \(\theta^* = \theta(D_s)\) of the classifier are a statistic of the dataset \(D_s\) obtained after training. Maintaining this statistic avoids having to search over functions F at inference time.
We cannot learn a good (sufficient) statistic from only a few samples, so we will search over functions more explicitly at test time.

Transductive Learning
A very simple baseline
- Train a large deep network on the meta-training dataset with the standard classification loss
- Initialize a new "classifier head" on top of the logits to handle the new classes
- Fine-tune with the few labeled data from the new task
- Perform transductive learning using the unlabeled test data, with a few practical tricks such as cosine annealing of step-sizes, mixup regularization, 16-bit training, very heavy data augmentation, and label-smoothed cross-entropy (a sketch of one update follows)
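A sketch of one fine-tuning update in the spirit of this baseline (Dhillon et al., 2019): the entropy term on the unlabeled test data is what makes it transductive. The exact loss weighting and the tricks listed above are omitted here:

```python
import torch
import torch.nn.functional as F

def transductive_step(model, x_support, y_support, x_query, optimizer):
    # Cross-entropy on the few labeled samples of the new task ...
    optimizer.zero_grad()
    ce = F.cross_entropy(model(x_support), y_support)
    # ... plus the Shannon entropy of predictions on the unlabeled test data,
    # which pushes the model towards confident predictions on the query set.
    p = F.softmax(model(x_query), dim=1)
    ent = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
    loss = ce + ent
    loss.backward()
    optimizer.step()
    return loss.item()
```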
An example

Results on benchmark datasets

The ImageNet-21k dataset

1-shot, 5-way accuracies are as high as 89%; 1-shot, 20-way accuracies are about 70%.
A proposal for systematic evaluation

A thermodynamical view of representation learning
Transfer learning
Let’s take an example from computer vision
(Zamir, A. R., Sax, A., Shen, W., Guibas, L. J., Malik, J., & Savarese, S. Taskonomy: Disentangling Task Transfer Learning. CVPR 2018)

Information Bottleneck Principle
A generalization of rate-distortion theory for learning relevant representations of data [Tishby et al., 2000]

Z is a representation of the data X. We want
- Z to be sufficient to predict the target Y, and
- Z to be small in size, e.g., use only a few bits.
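Written as a Lagrangian over stochastic encoders \(p(z|x)\), the IB objective takes the standard form

```latex
\min_{p(z|x)} \; I(X; Z) - \beta \, I(Z; Y),
```

where the multiplier \(\beta\) trades off compression of X against prediction of Y.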

Doing well on one task requires throwing away nuisance information [Achille & Soatto, 2017].
The key idea
The IB Lagrangian simply minimizes \(I(X;Z)\); it does not let us measure what was thrown away.
Choose a canonical task to measure discarded information. Setting
\[ Y = X, \]
i.e., reconstruction of the data, gives a special task. It is a superset of all tasks and forces the model to learn lossless representations.
The architecture we will focus on is an auto-encoder.
Shannon entropy measures the complexity of the data
\[ H = -\mathbb{E}_{p(x)}\left[\log p(x)\right]. \]
Distortion D measures the quality of the reconstruction
\[ D = -\mathbb{E}_{p(x)}\, \mathbb{E}_{e(z|x)}\left[\log d(x|z)\right]. \]
Rate R measures the average excess bits used to encode the representation
\[ R = \mathbb{E}_{p(x)}\left[\mathrm{KL}\big(e(z|x)\,\|\,m(z)\big)\right], \]
where \(e(z|x)\) is the encoder, \(d(x|z)\) the decoder, and \(m(z)\) the marginal on the representation.
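A minimal sketch of estimating D and R on one batch, assuming a Gaussian encoder \(e(z|x)\), a unit-Gaussian marginal \(m(z)\) (so the KL has a closed form), and a Bernoulli decoder \(d(x|z)\); these modeling choices are ours, not fixed by the slides:

```python
import torch
import torch.nn.functional as F

def rate_distortion(mu, log_var, x, x_logits):
    # mu, log_var: parameters of the Gaussian encoder e(z|x), shape (B, dz)
    # x: batch of targets in [0, 1], shape (B, dx)
    # x_logits: pre-sigmoid outputs of the decoder, shape (B, dx)
    # Rate R = E[ KL(e(z|x) || m(z)) ], in closed form against N(0, I).
    R = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=1).mean()
    # Distortion D = -E[ log d(x|z) ] under the Bernoulli likelihood.
    D = F.binary_cross_entropy_with_logits(
        x_logits, x, reduction="none"
    ).sum(dim=1).mean()
    return R, D
```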
Rate-Distortion curve
We know that [Alemi et al., 2017]
\[ H \le D + R, \tag{1} \]
which rearranges to the well-known ELBO (evidence lower bound), \(-(D+R) \le \mathbb{E}_{p(x)}[\log p(x)]\). Let
\[ F(\lambda) = \min_{e,\,d,\,m}\; R + \lambda D. \]
This is a Lagrange relaxation of the fact that, given a variational family and data, there is an optimal value R = func(D) that best sandwiches (1).

Rate-Distortion-Classification (RDC) surface
Let us extend the Lagrangian to
\[ F(\lambda, \gamma) = \min_{e,\,d,\,m,\,c}\; R + \lambda D + \gamma C, \]
where the classification loss of a variational classifier \(c(y|z)\) is
\[ C = -\mathbb{E}_{p(x,y)}\, \mathbb{E}_{e(z|x)}\left[\log c(y|z)\right]. \]
One can also include other quantities, such as the entropy S of the model parameters.

The existence of a convex surface func(R, D, C, S) = 0 tying together these functionals allows a formal connection to thermodynamics [Alemi and Fischer, 2018].
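Continuing the sketch above, a point on the RDC surface for fixed multipliers \((\lambda, \gamma)\) can be reached by minimizing the relaxed objective; the classifier head \(c(y|z)\) acting on sampled representations is again our illustrative choice:

```python
import torch.nn.functional as F

def rdc_loss(R, D, z, y, classifier, lam, gamma):
    # C = -E[ log c(y|z) ], estimated with one Monte-Carlo sample z ~ e(z|x).
    C = F.cross_entropy(classifier(z), y)
    # Lagrange relaxation whose minimizers trace out the RDC surface.
    return R + lam * D + gamma * C
```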

Just like energy is conserved in physical processes, information is conserved in the model: it is either in the encoder-classifier pair or in the decoder.
Equilibrium surface of optimal free-energy
The RDC surface determines all possible representations that can be learnt from the given data. Solving the variational problem for F(λ, γ) gives
\[ D = \frac{\partial F}{\partial \lambda} \]
and
\[ C = \frac{\partial F}{\partial \gamma}. \]
This is called the "equilibrium surface" because training converges to some point on this surface. We now construct ways to travel on the surface.
Note that the surface depends on the data p(x, y).
An iso-classification loss process
A quasi-static process happens slowly enough for the system to remain in equilibrium with its surroundings, e.g., reversible expansion of an ideal gas.
We will create a quasi-static process to travel on the RDC surface, by imposing a constraint that keeps a chosen functional fixed along the trajectory. E.g., if we want the classification loss to be constant in time, we need
\[ 0 = \frac{dC}{dt} = \dot{\lambda}\, \frac{\partial^2 F}{\partial \lambda\, \partial \gamma} + \dot{\gamma}\, \frac{\partial^2 F}{\partial \gamma^2}, \]
using \(C = \partial F/\partial \gamma\) on the equilibrium surface.
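A numerical sketch of one step of such a process, assuming the equilibrium relation \(C = \partial F/\partial \gamma\) above and that the second derivatives of F at the current point are estimated during training (e.g., by finite differences); all names here are hypothetical:

```python
def iso_classification_step(lam, gamma, lam_dot, F_lg, F_gg, dt):
    # Enforce dC/dt = lam_dot * F_lg + gamma_dot * F_gg = 0, where
    # F_lg = d2F/(dlam dgamma) and F_gg = d2F/dgamma2 at the current point.
    gamma_dot = -lam_dot * F_lg / F_gg
    # Take a small step along the iso-classification curve on the surface.
    return lam + lam_dot * dt, gamma + gamma_dot * dt
```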
……

Summary
Simple methods such as transductive fine-tuning work extremely well for few-shot learning. This is really because of powerful function approximators such as neural networks.
The RDC surface is a fundamental quantity and enables principled methods for transfer learning. It also unlocks new paths to understanding regularization and the properties of neural architectures in classical supervised learning.
We did well in the era of big data without understanding much about data; this is unlikely to work in the age of little data.
Email questions to pratikac@seas.upenn.edu. Read more at:
- Dhillon, G., Chaudhari, P., Ravichandran, A., and Soatto, S. (2019). A baseline for few-shot image classification. arXiv:1909.02729. ICLR 2020.
- Li, H., Chaudhari, P., Yang, H., Lam, M., Ravichandran, A., Bhotika, R., & Soatto, S. (2020). Rethinking the Hyperparameters for Fine-tuning. arXiv:2002.11770. ICLR 2020.
- Fakoor, R., Chaudhari, P., Soatto, S., & Smola, A. J. (2019). Meta-Q-Learning. arXiv:1910.00125. ICLR 2020.
- Gao, Y., and Chaudhari, P. (2020). A free-energy principle for representation learning. arXiv:2002.12406.