Self-Taught Learning
The promise of self-taught learning and unsupervised feature learning is that if we can get our algorithms to learn from unlabeled data, then we can easily obtain and learn from massive amounts of it. Even though a single unlabeled example is less informative than a single labeled example, if we can get tons of the former (for example, by downloading random unlabeled images, audio clips, or text documents off the internet), and if our algorithms can exploit this unlabeled data effectively, then we might be able to achieve better performance than approaches that rely on massive hand-engineering and massive hand-labeling.
Learning features
We have already seen how an autoencoder can be used to learn features from unlabeled data. Concretely, suppose we have an unlabeled training set $\{ x_u^{(1)}, x_u^{(2)}, \ldots, x_u^{(m_u)} \}$ with $m_u$ unlabeled examples. (The subscript "u" stands for "unlabeled.") We can then train a sparse autoencoder on this data (perhaps with appropriate whitening or other pre-processing):
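To make this concrete, here is a minimal NumPy sketch of training such a sparse autoencoder with batch gradient descent; the function name, the hyperparameters (rho, beta, lam, lr), and the layer sizes are illustrative assumptions, not values prescribed by this tutorial:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X_u, n_hidden=64, lr=0.1, n_epochs=200,
                             rho=0.05, beta=3.0, lam=1e-4, seed=0):
    """Train a sparse autoencoder on unlabeled data X_u of shape (m_u, n)."""
    rng = np.random.default_rng(seed)
    m, n = X_u.shape
    W1 = rng.normal(0.0, 0.01, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.01, (n_hidden, n)); b2 = np.zeros(n)
    for _ in range(n_epochs):
        A = sigmoid(X_u @ W1 + b1)        # hidden activations, shape (m, n_hidden)
        X_hat = sigmoid(A @ W2 + b2)      # reconstruction of the input
        rho_hat = A.mean(axis=0)          # average activation of each hidden unit
        # Backprop through the squared-error reconstruction loss, a KL sparsity
        # penalty pushing rho_hat toward rho, and L2 weight decay on the weights.
        d_out = (X_hat - X_u) * X_hat * (1.0 - X_hat)
        kl_grad = beta * (-rho / rho_hat + (1.0 - rho) / (1.0 - rho_hat))
        d_hid = (d_out @ W2.T + kl_grad) * A * (1.0 - A)
        W2 -= lr * (A.T @ d_out / m + lam * W2)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X_u.T @ d_hid / m + lam * W1)
        b1 -= lr * d_hid.mean(axis=0)
    return W1, b1, W2, b2
```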
Having trained the parameters $W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}$ of this model, given any new input $x$, we can now compute the corresponding vector of activations $a$ of the hidden units. As we saw previously, this often gives a better representation of the input than the original raw input $x$. We can also visualize the algorithm for computing the features/activations $a$ as the following neural network:
This is just the sparse autoencoder that we previously had, with the final layer removed.
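In code, this truncated network is just the encoder half of the autoencoder; a sketch reusing the sigmoid, W1, and b1 from the hypothetical training function above:

```python
def extract_features(X, W1, b1):
    """The autoencoder with its final layer removed: map each input row
    of X to its vector of hidden-unit activations a."""
    return sigmoid(X @ W1 + b1)
```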
Now, suppose we have a labeled training set $\{ (x_l^{(1)}, y^{(1)}), (x_l^{(2)}, y^{(2)}), \ldots, (x_l^{(m_l)}, y^{(m_l)}) \}$ of $m_l$ examples. (The subscript "l" stands for "labeled.") We can now find a better representation for the inputs. In particular, rather than representing the first training example as $x_l^{(1)}$, we can feed $x_l^{(1)}$ as the input to our autoencoder and obtain the corresponding vector of activations $a_l^{(1)}$. To represent this example, we can either simply replace the original feature vector with $a_l^{(1)}$, or we can concatenate the two feature vectors together, getting a representation $(x_l^{(1)}, a_l^{(1)})$.
Thus, our training set now becomes $\{ (a_l^{(1)}, y^{(1)}), (a_l^{(2)}, y^{(2)}), \ldots, (a_l^{(m_l)}, y^{(m_l)}) \}$ (if we use the replacement representation, and use $a_l^{(i)}$ to represent the $i$-th training example), or $\{ ((x_l^{(1)}, a_l^{(1)}), y^{(1)}), ((x_l^{(2)}, a_l^{(2)}), y^{(2)}), \ldots, ((x_l^{(m_l)}, a_l^{(m_l)}), y^{(m_l)}) \}$ (if we use the concatenated representation). In practice, the concatenated representation often works better; but for memory or computational reasons, we will sometimes use the replacement representation as well.
Finally, we can train a supervised learning algorithm such as an SVM, logistic regression, etc. to obtain a function that makes predictions on the $y$ values. Given a test example $x_{\text{test}}$, we would then follow the same procedure: first feed it to the autoencoder to get $a_{\text{test}}$, then feed either $a_{\text{test}}$ or $(x_{\text{test}}, a_{\text{test}})$ to the trained classifier to get a prediction.
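As one illustrative choice of classifier (scikit-learn's logistic regression; an SVM would work the same way), continuing the sketch above, where y_l denotes the labels of the labeled set and x_test is one raw test example:

```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(max_iter=1000)
clf.fit(X_concat, y_l)                          # train on the concatenated features

# At test time, apply exactly the same feature mapping before predicting.
x_test_row = x_test.reshape(1, -1)
a_test = extract_features(x_test_row, W1, b1)
prediction = clf.predict(np.hstack([x_test_row, a_test]))
```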
On pre-processing the data
During the feature learning stage, where we were learning from the unlabeled training set $\{ x_u^{(1)}, x_u^{(2)}, \ldots \}$, we may have computed various pre-processing parameters. For example, we may have computed a mean value of the data and subtracted off this mean to perform mean normalization, or used PCA to compute a matrix $U$ so as to represent the data as $U^T x$ (or used PCA whitening or ZCA whitening). If so, it is important to save these pre-processing parameters and to use the same parameters during the labeled training phase and the test phase, so as to make sure we are always transforming the data the same way before feeding it into the autoencoder. In particular, if we have computed a matrix $U$ using the unlabeled data and PCA, we should keep the same matrix $U$ and use it to preprocess the labeled examples and the test data. We should not re-estimate a different $U$ matrix (or a different data mean for mean normalization, etc.) using the labeled training set, since that might result in a dramatically different pre-processing transformation, which would make the input distribution to the autoencoder very different from what it was actually trained on.
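A sketch of this discipline in NumPy: fit the mean and the PCA matrix $U$ once on the unlabeled data, then apply that same transform everywhere (the helper name is an illustrative assumption):

```python
# Fit pre-processing parameters on the UNLABELED data only.
mu = X_u.mean(axis=0)
cov = (X_u - mu).T @ (X_u - mu) / X_u.shape[0]
U, _, _ = np.linalg.svd(cov)       # columns of U form the PCA basis

def preprocess(X):
    """Apply the SAME mu and U to unlabeled, labeled, and test data alike."""
    return (X - mu) @ U            # computes U^T x for each example x

X_l_pp = preprocess(X_l)           # never re-estimate mu or U on these
x_test_pp = preprocess(x_test.reshape(1, -1))
```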
On the terminology of unsupervised feature learning
There are two common unsupervised feature learning settings, depending on what type of unlabeled data you have. The more general and powerful setting is the self-taught learning setting, which does not assume that your unlabeled data $x_u$ is drawn from the same distribution as your labeled data $x_l$. The more restrictive setting, where the unlabeled data comes from exactly the same distribution as the labeled data, is sometimes called the semi-supervised learning setting. This distinction is best explained with an example, which we now give.
Suppose your goal is a computer vision task where you'd like to distinguish between images of cars and images of motorcycles; so, each labeled example in your training set is either an image of a car or an image of a motorcycle. Where can we get lots of unlabeled data? The easiest way would be to obtain some random collection of images, perhaps downloaded off the internet. We could then train the autoencoder on this large collection of images, and obtain useful features from them. Because here the unlabeled data is drawn from a different distribution than the labeled data (i.e., perhaps some of our unlabeled images may contain cars/motorcycles, but not every image downloaded is either a car or a motorcycle), we call this self-taught learning.
In contrast, if we happen to have lots of unlabeled images lying around that are all images of either a car or a motorcycle, but where the data is just missing its label (so you don't know which ones are cars, and which ones are motorcycles), then we could use this form of unlabeled data to learn the features. This setting---where each unlabeled example is drawn from the same distribution as your labeled examples---is sometimes called the semi-supervised setting. In practice, we often do not have this sort of unlabeled data (where would you get a database of images where every image is either a car or a motorcycle, but just missing its label?), and so in the context of learning features from unlabeled data, the self-taught learning setting is more broadly applicable.
Self-taught learning vs. semi-supervised learning
Semi-supervised learning assumes that the unlabeled data and the labeled data are drawn from the same distribution.