Sparse Coding

  • Sparse coding is a class of unsupervised methods for learning sets of over-complete bases to represent data efficiently. The aim of sparse coding is to find a set of $k$ basis vectors $\phi_j$ such that an input vector $x \in \mathbb{R}^n$ can be represented as a linear combination of these basis vectors:

    $$x = \sum_{j=1}^{k} a_j \phi_j$$

  • The advantage of having an over-complete basis (i.e., $k$ greater than the input dimension $n$) is that our basis vectors are better able to capture structures and patterns inherent in the input data.
  • However, with an over-complete basis, the coefficients are no longer uniquely determined by the input vector. Therefore, in sparse coding, we introduce the additional criterion of sparsity to resolve the degeneracy introduced by over-completeness.
  • The sparse coding cost function is defined on a set of $m$ input vectors as:

    $$\min_{a,\phi} \; \sum_{i=1}^{m} \left\| x^{(i)} - \sum_{j=1}^{k} a_j^{(i)} \phi_j \right\|^2 + \lambda \sum_{i=1}^{m} \sum_{j=1}^{k} S\!\left(a_j^{(i)}\right)$$

    where $S(\cdot)$ is a sparsity function which penalizes $a_j^{(i)}$ for being far from zero. We can interpret the first term of the sparse coding objective as a reconstruction term which tries to force the algorithm to provide a good representation of $x^{(i)}$, and the second term as a sparsity penalty which forces our representation of $x^{(i)}$ (i.e., the learned features) to be sparse. The constant $\lambda$ is a scaling constant that determines the relative importance of these two contributions.

  • Although the most direct measure of sparsity is the $L_0$ norm, it is non-differentiable and difficult to optimize in general. In practice, common choices for the sparsity cost $S(\cdot)$ are the $L_1$ penalty $S(a_j) = |a_j|$ and the log penalty $S(a_j) = \log(1 + a_j^2)$.
  • It is also possible to make the sparsity penalty arbitrarily small by scaling down the coefficients $a_j^{(i)}$ and scaling up the basis vectors $\phi_j$ by some large constant. To prevent this from happening, we constrain $\left\|\phi_j\right\|^2$ to be less than some constant $C$. The full sparse coding cost function hence is:

    $$\min_{a,\phi} \; \sum_{i=1}^{m} \left\| x^{(i)} - \sum_{j=1}^{k} a_j^{(i)} \phi_j \right\|^2 + \lambda \sum_{i=1}^{m} \sum_{j=1}^{k} S\!\left(a_j^{(i)}\right) \quad \text{subject to} \quad \left\|\phi_j\right\|^2 \le C, \;\; \forall j = 1, \dots, k$$

    where the constant $C$ is usually set to $1$.

  • One problem is that the constraint $\left\|\phi_j\right\|^2 \le C$ cannot be enforced using simple gradient-based methods. Hence, in practice, this constraint is weakened to a "weight decay" term designed to keep the entries of $A$ small, where $A$ is the matrix whose columns are the basis vectors $\phi_j$ and $s$ is the coefficient vector of a single example $x$:

    $$J(A, s) = \left\| As - x \right\|_2^2 + \lambda \left\| s \right\|_1 + \gamma \left\| A \right\|_2^2$$
  • Another problem is that the L1 norm is not differentiable at 0, and hence poses a problem for gradient-based methods. We will "smooth out" the L1 norm using an approximation that allows us to use gradient descent: we use $\sqrt{s_j^2 + \epsilon}$ in place of $|s_j|$, where $\epsilon$ is a "smoothing parameter" which can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when $\epsilon$ is large compared to $s_j$, the value of $s_j^2 + \epsilon$ is dominated by $\epsilon$, and taking the square root yields approximately $\sqrt{\epsilon}$).
  • Hence, the final objective function is (a small numerical sketch of evaluating this objective is given just after this list):

    $$J(A, s) = \left\| As - x \right\|_2^2 + \lambda \sum_{j} \sqrt{s_j^2 + \epsilon} + \gamma \left\| A \right\|_2^2$$
  • The set of basis vectors is called the "dictionary" ($D$). An input $x$ is "adapted" to $D$ if it can be represented with only a few basis vectors, that is, if there exists a sparse vector $\alpha$ in $\mathbb{R}^k$ such that $x \approx D\alpha$. We call $\alpha$ the sparse code.
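
  • To make the objective above concrete, here is a minimal numpy sketch that evaluates the smoothed, weight-decayed cost for a batch of examples. The function and variable names (sparse_coding_cost, A, S, X, lam, gamma, eps) and the hyperparameter values are illustrative choices, not notation from any particular library:

```python
import numpy as np

def sparse_coding_cost(A, S, X, lam=0.1, gamma=0.01, eps=1e-4):
    """Smoothed, weight-decayed sparse coding objective for a batch of examples.

    A : (n, k) dictionary whose columns are the basis vectors phi_j
    S : (k, m) sparse codes, one column per example
    X : (n, m) data, one column per example
    """
    residual = A @ S - X                                # reconstruction error A s - x
    reconstruction = np.sum(residual ** 2)              # sum of ||A s - x||_2^2 terms
    sparsity = lam * np.sum(np.sqrt(S ** 2 + eps))      # smoothed L1 penalty
    weight_decay = gamma * np.sum(A ** 2)               # stands in for ||phi_j||^2 <= C
    return reconstruction + sparsity + weight_decay

# Tiny usage example: random data and a 2x over-complete random dictionary (k > n).
rng = np.random.default_rng(0)
n, k, m = 8, 16, 32
X = rng.standard_normal((n, m))
A = rng.standard_normal((n, k))
S = rng.standard_normal((k, m))
print(sparse_coding_cost(A, S, X))
```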

Learning

  • Learning a set of basis vectors using sparse coding consists of performing two separate optimizations (i.e., an alternating optimization method):

    • The first is an optimization over the coefficients $a_j^{(i)}$ for each training example $x^{(i)}$;
    • The second is an optimization over the basis vectors $\phi_j$ across many training examples at once.
  • The classical procedure that alternates between optimizing the dictionary $D$ and the codes $\alpha$ can achieve good results, but it is very slow (a sketch of this alternating scheme is given after this list).
  • A significant limitation of sparse coding is that even after a set of basis vectors has been learnt, in order to "encode" a new data example, an optimization must be performed to obtain the required coefficients. This significant "runtime" cost means that sparse coding is computationally expensive even at test time, especially compared to typical feed-forward architectures.
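
  • As a rough sketch of this alternating scheme (a toy illustration under the smoothed objective above, not the procedure from any particular paper; the step sizes and iteration counts are arbitrary), one can alternate a few gradient steps on the codes with the dictionary fixed and then a few gradient steps on the dictionary with the codes fixed. Note that "encoding" a new example at test time amounts to rerunning only Step 1 for that example, which is exactly the runtime cost mentioned above.

```python
import numpy as np

def learn_dictionary(X, k, lam=0.1, gamma=0.01, eps=1e-4,
                     n_outer=50, n_inner=20, lr=1e-3, seed=0):
    """Alternating gradient descent on the codes S and the dictionary A (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = 0.1 * rng.standard_normal((n, k))   # dictionary, columns = basis vectors
    S = np.zeros((k, m))                    # one code vector per training example
    for _ in range(n_outer):
        # Step 1: optimize the codes S with the dictionary A held fixed.
        for _ in range(n_inner):
            R = A @ S - X
            grad_S = 2 * A.T @ R + lam * S / np.sqrt(S ** 2 + eps)
            S -= lr * grad_S
        # Step 2: optimize the dictionary A with the codes S held fixed.
        for _ in range(n_inner):
            R = A @ S - X
            grad_A = 2 * R @ S.T + 2 * gamma * A
            A -= lr * grad_A
    return A, S

# Usage: learn a 2x over-complete dictionary for random data.
X = np.random.default_rng(1).standard_normal((8, 100))
A, S = learn_dictionary(X, k=16)
```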

Remarks

  • In my view, because of the sparseness enforced on the code during dictionary learning, the reconstruction $D\alpha$ tends to discard components of the original signal that the dictionary cannot represent sparsely, i.e., it has some denoising effect. Hence, sparse coding can be used to denoise images.
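
  • As a toy illustration of that denoising idea (a sketch that assumes scikit-learn is available; the signal construction and all parameter values are hypothetical, and how much noise is actually removed depends on the dictionary and the penalty strength), one can learn a dictionary on clean signals, sparse-code a noisy signal, and compare the reconstruction $D\alpha$ with the clean original:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

# Toy "clean" signals: sparse combinations of a few sinusoidal atoms.
t = np.linspace(0, 1, 64)
atoms = np.stack([np.sin(2 * np.pi * f * t) for f in range(1, 9)])    # (8, 64)
codes = rng.standard_normal((200, 8)) * (rng.random((200, 8)) < 0.2)  # mostly zero
clean = codes @ atoms                                                 # (200, 64)

# Learn an over-complete dictionary D from the clean signals.
dl = DictionaryLearning(n_components=16, alpha=0.5, max_iter=200, random_state=0)
dl.fit(clean)

# Denoise one signal: sparse-code the noisy version, then reconstruct it as D * alpha.
noisy = clean[0] + 0.3 * rng.standard_normal(64)
alpha = dl.transform(noisy.reshape(1, -1))   # sparse code of the noisy signal
denoised = alpha @ dl.components_            # reconstruction from the sparse code
print("error before:", np.linalg.norm(noisy - clean[0]))
print("error after: ", np.linalg.norm(denoised.ravel() - clean[0]))
```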
