Pegasos: Primal Estimated sub-GrAdient Solver for SVM
Abstract
We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy $\epsilon$ is $\tilde{O}(1/\epsilon)$. In contrast, previous analyses of stochastic gradient descent methods require $\Omega(1/\epsilon^2)$ iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with $1/\lambda$, where $\lambda$ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is $\tilde{O}(d/(\lambda\epsilon))$, where $d$ is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.
1. Introduction
Support Vector Machines (SVMs) are an effective and popular classification learning tool. The task of learning a support vector machine is typically cast as a constrained quadratic programming problem. However, in its native form, it is in fact an unconstrained empirical loss minimization with a penalty term for the norm of the classifier that is being learned. Formally, given a training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, we would like to find the minimizer of the problem
$$\min_{\mathbf{w}} \;\; \frac{\lambda}{2}\,\|\mathbf{w}\|^2 \;+\; \frac{1}{m} \sum_{(\mathbf{x},y) \in S} \ell(\mathbf{w}; (\mathbf{x},y)), \qquad (1)$$
where
$$\ell(\mathbf{w}; (\mathbf{x},y)) = \max\{0,\; 1 - y \langle \mathbf{w}, \mathbf{x} \rangle\}. \qquad (2)$$
We denote the objective function of Eq. (1) by $f(\mathbf{w})$. An optimization method finds an $\epsilon$-accurate solution $\hat{\mathbf{w}}$ if $f(\hat{\mathbf{w}}) \le \min_{\mathbf{w}} f(\mathbf{w}) + \epsilon$. The original SVM problem also includes a bias term, $b$. We omit the bias throughout the first sections and defer the description of an extension which employs a bias term to Sec. 4.
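To make the objective concrete, here is a minimal NumPy sketch that evaluates Eq. (1) and Eq. (2) directly; the function names `hinge_loss` and `svm_objective` are our own choices, not part of the paper:

```python
import numpy as np

def hinge_loss(w, x, y):
    """Hinge loss from Eq. (2): max{0, 1 - y <w, x>}."""
    return max(0.0, 1.0 - y * np.dot(w, x))

def svm_objective(w, X, Y, lam):
    """Primal SVM objective f(w) from Eq. (1):
    (lam / 2) * ||w||^2 plus the average hinge loss over the training set."""
    reg = 0.5 * lam * np.dot(w, w)
    avg_loss = np.mean([hinge_loss(w, x, y) for x, y in zip(X, Y)])
    return reg + avg_loss
```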
We describe and analyze in this paper a simple iterative algorithm, called Pegasos, for solving Eq. (1). The algorithm performs $T$ iterations and also requires an additional parameter $k$, whose role is explained in the sequel. Pegasos alternates between stochastic subgradient descent steps and projection steps. The parameter $k$ determines the number of examples from $S$ the algorithm uses on each iteration for estimating the subgradient. When $k = m$, Pegasos reduces to a variant of the subgradient projection method. We show that in this case the number of iterations required in order to achieve an $\epsilon$-accurate solution is $\tilde{O}(1/(\lambda\epsilon))$. At the other extreme, when $k = 1$, we recover a variant of the stochastic (sub)gradient method. In the stochastic case, we analyze the probability of obtaining a good approximate solution. Specifically, we show that with probability of at least $1 - \delta$ our algorithm finds an $\epsilon$-accurate solution using only $\tilde{O}(1/(\delta\lambda\epsilon))$ iterations, where each iteration involves a single inner product between $\mathbf{w}$ and $\mathbf{x}$. This rate of convergence does not depend on the size of the training set, and thus our algorithm is especially suited for large datasets.
2. The Pegasos Algorithm
In this section we describe the Pegasos algorithm for solving the optimization problem given in Eq. (1). The algorithm receives as input two parameters: $T$, the number of iterations to perform, and $k$, the number of examples to use for calculating sub-gradients. Initially, we set $\mathbf{w}_1$ to any vector whose norm is at most $1/\sqrt{\lambda}$. On iteration $t$ of the algorithm, we first choose a set $A_t \subseteq S$ of size $k$. Then, we replace the objective in Eq. (1) with an approximate objective function,
$$f(\mathbf{w}; A_t) = \frac{\lambda}{2}\,\|\mathbf{w}\|^2 + \frac{1}{k} \sum_{(\mathbf{x},y) \in A_t} \ell(\mathbf{w}; (\mathbf{x},y)).$$
Note that we overloaded our original definition of $f$, as the original objective can be denoted either as $f(\mathbf{w})$ or as $f(\mathbf{w}; S)$. We interchangeably use both notations depending on the context. Next, we set the learning rate $\eta_t = 1/(\lambda t)$ and define $A_t^+ = \{(\mathbf{x}, y) \in A_t : y \langle \mathbf{w}_t, \mathbf{x} \rangle < 1\}$ to be the set of examples for which $\mathbf{w}_t$ suffers a non-zero loss. We now perform a two-step update as follows. We scale $\mathbf{w}_t$ by $(1 - \eta_t \lambda)$, and for all examples $(\mathbf{x}, y) \in A_t^+$ we add to $\mathbf{w}_t$ the vector $\frac{\eta_t}{k}\, y\, \mathbf{x}$. We denote the resulting vector by $\mathbf{w}_{t+\frac{1}{2}}$. This step can also be written as $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t \nabla_t$, where
$$\nabla_t = \lambda\, \mathbf{w}_t - \frac{1}{k} \sum_{(\mathbf{x},y) \in A_t^+} y\, \mathbf{x}.$$
The definition of the hinge loss implies that $\nabla_t$ is a sub-gradient of $f(\mathbf{w}; A_t)$ at $\mathbf{w}_t$. Last, we set $\mathbf{w}_{t+1}$ to be the projection of $\mathbf{w}_{t+\frac{1}{2}}$ onto the set
$$B = \left\{\mathbf{w} : \|\mathbf{w}\| \le 1/\sqrt{\lambda}\right\}.$$
That is, $\mathbf{w}_{t+1}$ is obtained by scaling $\mathbf{w}_{t+\frac{1}{2}}$ by $\min\{1,\; 1/(\sqrt{\lambda}\,\|\mathbf{w}_{t+\frac{1}{2}}\|)\}$. As we show in our analysis below, the optimal solution of SVM is in the set $B$. Informally speaking, we can always project back onto the set $B$, as we only get closer to the optimum. The output of Pegasos is the last vector, $\mathbf{w}_{T+1}$.
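The whole update is compact enough to sketch in a few lines. Below is a minimal NumPy implementation of the iteration just described, sampling $A_t$ with replacement and starting from $\mathbf{w}_1 = \mathbf{0}$; the function name `pegasos` and these particular choices are ours, not prescribed by the paper:

```python
import numpy as np

def pegasos(X, Y, lam, T, k, seed=0):
    """Sketch of the Pegasos iteration: stochastic sub-gradient steps
    followed by projection onto B = {w : ||w|| <= 1/sqrt(lam)}."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)                       # ||w_1|| = 0 <= 1/sqrt(lam)
    for t in range(1, T + 1):
        A = rng.choice(m, size=k)         # A_t: k examples sampled i.i.d. from S
        eta = 1.0 / (lam * t)             # learning rate eta_t = 1/(lam t)
        # A_t^+: examples in A_t on which w_t suffers a non-zero hinge loss
        plus = [i for i in A if Y[i] * (X[i] @ w) < 1]
        # first step: w_{t+1/2} = (1 - eta*lam) w_t + (eta/k) sum_{A_t^+} y x
        w = (1.0 - eta * lam) * w
        for i in plus:
            w += (eta / k) * Y[i] * X[i]
        # projection step: scale by min{1, 1/(sqrt(lam) ||w_{t+1/2}||)}
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w
```

Note that at $t = 1$ the scaling factor $1 - \eta_1\lambda$ is exactly zero, so the first step simply moves $\mathbf{w}$ in the averaged sub-gradient direction of the sampled examples.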
Note that if we choose $A_t = S$ on each round, then we obtain the sub-gradient projection method. On the other extreme, if we choose $A_t$ to contain a single randomly selected example, then we recover a variant of the stochastic gradient method. In general, we allow $A_t$ to be a set of $k$ examples sampled i.i.d. from $S$.
We conclude this section with a short discussion of implementation details when the instances are sparse, namely, when each instance has very few non-zero elements. In this case, we can represent $\mathbf{w}_t$ as a triplet $(\mathbf{v}_t, a_t, \nu_t)$, where $\mathbf{v}_t$ is a dense vector and $a_t, \nu_t$ are scalars. The vector $\mathbf{w}_t$ is defined through the triplet as follows: $\mathbf{w}_t = a_t\, \mathbf{v}_t$, and $\nu_t$ stores the squared norm of $\mathbf{w}_t$, $\nu_t = \|\mathbf{w}_t\|^2 = a_t^2\, \|\mathbf{v}_t\|^2$. Using this representation, it is easily verified that the total number of operations required for performing one iteration of Pegasos with $k = 1$ is $O(d)$, where $d$ is the number of non-zero elements in $\mathbf{x}$.
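A minimal sketch of this bookkeeping, assuming each sparse example arrives as parallel (indices, values) arrays; the class `ScaledVector` and its method names are hypothetical:

```python
import numpy as np

class ScaledVector:
    """Represents w = a * v with nu = ||w||^2 cached, so scaling w and
    projecting onto the ball are O(1), while adding a sparse example or
    computing <w, x> is O(d) for an example with d non-zeros."""

    def __init__(self, n):
        self.v = np.zeros(n)   # dense vector v
        self.a = 1.0           # scale factor a
        self.nu = 0.0          # squared norm of w = a * v

    def scale(self, c):
        # w <- c * w: only the scalars change, O(1). If c == 0 (as on the
        # first Pegasos step, where 1 - eta_1 * lam = 0), reset the triplet.
        if c == 0.0:
            self.v[:] = 0.0
            self.a, self.nu = 1.0, 0.0
        else:
            self.a *= c
            self.nu *= c * c

    def add_sparse(self, idx, vals, c):
        # w <- w + c * x for a sparse x: O(d), updating nu incrementally.
        for j, x_j in zip(idx, vals):
            old = self.a * self.v[j]
            self.v[j] += (c / self.a) * x_j
            new = self.a * self.v[j]
            self.nu += new * new - old * old

    def dot_sparse(self, idx, vals):
        # <w, x> for a sparse x: O(d).
        return self.a * sum(self.v[j] * x_j for j, x_j in zip(idx, vals))
```

With this structure, the scaling and projection steps touch only $a_t$ and $\nu_t$, and only the $d$ coordinates of the sampled example are ever read or written, giving the $O(d)$ per-iteration cost stated above.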
3. Analysis
In this section we analyze the convergence properties of Pegasos. Throughout this section we denote
$$\mathbf{w}^\star = \operatorname*{arg\,min}_{\mathbf{w}} f(\mathbf{w}).$$
Recall that on each iteration of the algorithm, we focus on an instantaneous objective function $f(\mathbf{w}; A_t)$.