Pegasos: Primal Estimated sub-GrAdient Solver for SVM
Abstract
We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy $\epsilon$ is $\tilde{O}(1/\epsilon)$. In contrast, previous analyses of stochastic gradient descent methods require $\Omega(1/\epsilon^2)$ iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with $1/\lambda$, where $\lambda$ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is $\tilde{O}(d/(\lambda\epsilon))$, where $d$ is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from the Reuters Corpus Volume 1 (RCV1) with about 800,000 training examples.
1. Introduction
Support Vector Machines (SVMs) are an effective and popular classification learning tool. The task of learning a support vector machine is typically cast as a constrained quadratic program. However, in its native form, it is in fact an unconstrained empirical loss minimization problem with a penalty term for the norm of the classifier that is being learned. Formally, given a training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, we would like to find the minimizer of the problem
$$\min_{\mathbf{w}} \; \frac{\lambda}{2}\|\mathbf{w}\|^2 \;+\; \frac{1}{m}\sum_{(\mathbf{x},y)\in S} \ell(\mathbf{w}; (\mathbf{x},y)), \tag{1}$$
where
$$\ell(\mathbf{w}; (\mathbf{x},y)) = \max\{0,\; 1 - y\,\langle \mathbf{w}, \mathbf{x}\rangle\}. \tag{2}$$
We denote the objective function of Eq. (1) by $f(\mathbf{w})$. An optimization method finds an $\epsilon$-accurate solution $\hat{\mathbf{w}}$ if $f(\hat{\mathbf{w}}) \le \min_{\mathbf{w}} f(\mathbf{w}) + \epsilon$. The original SVM problem also includes a bias term, $b$. We omit the bias throughout the first sections and defer the description of an extension which employs a bias term to Sec. 4.
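To make the objective concrete, the following NumPy sketch evaluates the hinge loss of Eq. (2) and the regularized objective $f(\mathbf{w})$ of Eq. (1). The function names and the dense-array representation are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def hinge_loss(w, x, y):
    # ell(w; (x, y)) = max{0, 1 - y <w, x>}, as in Eq. (2)
    return max(0.0, 1.0 - y * np.dot(w, x))

def svm_objective(w, X, y, lam):
    # f(w) = (lam / 2) ||w||^2 + (1/m) sum_i ell(w; (x_i, y_i)), as in Eq. (1)
    m = X.shape[0]
    losses = np.maximum(0.0, 1.0 - y * (X @ w))
    return 0.5 * lam * np.dot(w, w) + np.sum(losses) / m
```

In these terms, an $\epsilon$-accurate solution $\hat{\mathbf{w}}$ is any vector whose `svm_objective` value is within $\epsilon$ of the minimum.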
We describe and analyze in this paper a simple iterative algorithm, called Pegasos, for solving Eq. (1). The algorithm performs $T$ iterations and also requires an additional parameter $k$, whose role is explained in the sequel. Pegasos alternates between stochastic subgradient descent steps and projection steps. The parameter $k$ determines the number of examples from $S$ the algorithm uses on each iteration for estimating the subgradient. When $k = m$, Pegasos reduces to a variant of the subgradient projection method. We show that in this case the number of iterations required in order to achieve an $\epsilon$-accurate solution is $\tilde{O}(1/(\lambda\epsilon))$. At the other extreme, when $k = 1$, we recover a variant of the stochastic (sub)gradient method. In the stochastic case, we analyze the probability of obtaining a good approximate solution. Specifically, we show that with probability of at least $1 - \delta$ our algorithm finds an $\epsilon$-accurate solution using only $\tilde{O}(1/(\delta\lambda\epsilon))$ iterations, while each iteration involves a single inner product between $\mathbf{w}$ and $\mathbf{x}$. This rate of convergence does not depend on the size of the training set and thus our algorithm is especially suited for large datasets.
2. The Pegasos Algorithm
In this section we describe the Pegasos algorithm for solving the optimization problem given in Eq. (1). The algorithm receives as input two parameters: $T$, the number of iterations to perform, and $k$, the number of examples to use for calculating sub-gradients. Initially, we set $\mathbf{w}_1$ to any vector whose norm is at most $1/\sqrt{\lambda}$. On iteration $t$ of the algorithm, we first choose a set $A_t \subseteq S$ of size $k$. Then, we replace the objective in Eq. (1) with an approximate objective function,
$$f(\mathbf{w}; A_t) = \frac{\lambda}{2}\|\mathbf{w}\|^2 \;+\; \frac{1}{k}\sum_{(\mathbf{x},y)\in A_t} \ell(\mathbf{w}; (\mathbf{x},y)).$$
Note that we overloaded our original definition of $f$, as the original objective can be denoted either as $f(\mathbf{w})$ or as $f(\mathbf{w}; S)$. We interchangeably use both notations depending on the context. Next, we set the learning rate $\eta_t = \frac{1}{\lambda t}$ and define $A_t^+$ to be the set of examples in $A_t$ for which $\mathbf{w}_t$ suffers a non-zero loss. We now perform a two-step update as follows. We scale $\mathbf{w}_t$ by $(1 - \eta_t \lambda)$ and for all examples $(\mathbf{x}, y) \in A_t^+$ we add to $\mathbf{w}_t$ the vector $\frac{\eta_t}{k}\, y\, \mathbf{x}$. We denote the resulting vector by $\mathbf{w}_{t+\frac{1}{2}}$. This step can also be written as $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t \nabla_t$, where
$$\nabla_t = \lambda\, \mathbf{w}_t \;-\; \frac{1}{k}\sum_{(\mathbf{x},y)\in A_t^+} y\, \mathbf{x}. \tag{3}$$
The definition of the hinge-loss implies that $\nabla_t$ is a sub-gradient of $f(\mathbf{w}; A_t)$ at $\mathbf{w}_t$. Last, we set $\mathbf{w}_{t+1}$ to be the projection of $\mathbf{w}_{t+\frac{1}{2}}$ onto the set
$$B = \left\{ \mathbf{w} : \|\mathbf{w}\| \le \tfrac{1}{\sqrt{\lambda}} \right\}. \tag{4}$$
That is, $\mathbf{w}_{t+1}$ is obtained by scaling $\mathbf{w}_{t+\frac{1}{2}}$ by $\min\left\{1, \frac{1/\sqrt{\lambda}}{\|\mathbf{w}_{t+\frac{1}{2}}\|}\right\}$. As we show in our analysis below, the optimal solution of SVM is in the set $B$. Informally speaking, we can always project back onto the set $B$, as doing so only brings us closer to the optimum. The output of Pegasos is the last vector $\mathbf{w}_{T+1}$.
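The two-step update just described is short enough to write out in full. The following NumPy sketch implements one run of Pegasos under the assumption of dense feature vectors; the zero initialization, sampling $A_t$ without replacement, and the default random seed are implementation choices of ours, not prescribed by the paper.

```python
import numpy as np

def pegasos(X, y, lam, T, k, rng=None):
    """Sketch of Pegasos: X is (m, n), y is (m,) with labels in {+1, -1},
    lam is the regularization parameter, T the number of iterations,
    k the mini-batch size. Returns the final weight vector."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = X.shape
    w = np.zeros(n)                                 # norm 0 <= 1/sqrt(lam), a valid w_1
    for t in range(1, T + 1):
        A = rng.choice(m, size=k, replace=False)    # choose A_t of size k
        eta = 1.0 / (lam * t)                       # learning rate eta_t = 1 / (lam * t)
        margins = y[A] * (X[A] @ w)
        plus = A[margins < 1.0]                     # A_t^+: examples with non-zero hinge loss
        # gradient step: w <- (1 - eta*lam) * w + (eta/k) * sum_{(x,y) in A_t^+} y * x
        w = (1.0 - eta * lam) * w
        if plus.size > 0:
            w += (eta / k) * (y[plus] @ X[plus])
        # projection step: scale w back into the ball B of radius 1/sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, (1.0 / np.sqrt(lam)) / norm)
    return w
```

A call such as `pegasos(X, y, lam=1e-4, T=10000, k=1)` corresponds to the purely stochastic variant discussed next.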
Note that if we choose $A_t = S$ on each round $t$, then we obtain the sub-gradient projection method. At the other extreme, if we choose $A_t$ to contain a single randomly selected example, then we recover a variant of the stochastic gradient method. In general, we allow $A_t$ to be a set of $k$ examples sampled i.i.d. from $S$.
We conclude this section with a short discussion of implementation details when the instances are sparse, namely, when each instance has very few non-zero elements. In this case, we can represent $\mathbf{w}$ as a triplet $(\mathbf{v}, a, \nu)$, where $\mathbf{v}$ is a dense vector and $a, \nu$ are scalars. The vector $\mathbf{w}$ is defined through the triplet as follows: $\mathbf{w} = a\,\mathbf{v}$, and $\nu$ stores the squared norm of $\mathbf{w}$, i.e. $\nu = \|\mathbf{w}\|^2 = a^2\|\mathbf{v}\|^2$. Using this representation, it is easily verified that the total number of operations required for performing one iteration of Pegasos with $k = 1$ is $O(d)$, where $d$ is the number of non-zero elements in $\mathbf{x}$.
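A minimal sketch of this representation is given below, assuming a sparse example is stored as a dictionary mapping feature index to value; the class and method names are ours. Scaling $\mathbf{w}$ touches only the two scalars, and adding a sparse example touches only its $d$ non-zero coordinates, which is what yields the $O(d)$ per-iteration cost. In practice the scale factor must be kept away from zero (for instance by resetting the triplet), since the first Pegasos step scales $\mathbf{w}_1$ by $1 - \eta_1\lambda = 0$.

```python
from math import sqrt

class ScaledVector:
    # Sketch of the triplet (v, a, nu) described above: w = a * v and nu = ||w||^2.
    # Caller must keep a != 0 (reset the triplet if a scale of 0 is ever applied).

    def __init__(self, dim):
        self.v = [0.0] * dim   # dense vector v
        self.a = 1.0           # scalar a
        self.nu = 0.0          # squared norm of w

    def scale(self, c):
        # w <- c * w in O(1): only the scalars change
        self.a *= c
        self.nu *= c * c

    def add_sparse(self, x, c):
        # w <- w + c * x for sparse x = {index: value}, in O(d) time;
        # since w = a * v, adding c * x to w means adding (c / a) * x to v
        for i, x_i in x.items():
            old_wi = self.a * self.v[i]
            self.v[i] += (c / self.a) * x_i
            new_wi = self.a * self.v[i]
            self.nu += new_wi * new_wi - old_wi * old_wi

    def dot_sparse(self, x):
        # <w, x> for sparse x, in O(d) time
        return self.a * sum(self.v[i] * x_i for i, x_i in x.items())

    def norm(self):
        return sqrt(max(self.nu, 0.0))
```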
3. Analysis
In this section we analyze the convergence properties of Pegasos. Throughout this section we denote
$$\mathbf{w}^\star = \operatorname*{arg\,min}_{\mathbf{w}} f(\mathbf{w}). \tag{5}$$
Recall that on each iteration of the algorithm, we focus on an instantaneous objective function $f(\mathbf{w}; A_t)$.