Pegasos: Primal Estimated sub-GrAdient Solver for SVM
Abstract
We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy $\epsilon$ is $\tilde{O}(1/\epsilon)$. In contrast, previous analyses of stochastic gradient descent methods require $\Omega(1/\epsilon^2)$ iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with $1/\lambda$, where $\lambda$ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is $\tilde{O}(d/(\lambda\epsilon))$, where $d$ is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1), a collection with hundreds of thousands of training examples.
1. Introduction
Support Vector Machines (SVMs) are an effective and popular classification learning tool. The task of learning a support vector machine is typically cast as a constrained quadratic programming problem. However, in its native form, it is in fact an unconstrained empirical loss minimization problem with a penalty term for the norm of the classifier that is being learned. Formally, given a training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, we would like to find the minimizer of the problem
$$\min_{\mathbf{w}} \; \frac{\lambda}{2} \|\mathbf{w}\|^2 + \frac{1}{m} \sum_{(\mathbf{x}, y) \in S} \ell(\mathbf{w}; (\mathbf{x}, y)), \tag{1}$$

where

$$\ell(\mathbf{w}; (\mathbf{x}, y)) = \max\{0, \; 1 - y \langle \mathbf{w}, \mathbf{x} \rangle\}.$$
We denote the objective function of Eq. (1) by $f(\mathbf{w})$. An optimization method finds an $\epsilon$-accurate solution $\hat{\mathbf{w}}$ if $f(\hat{\mathbf{w}}) \le \min_{\mathbf{w}} f(\mathbf{w}) + \epsilon$. The original SVM problem also includes a bias term, $b$. We omit the bias throughout the first sections and defer the description of an extension which employs a bias term to Sec. 4.
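For concreteness, the objective in Eq. (1) can be evaluated directly. The following NumPy sketch (the function and variable names are ours, not part of the paper) computes the regularized hinge-loss objective for a dense weight vector:

```python
import numpy as np

def hinge_loss(w, x, y):
    """ell(w; (x, y)) = max{0, 1 - y <w, x>}."""
    return max(0.0, 1.0 - y * np.dot(w, x))

def svm_objective(w, X, Y, lam):
    """Primal SVM objective of Eq. (1): (lam/2) * ||w||^2 + average hinge loss over S."""
    reg = 0.5 * lam * np.dot(w, w)
    emp = np.mean([hinge_loss(w, x, y) for x, y in zip(X, Y)])
    return reg + emp
```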
We describe and analyze in this paper a simple iterative algorithm, called Pegasos, for solving Eq. (1). The algorithm performs $T$ iterations and also requires an additional parameter $k$, whose role is explained in the sequel. Pegasos alternates between stochastic subgradient descent steps and projection steps. The parameter $k$ determines the number of examples from $S$ that the algorithm uses on each iteration for estimating the subgradient. When $k = m$, Pegasos reduces to a variant of the subgradient projection method. We show that in this case the number of iterations required in order to achieve an $\epsilon$-accurate solution is $\tilde{O}(1/(\lambda\epsilon))$. At the other extreme, when $k = 1$, we recover a variant of the stochastic (sub)gradient method. In the stochastic case, we analyze the probability of obtaining a good approximate solution. Specifically, we show that with probability of at least $1 - \delta$ our algorithm finds an $\epsilon$-accurate solution using only $\tilde{O}(1/(\delta\lambda\epsilon))$ iterations, while each iteration involves a single inner product between $\mathbf{w}$ and $\mathbf{x}$. This rate of convergence does not depend on the size of the training set and thus our algorithm is especially suited for large datasets.
2. The Pegasos Algorithm
In this section we describe the Pegasos algorithm for solving the optimization problem given in Eq. (1). The algorithm receives as input two parameters: $T$, the number of iterations to perform, and $k$, the number of examples to use for calculating sub-gradients. Initially, we set $\mathbf{w}_1$ to any vector whose norm is at most $1/\sqrt{\lambda}$. On iteration $t$ of the algorithm, we first choose a set $A_t \subseteq S$ of size $k$. Then, we replace the objective in Eq. (1) with an approximate objective function,
$$f(\mathbf{w}; A_t) = \frac{\lambda}{2} \|\mathbf{w}\|^2 + \frac{1}{k} \sum_{(\mathbf{x}, y) \in A_t} \ell(\mathbf{w}; (\mathbf{x}, y)).$$
Note that we overloaded our original definition of $f$: the original objective can be denoted either as $f(\mathbf{w})$ or as $f(\mathbf{w}; S)$. We interchangeably use both notations depending on the context. Next, we set the learning rate $\eta_t = 1/(\lambda t)$ and define $A_t^+$ to be the set of examples in $A_t$ for which $\mathbf{w}_t$ suffers a non-zero loss. We now perform a two-step update as follows. We scale $\mathbf{w}_t$ by $(1 - \eta_t \lambda)$ and for all examples $(\mathbf{x}, y) \in A_t^+$ we add to $\mathbf{w}_t$ the vector $\frac{\eta_t}{k}\, y\, \mathbf{x}$. We denote the resulting vector by $\mathbf{w}_{t+\frac{1}{2}}$. This step can also be written as $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t \nabla_t$, where
$$\nabla_t = \lambda\, \mathbf{w}_t - \frac{1}{k} \sum_{(\mathbf{x}, y) \in A_t^+} y\, \mathbf{x}.$$
The definition of the hinge-loss implies that $\nabla_t$ is a sub-gradient of $f(\mathbf{w}; A_t)$ at $\mathbf{w}_t$. Last, we set $\mathbf{w}_{t+1}$ to be the projection of $\mathbf{w}_{t+\frac{1}{2}}$ onto the set
$$B = \{\mathbf{w} : \|\mathbf{w}\| \le 1/\sqrt{\lambda}\}.$$
That is, $\mathbf{w}_{t+1}$ is obtained by scaling $\mathbf{w}_{t+\frac{1}{2}}$ by $\min\{1,\; (1/\sqrt{\lambda}) / \|\mathbf{w}_{t+\frac{1}{2}}\|\}$. As we show in our analysis below, the optimal solution of SVM is in the set $B$. Informally speaking, we can always project back onto the set $B$, as doing so only brings us closer to the optimum. The output of Pegasos is the last vector, $\mathbf{w}_{T+1}$.
Note that if we choose $A_t = S$ on each round $t$, then we obtain the sub-gradient projection method. At the other extreme, if we choose $A_t$ to contain a single randomly selected example, then we recover a variant of the stochastic gradient method. In general, we allow $A_t$ to be a set of $k$ examples sampled i.i.d. from $S$.
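Putting the steps above together, a single run of the algorithm can be sketched as follows. This is a minimal dense-vector sketch under our own naming conventions (function and variable names are not from the paper), not the authors' reference implementation:

```python
import numpy as np

def pegasos(X, Y, lam, T, k, seed=0):
    """Minimal Pegasos sketch. X: (m, n) NumPy array of examples, Y: length-m array of labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)  # any vector with norm at most 1/sqrt(lam) is a valid w_1
    for t in range(1, T + 1):
        A = rng.integers(0, m, size=k)      # A_t: k examples sampled i.i.d. from S
        eta = 1.0 / (lam * t)               # learning rate eta_t = 1 / (lambda * t)
        margins = Y[A] * (X[A] @ w)
        plus = A[margins < 1.0]             # A_t^+: examples suffering non-zero hinge loss
        # Sub-gradient step: w_{t+1/2} = (1 - eta*lam) * w_t + (eta/k) * sum_{(x,y) in A_t^+} y*x
        w = (1.0 - eta * lam) * w
        if plus.size > 0:
            w = w + (eta / k) * (Y[plus] @ X[plus])
        # Projection step onto B = {w : ||w|| <= 1/sqrt(lam)}
        norm = np.linalg.norm(w)
        if norm > 1.0 / np.sqrt(lam):
            w = w * (1.0 / (np.sqrt(lam) * norm))
    return w
```

Choosing k = 1 gives the stochastic variant and k = m the deterministic sub-gradient projection method discussed above (with k = m one would simply take A_t = S rather than sample).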
We conclude this section with a short discussion of implementation details when the instances are sparse, namely, when each instance has very few non-zero elements. In this case, we can represent $\mathbf{w}$ as a triplet $(\mathbf{v}, a, \nu)$, where $\mathbf{v}$ is a dense vector and $a, \nu$ are scalars. The vector $\mathbf{w}$ is defined through the triplet as follows: $\mathbf{w} = a\, \mathbf{v}$, and $\nu$ stores the squared norm of $\mathbf{w}$, that is, $\nu = \|\mathbf{w}\|^2 = a^2 \|\mathbf{v}\|^2$. Using this representation, it is easily verified that the total number of operations required for performing one iteration of Pegasos with $k = 1$ is $O(d)$, where $d$ is the number of non-zero elements in $\mathbf{x}$.
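To illustrate the triplet representation, here is one possible bookkeeping scheme; the class name, the dictionary-based sparse example format, and the helper methods are our own illustrative choices rather than the paper's:

```python
class ScaledVector:
    """w stored as a triplet (v, a, nu) with w = a * v and nu = ||w||^2."""
    def __init__(self, n):
        self.v = [0.0] * n   # dense vector v
        self.a = 1.0         # scale factor a
        self.nu = 0.0        # squared norm of w

    def scale(self, c):
        # Scaling w by c touches only the two scalars: O(1).
        # (A full implementation would reset the triplet when c == 0, e.g. at t = 1, where 1 - eta_1*lambda = 0.)
        self.a *= c
        self.nu *= c * c

    def dot(self, x):
        # x is a sparse example given as {index: value}; O(d) work for d non-zeros.
        return self.a * sum(val * self.v[i] for i, val in x.items())

    def add(self, x, c):
        # w <- w + c * x, using ||w + c*x||^2 = nu + 2*c*<w, x> + c^2*||x||^2; O(d) work.
        self.nu += 2.0 * c * self.dot(x) + c * c * sum(val * val for val in x.values())
        for i, val in x.items():
            self.v[i] += (c / self.a) * val
```

With this bookkeeping, scaling $\mathbf{w}_t$ by $(1 - \eta_t \lambda)$ costs $O(1)$, the sub-gradient step touches only the $d$ non-zero coordinates of $\mathbf{x}$, and the projection step needs only $\nu$, which together give the $O(d)$ per-iteration cost claimed above.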
3. Analysis
In this section we analyze the convergence properties of Pegasos. Throughout this section we denote
$$\mathbf{w}^\star = \arg\min_{\mathbf{w}} f(\mathbf{w}).$$
Recall that on each iteration of the algorithm, we focus on an instantaneous objective function $f(\mathbf{w}; A_t)$, as defined in Sec. 2.