ML | SVM
What's an SVM
An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. The transformation may be nonlinear and the transformed space high-dimensional; thus, although the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space.
Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. We choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier.
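As a concrete sketch (the code and toy data below are not from the original text), scikit-learn's SVC with a linear kernel and a very large C approximates such a maximum margin classifier; the support vectors it finds are exactly the points on the margin.

```python
# A minimal sketch of a (near) hard-margin linear SVM, assuming scikit-learn
# is available; the 2-D toy data is invented for illustration.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class -1 cluster
              [4.0, 4.0], [4.5, 5.0], [5.0, 4.5]])  # class +1 cluster
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C leaves almost no slack, approximating the hard margin.
clf = SVC(kernel="linear", C=1e10)
clf.fit(X, y)

print("support vectors:\n", clf.support_vectors_)       # points on the margin
print("w =", clf.coef_[0], " b =", -clf.intercept_[0])  # hyperplane w.x - b = 0
print("prediction for (3, 3):", clf.predict([[3.0, 3.0]]))
```

Note that scikit-learn writes its decision function as $\mathbf{w}\cdot\mathbf{x} + \text{intercept}$, so in the convention used below the offset is $b = -\text{intercept}$.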
Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying
$\mathbf{w}\cdot\mathbf{x} - b=0,\,$
where $\cdot$ denotes the dot product and ${\mathbf{w}}$ the (not necessarily normalized) normal vector to the hyperplane.
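To make the decision rule concrete, here is a tiny numpy sketch (the values of $\mathbf{w}$ and $b$ are invented): a point is assigned to a class by the sign of $\mathbf{w}\cdot\mathbf{x} - b$.

```python
# Classifying by the sign of w.x - b; w and b are made-up example values.
import numpy as np

w = np.array([1.0, 1.0])  # normal vector of the hyperplane
b = 6.0                   # offset: the hyperplane is x1 + x2 = 6

def classify(x):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    return 1 if np.dot(w, x) - b > 0 else -1

print(classify(np.array([4.0, 4.0])))  # +1: w.x - b =  2 > 0
print(classify(np.array([1.0, 2.0])))  # -1: w.x - b = -3 < 0
```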

(Figure: maximum-margin hyperplane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors.)
Maximizing the margin can be converted into a quadratic programming optimization problem. The solution can be expressed as a linear combination of the training vectors
$\mathbf{w} = \sum_{i=1}^n{\alpha_i y_i\mathbf{x_i}}.$
Only a few $\alpha_i$ will be greater than zero. The corresponding $\mathbf{x_i}$ are exactly the support vectors, which lie on the margin and satisfy $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) = 1$. From this one can derive that the support vectors also satisfy
$\mathbf{w}\cdot\mathbf{x_i} - b = 1 / y_i = y_i \iff b = \mathbf{w}\cdot\mathbf{x_i} - y_i$ (since $y_i \in \{-1, +1\}$, $1/y_i = y_i$),
which allows one to define the offset b. In practice, it is more robust to average over all $N_{SV}$ support vectors:
$b = \frac{1}{N_{SV}} \sum_{i=1}^{N_{SV}}{(\mathbf{w}\cdot\mathbf{x_i} - y_i)}$
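A minimal sketch of this averaging step, assuming $\mathbf{w}$, the support vectors, and their labels are already known (the arrays below are illustrative, chosen so that $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) = 1$ holds):

```python
# Offset b averaged over support vectors: b = (1/N_SV) * sum_i (w.x_i - y_i).
# w, the support vectors, and their labels are assumed known; values invented.
import numpy as np

w = np.array([0.5, 0.5])                 # weight vector
sv = np.array([[1.0, 1.0], [3.0, 3.0]])  # support vectors x_i
sv_y = np.array([-1.0, 1.0])             # their labels y_i

b = np.mean(sv @ w - sv_y)
print("b =", b)  # (1 - (-1) + 3 - 1) / 2 = 2.0; check: y_i * (w.x_i - b) = 1
```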
Writing the classification rule in its unconstrained dual form reveals that the maximum-margin hyperplane and therefore the classification task is only a function of the support vectors, the subset of the training data that lie on the margin.
Using the fact that $\|\mathbf{w}\|^2 = \mathbf{w}\cdot \mathbf{w}$ and substituting $\mathbf{w} = \sum_{i=1}^n{\alpha_i y_i\mathbf{x_i}}$, one can show that the dual of the SVM reduces to the following optimization problem:
Maximize (in $\alpha_i$)
$\tilde{L}(\mathbf{\alpha})=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T \mathbf{x}_j=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j k(\mathbf{x}_i, \mathbf{x}_j)$
subject to (for any $i = 1, \dots, n$)
$\alpha_i \geq 0,\, $
and to the constraint from the minimization in $b$
$\sum_{i=1}^n \alpha_i y_i = 0.$
Here the kernel is defined by $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i\cdot\mathbf{x}_j.$
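The kernel trick mentioned above amounts to replacing this dot product with another positive-definite kernel; the dual problem is otherwise unchanged. A sketch of two interchangeable kernels (the RBF width gamma is an arbitrary illustrative choice):

```python
# Swapping k() in the dual objective is the kernel trick; the linear kernel
# recovers the plain dot product, the RBF kernel gives a nonlinear classifier.
import numpy as np

def linear_kernel(xi, xj):
    return np.dot(xi, xj)                # k(x_i, x_j) = x_i . x_j

def rbf_kernel(xi, xj, gamma=0.5):       # gamma is an illustrative value
    return np.exp(-gamma * np.sum((xi - xj) ** 2))
```

With a non-linear kernel, $\mathbf{w}$ lives in the implicit feature space and is not computed explicitly; predictions then use the expansion $\sum_i \alpha_i y_i k(\mathbf{x}_i, \mathbf{x})$ directly.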
In the linear case, the weight vector $\mathbf{w}$ can be computed explicitly from the $\alpha$ terms:
$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i.$
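Putting the pieces together, here is a hedged sketch that solves this dual with scipy's generic SLSQP optimizer rather than a dedicated QP/SMO solver (the linearly separable toy data is invented), then recovers $\mathbf{w}$ and $b$:

```python
# Sketch: maximize the dual L~(alpha) subject to alpha_i >= 0 and
# sum_i alpha_i y_i = 0, using a generic optimizer; a real SVM implementation
# would use SMO or a QP solver. Data is invented for illustration.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 1.5], [4.0, 4.0], [4.5, 5.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
n = len(y)

K = X @ X.T                                   # linear kernel matrix
Q = (y[:, None] * y[None, :]) * K             # Q_ij = y_i y_j k(x_i, x_j)

def neg_dual(alpha):                          # minimize -L~(alpha)
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

res = minimize(neg_dual, x0=np.zeros(n), method="SLSQP",
               bounds=[(0.0, None)] * n,                       # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x

w = (alpha * y) @ X                           # w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                             # support vectors have alpha_i > 0
b = np.mean(X[sv] @ w - y[sv])                # average over support vectors
print("alpha =", alpha.round(4))
print("w =", w.round(4), " b =", round(b, 4))
```

Replacing `K` with a matrix built from another kernel function turns the same sketch into a non-linear classifier; in that case predictions must use the $\alpha$ expansion instead of the explicit $\mathbf{w}$.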