Reading the Spark 0.9.0 MLlib Classification Code
This post walks through the classification algorithms implemented in MLlib as of Spark 0.9.0: LogisticRegression, SVM, and NaiveBayes. The first two plug their respective loss functions and regularizers into the stochastic gradient descent routine under the Optimization module, so their parallelism lies mainly in computing the stochastic (mini-batch) gradient; Naive Bayes is parallelized in estimating the class priors and the per-feature conditional probabilities. Details follow.
LogisticRegression.scala
/**
 * Classification model trained using Logistic Regression.
 *
 * @param weights Weights computed for every feature.
 * @param intercept Intercept computed for this model.
 */
class LogisticRegressionModel(
    override val weights: Array[Double],
    override val intercept: Double)
  extends GeneralizedLinearModel(weights, intercept)
  with ClassificationModel with Serializable {

  override def predictPoint(dataMatrix: DoubleMatrix, weightMatrix: DoubleMatrix,
      intercept: Double) = {
    val margin = dataMatrix.mmul(weightMatrix).get(0) + intercept
    round(1.0 / (1.0 + math.exp(margin * -1)))
  }
}
predictPoint for logistic regression takes the sample to score (dataMatrix), the regression weights (weightMatrix), and the intercept. The decision function is f(x) = 1 / (1 + exp(-(w·x + b))); in the code margin = w·x + b, and the method returns round(1 / (1 + exp(-margin))), i.e. the sigmoid output rounded to the predicted 0/1 label.
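For intuition, here is a minimal plain-Scala sketch of the same decision rule, using arrays instead of jblas matrices (an illustration only, not the MLlib source; the function name is made up):

// Hypothetical standalone version of LogisticRegressionModel.predictPoint:
// sigmoid of (w.x + intercept), rounded to a 0/1 label.
def predictLR(features: Array[Double], weights: Array[Double], intercept: Double): Double = {
  val margin = (features, weights).zipped.map(_ * _).sum + intercept
  math.round(1.0 / (1.0 + math.exp(-margin))).toDouble
}

// e.g. predictLR(Array(1.0, 2.0), Array(0.5, -0.25), 0.1) == 1.0, since sigmoid(0.1) > 0.5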
class LogisticRegressionWithSGD private (
    var stepSize: Double,
    var numIterations: Int,
    var regParam: Double,
    var miniBatchFraction: Double)
  extends GeneralizedLinearAlgorithm[LogisticRegressionModel]
  with Serializable {

  val gradient = new LogisticGradient()
  val updater = new SimpleUpdater()
  override val optimizer = new GradientDescent(gradient, updater)
    .setStepSize(stepSize)
    .setNumIterations(numIterations)
    .setRegParam(regParam)
    .setMiniBatchFraction(miniBatchFraction)
  override val validators = List(DataValidators.classificationLabels)

  /**
   * Construct a LogisticRegression object with default parameters
   */
  def this() = this(1.0, 100, 0.0, 1.0)

  def createModel(weights: Array[Double], intercept: Double) = {
    new LogisticRegressionModel(weights, intercept)
  }
}
The class first creates the gradient and updater instances (both defined under the optimization package): LogisticGradient uses the log-loss, and SimpleUpdater applies no regularization. It then overrides optimizer with a GradientDescent configured with the step size, number of iterations, regularization parameter, and mini-batch fraction; the auxiliary constructor supplies the defaults stepSize = 1.0, numIterations = 100, regParam = 0.0, miniBatchFraction = 1.0.
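As a rough picture of what LogisticGradient contributes per sample, here is a sketch of the log-loss gradient assuming 0/1 labels (an illustration, not the MLlib source):

// For one (features x, label y) pair at weights w:
//   p = sigmoid(w.x),  gradient = (p - y) * x,  loss = -y*log(p) - (1-y)*log(1-p)
def logisticGradient(x: Array[Double], y: Double, w: Array[Double]): (Array[Double], Double) = {
  val margin = (x, w).zipped.map(_ * _).sum
  val p = 1.0 / (1.0 + math.exp(-margin))
  val gradient = x.map(_ * (p - y))
  val loss = -y * math.log(p) - (1.0 - y) * math.log(1.0 - p)
  (gradient, loss)
}

On each iteration, GradientDescent averages such per-sample gradients over a sampled mini-batch and hands the result to the updater.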
object LogisticRegressionWithSGD {

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double,
      miniBatchFraction: Double,
      initialWeights: Array[Double])
    : LogisticRegressionModel =
  {
    new LogisticRegressionWithSGD(stepSize, numIterations, 0.0, miniBatchFraction).run(
      input, initialWeights)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double,
      miniBatchFraction: Double)
    : LogisticRegressionModel =
  {
    new LogisticRegressionWithSGD(stepSize, numIterations, 0.0, miniBatchFraction).run(
      input)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double)
    : LogisticRegressionModel =
  {
    train(input, numIterations, stepSize, 1.0)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int)
    : LogisticRegressionModel =
  {
    train(input, numIterations, 1.0, 1.0)
  }

  def main(args: Array[String]) {
    if (args.length != 4) {
      println("Usage: LogisticRegression <master> <input_dir> <step_size> " +
        "<niters>")
      System.exit(1)
    }
    val sc = new SparkContext(args(0), "LogisticRegression")
    val data = MLUtils.loadLabeledData(sc, args(1))
    val model = LogisticRegressionWithSGD.train(data, args(3).toInt, args(2).toDouble)
    println("Weights: " + model.weights.mkString("[", ", ", "]"))
    println("Intercept: " + model.intercept)
    sc.stop()
  }
}
The companion object offers four train overloads for different combinations of arguments. In main, MLUtils.loadLabeledData(sc, args(1)) parses an input file whose lines look like "<label>, <f1> <f2> ..." into an RDD[LabeledPoint]; LR is then trained on it, and the resulting weights and intercept are printed.
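Putting the pieces together, a hypothetical driver (the local master and file path are made up for illustration) might look like the following, adding a training-error check that the bundled main does not do:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.util.MLUtils

object LRExample {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "LRExample")
    // Each input line: "<label>, <f1> <f2> ..." -> RDD[LabeledPoint]
    val data = MLUtils.loadLabeledData(sc, "/tmp/lr_data.txt").cache()
    val model = LogisticRegressionWithSGD.train(data, 50, 1.0)
    // Fraction of training points the model gets wrong
    val trainError = data.map(p => if (model.predict(p.features) == p.label) 0.0 else 1.0).mean()
    println("Training error: " + trainError)
    sc.stop()
  }
}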
SVM.scala
class SVMModel(
    override val weights: Array[Double],
    override val intercept: Double)
  extends GeneralizedLinearModel(weights, intercept)
  with ClassificationModel with Serializable {

  override def predictPoint(dataMatrix: DoubleMatrix, weightMatrix: DoubleMatrix,
      intercept: Double) = {
    val margin = dataMatrix.dot(weightMatrix) + intercept
    if (margin < 0) 0.0 else 1.0
  }
}
class SVMWithSGD private (
    var stepSize: Double,
    var numIterations: Int,
    var regParam: Double,
    var miniBatchFraction: Double)
  extends GeneralizedLinearAlgorithm[SVMModel] with Serializable {

  val gradient = new HingeGradient()
  val updater = new SquaredL2Updater()
  override val optimizer = new GradientDescent(gradient, updater)
    .setStepSize(stepSize)
    .setNumIterations(numIterations)
    .setRegParam(regParam)
    .setMiniBatchFraction(miniBatchFraction)
  override val validators = List(DataValidators.classificationLabels)

  def this() = this(1.0, 100, 1.0, 1.0)

  def createModel(weights: Array[Double], intercept: Double) = {
    new SVMModel(weights, intercept)
  }
}
Analogous to LR, except that the gradient is a HingeGradient (gradient of the hinge loss) and the updater is a SquaredL2Updater (L2 regularization); the default constructor also sets regParam = 1.0.
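A plain-Scala sketch of the two swapped-in pieces, assuming 0/1 labels (an illustration of the hinge subgradient and of the L2 update step, not the MLlib source):

// Hinge subgradient for one sample: labels in {0,1} are mapped to {-1,+1};
// if the margin is violated (1 - y*w.x > 0) the subgradient is -y*x, otherwise zero.
def hingeGradient(x: Array[Double], label: Double, w: Array[Double]): (Array[Double], Double) = {
  val y = 2.0 * label - 1.0
  val dot = (x, w).zipped.map(_ * _).sum
  if (1.0 - y * dot > 0) (x.map(_ * -y), 1.0 - y * dot)
  else (Array.fill(x.length)(0.0), 0.0)
}

// One SGD step with L2 regularization, roughly what SquaredL2Updater does:
//   w <- w - (stepSize / sqrt(iter)) * (gradient + regParam * w)
def l2Step(w: Array[Double], grad: Array[Double], stepSize: Double, iter: Int, regParam: Double): Array[Double] = {
  val thisStep = stepSize / math.sqrt(iter)
  (w, grad).zipped.map((wi, gi) => wi - thisStep * (gi + regParam * wi))
}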
object SVMWithSGD {

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double,
      regParam: Double,
      miniBatchFraction: Double,
      initialWeights: Array[Double])
    : SVMModel =
  {
    new SVMWithSGD(stepSize, numIterations, regParam, miniBatchFraction).run(input,
      initialWeights)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double,
      regParam: Double,
      miniBatchFraction: Double)
    : SVMModel =
  {
    new SVMWithSGD(stepSize, numIterations, regParam, miniBatchFraction).run(input)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int,
      stepSize: Double,
      regParam: Double)
    : SVMModel =
  {
    train(input, numIterations, stepSize, regParam, 1.0)
  }

  def train(
      input: RDD[LabeledPoint],
      numIterations: Int)
    : SVMModel =
  {
    train(input, numIterations, 1.0, 1.0, 1.0)
  }

  def main(args: Array[String]) {
    if (args.length != 5) {
      println("Usage: SVM <master> <input_dir> <step_size> <regularization_parameter> <niters>")
      System.exit(1)
    }
    val sc = new SparkContext(args(0), "SVM")
    val data = MLUtils.loadLabeledData(sc, args(1))
    val model = SVMWithSGD.train(data, args(4).toInt, args(2).toDouble, args(3).toDouble)
    println("Weights: " + model.weights.mkString("[", ", ", "]"))
    println("Intercept: " + model.intercept)
    sc.stop()
  }
}
NaiveBayes.scala
class NaiveBayesModel(val pi: Array[Double], val theta: Array[Array[Double]])
  extends ClassificationModel with Serializable {

  // Create a column vector that can be used for predictions
  private val _pi = new DoubleMatrix(pi.length, 1, pi: _*)
  private val _theta = new DoubleMatrix(theta)

  def predict(testData: RDD[Array[Double]]): RDD[Double] = testData.map(predict)

  def predict(testData: Array[Double]): Double = {
    val dataMatrix = new DoubleMatrix(testData.length, 1, testData: _*)
    val result = _pi.add(_theta.mmul(dataMatrix))
    result.argmax()
  }
}
The Naive Bayes classifier. NaiveBayesModel holds the quantities learned during training: pi, the log prior of each class label (log P(y=0), ..., log P(y=K)), and theta, the log conditional probability of each feature given the class (log P(x_j | y)). With term-frequency features such as TF-IDF it can be used for text classification; with 0/1-encoded features it behaves like the Bernoulli model. The first predict takes an RDD of test samples, the second a single sample. Bayes' rule gives P(y | x) ∝ P(x | y) P(y); the implementation works in log space, so the product turns into the sum _pi.add(_theta.mmul(dataMatrix)), which avoids floating-point underflow (and trades multiplications for additions). Finally, result.argmax() returns the class with the largest log posterior.
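The same rule written against plain arrays, to make the log-space computation explicit (a sketch, not the MLlib code; the function name is made up):

// Pick the class k maximizing  pi(k) + sum_j x(j) * theta(k)(j),
// i.e. log P(y = k) + sum_j x_j * log P(feature j | y = k),
// which is what _pi.add(_theta.mmul(dataMatrix)).argmax() computes.
def predictNB(pi: Array[Double], theta: Array[Array[Double]], x: Array[Double]): Int = {
  val logPosterior = pi.indices.map(k => pi(k) + (theta(k), x).zipped.map(_ * _).sum)
  logPosterior.indexOf(logPosterior.max)
}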
class NaiveBayes private (var lambda: Double)
  extends Serializable with Logging
{
  def this() = this(1.0)

  /** Set the smoothing parameter. Default: 1.0. */
  def setLambda(lambda: Double): NaiveBayes = {
    this.lambda = lambda
    this
  }

  def run(data: RDD[LabeledPoint]) = {
    val zeroCombiner = mutable.Map.empty[Int, (Int, DoubleMatrix)]
    val aggregated = data.aggregate(zeroCombiner)({ (combiner, point) =>
      point match {
        case LabeledPoint(label, features) =>
          val (count, featuresSum) = combiner.getOrElse(label.toInt, (0, DoubleMatrix.zeros(1)))
          val fs = new DoubleMatrix(features.length, 1, features: _*)
          combiner += label.toInt -> (count + 1, featuresSum.addi(fs))
      }
    }, { (lhs, rhs) =>
      for ((label, (c, fs)) <- rhs) {
        val (count, featuresSum) = lhs.getOrElse(label, (0, DoubleMatrix.zeros(1)))
        lhs(label) = (count + c, featuresSum.addi(fs))
      }
      lhs
    })

    // Kinds of label
    val C = aggregated.size
    // Total sample count
    val N = aggregated.values.map(_._1).sum

    val pi = new Array[Double](C)
    val theta = new Array[Array[Double]](C)
    val piLogDenom = math.log(N + C * lambda)

    for ((label, (count, fs)) <- aggregated) {
      val thetaLogDenom = math.log(fs.sum() + fs.length * lambda)
      pi(label) = math.log(count + lambda) - piLogDenom
      theta(label) = fs.toArray.map(f => math.log(f + lambda) - thetaLogDenom)
    }

    new NaiveBayesModel(pi, theta)
  }
}
This class implements the training. The lambda parameter avoids zero probabilities P(X | Y) = 0 (Laplace, i.e. additive, smoothing). The core is data.aggregate: zeroCombiner is a mutable Map keyed by class label whose value is an (Int, DoubleMatrix) pair, where the Int is the number of training samples of that class (used for the prior) and the DoubleMatrix is the element-wise sum of that class's feature vectors; after aggregation these sums are smoothed and normalized into the log priors pi and the log conditional probabilities theta.
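To make the driver-side step concrete, here is a standalone sketch of turning the aggregated (count, feature-sum) pairs into pi and theta with the same smoothing (a local Map stands in for the aggregate() result; the helper is hypothetical and assumes labels 0..numClasses-1):

// agg: label -> (sample count for that label, element-wise sum of its feature vectors)
def estimate(agg: Map[Int, (Int, Array[Double])], lambda: Double): (Array[Double], Array[Array[Double]]) = {
  val numClasses = agg.size
  val numSamples = agg.values.map(_._1).sum
  val pi = new Array[Double](numClasses)
  val theta = new Array[Array[Double]](numClasses)
  val piLogDenom = math.log(numSamples + numClasses * lambda)
  for ((label, (count, sums)) <- agg) {
    val thetaLogDenom = math.log(sums.sum + sums.length * lambda)
    pi(label) = math.log(count + lambda) - piLogDenom                  // smoothed log prior
    theta(label) = sums.map(s => math.log(s + lambda) - thetaLogDenom) // smoothed log conditionals
  }
  (pi, theta)
}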
object NaiveBayes {

  def train(input: RDD[LabeledPoint]): NaiveBayesModel = {
    new NaiveBayes().run(input)
  }

  def train(input: RDD[LabeledPoint], lambda: Double): NaiveBayesModel = {
    new NaiveBayes(lambda).run(input)
  }

  def main(args: Array[String]) {
    if (args.length != 2 && args.length != 3) {
      println("Usage: NaiveBayes <master> <input_dir> [<lambda>]")
      System.exit(1)
    }
    val sc = new SparkContext(args(0), "NaiveBayes")
    val data = MLUtils.loadLabeledData(sc, args(1))
    val model = if (args.length == 2) {
      NaiveBayes.train(data)
    } else {
      NaiveBayes.train(data, args(2).toDouble)
    }
    println("Pi: " + model.pi.mkString("[", ", ", "]"))
    println("Theta:\n" + model.theta.map(_.mkString("[", ", ", "]")).mkString("[", "\n ", "]"))
    sc.stop()
  }
}
Naive Bayes training comes in two flavours, with and without the lambda parameter. main creates a SparkContext, loads the data set as an RDD[LabeledPoint], trains, and prints pi and theta. As a bit of trivia, this algorithm was contributed by an Intel engineer who goes by 灵魂机器 on Weibo; you can follow him on GitHub at https://github.com/soulmachine