Cross-validating decision trees in Spark
from pyspark import SparkContext, SQLContext
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

sc = SparkContext(appName="DecisionTreeCrossValidation")
sqlContext = SQLContext(sc)

# Load the data stored in LIBSVM format as a DataFrame.
data = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

# Index labels, adding metadata to the label column.
# Fit on the whole dataset to include all labels in the index.
labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel").fit(data)

# Automatically identify categorical features, and index them.
# We specify maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer = \
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(data)

# Split the data into training and test sets (30% held out for testing).
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a DecisionTree model.
dt = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures")

# Chain indexers and tree in a Pipeline.
pipeline = Pipeline(stages=[labelIndexer, featureIndexer, dt])

# Train the model. This also runs the indexers.
model = pipeline.fit(trainingData)

# Make predictions.
predictions = model.transform(testData)

# Select example rows to display.
predictions.select("prediction", "indexedLabel", "features").show(5)

# Select (prediction, true label) and compute the test error.
evaluator = MulticlassClassificationEvaluator(
    labelCol="indexedLabel", predictionCol="prediction", metricName="precision")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))

# Summary only: the fitted tree is stage 2 of the pipeline.
treeModel = model.stages[2]
print(treeModel)
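If you also want predictions reported with the original label values rather than the indexed ones, an IndexToString stage can be applied. This is a minimal sketch, assuming the labelIndexer and predictions defined above; the output column name predictedLabel is our own choice:

from pyspark.ml.feature import IndexToString

# Map indexed predictions back to the original label strings.
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel",
                               labels=labelIndexer.labels)
labelConverter.transform(predictions).select("predictedLabel", "indexedLabel").show(5)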
#############################
# Hyper-parameter tuning with cross-validation.
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Create a ParamGrid for cross-validation, searching over the tree's hyper-parameters.
paramGrid = (ParamGridBuilder()
             .addGrid(dt.maxDepth, [2, 5, 10])
             .addGrid(dt.maxBins, [16, 32, 64])
             .build())
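Each addGrid call multiplies the number of candidates: the two three-value grids above yield 3 × 3 = 9 parameter maps, so the 5-fold CrossValidator below trains and evaluates 9 × 5 = 45 models in total. A quick check:

print(len(paramGrid))  # 9 parameter maps; numFolds * len(paramGrid) models get fit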
# Create a 5-fold CrossValidator.
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid,
                    evaluator=evaluator, numFolds=5)

# Run cross-validation. This will likely take a fair amount of time because of
# the number of models being created and tested.
cvModel = cv.fit(trainingData)

# Use the test set here so we measure accuracy on new data;
# cvModel.transform applies the best model found by the cross-validation.
predictions = cvModel.transform(testData)

# Evaluate the best model.
print(evaluator.evaluate(predictions))

# The best model is a PipelineModel; its last stage is the fitted decision tree.
print(cvModel.bestModel.stages[-1])
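Depending on your Spark version, the fitted CrossValidatorModel also exposes avgMetrics, the evaluator's metric for each grid point averaged over the folds. A sketch, assuming the paramGrid and cvModel above (avgMetrics appeared in PySpark's CrossValidatorModel in later releases):

# One averaged metric per parameter map, in the same order as paramGrid.
for params, metric in zip(paramGrid, cvModel.avgMetrics):
    print({p.name: v for p, v in params.items()}, metric)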
Spark ML provides a CrossValidator class which can be used to perform cross-validation and parameter search. Assuming your data is already preprocessed, you can add cross-validation as follows:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

// [label: double, features: vector]
val trainingData: org.apache.spark.sql.DataFrame = ???
val nFolds: Int = ???
val NumTrees: Int = ???
val metric: String = ???

val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(NumTrees)

val pipeline = new Pipeline().setStages(Array(rf))

// No parameter search
val paramGrid = new ParamGridBuilder().build()

val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("label")
  .setPredictionCol("prediction")
  // "f1" (default), "weightedPrecision", "weightedRecall", "accuracy"
  .setMetricName(metric)

val cv = new CrossValidator()
  // ml.Pipeline with ml.classification.RandomForestClassifier
  .setEstimator(pipeline)
  // ml.evaluation.MulticlassClassificationEvaluator
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(nFolds)

// trainingData: DataFrame
val model = cv.fit(trainingData)
Using PySpark:

from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

trainingData = ...  # DataFrame[label: double, features: vector]
numFolds = ...      # int
rf = RandomForestClassifier(labelCol="label", featuresCol="features")
evaluator = MulticlassClassificationEvaluator()  # + other params as in Scala
pipeline = Pipeline(stages=[rf])
paramGrid = ParamGridBuilder().build()  # No parameter search, as in Scala

crossval = CrossValidator(
    estimator=pipeline,
    estimatorParamMaps=paramGrid,
    evaluator=evaluator,
    numFolds=numFolds)
model = crossval.fit(trainingData)
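As a quick usage sketch, the fitted CrossValidatorModel transforms data like any other model. Here testData is a hypothetical held-out DataFrame with the same schema as trainingData; it is not defined in the snippet above:

# testData: hypothetical held-out DataFrame[label: double, features: vector]
predictions = model.transform(testData)
print(evaluator.evaluate(predictions))  # "f1" by default for this evaluator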