class pyspark.mllib.tree.RandomForest[source]

Learning algorithm for a random forest model for classification or regression.

New in version 1.2.0.

supportedFeatureSubsetStrategies = ('auto', 'all', 'sqrt', 'log2', 'onethird')
classmethod trainClassifier(data, numClasses, categoricalFeaturesInfo, numTrees, featureSubsetStrategy='auto', impurity='gini', maxDepth=4, maxBins=32, seed=None)[source]

Train a random forest model for binary or multiclass classification.

Parameters:
  • data – Training dataset: RDD of LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}.
  • numClasses – Number of classes for classification.
  • categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
  • numTrees – Number of trees in the random forest.
  • featureSubsetStrategy – Number of features to consider for splits at each node. Supported values: “auto”, “all”, “sqrt”, “log2”, “onethird”. If “auto” is set, this parameter is set based on numTrees: if numTrees == 1, set to “all”; if numTrees > 1 (forest) set to “sqrt”. (default: “auto”)
  • impurity – Criterion used for information gain calculation. Supported values: “gini” or “entropy”. (default: “gini”)
  • maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 4)
  • maxBins – Maximum number of bins used for splitting features. (default: 32)
  • seed – Random seed for bootstrapping and choosing feature subsets. Set as None to generate seed based on system time. (default: None)
Returns:

RandomForestModel that can be used for prediction.

Example usage:

>>> from pyspark.mllib.regression import LabeledPoint
>>> from pyspark.mllib.tree import RandomForest
>>>
>>> data = [
... LabeledPoint(0.0, [0.0]),
... LabeledPoint(0.0, [1.0]),
... LabeledPoint(1.0, [2.0]),
... LabeledPoint(1.0, [3.0])
... ]
>>> model = RandomForest.trainClassifier(sc.parallelize(data), 2, {}, 3, seed=42)
>>> model.numTrees()
3
>>> model.totalNumNodes()
7
>>> print(model)
TreeEnsembleModel classifier with 3 trees
>>> print(model.toDebugString())
TreeEnsembleModel classifier with 3 trees
  Tree 0:
    Predict: 1.0
  Tree 1:
    If (feature 0 <= 1.0)
     Predict: 0.0
    Else (feature 0 > 1.0)
     Predict: 1.0
  Tree 2:
    If (feature 0 <= 1.0)
     Predict: 0.0
    Else (feature 0 > 1.0)
     Predict: 1.0
>>> model.predict([2.0])
1.0
>>> model.predict([0.0])
0.0
>>> rdd = sc.parallelize([[3.0], [1.0]])
>>> model.predict(rdd).collect()
[1.0, 0.0]
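
A hedged sketch building on the example above (not part of the official docs; the toy dataset, the names mixed_data and model2, and the chosen parameter values are illustrative assumptions): it shows how the categoricalFeaturesInfo, featureSubsetStrategy, impurity, maxDepth, and maxBins parameters described above would be passed, reusing the same SparkContext sc. Feature 0 is treated as categorical with 3 categories; feature 1 is continuous.

>>> # Illustrative sketch, not from the Spark docs: feature 0 is categorical
>>> # with arity 3 ({0, 1, 2}), feature 1 is continuous.
>>> mixed_data = [
...     LabeledPoint(0.0, [0.0, 1.5]),
...     LabeledPoint(0.0, [1.0, 2.0]),
...     LabeledPoint(1.0, [2.0, 3.5]),
...     LabeledPoint(1.0, [2.0, 4.0])
... ]
>>> model2 = RandomForest.trainClassifier(
...     sc.parallelize(mixed_data),
...     numClasses=2,
...     categoricalFeaturesInfo={0: 3},  # feature 0 has 3 categories
...     numTrees=10,
...     featureSubsetStrategy="sqrt",    # the choice "auto" makes when numTrees > 1
...     impurity="entropy",
...     maxDepth=5,
...     maxBins=32,                      # must be >= the largest categorical arity
...     seed=42)
>>> model2.numTrees()
10

Since numTrees > 1 here, passing featureSubsetStrategy="sqrt" explicitly gives the same behaviour as the default "auto"; it is spelled out only to show where the value goes.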

New in version 1.2.0.

Source: https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.tree.DecisionTree
