Author: yangjing

Time: 2018-10-22


Gradient boosting decision tree

1. Main idea

The main idea behind GBDT is to combine many simple models (also known as weak learners), such as shallow trees. Each tree can provide good predictions on only part of the data, so more and more trees are added to iteratively improve performance.
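To make this concrete, here is a minimal hand-rolled sketch of the boosting loop for regression with squared-error loss, where each shallow tree is fit to the residuals of the ensemble so far. The toy data and names such as n_trees are illustrative, not from the original post:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression data (illustrative only)
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

learning_rate = 0.1
n_trees = 100

# Start from a constant prediction, then let each shallow tree
# fit the residuals (the negative gradient of squared error).
prediction = np.full_like(y, y.mean())
trees = []
for _ in range(n_trees):
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("training MSE:", np.mean((y - prediction) ** 2))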

2. Parameter settings

The algorithm is a bit more sensitive to parameter settings than random forests, but it can provide better accuracy if the parameters are set correctly. The key parameters are listed below; a small experiment after the list illustrates their effect.

  • number of trees

    Increasing n_estimators also increases model complexity, as the model has more chances to correct mistakes on the training set.
  • learning rate

    Controls how strongly each tree tries to correct the mistakes of the previous trees. A higher learning rate means each tree can make stronger corrections, allowing for more complex models.
  • max_depth

    Or, alternatively, max_leaf_nodes. Usually max_depth is set very low for gradient-boosted models, often not deeper than five splits.
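A small experiment, in the spirit of the code in the next section, that shows these trade-offs on the breast cancer data. This is a sketch, not part of the original post; the exact scores will vary, but lowering max_depth or learning_rate typically reduces overfitting here:

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# Compare the default model against a shallower one and a slower one.
for params in [{}, {"max_depth": 1}, {"learning_rate": 0.01}]:
    gbrt = GradientBoostingClassifier(random_state=0, **params)
    gbrt.fit(X_train, y_train)
    print(params,
          "train:", round(gbrt.score(X_train, y_train), 3),
          "test:", round(gbrt.score(X_test, y_test), 3))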

3. Code

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# Default settings: 100 trees of depth 3, learning rate 0.1
gbrt = GradientBoostingClassifier(random_state=0)
gbrt.fit(X_train, y_train)
gbrt.score(X_test, y_test)

In [261]: X_train,X_test,y_train,y_test=train_test_split(cancer.data,cancer.target,random_state=0)
     ...: gbrt=GradientBoostingClassifier(random_state=0)
     ...: gbrt.fit(X_train,y_train)
     ...: gbrt.score(X_test,y_test)
Out[261]: 0.958041958041958

In [262]: gbrt.feature_importances_
Out[262]:
array([0.01337291, 0.04201687, 0.0208666 , 0.01889077, 0.01028091,
       0.03215986, 0.02074619, 0.11678956, 0.00820024, 0.00074312,
       0.02042134, 0.00680047, 0.02023052, 0.03907398, 0.05406751,
       0.04795741, 0.02358101, 0.00934718, 0.00593481, 0.0239241 ,
       0.05354265, 0.06160083, 0.10961728, 0.07395201, 0.01867851,
       0.03842953, 0.01915824, 0.07128703, 0.01773659, 0.00059199])

In [263]: gbrt.learning_rate
Out[263]: 0.1

In [264]: gbrt.max_depth
Out[264]: 3

In [265]: len(gbrt.estimators_)
Out[265]: 100

In [272]: gbrt.get_params()
Out[272]:
{'criterion': 'friedman_mse',
 'init': None,
 'learning_rate': 0.1,
 'loss': 'deviance',
 'max_depth': 3,
 'max_features': None,
 'max_leaf_nodes': None,
 'min_impurity_decrease': 0.0,
 'min_impurity_split': None,
 'min_samples_leaf': 1,
 'min_samples_split': 2,
 'min_weight_fraction_leaf': 0.0,
 'n_estimators': 100,
 'presort': 'auto',
 'random_state': 0,
 'subsample': 1.0,
 'verbose': 0,
 'warm_start': False}
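Since the fitted model exposes feature_importances_ (Out[262] above), a natural follow-up is to plot the importances against the feature names. A minimal sketch, assuming matplotlib is available and reusing cancer and gbrt from the code above:

import numpy as np
import matplotlib.pyplot as plt

# Horizontal bar chart of the importances shown in Out[262],
# labelled with the dataset's feature names.
n_features = cancer.data.shape[1]
plt.barh(np.arange(n_features), gbrt.feature_importances_)
plt.yticks(np.arange(n_features), cancer.feature_names)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.tight_layout()
plt.show()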

Random forest

For comparison, the IPython session below trains a five-tree random forest on a small two-class dataset.

In [230]: y
Out[230]:
array([1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0,
       0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0,
       1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1,
       0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0,
       0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0], dtype=int64)

In [231]: axes.ravel()
Out[231]:
array([<matplotlib.axes._subplots.AxesSubplot object at 0x000001F46F3694A8>,
       <matplotlib.axes._subplots.AxesSubplot object at 0x000001F46C099F28>,
       <matplotlib.axes._subplots.AxesSubplot object at 0x000001F46E6E3BE0>,
       <matplotlib.axes._subplots.AxesSubplot object at 0x000001F46BEB72E8>,
       <matplotlib.axes._subplots.AxesSubplot object at 0x000001F46ED67198>,
       <matplotlib.axes._subplots.AxesSubplot object at 0x000001F46F292C88>],
      dtype=object)

In [232]: from sklearn.model_selection import train_test_split

In [233]: X_trai,X_test,y_train,y_test=train_test_split(X,y,stratify=y,random_state=42)

In [234]: len(X_trai)
Out[234]: 75

In [235]: fores=RandomForestClassifier(n_estimators=5,random_state=2)

In [236]: fores.fit(X_trai,y_train)
Out[236]:
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=1,
            oob_score=False, random_state=2, verbose=0, warm_start=False)

In [237]: fores.score(X_test,y_test)
Out[237]: 0.92

In [238]: fores.estimators_
Out[238]:
[DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False,
            random_state=1872583848, splitter='best'),
 DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False,
            random_state=794921487, splitter='best'),
 DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False,
            random_state=111352301, splitter='best'),
 DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False,
            random_state=1853453896, splitter='best'),
 DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False,
            random_state=213298710, splitter='best')]
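The transcript starts mid-session, so the data and the RandomForestClassifier import are not shown. A self-contained reconstruction, assuming a two-class toy dataset such as make_moons (the 100 binary labels and the 75/25 split are consistent with that, but the exact dataset and noise level are guesses; variable names are kept from the transcript):

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed toy data; the transcript does not show how X and y were made.
X, y = make_moons(n_samples=100, noise=0.25, random_state=3)

X_trai, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Five trees, each grown on a bootstrap sample with random feature
# selection at every split; predictions are a majority vote.
fores = RandomForestClassifier(n_estimators=5, random_state=2)
fores.fit(X_trai, y_train)
print("test accuracy:", fores.score(X_test, y_test))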
