XGBoost Parameter Tuning
Adapted from: https://segmentfault.com/a/1190000014040317
Overview:

```python
import xgboost as xgb
from sklearn.model_selection import GridSearchCV

# 1. Tune n_estimators
cv_params = {'n_estimators': [550, 575, 600, 650, 675]}
other_params = {'learning_rate': 0.1, 'n_estimators': 600, 'max_depth': 5, 'min_child_weight': 1, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
# 2. Tune max_depth and min_child_weight
# cv_params = {'max_depth': [3, 4, 5, 6, 7, 8, 9, 10], 'min_child_weight': [1, 2, 3, 4, 5, 6]}
# other_params = {'learning_rate': 0.1, 'n_estimators': 550, 'max_depth': 5, 'min_child_weight': 1, 'seed': 0,
#                 'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
# 3. Tune gamma
# cv_params = {'gamma': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]}
# other_params = {'learning_rate': 0.1, 'n_estimators': 550, 'max_depth': 4, 'min_child_weight': 5, 'seed': 0,
#                 'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
# 4. Tune subsample and colsample_bytree
# cv_params = {'subsample': [0.6, 0.7, 0.8, 0.9], 'colsample_bytree': [0.6, 0.7, 0.8, 0.9]}
# other_params = {'learning_rate': 0.1, 'n_estimators': 550, 'max_depth': 4, 'min_child_weight': 5, 'seed': 0,
#                 'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0.1, 'reg_alpha': 0, 'reg_lambda': 1}
# 5. Tune reg_alpha and reg_lambda
# cv_params = {'reg_alpha': [0.05, 0.1, 1, 2, 3], 'reg_lambda': [0.05, 0.1, 1, 2, 3]}
# other_params = {'learning_rate': 0.1, 'n_estimators': 550, 'max_depth': 4, 'min_child_weight': 5, 'seed': 0,
#                 'subsample': 0.7, 'colsample_bytree': 0.7, 'gamma': 0.1, 'reg_alpha': 0, 'reg_lambda': 1}
# 6. Tune learning_rate
# cv_params = {'learning_rate': [0.01, 0.05, 0.07, 0.1, 0.2]}
# other_params = {'learning_rate': 0.1, 'n_estimators': 550, 'max_depth': 4, 'min_child_weight': 5, 'seed': 0,
#                 'subsample': 0.7, 'colsample_bytree': 0.7, 'gamma': 0.1, 'reg_alpha': 1, 'reg_lambda': 1}
model = xgb.XGBClassifier(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, cv=5, verbose=1, n_jobs=4)
optimized_GBM.fit(X_train, y_train)
# grid_scores_ was removed in scikit-learn 0.20; the per-candidate results live in cv_results_
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
1. Tuning the number of boosting rounds, n_estimators

```python
# Best number of boosting rounds: n_estimators
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV

cv_params = {'n_estimators': [20, 30, 40]}
other_params = {'learning_rate': 0.1, 'n_estimators': 500, 'max_depth': 5, 'min_child_weight': 1, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=-1)
optimized_GBM.fit(x_data, y_data)
# return_train_score is a GridSearchCV constructor argument, not a result
# attribute; the per-candidate results live in cv_results_
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
2. Tuning min_child_weight and max_depth:

```python
# Tune min_child_weight and max_depth:
cv_params = {'max_depth': [3, 4, 5, 6, 7, 8, 9, 10], 'min_child_weight': [6, 7, 8]}
other_params = {'learning_rate': 0.1, 'n_estimators': 20, 'max_depth': 5, 'min_child_weight': 1, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=-1)
optimized_GBM.fit(x_data, y_data)
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
3. Tuning gamma:

```python
# Tune gamma:
cv_params = {'gamma': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]}
other_params = {'learning_rate': 0.1, 'n_estimators': 20, 'max_depth': 4, 'min_child_weight': 6, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=-1)
optimized_GBM.fit(x_data, y_data)
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
4. Tuning subsample and colsample_bytree:

```python
# Tune subsample and colsample_bytree:
cv_params = {'subsample': [0.6, 0.7, 0.8, 0.9], 'colsample_bytree': [0.6, 0.7, 0.8, 0.9]}
other_params = {'learning_rate': 0.1, 'n_estimators': 20, 'max_depth': 4, 'min_child_weight': 6, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.8, 'gamma': 0.2, 'reg_alpha': 0, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=4)
optimized_GBM.fit(x_data, y_data)
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
5. Tuning reg_alpha and reg_lambda:

```python
# Tune reg_alpha and reg_lambda:
cv_params = {'reg_alpha': [0.05, 0.1, 1, 2, 3], 'reg_lambda': [0.05, 0.1, 1, 2, 3]}
other_params = {'learning_rate': 0.1, 'n_estimators': 20, 'max_depth': 4, 'min_child_weight': 6, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.9, 'gamma': 0.2, 'reg_alpha': 0, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=4)
optimized_GBM.fit(x_data, y_data)
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```
6. Tuning learning_rate:

```python
# Tune learning_rate; at this stage the learning rate is usually lowered for testing:
cv_params = {'learning_rate': [0.01, 0.05, 0.07, 0.1, 0.2]}
other_params = {'learning_rate': 0.1, 'n_estimators': 20, 'max_depth': 4, 'min_child_weight': 6, 'seed': 0,
                'subsample': 0.8, 'colsample_bytree': 0.9, 'gamma': 0.2, 'reg_alpha': 0.1, 'reg_lambda': 1}
model = XGBRegressor(**other_params)
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring='r2', cv=3, verbose=1, n_jobs=4)
optimized_GBM.fit(x_data, y_data)
evaluate_result = optimized_GBM.cv_results_
print('Per-candidate CV results: {0}'.format(evaluate_result))
print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
print('Best model score: {0}'.format(optimized_GBM.best_score_))
```