XGBoost Tutorial (Advanced), Part 3
1. Importing all the libraries
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn import metrics
from sklearn.metrics import accuracy_score
2. Reading the file
We again use the mushroom dataset, taking the 22 categorical features directly from the Kaggle version: https://www.kaggle.com/uciml/mushroom-classification
Dataset download link: http://download.csdn.net/download/u011630575/10266626
# path to where the data lies
dpath = './data/'
data = pd.read_csv(dpath+"mushrooms.csv")
data.head(6)
3. Let us check if there are any null values
data.isnull().sum()  # check each column for missing values
4. Check that there are two classes: each mushroom is either poisonous or edible
data['class'].unique()  # the label takes only two values: p (poisonous) and e (edible)
print(data.dtypes)
5. Check that there are 8124 instances with 22 features plus the label
data.shape  # (8124, 23): 8124 samples; the first column is the label, the other 22 are features
6. The dataset's values are strings. We need to convert each column's unique values to integers, so we apply label encoding to the data.
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()   # encodes each column's values as integers in range(number of unique values)
for col in data.columns:
    data[col] = labelencoder.fit_transform(data[col])
data.head()
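A hedged aside, not part of the original tutorial: LabelEncoder imposes an arbitrary integer ordering on each column's categories. Tree-based models tolerate this, but for a linear model such as the logistic regression below, one-hot encoding is often a better fit:
# alternative sketch: one-hot encode the raw string features instead of label-encoding them
raw = pd.read_csv(dpath + "mushrooms.csv")
X_onehot = pd.get_dummies(raw.iloc[:, 1:])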
Separating features and label
X = data.iloc[:, 1:23]  # columns 1-22: the 22 features
y = data.iloc[:, 0]  # column 0: the class label
X.head()
y.head()
Splitting the data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=4)
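An optional refinement, not used in the original: stratifying the split keeps the poisonous/edible ratio the same in both halves.
# alternative split that preserves the class proportions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=4, stratify=y)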
7. Default Logistic Regression
from sklearn.linear_model import LogisticRegression
model_LR= LogisticRegression()
model_LR.fit(X_train,y_train)
y_prob = model_LR.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities  
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
model_LR.score(X_test, y_test)  # mean accuracy on the test set
Note: np.where(condition, x, y) is an element-wise ternary operator: it returns x where the condition holds, otherwise y.
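A quick illustration:
print(np.where(np.array([0.2, 0.7, 0.5, 0.9]) > 0.5, 1, 0))  # -> [0 1 0 1]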
ROC AUC
auc_roc = metrics.roc_auc_score(y_test, y_prob)  # pass probabilities, not thresholded labels, for a meaningful AUC
print(auc_roc)
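Since the AUC is the area under the ROC curve, it can help to look at the curve itself. A small sketch of my own, using only the imports already loaded above:
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_prob)
plt.plot(fpr, tpr, label='LR (AUC = %.3f)' % auc_roc)
plt.plot([0, 1], [0, 1], linestyle='--')  # chance-level diagonal
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()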
8. Logistic Regression (tuned model)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn import metrics
LR_model = LogisticRegression(solver='liblinear')  # liblinear supports both the l1 and l2 penalties tuned below
tuned_parameters = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000],
                    'penalty': ['l1', 'l2']
                   }
9. Grid search with cross-validation (CV)
from sklearn.model_selection import GridSearchCV
LR= GridSearchCV(LR_model, tuned_parameters,cv=10)
LR.fit(X_train,y_train)
print(LR.best_params_)
y_prob = LR.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities  
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
LR.score(X_test, y_test)
auc_roc = metrics.roc_auc_score(y_test, y_prob)
print(auc_roc)
10. Default Decision Tree model
from sklearn.tree import DecisionTreeClassifier
model_tree = DecisionTreeClassifier()
model_tree.fit(X_train, y_train)
y_prob = model_tree.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities  
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
model_tree.score(X_test, y_test)
auc_roc = metrics.roc_auc_score(y_test, y_prob)
auc_roc
11. Let us tune the hyperparameters of the decision tree model
from sklearn.tree import DecisionTreeClassifier
model_DD = DecisionTreeClassifier()
tuned_parameters = {'max_features': ["sqrt", "log2"],  # "auto" was an alias for "sqrt" and has been removed in newer scikit-learn
                    'min_samples_leaf': range(1, 100, 1),
                    'max_depth': range(1, 50, 1)
                   }
#tuned_parameters = {'max_features': ["sqrt", "log2"]}  # smaller grid for a quick run
# If "sqrt", then max_features=sqrt(n_features).
# Note: the full grid has close to 10,000 combinations, so the 10-fold search below is
# expensive; a cheaper randomized alternative is sketched at the end of this section.
from sklearn.model_selection import GridSearchCV
DD = GridSearchCV(model_DD, tuned_parameters,cv=10)
DD.fit(X_train, y_train)
print(DD.cv_results_)  # per-candidate CV results (grid_scores_ was removed in scikit-learn 0.20)
print(DD.best_score_)
print(DD.best_params_)
y_prob = DD.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
DD.score(X_test, y_test)
report = metrics.classification_report(y_test, y_pred)
print(report)
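The decision-tree grid above has close to 10,000 parameter combinations, so the 10-fold grid search is slow. A cheaper alternative, sketched here as an addition to the original, is RandomizedSearchCV, which samples a fixed number of candidates from the same ranges:
from sklearn.model_selection import RandomizedSearchCV
DD_rand = RandomizedSearchCV(DecisionTreeClassifier(), tuned_parameters,
                             n_iter=100, cv=10, random_state=4)
DD_rand.fit(X_train, y_train)
print(DD_rand.best_score_)
print(DD_rand.best_params_)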
12. Default Random Forest
from sklearn.ensemble import RandomForestClassifier
model_RR=RandomForestClassifier()
model_RR.fit(X_train,y_train)
y_prob = model_RR.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
model_RR.score(X_test, y_test)
auc_roc = metrics.roc_auc_score(y_test, y_prob)
auc_roc
13. Let us tune the parameters of the Random Forest, just for the purpose of illustration
1) max_features
2) n_estimators (the number of trees)
3) min_samples_leaf
from sklearn.ensemble import RandomForestClassifier
model_RR=RandomForestClassifier()
tuned_parameters = {'min_samples_leaf': range(10, 100, 10), 'n_estimators': range(10, 100, 10),
                    'max_features': ['sqrt', 'log2']  # "auto" removed here too, as above
                    }
from sklearn.model_selection import GridSearchCV
RR = GridSearchCV(model_RR, tuned_parameters,cv=10)
RR.fit(X_train,y_train)
print(RR.cv_results_)  # per-candidate CV results (replaces the removed grid_scores_)
print(RR.best_score_)
print(RR.best_params_)
y_prob = RR.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
RR.score(X_test, y_test)
auc_roc = metrics.roc_auc_score(y_test, y_prob)
auc_roc
14. Default XGBoost
from xgboost import XGBClassifier
model_XGB=XGBClassifier()
model_XGB.fit(X_train,y_train)
y_prob = model_XGB.predict_proba(X_test)[:,1] # This will give you positive class prediction probabilities  
y_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.
model_XGB.score(X_test, y_test)
auc_roc = metrics.roc_auc_score(y_test, y_prob)
auc_roc
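Every earlier model was tuned with GridSearchCV, and the same pattern applies to XGBoost; the grid below is an illustrative choice of mine, not from the original:
from sklearn.model_selection import GridSearchCV
xgb_parameters = {'n_estimators': [50, 100, 200],
                  'max_depth': [3, 5, 7],
                  'learning_rate': [0.05, 0.1, 0.3]}
XGB = GridSearchCV(XGBClassifier(), xgb_parameters, cv=10)
XGB.fit(X_train, y_train)
print(XGB.best_params_)
print(XGB.score(X_test, y_test))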
15. Feature importance
XGBoost computes feature importances automatically during training; they are stored in the feature_importances_ attribute.
print(model_XGB.feature_importances_)
from matplotlib import pyplot
pyplot.bar(range(len(model_XGB.feature_importances_)), model_XGB.feature_importances_)
pyplot.show()
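The chart above indexes features by position; labelling the bars with the actual column names (a small addition, assuming X from earlier) makes it easier to read:
pyplot.figure(figsize=(10, 4))
pyplot.bar(X.columns, model_XGB.feature_importances_)
pyplot.xticks(rotation=90)
pyplot.show()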
# plot feature importance using built-in function
from xgboost import plot_importance
plot_importance(model_XGB)
pyplot.show()
The feature importances can then be used for feature selection:
from numpy import sort
from sklearn.feature_selection import SelectFromModel
# Fit model using each importance as a threshold
thresholds = sort(model_XGB.feature_importances_)
for thresh in thresholds:
    # select features whose importance is >= the threshold
    selection = SelectFromModel(model_XGB, threshold=thresh, prefit=True)
    select_X_train = selection.transform(X_train)
    # train a fresh model on the reduced feature set
    selection_model = XGBClassifier()
    selection_model.fit(select_X_train, y_train)
    # evaluate it on the correspondingly reduced test set
    select_X_test = selection.transform(X_test)
    y_pred = selection_model.predict(select_X_test)
    predictions = [round(value) for value in y_pred]
    accuracy = accuracy_score(y_test, predictions)
    print("Thresh=%.3f, n=%d, Accuracy: %.2f%%" % (thresh, select_X_train.shape[1],
          accuracy * 100.0))
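The loop above refits the selector for every threshold by hand. If you prefer the selection step baked into a single estimator, scikit-learn's Pipeline can chain the two stages; this is a sketch of my own, with threshold='median' as an arbitrary choice:
from sklearn.pipeline import Pipeline
pipe = Pipeline([('select', SelectFromModel(XGBClassifier(), threshold='median')),
                 ('clf', XGBClassifier())])
pipe.fit(X_train, y_train)
print("Pipeline accuracy: %.2f%%" % (pipe.score(X_test, y_test) * 100.0))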