Feature Selection Can Reduce Overfitting, and RF Shows Feature Importance
1. Code example: feature selection can reduce overfitting

This example comes from Chapter 4 of 机器学习实战 (Machine Learning in Action). It uses sequential backward selection (SBS): starting from all 13 wine features, at every step the feature whose removal hurts validation accuracy the least is dropped, until only k features remain.
# coding=utf-8
'''
We use KNN to show that feature selection may reduce overfitting
'''
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


class SBS():
    """Sequential Backward Selection: greedily drop one feature at a time,
    keeping the subset with the best validation score, until k_features remain."""
    def __init__(self, estimator, k_features, scoring=accuracy_score, test_size=0.25, random_state=1):
        self.scoring = scoring
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=self.test_size, random_state=self.random_state)
        dim = X_train.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]
        score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
        self.scores_ = [score]
        while dim > self.k_features:
            scores = []
            subsets = []
            # evaluate every subset obtained by removing exactly one feature
            for p in combinations(self.indices_, r=dim - 1):
                score = self._calc_score(X_train, y_train, X_test, y_test, p)
                scores.append(score)
                subsets.append(p)
            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.subsets_.append(self.indices_)
            dim -= 1
            self.scores_.append(scores[best])
        self.k_score_ = self.scores_[-1]
        return self

    def transform(self, X):
        return X[:, self.indices_]

    def _calc_score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        score = self.scoring(y_test, y_pred)
        return score
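Before applying SBS to the wine data below, here is a minimal standalone sanity check of the class. This snippet is my addition, not part of the original script; the Iris data and k_features=2 are arbitrary choices just to exercise the fit/transform API.

# Hypothetical sanity check of the SBS class on scikit-learn's bundled Iris data
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
sbs_demo = SBS(KNeighborsClassifier(n_neighbors=3), k_features=2)
sbs_demo.fit(iris.data, iris.target)
print(sbs_demo.indices_)                     # indices of the two surviving features
print(sbs_demo.transform(iris.data).shape)   # (150, 2)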
import pandas as pd

df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol',
                   'Malic acid',
                   'Ash',
                   'Alcalinity of ash',
                   'Magnesium',
                   'Total phenols',
                   'Flavanoids',
                   'Nonflavanoid phenols',
                   'Proanthocyanins',
                   'Color intensity',
                   'Hue',
                   'OD280/OD315 of diluted wines',
                   'Proline']

X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)   # fit the scaler on the training set only
X_test_std = stdsc.transform(X_test)         # reuse the same scaling on the test set

from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)

k_feat = [len(k) for k in sbs.subsets_]
plt.figure(figsize=(8, 10))  # figsize must be a tuple
plt.subplot(2,1,1)
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
#plt.show()

# Let's see which five features yield such good performance on the validation set.
# The ninth element of subsets_ is the subset that keeps 5 of the 13 features.
k5 = list(sbs.subsets_[8])
print(df_wine.columns[1:][k5])
'''
Index(['Alcohol', 'Malic acid', 'Alcalinity of ash', 'Hue', 'Proline'], dtype='object')
'''

# Let's evaluate the performance of the KNN classifier on the original test set
knn.fit(X_train_std, y_train)
print("Training Accuracy:", knn.score(X_train_std, y_train))
print("Test Accuracy:", knn.score(X_test_std, y_test))
'''
Training Accuracy: 0.9838709677419355
Test Accuracy: 0.9444444444444444
'''
# We see a slight degree of overfitting when all 13 features are used for training.

knn.fit(X_train_std[:, k5], y_train)
print("Training Accuracy:", knn.score(X_train_std[:, k5], y_train))
print("Test Accuracy:", knn.score(X_test_std[:, k5], y_test))
'''
Training Accuracy: 0.9596774193548387
Test Accuracy: 0.9629629629629629
'''
# We reduced overfitting, and the prediction accuracy improved.

2. RF shows feature importance

A random forest scores each feature by the average impurity decrease it produces across all trees (exposed as feature_importances_), which gives a quick ranking of the 13 wine features.
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]  # sort features by importance, descending

for f in range(X_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))

plt.subplot(2, 1, 2)
plt.title("Feature Importances")
plt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')
plt.xticks(range(X_train.shape[1]), feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
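As a follow-up sketch (my addition, not part of the original example), the forest's importances can also drive an actual feature-selection step via scikit-learn's SelectFromModel; the 0.1 threshold below is an arbitrary assumption, not a value from the original post.

from sklearn.feature_selection import SelectFromModel

# prefit=True reuses the forest trained above instead of refitting it
sfm = SelectFromModel(forest, threshold=0.1, prefit=True)
X_selected = sfm.transform(X_train)
print("Number of features above the threshold:", X_selected.shape[1])
# the selected features are exactly the top-ranked ones, so reuse the sorted indices
for f in range(X_selected.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))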
