1. Feature selection can reduce overfitting: a code example

 This example is adapted from Chapter 4 of Sebastian Raschka's Python Machine Learning.

#coding=utf-8

'''
We use KNN to show that feature selection may reduce overfitting.
'''
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


class SBS():
    """Sequential Backward Selection: greedily remove one feature at a time,
    keeping the subset whose removal hurts validation accuracy the least."""
    def __init__(self, estimator, k_features, scoring=accuracy_score,
                 test_size=0.25, random_state=1):
        self.scoring = scoring
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        # Hold out an internal validation set for scoring feature subsets.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=self.test_size, random_state=self.random_state)
        dim = X_train.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]
        score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
        self.scores_ = [score]

        while dim > self.k_features:
            scores = []
            subsets = []
            # Score every subset with exactly one feature removed; keep the best.
            for p in combinations(self.indices_, r=dim - 1):
                score = self._calc_score(X_train, y_train, X_test, y_test, p)
                scores.append(score)
                subsets.append(p)
            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.subsets_.append(self.indices_)
            dim -= 1
            self.scores_.append(scores[best])

        self.k_score_ = self.scores_[-1]
        return self

    def transform(self, X):
        return X[:, self.indices_]

    def _calc_score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        score = self.scoring(y_test, y_pred)
        return score
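A note on cost: each pass of the while-loop fits the estimator once per removable feature, so shrinking from 13 features down to 1 costs 13 + 12 + ... + 2 = 90 fits, plus one fit for the initial full set. A quick sanity check (added here for illustration, not part of the original example):

# Number of estimator fits SBS performs going from 13 features to 1:
# one fit for the full set, then dim candidate fits per round for dim = 13..2.
print(1 + sum(range(2, 14)))  # 91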
import pandas as pd

df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
                   'Proline']

X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

from sklearn.preprocessing import StandardScaler
# Standardize features: fit the scaler on the training set only,
# then apply the same transformation to the test set.
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)

from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
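Before fitting KNN, it is worth seeing why standardization matters here: KNN is distance-based, and the raw wine features differ in scale by orders of magnitude, so without scaling the large-range features would dominate the Euclidean distance. A quick look at the raw ranges (an added check, not in the original example):

# Raw feature ranges differ by orders of magnitude (e.g., Proline vs. Hue),
# which is why we standardize before using a distance-based model like KNN.
print(df_wine.iloc[:, 1:].max() - df_wine.iloc[:, 1:].min())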
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)

# Plot validation accuracy against the number of remaining features.
k_feat = [len(k) for k in sbs.subsets_]
plt.figure(figsize=(8, 10))  # figsize must be a tuple
plt.subplot(2,1,1)
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
#plt.show()

# Let's see which five features yield such good performance on the validation set.
# The ninth element of subsets_ (index 8) is the 5-feature subset of the original 13.
k5 = list(sbs.subsets_[8])
print(df_wine.columns[1:][k5])
'''
Index(['Alcohol', 'Malic acid', 'Alcalinity of ash', 'Hue', 'Proline'], dtype='object')
'''

# Evaluate the KNN classifier on the original test set, first with all 13 features.
knn.fit(X_train_std, y_train)
print("Training Accuracy:", knn.score(X_train_std, y_train))
print("Test Accuracy:", knn.score(X_test_std, y_test))
'''
Training Accuracy: 0.9838709677419355
Test Accuracy: 0.9444444444444444
'''
# We find a slight degree of overfitting when all 13 features are used for training.
# Now retrain using only the five selected features.
knn.fit(X_train_std[:, k5], y_train)
print("Training Accuracy:", knn.score(X_train_std[:, k5], y_train))
print("Test Accuracy:", knn.score(X_test_std[:, k5], y_test))
'''
Training Accuracy: 0.9596774193548387
Test Accuracy: 0.9629629629629629
'''
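For reference, newer scikit-learn releases ship a built-in greedy selector that could replace the hand-rolled SBS class above. A minimal sketch, assuming scikit-learn >= 0.24 is available; it scores subsets by cross-validation rather than a single validation split, so its chosen features need not match k5 exactly:

# Illustrative alternative (assumes scikit-learn >= 0.24), not part of the
# original example: greedy backward elimination scored by cross-validation.
from sklearn.feature_selection import SequentialFeatureSelector

sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=2),
                                n_features_to_select=5,
                                direction='backward', cv=5)
sfs.fit(X_train_std, y_train)
print(df_wine.columns[1:][sfs.get_support()])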
# We reduced overfitting, and the prediction accuracy on the test set improved.

# Use a random forest to show feature importance.
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
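Impurity-based importances are computed on the training data and can be biased (for example, toward features with many distinct values); permutation importance on held-out data is a common cross-check. A minimal sketch, assuming scikit-learn >= 0.22 (not part of the original example):

# Cross-check: shuffle one feature at a time on the test set and
# measure how much accuracy drops (assumes scikit-learn >= 0.22).
from sklearn.inspection import permutation_importance

perm = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
print(perm.importances_mean)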
# Rank features by importance, highest first.
indices = np.argsort(importances)[::-1]

for f in range(X_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))

plt.subplot(2, 1, 2)
plt.title("Feature Importances")
plt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')
plt.xticks(range(X_train.shape[1]), feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
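The importance ranking can also drive feature selection directly. A minimal sketch using scikit-learn's SelectFromModel; the 0.1 threshold is an arbitrary illustration, not a value from the original example:

# Keep only features whose RF importance exceeds an (arbitrary) threshold.
from sklearn.feature_selection import SelectFromModel

sfm = SelectFromModel(forest, threshold=0.1, prefit=True)
X_selected = sfm.transform(X_train)
print('Number of features kept:', X_selected.shape[1])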
