sklearn scoring . xgboost.train . ---> rmse
http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter
3.3.1. The scoring parameter: defining model evaluation rules
Model selection and evaluation using tools, such as model_selection.GridSearchCV and model_selection.cross_val_score, take a scoring parameter that controls what metric they apply to the estimators evaluated.
3.3.1.1. Common cases: predefined values
For the most common use cases, you can designate a scorer object with the scoring parameter; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error, which returns the negated value of the metric.
| Scoring | Function | Comment |
|---|---|---|
| **Classification** | | |
| ‘accuracy’ | metrics.accuracy_score | |
| ‘average_precision’ | metrics.average_precision_score | |
| ‘f1’ | metrics.f1_score | for binary targets |
| ‘f1_micro’ | metrics.f1_score | micro-averaged |
| ‘f1_macro’ | metrics.f1_score | macro-averaged |
| ‘f1_weighted’ | metrics.f1_score | weighted average |
| ‘f1_samples’ | metrics.f1_score | by multilabel sample |
| ‘neg_log_loss’ | metrics.log_loss | requires predict_proba support |
| ‘precision’ etc. | metrics.precision_score | suffixes apply as with ‘f1’ |
| ‘recall’ etc. | metrics.recall_score | suffixes apply as with ‘f1’ |
| ‘roc_auc’ | metrics.roc_auc_score | |
| **Clustering** | | |
| ‘adjusted_rand_score’ | metrics.adjusted_rand_score | |
| **Regression** | | |
| ‘neg_mean_absolute_error’ | metrics.mean_absolute_error | |
| ‘neg_mean_squared_error’ | metrics.mean_squared_error | |
| ‘neg_median_absolute_error’ | metrics.median_absolute_error | |
| ‘r2’ | metrics.r2_score | |
Usage examples:
>>> from sklearn import svm, datasets
>>> from sklearn.model_selection import cross_val_score
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> clf = svm.SVC(probability=True, random_state=0)
>>> cross_val_score(clf, X, y, scoring='neg_log_loss')
array([-0.07..., -0.16..., -0.06...])
>>> model = svm.SVC()
>>> cross_val_score(model, X, y, scoring='wrong_choice')
Traceback (most recent call last):
ValueError: 'wrong_choice' is not a valid scoring value. Valid options are ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc']
Note
The values listed by the ValueError exception correspond to the functions measuring prediction accuracy described in the following sections. The scorer objects for those functions are stored in the dictionary sklearn.metrics.SCORERS.
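Because the error metrics in the table are exposed in negated form, recovering RMSE (the metric referenced in this note's title) from a 'neg_mean_squared_error' score just means flipping the sign and taking a square root. A minimal sketch, with the diabetes dataset and a Ridge regressor chosen purely for illustration (they are not part of the quoted docs):
>>> import numpy as np
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.model_selection import cross_val_score
>>> X, y = load_diabetes(return_X_y=True)
>>> neg_mse = cross_val_score(Ridge(), X, y, scoring='neg_mean_squared_error', cv=3)
>>> rmse = np.sqrt(-neg_mse)  # negate back to MSE, then take the square root
Each entry of rmse is the root-mean-squared error of one cross-validation fold.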
3.3.1.2. Defining your scoring strategy from metric functions
The module sklearn.metrics also exposes a set of simple functions measuring a prediction error given ground truth and prediction:
- functions ending with _score return a value to maximize, the higher the better.
- functions ending with _error or _loss return a value to minimize, the lower the better. When converting into a scorer object using make_scorer, set the greater_is_better parameter to False (True by default; see the parameter description below). A small sketch of this conversion follows the list.
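As an illustration of that second point, here is a minimal sketch (not part of the quoted docs; Ridge and the alpha grid are arbitrary choices) that wraps mean_squared_error with greater_is_better=False so that grid search can keep maximizing the score:
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import make_scorer, mean_squared_error
>>> from sklearn.model_selection import GridSearchCV
>>> # greater_is_better=False makes the scorer return the negated error
>>> neg_mse_scorer = make_scorer(mean_squared_error, greater_is_better=False)
>>> grid = GridSearchCV(Ridge(), param_grid={'alpha': [0.1, 1.0, 10.0]},
...                     scoring=neg_mse_scorer)
After fitting, grid.best_score_ is a negated MSE; negate it again to read the error on its original scale.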
Metrics available for various machine learning tasks are detailed in sections below.
Many metrics are not given names to be used as scoring values, sometimes because they require additional parameters, such as fbeta_score. In such cases, you need to generate an appropriate scoring object. The simplest way to generate a callable object for scoring is by using make_scorer. That function converts metrics into callables that can be used for model evaluation.
One typical use case is to wrap an existing metric function from the library with non-default values for its parameters, such as the beta parameter for the fbeta_score function:
>>> from sklearn.metrics import fbeta_score, make_scorer
>>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import LinearSVC
>>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]}, scoring=ftwo_scorer)
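To tie this back to the title (xgboost.train and RMSE): xgboost can report RMSE natively through eval_metric, and its scikit-learn wrapper can be scored with 'neg_mean_squared_error' and converted as above. A rough sketch, assuming xgboost is installed and that 'reg:squarederror' is the regression objective (older versions call it 'reg:linear'):
>>> import numpy as np
>>> import xgboost as xgb
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.model_selection import cross_val_score
>>> X, y = load_diabetes(return_X_y=True)
>>> # native API: ask xgboost.train itself to track rmse on the training set
>>> dtrain = xgb.DMatrix(X, label=y)
>>> params = {'objective': 'reg:squarederror', 'eval_metric': 'rmse'}
>>> history = {}
>>> bst = xgb.train(params, dtrain, num_boost_round=20,
...                 evals=[(dtrain, 'train')], evals_result=history,
...                 verbose_eval=False)
>>> train_rmse = history['train']['rmse']  # per-round training RMSE
>>> # scikit-learn wrapper: use the predefined scorer and convert to RMSE
>>> neg_mse = cross_val_score(xgb.XGBRegressor(n_estimators=20), X, y,
...                           scoring='neg_mean_squared_error', cv=3)
>>> rmse = np.sqrt(-neg_mse)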