https://en.wikipedia.org/wiki/Ensemble_learning

Stacking

Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data; then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although in practice, a single-layer logistic regression model is often used as the combiner.
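
As a concrete illustration, here is a minimal stacking sketch using scikit-learn; the choice of dataset, base models, and hyperparameters is purely illustrative. Two base learners are trained, and a logistic-regression combiner is fit on their cross-validated predictions.

```python
# Minimal stacking sketch: two base learners, logistic regression as the combiner.
# Assumes scikit-learn is available; dataset and models are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # the single-layer combiner mentioned above
    cv=5,  # base-model predictions fed to the combiner come from 5-fold cross-validation
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```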

Stacking typically yields performance better than any single one of the trained models.[22] It has been successfully used on both supervised learning tasks (regression,[23] classification, and distance learning[24]) and unsupervised learning (density estimation).[25] It has also been used to estimate bagging's error rate.[3][26] It has been reported to outperform Bayesian model averaging.[27] The two top performers in the Netflix competition used blending, which may be considered a form of stacking.[28]

https://arxiv.org/pdf/0911.0460.pdf

【Significantly improving the accuracy of collaborative filtering】

Ensemble methods, such as stacking, are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. Recent work has shown that the use of meta-features, additional inputs describing each example in a dataset, can boost the performance of ensemble methods, but the greatest reported gains have come from nonlinear procedures requiring significant tuning and training time. Here, we present a linear technique, Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for improved accuracy while retaining the well-known virtues of linear regression regarding speed, stability, and interpretability. FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features. This technique was a key facet of the solution of the second place team in the recently concluded Netflix Prize competition. Significant increases in accuracy over standard linear stacking are demonstrated on the Netflix Prize collaborative filtering dataset.
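
Concretely, FWLS predicts y(x) = Σ_i w_i(x) g_i(x) with blending weights w_i(x) = Σ_j v_ij f_j(x), where the g_i are base-model predictions and the f_j are meta-features. Because the prediction remains linear in the coefficients v_ij, they can be fit by ordinary (or ridge-regularized) least squares on the products f_j(x)·g_i(x). The NumPy sketch below illustrates that idea; the function names and the ridge term are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fwls_fit(model_preds, meta_feats, y, ridge=1e-3):
    """Feature-Weighted Linear Stacking sketch.

    model_preds: (n_examples, n_models) base-model predictions g_i(x)
    meta_feats:  (n_examples, n_meta)   meta-features f_j(x); include a column of
                                        ones so plain linear stacking is a special case
    y:           (n_examples,)          targets
    Returns the coefficient matrix V with shape (n_models, n_meta).
    """
    n, m = model_preds.shape
    _, k = meta_feats.shape
    # Expanded design matrix: one column per product f_j(x) * g_i(x).
    X = (model_preds[:, :, None] * meta_feats[:, None, :]).reshape(n, m * k)
    # Ridge-regularized least squares for the flattened coefficients v_ij.
    v = np.linalg.solve(X.T @ X + ridge * np.eye(m * k), X.T @ y)
    return v.reshape(m, k)

def fwls_predict(model_preds, meta_feats, V):
    # Blending weight for model i on example x is sum_j V[i, j] * f_j(x).
    weights = meta_feats @ V.T          # (n_examples, n_models)
    return np.sum(weights * model_preds, axis=1)
```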

【a blend of blends - stacking】

“Stacking” is a technique in which the predictions of a collection of models are given as inputs to a second-level learning algorithm. This second-level algorithm is trained to combine the model predictions optimally to form a final set of predictions. Many machine learning practitioners have had success using stacking and related techniques to boost prediction accuracy beyond the level obtained by any of the individual models. In some contexts, stacking is also referred to as blending, and we will use the terms interchangeably here. Since its introduction [23], modellers have employed stacking successfully on a wide variety of problems, including chemometrics [8], spam filtering [16], and large collections of datasets drawn from the UCI Machine Learning Repository [21, 7]. One prominent recent example of the power of model blending was the Netflix Prize collaborative filtering competition. The team BellKor’s Pragmatic Chaos won the $1 million prize using a blend of hundreds of different models [22, 11, 14]. Indeed, the winning solution was a blend at multiple levels, i.e., a blend of blends.

Intuition suggests that the reliability of a model may vary as a function of the conditions in which it is used. For instance, in a collaborative filtering context where we wish to predict the preferences of customers for various products, the amount of data collected may vary significantly depending on which customer or which product is under consideration. Model A may be more reliable than model B for users who have rated many products, but model B may outperform model A for users who have only rated a few products. In an attempt to capitalize on this intuition, many researchers have developed approaches that attempt to improve the accuracy of stacked regression by adapting the blending on the basis of side information. Such an additional source of information, like the number of products rated by a user or the number of days since a product was released, is often referred to as a “meta-feature,” and we will use that terminology here.
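
To make that intuition concrete, the sketch below derives the kind of meta-features the paragraph describes (how much data backs a given user and item) from a ratings table. The column names and the log transform are illustrative assumptions; the resulting matrix could be passed, together with base-model predictions, to a meta-feature-aware blender such as the FWLS sketch above.

```python
import numpy as np
import pandas as pd

def user_support_meta_features(ratings: pd.DataFrame) -> pd.DataFrame:
    """Per-example meta-features describing how much data backs each prediction.

    ratings is assumed to have columns 'user_id', 'item_id', 'rating'
    (a hypothetical schema, one row per observed rating).
    """
    user_counts = ratings.groupby("user_id")["rating"].transform("count")
    item_counts = ratings.groupby("item_id")["rating"].transform("count")
    return pd.DataFrame({
        "constant": 1.0,                             # keeps plain linear stacking as a special case
        "log_user_support": np.log1p(user_counts),   # many vs. few ratings by this user
        "log_item_support": np.log1p(item_counts),   # many vs. few ratings of this item
    })
```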
