https://en.wikipedia.org/wiki/Ensemble_learning

Stacking

Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data, then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although in practice, a single-layer logistic regression model is often used as the combiner.

Stacking typically yields performance better than any single one of the trained models.[22] It has been successfully used on both supervised learning tasks (regression,[23] classification, and distance learning[24]) and unsupervised learning (density estimation).[25] It has also been used to estimate bagging's error rate.[3][26] It has been reported to outperform Bayesian model averaging.[27] The two top performers in the Netflix competition utilized blending, which may be considered a form of stacking.[28]
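
The description above maps directly onto a short, runnable sketch: several base learners are fit on the data, and a logistic-regression combiner is trained on their out-of-fold predictions. This sketch uses scikit-learn's StackingClassifier; the dataset, the choice of base models, and the cv setting are illustrative assumptions, not details taken from the text.

    # Minimal stacking sketch, assuming scikit-learn is available.
    # Base learners are fit on the data; a logistic-regression combiner
    # ("final_estimator") is then fit on their cross-validated predictions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(),  # the single-layer combiner
        cv=5,  # combiner is trained on out-of-fold base predictions
    )
    stack.fit(X_train, y_train)
    print("stacked test accuracy:", stack.score(X_test, y_test))

Training the combiner on out-of-fold predictions (the cv argument) is what keeps it from simply trusting whichever base model overfits the training set most.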

https://arxiv.org/pdf/0911.0460.pdf

【Significantly improves the accuracy of collaborative filtering】

Ensemble methods, such as stacking, are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. Recent work has shown that the use of meta-features, additional inputs describing each example in a dataset, can boost the performance of ensemble methods, but the greatest reported gains have come from nonlinear procedures requiring significant tuning and training time. Here, we present a linear technique, Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for improved accuracy while retaining the well-known virtues of linear regression regarding speed, stability, and interpretability. FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features. This technique was a key facet of the solution of the second place team in the recently concluded Netflix Prize competition. Significant increases in accuracy over standard linear stacking are demonstrated on the Netflix Prize collaborative filtering dataset.
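
The key sentence in the abstract is that FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features; equivalently, the blended prediction is a weighted sum of all pairwise products of model outputs and meta-features. A minimal numpy sketch of that construction follows; the array shapes, the random placeholder data, and the ridge penalty are illustrative assumptions, not details from the paper.

    import numpy as np

    def fwls_design(preds, meta):
        """All pairwise products g_i(x) * f_j(x) of model predictions and
        meta-features. Including a constant column in `meta` makes plain
        linear stacking a special case."""
        n = len(preds)
        return np.einsum("ni,nj->nij", preds, meta).reshape(n, -1)

    # Placeholder data: 3 base-model predictions, 2 meta-features per example.
    rng = np.random.default_rng(0)
    preds = rng.normal(size=(1000, 3))
    meta = np.column_stack([np.ones(1000), rng.normal(size=1000)])
    y = rng.normal(size=1000)

    X = fwls_design(preds, meta)
    lam = 1.0  # ridge penalty keeps the solve well conditioned
    v = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    blended = X @ v  # FWLS prediction: sum over i,j of v_ij * f_j(x) * g_i(x)

Because the products of predictions and meta-features are just extra columns, the fit remains ordinary (regularized) linear regression, which is where the claimed speed, stability, and interpretability come from.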

【a blend of blends - stacking: blending, mixing, stacking; a blend of blends】

“Stacking” is a technique in which the predictions of a collection of models are given as inputs to a second-level learning algorithm. This second-level algorithm is trained to combine the model predictions optimally to form a final set of predictions. Many machine learning practitioners have had success using stacking and related techniques to boost prediction accuracy beyond the level obtained by any of the individual models. In some contexts, stacking is also referred to as blending, and we will use the terms interchangeably here. Since its introduction [23], modellers have employed stacking successfully on a wide variety of problems, including chemometrics [8], spam filtering [16], and large collections of datasets drawn from the UCI Machine Learning Repository [21, 7]. One prominent recent example of the power of model blending was the Netflix Prize collaborative filtering competition. The team BellKor’s Pragmatic Chaos won the $1 million prize using a blend of hundreds of different models [22, 11, 14]. Indeed, the winning solution was a blend at multiple levels, i.e., a blend of blends.

Intuition suggests that the reliability of a model may vary as a function of the conditions in which it is used. For instance, in a collaborative filtering context where we wish to predict the preferences of customers for various products, the amount of data collected may vary significantly depending on which customer or which product is under consideration. Model A may be more reliable than model B for users who have rated many products, but model B may outperform model A for users who have only rated a few products. In an attempt to capitalize on this intuition, many researchers have developed approaches that attempt to improve the accuracy of stacked regression by adapting the blending on the basis of side information. Such an additional source of information, like the number of products rated by a user or the number of days since a product was released, is often referred to as a “meta-feature,” and we will use that terminology here.
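
One simple way to act on the Model A / Model B intuition is to fit separate blending weights per segment of a meta-feature, such as the number of products a user has rated. This is a cruder alternative to FWLS, sketched here only to make the intuition concrete; the function name, threshold, and data layout are invented for illustration.

    import numpy as np

    def segmented_blend(pred_a, pred_b, y, n_rated, threshold=20):
        """Fit separate least-squares blend weights for light vs. heavy raters.
        `n_rated` is the meta-feature: how many products each user has rated.
        The threshold of 20 is an arbitrary illustrative cut-off."""
        weights = {}
        for name, mask in [("light", n_rated < threshold),
                           ("heavy", n_rated >= threshold)]:
            X = np.column_stack([pred_a[mask], pred_b[mask]])
            w, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            weights[name] = w  # per-segment weights for (model A, model B)
        return weights

FWLS replaces this hard segmentation with coefficients that vary linearly with the meta-features, which avoids hand-picked thresholds and lets every example inform every blending weight.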
