1. Structured prediction methods are essentially a combination of classification and graphical modeling.

2. They combine the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features.

3. The input x is divided into feature vectors {x0, x1, ..., xT}. Each xs contains various information about the word at position s, such as its identity, orthographic features such as prefixes and suffixes, membership in domain-specific lexicons, and information in semantic databases such as WordNet.
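
A minimal sketch (my own illustration, not from the tutorial) of what such a per-position feature vector xs might look like in code, using a dict of indicator features; the helper name token_features, the specific feature names, and the lexicon argument are all hypothetical:

```python
def token_features(tokens, s, lexicon=None):
    """Illustrative features for the token at position s: word identity,
    orthographic prefixes/suffixes, and membership in a domain lexicon."""
    lexicon = lexicon or set()
    w = tokens[s]
    return {
        "word=" + w.lower(): 1.0,           # word identity
        "prefix3=" + w[:3].lower(): 1.0,    # orthographic feature: prefix
        "suffix3=" + w[-3:].lower(): 1.0,   # orthographic feature: suffix
        "is_capitalized": float(w[:1].isupper()),
        "has_digit": float(any(c.isdigit() for c in w)),
        "in_lexicon": float(w.lower() in lexicon),  # domain-specific lexicon
    }

# Example: the feature vector for position 1 of a three-token sentence.
print(token_features(["Barack", "Obama", "spoke"], 1, lexicon={"obama"}))
```

Features drawn from semantic databases such as WordNet (e.g. synset or hypernym indicators) would be added to the same dict in the same way.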

4. CRFs are essentially a way of combining the advantages of discriminative classification and graphical modeling, combining the ability to compactly model multivariate outputs y with the ability to leverage a large number of input features x for prediction.
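
For reference (added here, not part of the quoted passage), the linear-chain CRF that the tutorial builds toward defines, roughly, the conditional distribution

```latex
% Linear-chain CRF over a label sequence y given an observation sequence x.
p(y \mid x) = \frac{1}{Z(x)} \prod_{t=1}^{T}
  \exp\Big\{ \sum_{k} \theta_k \, f_k(y_t, y_{t-1}, x_t) \Big\},
\qquad
Z(x) = \sum_{y'} \prod_{t=1}^{T}
  \exp\Big\{ \sum_{k} \theta_k \, f_k(y'_t, y'_{t-1}, x_t) \Big\}.
```

where the feature functions f_k may depend on the current label, the previous label, and arbitrary features of the input, and Z(x) normalizes over all label sequences.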

5. The difference between generative models and CRFs is thus exactly analogous to the difference between the naive Bayes and logistic regression classifiers. Indeed, the multinomial logistic regression model can be seen as the simplest kind of CRF, in which there is only one output variable.
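
As a concrete (paraphrased) statement of that correspondence, the multinomial logistic regression model that the passage cites as (2.9) can be written, up to notation, as

```latex
% Multinomial logistic regression: a CRF with a single output variable y.
p(y \mid x) = \frac{1}{Z(x)} \exp\Big\{ \theta_y + \sum_{j=1}^{K} \theta_{y,j} x_j \Big\},
\qquad
Z(x) = \sum_{y'} \exp\Big\{ \theta_{y'} + \sum_{j=1}^{K} \theta_{y',j} x_j \Big\}.
```

i.e. a single output variable tied to the input features by one exponential-family factor.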

6. The insight of the graphical modeling perspective is that a distribution over very many variables can often be represented as a product of local functions that each depend on a much smaller subset of variables. This factorization turns out to have a close connection to certain conditional independence relationships among the variables; both types of information are easily summarized by a graph. Indeed, this relationship between factorization, conditional independence, and graph structure comprises much of the power of the graphical modeling framework: the conditional independence viewpoint is most useful for designing models, and the factorization viewpoint is most useful for designing inference algorithms.
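
The "product of local functions" here is the standard undirected (factor-graph) factorization; stated compactly for a set of factors F with local potentials Ψ_a:

```latex
% Factorization of a joint distribution into local functions \Psi_a,
% each depending only on the subset of variables x_a.
p(x) = \frac{1}{Z} \prod_{a \in F} \Psi_a(x_a),
\qquad
Z = \sum_{x} \prod_{a \in F} \Psi_a(x_a).
```

Each Ψ_a touches only a small set of variables forming a clique of the graph, and separation in the graph corresponds to conditional independence, which is the connection the passage describes.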

7. The principal advantage of discriminative modeling is that it is better suited to including rich, overlapping features.

8. In principle, it may not be clear why these approaches should be so different, because we can always convert between the two methods using Bayes' rule. For example, in the naive Bayes model, it is easy to convert the joint p(y)p(x|y) into a conditional distribution p(y|x). Indeed, this conditional has the same form as the logistic regression model (2.9). And if we managed to obtain a "true" generative model for the data, that is, a distribution p∗(y,x) = p∗(y)p∗(x|y) from which the data were actually sampled, then we could simply compute the true p∗(y|x), which is exactly the target of the discriminative approach. But it is precisely because we never have the true distribution that the two approaches are different in practice. Estimating p(y)p(x|y) first, and then computing the resulting p(y|x) (the generative approach) yields a different estimate than estimating p(y|x) directly. In other words, generative and discriminative models both have the aim of estimating p(y|x), but they get there in different ways.
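
To make the Bayes'-rule conversion concrete (written out here as an aid, following the naive Bayes factorization with x = (x_1, ..., x_K)):

```latex
% From the naive Bayes joint to the conditional distribution.
p(y \mid x) = \frac{p(y)\, p(x \mid y)}{\sum_{y'} p(y')\, p(x \mid y')}
            = \frac{p(y) \prod_{j=1}^{K} p(x_j \mid y)}
                   {\sum_{y'} p(y') \prod_{j=1}^{K} p(x_j \mid y')}.
```

Taking logarithms of the numerator gives a sum of terms, one per feature, which (for indicator features) is linear in x; that is why this conditional has the same form as the logistic regression model mentioned above.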
