Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood
Abstract
Bayesian networks are a powerful probabilistic representation, and their use for classification has received considerable attention. However, they tend to perform poorly when learned in the standard way. This is attributable to a mismatch between the objective function used (likelihood or a function thereof) and the goal of classification (maximizing accuracy or conditional likelihood). Unfortunately, the computational cost of optimizing structure and parameters for conditional likelihood is prohibitive. In this paper we show that a simple approximation – choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood – yields good results. On a large suite of benchmark datasets, this approach produces better class probability estimates than naïve Bayes, TAN, and generatively-trained Bayesian networks.
1. Introduction
The simplicity and surprisingly high accuracy of the naïve Bayes classifier have led to its wide use, and to many attempts to extend it. In particular, naïve Bayes is a special case of a Bayesian network, and learning the structure and parameters of an unrestricted Bayesian network would appear to be a logical means of improvement. However, Friedman et al. found that naïve Bayes easily outperforms such unrestricted Bayesian network classifiers on a large sample of benchmark datasets. Their explanation was that the scoring functions used in standard Bayesian network learning attempt to optimize the likelihood of the entire data, rather than just the conditional likelihood of the class given the attributes. Such scoring results in suboptimal choices during the search process whenever the two functions favor differing changes to the network. The natural solution would then be to use conditional likelihood as the objective function. Unfortunately, Friedman et al. observed that, while maximum-likelihood parameters can be computed efficiently in closed form, this is not true of conditional likelihood: the latter must be optimized using numerical methods, and doing so at each search step would be prohibitively expensive. Friedman et al. therefore abandoned this avenue, leaving the investigation of heuristic alternatives as an important direction for future research. In this paper, we show that the simple heuristic of setting the parameters by maximum likelihood while choosing the structure by conditional likelihood is both accurate and efficient.
Friedman et al. chose instead to extend naïve Bayes by allowing a slightly less restricted structure (one parent per variable in addition to the class) while still optimizing likelihood. They showed that TAN, the resulting algorithm, was indeed more accurate than naïve Bayes on benchmark datasets. We compare our algorithm to TAN and naïve Bayes on the same datasets, and show that it outperforms both in the accuracy of class probability estimates, while outperforming naïve Bayes and tying TAN in classification error.
2. Bayesian Networks
A Bayesian network encodes the joint probability distribution of a set of variables $X_1, \ldots, X_n$ as a directed acyclic graph together with a conditional probability table (CPT) for each variable given its parents in the graph. The joint distribution then factors as

$$P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{Pa}(X_i))$$

where $\mathrm{Pa}(X_i)$ denotes the set of parents of $X_i$.
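To make the factorization concrete, the following is a minimal sketch (ours, not the paper's) of evaluating the joint probability of a complete assignment from a network's CPTs; the data structures and names are assumptions for the example.

```python
import math

def log_joint(assignment, cpts, parents):
    """Return log P(x) = sum_i log P(x_i | pa_i) for a complete assignment.

    assignment -- dict: variable name -> value
    cpts       -- dict: variable name -> {(parent_values, value): probability}
    parents    -- dict: variable name -> list of parent variable names
    """
    total = 0.0
    for var, value in assignment.items():
        pa = tuple(assignment[p] for p in parents[var])  # parent configuration
        total += math.log(cpts[var][(pa, value)])
    return total
```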
2.1 Learning Bayesian Networks
Given an i.i.d. training set $D = \{x^{(1)}, \ldots, x^{(m)}\}$, where each example $x^{(d)} = (x_1^{(d)}, \ldots, x_n^{(d)})$ is a complete assignment to the variables, the goal of learning is to find the network $B$ that best represents the distribution from which $D$ was drawn. The standard generative criterion is the log likelihood of the training data,

$$\mathrm{LL}(B \mid D) = \sum_{d=1}^{m} \log P_B\big(x^{(d)}\big).$$
When the structure of the network is known, this reduces to estimating the parameters of the conditional probability tables, and the maximum-likelihood estimates are simply the observed frequencies in the training data: the estimate of $P(x_i \mid \mathrm{pa}_i)$ is the number of examples in which $X_i = x_i$ and its parents take the configuration $\mathrm{pa}_i$, divided by the number of examples in which the parents take that configuration. When the structure is unknown, it is typically found by searching over candidate structures with the log likelihood (or a function of it) as the scoring function.
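As an illustration, here is a minimal sketch of this frequency-counting estimate, using the same CPT representation as the sketch above; the toy data and names are our assumptions, not the paper's.

```python
from collections import Counter

def ml_cpt(data, child, parents):
    """Maximum-likelihood CPT: P(child | parents) as observed frequencies.

    data    -- list of dicts mapping variable name -> discrete value
    child   -- name of the child variable
    parents -- list of parent variable names (may be empty)
    """
    joint = Counter()   # counts of (parent configuration, child value)
    margin = Counter()  # counts of parent configuration alone
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[child])] += 1
        margin[pa] += 1
    return {(pa, x): n / margin[pa] for (pa, x), n in joint.items()}

# Toy example: estimate P(Play | Outlook) from three training examples.
data = [{"Outlook": "sunny", "Play": "no"},
        {"Outlook": "sunny", "Play": "yes"},
        {"Outlook": "rain",  "Play": "yes"}]
print(ml_cpt(data, "Play", ["Outlook"]))
# {(('sunny',), 'no'): 0.5, (('sunny',), 'yes'): 0.5, (('rain',), 'yes'): 1.0}
```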
Since adding an arc never decreases the likelihood on the training data, using the log likelihood as the scoring function can lead to severe overfitting. This problem can be overcome in a number of ways. The simplest one, which is often surprisingly effective, is to limit the number of parents a variable can have. Another alternative is to add a complexity penalty to the log likelihood. For example, the MDL method minimizes

$$\mathrm{MDL}(B \mid D) = \frac{\log m}{2}\,\mathrm{dim}(B) - \mathrm{LL}(B \mid D)$$

where $m$ is the number of training examples and $\mathrm{dim}(B)$ is the number of free parameters in the network. A third, Bayesian alternative selects the structure maximizing the Bayesian Dirichlet (BD) score

$$P(B, D) = P(B) \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(n'_{ij})}{\Gamma(n'_{ij} + n_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(n'_{ijk} + n_{ijk})}{\Gamma(n'_{ijk})}$$

where $P(B)$ is the prior probability of the structure $B$, $q_i$ is the number of configurations of the parents of $X_i$, $r_i$ is the number of values of $X_i$, $n_{ijk}$ is the number of training examples in which $X_i$ takes its $k$th value while its parents take their $j$th configuration, $n_{ij} = \sum_k n_{ijk}$, and the $n'_{ijk}$ (with $n'_{ij} = \sum_k n'_{ijk}$) are the corresponding Dirichlet prior counts.
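As a hedged sketch (ours, not the paper's) of how these scores are computed in practice, both are evaluated in log space; `math.lgamma` computes $\log \Gamma$, and the count arrays are assumed to be supplied by the caller.

```python
import math

def mdl_score(log_likelihood, num_params, num_examples):
    """MDL score to *minimize*: (log m / 2) * dim(B) - LL(B | D)."""
    return 0.5 * math.log(num_examples) * num_params - log_likelihood

def log_bd_family_score(counts, priors):
    """Log BD contribution of one variable X_i.

    counts[j][k] -- n_ijk: examples with X_i at value k, parents at config j
    priors[j][k] -- n'_ijk: the matching Dirichlet prior counts
    """
    total = 0.0
    for n_jk, np_jk in zip(counts, priors):
        n_j, np_j = sum(n_jk), sum(np_jk)
        total += math.lgamma(np_j) - math.lgamma(np_j + n_j)
        for n, np_ in zip(n_jk, np_jk):
            total += math.lgamma(np_ + n) - math.lgamma(np_)
    return total
```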
2.2 Bayesian Network Classifiers
The goal of classification is to correctly predict the value of a designated discrete class variable $Y$ given a vector of predictors or attributes $(X_1, \ldots, X_n)$.
The naïve Bayes classifier is a Bayesian network where the class has no parents and each attribute has the class as its sole parent. Friedman et al.'s TAN algorithm uses a variant of the Chow and Liu method to produce a network where each attribute has at most one other parent in addition to the class. More generally, a Bayesian network learned using any of the methods described above can be used as a classifier. All of these are generative models in the sense that they are learned by maximizing the log likelihood of the entire data being generated by the model, $\mathrm{LL}(B \mid D) = \sum_{d=1}^{m} \log P_B\big(y^{(d)}, x_1^{(d)}, \ldots, x_n^{(d)}\big)$.
However, what matters for classification is the conditional log likelihood of the class given the attributes,

$$\mathrm{CLL}(B \mid D) = \sum_{d=1}^{m} \log P_B\big(y^{(d)} \mid x_1^{(d)}, \ldots, x_n^{(d)}\big).$$

Maximizing this objective is a form of discriminative learning, because it focuses on correctly discriminating between classes. The problem with this approach is that, unlike $\mathrm{LL}(B \mid D)$, $\mathrm{CLL}(B \mid D)$ does not decompose into a separate term for each variable, and as a result there is no known closed form for the parameters that maximize it; they must instead be found by numerical optimization.
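The contrast between the two objectives can be written down directly. The sketch below is ours, under the assumption that the model exposes a log-joint function $\log P_B(y, x)$; all names are illustrative.

```python
import math

def ll_and_cll(log_joint, classes, data):
    """Compare the generative and discriminative objectives for one model.

    log_joint -- function (y, x) -> log P(y, x) under the model
    classes   -- iterable of all possible class values
    data      -- list of (y, x) pairs
    """
    ll = sum(log_joint(y, x) for y, x in data)
    cll = 0.0
    for y, x in data:
        # log P(y | x) = log P(y, x) - log sum_{y'} P(y', x)
        # (a log-sum-exp would be used for numerical stability in practice)
        log_marginal = math.log(sum(math.exp(log_joint(c, x)) for c in classes))
        cll += log_joint(y, x) - log_marginal
    return ll, cll
```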
3. The BNC Algorithm
We now introduce BNC, an algorithm for learning the structure of a Bayesian network classifier by maximizing conditional likelihood. BNC is similar to the hill climbing algorithm of Heckerman et al. except that it uses the conditional log likelihood of the class as the primary objective function. BNC starts from an empty network, and at each step considers adding each possible new arc (i.e., all those that do not create cycles) and deleting or reversing each current arc. BNC pre-discretizes continuous values and ignores missing values in the same way that TAN does.
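The search loop itself is short. Below is a minimal sketch of our reading of it; the structure representation and the helper callables (legal arc moves, closed-form maximum-likelihood fitting, and the CLL-based score) are assumptions passed in as parameters, not the paper's code.

```python
def bnc_search(initial_structure, legal_moves, fit_ml_parameters, score, data):
    """Greedy hill climbing over network structures.

    legal_moves(structure)      -- candidate structures reachable by adding,
                                   deleting, or reversing one arc (acyclic only)
    fit_ml_parameters(s, data)  -- closed-form ML parameters for structure s
    score(model, data)          -- (penalized) conditional log likelihood
    """
    structure = initial_structure  # BNC starts from the empty network
    best = score(fit_ml_parameters(structure, data), data)
    improved = True
    while improved:
        improved = False
        for candidate in legal_moves(structure):
            s = score(fit_ml_parameters(candidate, data), data)
            if s > best:
                structure, best, improved = candidate, s, True
    return structure
```

Because parameters are always refit by maximum likelihood, scoring a candidate never requires numerical optimization; only the conditional log likelihood is recomputed at each step.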
We consider two versions of BNC. The first, BNC-2P, controls overfitting in the simplest way described in Section 2.1, by allowing each variable at most two parents, and scores structures by the conditional log likelihood alone. The second version, BNC-MDL, instead scores structures by the conditional log likelihood minus the MDL complexity penalty. The goal of both versions is the same: to guide the structure search by conditional likelihood while remaining efficient, since the maximum-likelihood parameters of each candidate structure are available in closed form.
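Under this reading of the two variants, the only difference is the scoring function handed to the search sketched above; the `cll` and `num_params` helpers assumed here are hypothetical, as is the factoring into a builder function.

```python
import math

def make_scores(cll, num_params):
    """Build the two structure scores from assumed helpers:
    cll(model, data) -> conditional log likelihood of the class,
    num_params(model) -> number of free parameters in the network."""

    # BNC-2P: plain CLL; overfitting is controlled by restricting move
    # generation to structures with at most two parents per variable.
    def score_2p(model, data):
        return cll(model, data)

    # BNC-MDL: CLL minus the MDL complexity penalty from Section 2.1.
    def score_mdl(model, data):
        return cll(model, data) - 0.5 * math.log(len(data)) * num_params(model)

    return score_2p, score_mdl
```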