Abstract

Bayesian networks are a powerful probabilistic representation, and their use for classification has received considerable attention. However, they tend to perform poorly when learned in the standard way. This is attributable to a mismatch between the objective function used (likelihood or a function thereof) and the goal of classification (maximizing accuracy or conditional likelihood). Unfortunately, the computational cost of optimizing structure and parameters for conditional likelihood is prohibitive. In this paper we show that a simple approximation – choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood – yields good results. On a large suite of benchmark datasets, this approach produces better class probability estimates than naïve Bayes, TAN, and generatively-trained Bayesian networks.

1. Introduction

The simplicity and surprisingly high accuracy of the naïve Bayes classifier have led to its wide use, and to many attempts to extend it. In particular, naïve Bayes is a special case of a Bayesian network, and learning the structure and parameters of an unrestricted Bayesian network would appear to be a logical means of improvement. However, Friedman et al. found that naive Bayes easily outperforms such unrestricted Bayesian network classifiers on a large sample of benchmark datasets. Their explanation was that the scoring functions used in standard Bayesian network learning attempt to optimize the likelihood of the entire data, rather than just the conditional likelihood of the class given the attributes. Such scoring results in suboptimal choices during the search process whenever the two functions favor differing changes to the network. The natural solution would then be to use conditional likelihood as the objective function. Unfortunately, Friedman et al. observed that, while maximum likelihood parameters can be efficiently computed in closed form, this is not true of conditional likelihood. The latter must be optimized using numerical methods, and doing so at each search step would be prohibitively expensive. Friedman et al. thus abandoned this avenue, leaving the investigation of possible heuristic alternatives to it as an important direction for future research. In this paper, we show that the simple heuristic of setting the parameters by maximum likelihood while choosing the structure by conditional likelihood is accurate and efficient.

Friedman et al. chose instead to extend naive Bayes by allowing a slightly less restricted structure (one parent per variable in addition to the class) while still optimizing likelihood. They showed that TAN, the resulting algorithm, was indeed more accurate than naive Bayes on benchmark datasets. We compare our algorithm to TAN and naive Bayes on the same datasets, and show that it outperforms both in the accuracy of class probability estimates, while outperforming naive Bayes and tying TAN in classification error.

2. Bayesian Networks

A Bayesian network encodes the joint probability distribution of a set of variables as a directed acyclic graph together with a conditional probability table (CPT) for each variable, giving its distribution conditioned on its parents in the graph. The joint probability of a full assignment to the variables is the product, over variables, of each variable's CPT entry for that assignment.
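
To make the factorization concrete, the following minimal sketch (the toy variables and probabilities are invented for illustration, not taken from the paper) represents each variable by its parents and CPT, and computes a joint probability as the product of one conditional probability per variable.

    # Each variable maps to (parents, cpt), where cpt maps
    # (parent-value tuple, value) -> P(value | parent values).
    network = {
        "cloudy":    ((), {((), True): 0.5, ((), False): 0.5}),
        "sprinkler": (("cloudy",),
                      {((True,), True): 0.1, ((True,), False): 0.9,
                       ((False,), True): 0.5, ((False,), False): 0.5}),
        "wet_grass": (("sprinkler",),
                      {((True,), True): 0.9, ((True,), False): 0.1,
                       ((False,), True): 0.2, ((False,), False): 0.8}),
    }

    def joint_probability(network, assignment):
        """P(x) = product over variables of P(x_i | parents(x_i))."""
        prob = 1.0
        for var, (parents, cpt) in network.items():
            parent_vals = tuple(assignment[p] for p in parents)
            prob *= cpt[(parent_vals, assignment[var])]
        return prob

    print(joint_probability(network,
          {"cloudy": True, "sprinkler": False, "wet_grass": True}))
    # 0.5 * 0.9 * 0.2 = 0.09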

2.1 Learning Bayesian Networks

Given an i.i.d. training set D of m examples, the standard goal of learning is to find the network B that maximizes the likelihood of the data, or equivalently its logarithm, LL(B|D) = Σ_d log P_B(x_d), where P_B is the distribution encoded by B.

When the structure of the network is known, this reduces to estimating the CPT parameters, and the maximum-likelihood estimates have a closed form: each entry is simply the observed frequency of the variable's value among the training examples in which its parents take the corresponding configuration.
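
A small sketch of this closed-form estimation, on made-up data: each CPT entry is just the frequency of a variable's value among the training cases in which its parents take the corresponding configuration.

    from collections import Counter

    def ml_cpt(data, var, parents):
        """Maximum-likelihood CPT: P(var = v | parents = u) = n_uv / n_u."""
        joint = Counter()      # counts of (parent configuration, value of var)
        marginal = Counter()   # counts of parent configuration alone
        for row in data:
            u = tuple(row[p] for p in parents)
            joint[(u, row[var])] += 1
            marginal[u] += 1
        return {(u, v): joint[(u, v)] / marginal[u] for (u, v) in joint}

    # Toy training set (invented): each row maps variable name -> value.
    data = [
        {"class": 1, "x1": 0}, {"class": 1, "x1": 1},
        {"class": 1, "x1": 1}, {"class": 0, "x1": 0},
    ]
    print(ml_cpt(data, "x1", ("class",)))
    # {((1,), 0): 0.333..., ((1,), 1): 0.666..., ((0,), 0): 1.0}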

Since on average adding an arc never decreases likelihood on the training data, using the log likelihood as the scoring function can lead to severe overfitting. This problem can be overcome in a number of ways. The simplest one, which is often surprisingly effective, is to limit the number of parents a variable can have. Another alternative is to add a complexity penalty to the log-likelihood; for example, the MDL method minimizes MDL(B|D) = (log m / 2)|B| - LL(B|D), where |B| is the number of parameters in the network. In both of these approaches, the parameters of each candidate network are still set by maximum likelihood. Finally, the full Bayesian approach maximizes the Bayesian Dirichlet (BD) score:

P(B_S, D) = P(B_S) ∏_i ∏_j [Γ(n'_ij) / Γ(n'_ij + n_ij)] ∏_k [Γ(n'_ijk + n_ijk) / Γ(n'_ijk)]

where B_S is the network structure, n_ijk is the number of training examples in which variable x_i takes its k-th value while its parents take their j-th configuration, n_ij = Σ_k n_ijk, and the primed quantities are the corresponding counts of the Dirichlet prior.
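
For illustration, the MDL score above is trivial to compute once the maximum-likelihood log likelihood and the parameter count of a candidate network are known; the numbers below are invented.

    import math

    def mdl_score(log_likelihood, num_parameters, num_examples):
        """MDL(B|D) = (log m / 2) * |B| - LL(B|D); lower is better."""
        return 0.5 * math.log(num_examples) * num_parameters - log_likelihood

    # e.g. a network with 17 free parameters and LL = -230.4 on m = 500 examples
    print(mdl_score(-230.4, 17, 500))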

2.2 Bayesian Network Classifiers

The goal of classification is to correctly predict the value of a designated discrete class variable given a vector of predictors or attributes. In particular, the naïve Bayes classifier is a Bayesian network where the class has no parents and each attribute has the class as its sole parent. Friedman et al.'s TAN algorithm uses a variant of the Chow and Liu method to produce a network where each variable has one other parent in addition to the class. More generally, a Bayesian network learned using any of the methods described above can be used as a classifier. All of these are generative models in the sense that they are learned by maximizing the log likelihood of the entire data being generated by the model, LL(B|D). For classification, however, the quantity we actually care about is the conditional log likelihood of the class given the attributes, CLL(B|D) = Σ_d log P_B(y_d | x_d), where y_d is the class of the d-th example and x_d its attribute vector.
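
The difference between the two objectives is easy to state in code. The sketch below assumes a generic model object with a joint_probability method (a hypothetical helper, not the paper's notation): the generative score sums log P(y, x) over training cases, while the conditional score normalizes over the class values to obtain log P(y | x).

    import math

    def log_likelihood(model, data):
        """Generative objective: sum over examples of log P(y, x)."""
        return sum(math.log(model.joint_probability(row)) for row in data)

    def conditional_log_likelihood(model, data, class_var, class_values):
        """Discriminative objective: sum over examples of log P(y | x)."""
        total = 0.0
        for row in data:
            joint = model.joint_probability(row)
            # Normalize over every possible class value to get P(y | x).
            evidence = sum(model.joint_probability({**row, class_var: y})
                           for y in class_values)
            total += math.log(joint / evidence)
        return total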

Maximizing CLL(B|D) amounts to discriminative learning, because it focuses on correctly discriminating between classes. The problem with this approach is that, unlike the log likelihood, the conditional log likelihood has no known closed-form maximum-likelihood parameter estimates; as noted in the introduction, the parameters must instead be set by numerical optimization, which is far too expensive to repeat at every step of a structure search.
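
To see why numerical methods are needed, consider even the fixed naive Bayes structure: maximizing the conditional log likelihood over the CPT entries has no closed-form solution, so one typically reparameterizes the probabilities (here with a softmax) and climbs the objective iteratively. This is a generic illustration of that cost on invented data, not the procedure used in the paper, which avoids it by keeping maximum-likelihood parameters.

    import numpy as np

    # Toy data: two binary attributes and a binary class (all invented).
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(40, 2))
    y = np.where(rng.random(40) < 0.9, X[:, 0], 1 - X[:, 0])

    def softmax(a, axis=-1):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def cll(theta):
        """Conditional log likelihood of a naive-Bayes-structured network whose
        class prior and CPTs are softmaxes of the unconstrained vector theta."""
        prior = softmax(theta[:2])                   # P(y)
        cpts = softmax(theta[2:].reshape(2, 2, 2))   # P(x_j = v | y = c)
        total = 0.0
        for xi, yi in zip(X, y):
            log_joint = np.log(prior) + sum(np.log(cpts[j, :, xi[j]])
                                            for j in range(2))
            total += log_joint[yi] - np.logaddexp.reduce(log_joint)
        return total

    # No closed form for the maximizer, so climb the objective iteratively
    # (finite-difference gradients keep the sketch short; real code would
    # use analytic gradients or conjugate gradient).
    theta = np.zeros(10)
    for _ in range(500):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = 1e-5
            grad[i] = (cll(theta + step) - cll(theta - step)) / 2e-5
        theta += 0.02 * grad

    print(cll(theta))   # improves on the value at theta = 0 (uniform CPTs)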

3. The BNC Algorithm

We now introduce BNC, an algorithm for learning the structure of a Bayesian network classifier by maximizing conditional likelihood. BNC is similar to the hill climbing algorithm of Heckerman et al. except that it uses the conditional log likelihood of the class as the primary objective function. BNC starts from an empty network, and at each step considers adding each possible new arc (i.e., all those that do not create cycles) and deleting or reversing each current arc. BNC pre-discretizes continuous values and ignores missing values in the same way that TAN does.
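
A structural sketch of this search loop follows. The candidate moves (arc addition, deletion, and reversal, each checked for acyclicity) and the greedy acceptance rule are as described above; the scoring function, which would fit maximum-likelihood parameters and return the conditional log likelihood, is left as a hypothetical callable supplied by the caller.

    from itertools import permutations

    def creates_cycle(edges, new_edge):
        """True iff adding new_edge = (u, v) would create a cycle,
        i.e. iff u is already reachable from v along existing edges."""
        u, v = new_edge
        stack, seen = [v], set()
        while stack:
            node = stack.pop()
            if node == u:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(child for (parent, child) in edges if parent == node)
        return False

    def neighbors(edges, variables):
        """All structures reachable by one arc addition, deletion, or reversal."""
        result = []
        for u, v in permutations(variables, 2):
            if (u, v) not in edges and not creates_cycle(edges, (u, v)):
                result.append(edges | {(u, v)})           # add an arc
        for u, v in edges:
            reduced = edges - {(u, v)}
            result.append(reduced)                        # delete an arc
            if not creates_cycle(reduced, (v, u)):
                result.append(reduced | {(v, u)})         # reverse an arc
        return result

    def bnc_search(variables, score):
        """Greedy hill climbing from the empty network. `score(edges)` is a
        hypothetical callable that fits maximum-likelihood parameters to the
        structure and returns the conditional log likelihood on the data."""
        current = frozenset()
        current_score = score(current)
        while True:
            best, best_score = current, current_score
            for cand in neighbors(current, variables):
                cand = frozenset(cand)
                s = score(cand)
                if s > best_score:
                    best, best_score = cand, s
            if best == current:                           # local optimum reached
                return current, current_score
            current, current_score = best, best_score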

We consider two versions of BNC. The first avoids overfitting by limiting the number of parents each variable may have, and uses the conditional log likelihood itself as the scoring function.

The second version instead adds an MDL-style complexity penalty to the conditional log likelihood, analogous to the penalty described in Section 2.1.

The goal of BNC in both cases is to produce accurate class probability estimates while keeping learning as cheap as standard generative structure search: only the scoring function changes, and the parameters of every candidate network are still set by maximum likelihood in closed form.
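
Under this reading, the two variants differ only in how the search is kept from overfitting. A hedged sketch of the two scoring rules, with the conditional log likelihood and the parameter count left as hypothetical helpers supplied by the caller:

    import math

    def score_with_parent_limit(edges, cll, max_parents):
        """First variant: plain conditional log likelihood, rejecting any
        structure in which some variable has more than max_parents parents."""
        parent_count = {}
        for u, v in edges:
            parent_count[v] = parent_count.get(v, 0) + 1
        if any(count > max_parents for count in parent_count.values()):
            return float("-inf")
        return cll(edges)

    def score_with_mdl_penalty(edges, cll, num_parameters, num_examples):
        """Second variant: conditional log likelihood minus an MDL-style
        complexity penalty, mirroring the penalty of Section 2.1."""
        return cll(edges) - 0.5 * math.log(num_examples) * num_parameters(edges)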
