Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood
Abstract
Bayesian networks are a powerful probabilistic representation, and their use for classification has received considerable attention. However, they tend to perform poorly when learned in the standard way. This is attributable to a mismatch between the objective function used (likelihood or a function thereof) and the goal of classification (maximizing accuracy or conditional likelihood). Unfortunately, the computational cost of optimizing structure and parameters for conditional likelihood is prohibitive. In this paper we show that a simple approximation – choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood – yields good results. On a large suite of benchmark datasets, this approach produces better class probability estimates than naïve Bayes, TAN, and generatively trained Bayesian networks.
1. Introduction
The simplicity and surprisingly high accuracy of the naïve Bayes classifier have led to its wide use, and to many attempts to extend it. In particular, naïve Bayes is a special case of a Bayesian network, and learning the structure and parameters of an unrestricted Bayesian network would appear to be a logical means of improvement. However, Friedman et al. found that naïve Bayes easily outperforms such unrestricted Bayesian network classifiers on a large sample of benchmark datasets. Their explanation was that the scoring functions used in standard Bayesian network learning attempt to optimize the likelihood of the entire data, rather than just the conditional likelihood of the class given the attributes. Such scoring results in suboptimal choices during the search process whenever the two functions favor differing changes to the network. The natural solution would then be to use conditional likelihood as the objective function. Unfortunately, Friedman et al. observed that, while maximum-likelihood parameters can be computed efficiently in closed form, this is not true of maximum-conditional-likelihood parameters. The latter must be found by numerical optimization, and doing so at each search step would be prohibitively expensive. Friedman et al. thus abandoned this avenue, leaving the investigation of heuristic alternatives to it as an important direction for future research. In this paper, we show that the simple heuristic of setting the parameters by maximum likelihood while choosing the structure by conditional likelihood is both accurate and efficient.
Friedman et al. chose instead to extend naïve Bayes by allowing a slightly less restricted structure (one parent per variable in addition to the class) while still optimizing likelihood. They showed that TAN, the resulting algorithm, was indeed more accurate than naïve Bayes on benchmark datasets. We compare our algorithm to TAN and naïve Bayes on the same datasets, and show that it outperforms both in the accuracy of class probability estimates, while outperforming naïve Bayes and tying TAN in classification error.
2. Bayesian Networks
A Bayesian network encodes the joint probability distribution of a set of variables, {X_1, …, X_d}, as a directed acyclic graph together with a set of conditional probability tables (CPTs). Each node of the graph represents a variable, and each variable's CPT gives its distribution conditioned on each configuration of its parents, so the joint distribution factors as P(X_1, …, X_d) = ∏_i P(X_i | Pa(X_i)), where Pa(X_i) denotes the parents of X_i in the graph.
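To make the factorization concrete, here is a minimal Python sketch that computes a joint probability from a toy two-node network; the structure, variable names, and CPT values are invented for illustration only:

```python
# Illustrates the factorization P(X1, ..., Xd) = prod_i P(Xi | Pa(Xi)).
# Structure: Rain -> WetGrass (Rain has no parents). All numbers are made up.
cpt_rain = {True: 0.2, False: 0.8}                 # P(Rain)
cpt_wet = {                                        # P(WetGrass | Rain)
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.15, False: 0.85},
}

def joint(rain: bool, wet: bool) -> float:
    """Joint probability via the chain-rule factorization over the DAG."""
    return cpt_rain[rain] * cpt_wet[rain][wet]

print(joint(True, True))   # P(Rain=T, WetGrass=T) = 0.2 * 0.9 = 0.18
```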
2.1 Learning Bayesian Networks
Given an i.i.d. training set D = {x_1, …, x_N}, where each x_i = (x_i1, …, x_id) is a complete assignment to the variables, the goal of learning is to find the network that best fits D. The standard measure of fit is the log likelihood, LL(B|D) = Σ_{i=1..N} log P_B(x_i), where P_B is the distribution represented by the network B.
When the structure of the network is known, learning reduces to estimating the CPT parameters, and the maximum-likelihood estimates are simply observed frequencies in the training data: the estimate of the probability that X_i takes its k-th value given that its parents are in their j-th configuration is N_ijk / N_ij, where N_ijk is the number of training instances with that joint configuration and N_ij = Σ_k N_ijk.
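As an illustration, the sketch below estimates one CPT by frequency counting; the data and value names are invented for the example:

```python
from collections import Counter

# Toy dataset: each row is (parent_config, child_value). Values are invented.
data = [("a", 0), ("a", 1), ("a", 0), ("b", 1), ("b", 1), ("b", 0)]

n_jk = Counter(data)                 # N_jk: child value k under parent config j
n_j = Counter(j for j, _ in data)    # N_j: total count for parent config j

# Maximum-likelihood CPT entry: P(child = k | parents = j) = N_jk / N_j.
cpt = {(j, k): n_jk[(j, k)] / n_j[j] for (j, k) in n_jk}

print(cpt[("a", 0)])  # 2/3: value 0 seen twice in the three rows with parent "a"
```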
Since adding an arc never decreases likelihood on the training data, using the log likelihood as the scoring function can lead to severe overfitting. This problem can be overcome in a number of ways. The simplest one, which is often surprisingly effective, is to limit the number of parents a variable can have. Another alternative is to add a complexity penalty to the log likelihood. For example, the MDL method minimizes MDL(B|D) = (m/2) log N − LL(B|D), where m is the number of parameters in the network. A more fully Bayesian alternative maximizes the Bayesian Dirichlet (BD) score:

P(B, D) = P(B) ∏_i ∏_j [Γ(N'_ij) / Γ(N'_ij + N_ij)] ∏_k [Γ(N'_ijk + N_ijk) / Γ(N'_ijk)]

where P(B) is the prior probability of the structure B, Γ is the gamma function, j ranges over the configurations of X_i's parents, k ranges over the values of X_i, N_ijk and N_ij are the counts defined above, and the N'_ijk (with N'_ij = Σ_k N'_ijk) are the hyperparameters of a Dirichlet prior over the parameters.
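A minimal sketch of how one variable's contribution to the BD score can be computed numerically, using log-gamma for stability; the counts and the all-ones (K2-style) prior are chosen only for illustration:

```python
import numpy as np
from scipy.special import gammaln  # log Gamma, avoids overflow of Gamma itself

def bd_family_log_score(counts: np.ndarray, prior: np.ndarray) -> float:
    """Log BD score contribution of one variable.

    counts[j, k] = N_ijk: instances with parent config j and child value k.
    prior[j, k]  = N'_ijk: Dirichlet hyperparameters (e.g., all ones).
    """
    n_ij = counts.sum(axis=1)          # N_ij  = sum_k N_ijk
    n_prime_ij = prior.sum(axis=1)     # N'_ij = sum_k N'_ijk
    score = np.sum(gammaln(n_prime_ij) - gammaln(n_prime_ij + n_ij))
    score += np.sum(gammaln(prior + counts) - gammaln(prior))
    return float(score)

# Invented counts for a binary child with two parent configurations.
counts = np.array([[8.0, 2.0], [3.0, 7.0]])
print(bd_family_log_score(counts, np.ones_like(counts)))
```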
2.2 Bayesian Network Classifiers
The goal of classification is to correctly predict the value of a designated discrete class variable Y given a vector of predictors or attributes (X_1, …, X_d).
The naïve Bayes classifier is a Bayesian network where the class has no parents and each attribute has the class as its sole parent. Friedman et al.'s TAN algorithm uses a variant of the Chow and Liu method to produce a network where each attribute has one other attribute as a parent in addition to the class. More generally, a Bayesian network learned using any of the methods described above can be used as a classifier. All of these are generative models in the sense that they are learned by maximizing the log likelihood of the entire data being generated by the model, LL(B|D). However, for classification what matters is not the likelihood of the attributes but the conditional log likelihood of the class given the attributes,

CLL(B|D) = Σ_{i=1..N} log P_B(y_i | x_i1, …, x_id).

Maximizing CLL(B|D) is a form of discriminative learning, because it focuses the learning on correctly discriminating between classes. The problem with this approach is that, unlike LL(B|D), CLL(B|D) does not decompose into a separate term for each variable, and no closed form is known for the parameters that maximize it; they must instead be found by numerical optimization.
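The distinction can be made concrete with a naïve Bayes model: LL sums the log joint probability of class and attributes, while CLL sums only the log class posterior. A minimal sketch, with a toy model and data invented for the example:

```python
import math

# Toy naive Bayes over one binary attribute X and binary class Y (invented numbers).
p_y = {0: 0.5, 1: 0.5}                       # P(Y)
p_x_given_y = {0: {0: 0.8, 1: 0.2},          # P(X | Y)
               1: {0: 0.3, 1: 0.7}}

def log_joint(y, x):
    """log P(y, x): the quantity generative (LL) training sums over."""
    return math.log(p_y[y]) + math.log(p_x_given_y[y][x])

def log_conditional(y, x):
    """log P(y | x): the quantity discriminative (CLL) training sums over."""
    z = sum(math.exp(log_joint(yp, x)) for yp in p_y)   # P(x), marginalizing Y
    return log_joint(y, x) - math.log(z)

data = [(0, 0), (1, 1), (1, 0)]              # (y, x) pairs, invented
ll = sum(log_joint(y, x) for y, x in data)
cll = sum(log_conditional(y, x) for y, x in data)
print(ll, cll)  # the two objectives generally rank models differently
```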
3. The BNC Algorithm
We now introduce BNC, an algorithm for learning the structure of a Bayesian network classifier by maximizing conditional likelihood. BNC is similar to the hill climbing algorithm of Heckerman et al. except that it uses the conditional log likelihood of the class as the primary objective function. BNC starts from an empty network, and at each step considers adding each possible new arc (i.e., all those that do not create cycles) and deleting or reversing each current arc. BNC pre-discretizes continuous values and ignores missing values in the same way that TAN does.
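A high-level sketch of this search loop follows; the helper functions `candidate_moves`, `apply_move`, `fit_ml_params`, and `cll` are hypothetical stand-ins for the details, so this illustrates the strategy rather than reproducing the paper's implementation:

```python
def learn_bnc(data, empty_network):
    """Greedy hill climbing: parameters by ML, structure scored by CLL.

    Hypothetical helpers (not from the paper):
      candidate_moves(net) -> arc additions/deletions/reversals that keep a DAG
      apply_move(net, mv)  -> new structure with the move applied
      fit_ml_params(...)   -> closed-form maximum-likelihood CPTs (Sec. 2.1)
      cll(...)             -> conditional log likelihood of the class (Sec. 2.2)
    """
    network = empty_network
    best_score = cll(fit_ml_params(network, data), data)
    while True:
        improved = False
        for move in candidate_moves(network):          # add, delete, or reverse an arc
            candidate = apply_move(network, move)
            score = cll(fit_ml_params(candidate, data), data)
            if score > best_score:                     # greedy: keep the best so far
                network, best_score, improved = candidate, score, True
        if not improved:                               # local optimum reached
            return network
```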
We consider two versions of BNC. The first, BNC-2P, guards against overfitting in the simplest way described in Section 2.1: it limits each variable to a maximum of two parents, and subject to that constraint chooses the structure with the highest conditional log likelihood.
The second version, BNC-MDL, instead adapts the MDL penalty of Section 2.1 to the discriminative setting, scoring each candidate structure by its conditional log likelihood minus (m/2) log N, where m is again the number of parameters in the network.
The goal of both versions is the one set out in the introduction: to capture the benefits of discriminative structure selection while remaining efficient, since the maximum-likelihood parameters of every candidate structure are available in closed form.
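A minimal sketch of such a conditional MDL score, assuming the penalized form given above (the function name and signature are illustrative, not the paper's):

```python
import math

def cmdl(network_cll: float, num_params: int, num_instances: int) -> float:
    """Conditional MDL score under the assumed form: CLL minus a complexity
    penalty of (m/2) log N. Higher is better, mirroring the CLL objective."""
    return network_cll - (num_params / 2.0) * math.log(num_instances)
```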