What's naive Bayes

In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.

Naive Bayes is a popular (baseline) method for text categorization, the problem of judging documents as belonging to one category or the other (such as spam or legitimate, sports or politics, etc.) with word frequencies as the features. With appropriate preprocessing, it is competitive in this domain with more advanced methods including support vector machines.
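To make "word frequencies as the features" concrete, here is a minimal sketch; the whitespace tokenization and the toy message are assumptions made purely for illustration, not part of any particular library:

```python
from collections import Counter

def word_frequencies(document: str) -> Counter:
    """Turn a document into a bag-of-words feature vector (word -> count)."""
    tokens = document.lower().split()   # naive whitespace tokenization
    return Counter(tokens)

# A toy spam-like message as input.
print(word_frequencies("win money now win a prize"))
# Counter({'win': 2, 'money': 1, 'now': 1, 'a': 1, 'prize': 1})
```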

In simple terms, a naive Bayes classifier assumes that the value of a particular feature is unrelated to the presence or absence of any other feature, given the class variable. 

An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because independent variables are assumed, only the variances of the variables for each class need to be determined and not the entire covariance matrix.

Abstractly, the probability model for a classifier is a conditional model

$p(C \vert F_1,\dots,F_n)\,$
over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables $F_1$ through $F_n$. The problem is that if the number of features n is large or if a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable.

Using Bayes' theorem, this can be written

$p(C \vert F_1,\dots,F_n) = \frac{p(C) \ p(F_1,\dots,F_n\vert C)}{p(F_1,\dots,F_n)}. \,$
In plain English, using Bayesian probability terminology, the above equation can be written as

$\mbox{posterior} = \frac{\mbox{prior} \times \mbox{likelihood}}{\mbox{evidence}}. \,$

In practice, only the numerator of that fraction matters, because the denominator does not depend on $C$ and the values of the features $F_i$ are given, so the denominator is effectively constant. The numerator is equivalent to the joint probability model $p(C, F_1, \dots, F_n)$, which can be rewritten using the chain rule of conditional probability:

$\begin{align}
p(C, F_1, \dots, F_n) & = p(C) \ p(F_1,\dots,F_n\vert C) \\
& = p(C) \ p(F_1\vert C) \ p(F_2,\dots,F_n\vert C, F_1) \\
& = p(C) \ p(F_1\vert C) \ p(F_2\vert C, F_1) \ p(F_3,\dots,F_n\vert C, F_1, F_2) \\
& = p(C) \ p(F_1\vert C) \ p(F_2\vert C, F_1) \ p(F_3\vert C, F_1, F_2) \ p(F_4,\dots,F_n\vert C, F_1, F_2, F_3) \\
& = p(C) \ p(F_1\vert C) \ p(F_2\vert C, F_1) \ \dots p(F_n\vert C, F_1, F_2, F_3,\dots,F_{n-1})
\end{align}$

Now the "naive" conditional independence assumptions come into play: assume that each feature $F_i$ is conditionally independent of every other feature $F_j$ for $j\neq i$ given the category C. This means that

$p(F_i \vert C, F_j) = p(F_i \vert C)\,,
p(F_i \vert C, F_j,F_k) = p(F_i \vert C)\,,
p(F_i \vert C, F_j,F_k,F_l) = p(F_i \vert C)\,,$
and so on, for $i\ne j,k,l$. Thus, the joint model can be expressed as

$\begin{align}
p(C \vert F_1, \dots, F_n) & \varpropto p(C, F_1, \dots, F_n) \\
& \varpropto p(C) \ p(F_1\vert C) \ p(F_2\vert C) \ p(F_3\vert C) \ \cdots \\
& \varpropto p(C) \prod_{i=1}^n p(F_i \vert C)\,.
\end{align}$
This means that under the above independence assumptions, the conditional distribution over the class variable C is:

$p(C \vert F_1,\dots,F_n) = \frac{1}{Z} p(C) \prod_{i=1}^n p(F_i \vert C)$
where the evidence $Z = p(F_1, \dots, F_n)$ is a scaling factor dependent only on $F_1,\dots,F_n$, that is, a constant if the values of the feature variables are known.
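To make the role of $Z$ concrete, here is a minimal sketch that normalizes the unnormalized products $p(C)\prod_i p(F_i \vert C)$ into a posterior distribution; the class names and scores are made-up toy numbers, not estimates from any dataset:

```python
# Toy unnormalized scores p(C) * prod_i p(F_i | C) for two hypothetical classes.
unnormalized = {"spam": 0.012, "ham": 0.004}

Z = sum(unnormalized.values())   # the evidence p(F_1, ..., F_n)
posterior = {c: score / Z for c, score in unnormalized.items()}

print(posterior)   # e.g. {'spam': 0.75, 'ham': 0.25}, up to floating-point rounding
```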

One common rule is to pick the hypothesis that is most probable; this is known as the maximum a posteriori or MAP decision rule. The corresponding classifier, a Bayes classifier, is the function $\mathrm{classify}$ defined as follows:

$\mathrm{classify}(f_1,\dots,f_n) = \underset{c}{\operatorname{argmax}} \ p(C=c) \displaystyle\prod_{i=1}^n p(F_i=f_i\vert C=c).$
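A direct translation of this decision rule into code might look like the following sketch. It works in log space to avoid numerical underflow when many small probabilities are multiplied, and it assumes the priors and likelihoods have already been estimated and are strictly positive; the dictionary layout is an assumption made for the example:

```python
import math

def classify(features, priors, likelihoods):
    """MAP decision rule: argmax over c of p(C=c) * prod_i p(F_i=f_i | C=c).

    priors:      dict mapping class c -> p(C=c)
    likelihoods: dict mapping class c -> dict mapping (i, value) -> p(F_i=value | C=c)
    All probabilities are assumed strictly positive so the logarithms are defined.
    """
    best_class, best_score = None, float("-inf")
    for c, prior in priors.items():
        # Summing logs is equivalent to taking the product, but avoids underflow.
        score = math.log(prior) + sum(
            math.log(likelihoods[c][(i, value)]) for i, value in enumerate(features)
        )
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```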

All model parameters (i.e., class priors and feature probability distributions) can be approximated with relative frequencies from the training set. These are maximum likelihood estimates of the probabilities. A class's prior may be calculated by assuming equiprobable classes (i.e., priors = 1 / (number of classes)), or by calculating an estimate for the class probability from the training set (i.e., (prior for a given class) = (number of samples in the class) / (total number of samples)). To estimate the parameters for a feature's distribution, one must assume a distribution or generate nonparametric models for the features from the training set.
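A minimal sketch of this relative-frequency (maximum likelihood) estimation for discrete features follows; the layout of the labeled samples is an assumption for the example, and no smoothing is applied:

```python
from collections import Counter, defaultdict

def estimate_parameters(samples):
    """Estimate class priors and per-feature likelihoods by relative frequency.

    samples: list of (features, label) pairs, where features is a tuple of
             discrete feature values.
    """
    class_counts = Counter(label for _, label in samples)
    total = len(samples)
    priors = {c: n / total for c, n in class_counts.items()}

    # feature_counts[c][(i, value)] = number of class-c samples with F_i = value
    feature_counts = defaultdict(Counter)
    for features, label in samples:
        for i, value in enumerate(features):
            feature_counts[label][(i, value)] += 1

    likelihoods = {
        c: {key: n / class_counts[c] for key, n in counts.items()}
        for c, counts in feature_counts.items()
    }
    return priors, likelihoods
```

The returned priors and likelihoods use the same (feature index, value) keys as the classify sketch above, so the two can be used together.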

Algorithm

1. Compute the class priors and the evidence term, $p(C)$ and $Z = p(F_1, \dots, F_n)$;

2. Assume a probability distribution for each feature, giving the likelihoods $p(F_i \vert C)$ (see the sketch after this list for the common Gaussian choice);

When dealing with continuous data, a typical assumption is that the continuous values associated with each class are distributed according to a Gaussian distribution.

Another common technique for handling continuous values is to use binning to discretize the feature values, to obtain a new set of Bernoulli-distributed features.

In general, the distribution method is a better choice if there is a small amount of training data, or if the precise distribution of the data is known. The discretization method tends to do better if there is a large amount of training data because it will learn to fit the distribution of the data. Since naive Bayes is typically used when a large amount of data is available (as more computationally expensive models can generally achieve better accuracy), the discretization method is generally preferred over the distribution method.

3. Compute the posterior probability of each class and pick the class with the highest probability.
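For continuous features under the Gaussian assumption mentioned in step 2, a minimal sketch of the per-class likelihood is shown below; the per-class mean and variance are assumed to have been estimated from the training set (e.g. as sample means and variances), and the numbers are toy values:

```python
import math

def gaussian_likelihood(x, mean, variance):
    """p(F_i = x | C) under a Gaussian assumption for a continuous feature."""
    return math.exp(-(x - mean) ** 2 / (2 * variance)) / math.sqrt(2 * math.pi * variance)

# Toy example: feature value 1.8 under a class whose feature has mean 1.7, variance 0.01.
print(gaussian_likelihood(1.8, mean=1.7, variance=0.01))   # ~2.42 (a density, so it can exceed 1)
```

For the binning approach mentioned earlier, one would instead discretize the feature value into an interval and use relative frequencies per interval, exactly as in the discrete case.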
