We consider two types of inference:

  1. finding the most likely state of the world consistent with some evidence;
  2. computing arbitrary conditional probabilities.

We then discuss two approaches to making inference more tractable on large, relational problems:

  1. lazy inference, in which only the groundings that deviate from a "default" value need to be instantiated;
  2. lifted inference, in which we group indistinguishable atoms together and treat them as a single unit during inference.

3.1 Inferring the most probable explanation

A basic inference task is finding the most probable state of the world $y$ given some evidence $x$, where $x$ is a set of literals.

Formula

For Markov logic, this is formally defined as follows:  $$ \begin{align} \arg \max_y P(y|x) & = \arg \max_y \frac{1}{Z_x} \exp \left( \sum_i w_i n_i(x, y) \right) \notag \\  & = \arg \max_y \sum_i w_i n_i(x, y) \tag{3.1} \end{align}  $$ (The second step holds because $Z_x$ does not depend on $y$ and $\exp$ is monotonic.)

$n_i(x, y)$: the number of true groundings of clause $i$.

The problem thus reduces to finding the truth assignment that maximizes the sum of the weights of satisfied clauses.

Method

MaxWalkSAT:

The algorithm repeatedly picks an unsatisfied clause at random and flips the truth value of one of the atoms in it.

With a certain probability, the atom is chosen randomly;

otherwise, the atom is chosen to maximize the sum of satisfied clause weights when flipped.

DeltaCost($v$) computes the change in the sum of weights of unsatisfied clauses that results from flipping variable $v$ in the current solution.

Uniform(0,1) returns a uniform deviate from the interval [0,1].
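To make the loop concrete, here is a minimal Python sketch of MaxWalkSAT. The clause encoding (signed integer atom ids, negative = negated literal), the `max_flips` and `p_random` parameters, and the naive recomputation of DeltaCost are illustrative assumptions, not part of the algorithm's original statement:

```python
import random

def delta_cost(a, clauses, weights, v):
    """Change in the sum of weights of unsatisfied clauses if atom v is flipped."""
    def unsat_weight(assign):
        return sum(w for c, w in zip(clauses, weights)
                   if not any((lit > 0) == assign[abs(lit)] for lit in c))
    flipped = dict(a)
    flipped[v] = not flipped[v]
    return unsat_weight(flipped) - unsat_weight(a)

def max_walksat(clauses, weights, atoms, max_flips=10000, p_random=0.5):
    """clauses: lists of signed atom ids (a negative id is a negated literal)."""
    a = {v: random.random() < 0.5 for v in atoms}    # random initial assignment
    best, best_cost = dict(a), float("inf")
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any((lit > 0) == a[abs(lit)] for lit in c)]
        if not unsat:
            return a                                 # every clause satisfied
        c = random.choice(unsat)                     # random unsatisfied clause
        if random.random() < p_random:               # with probability p: random atom
            v = abs(random.choice(c))
        else:                                        # else: atom minimizing DeltaCost
            v = min({abs(lit) for lit in c},
                    key=lambda u: delta_cost(a, clauses, weights, u))
        a[v] = not a[v]
        cost = sum(w for cl, w in zip(clauses, weights)
                   if not any((lit > 0) == a[abs(lit)] for lit in cl))
        if cost < best_cost:
            best, best_cost = dict(a), cost
    return best
```

Here `random.random()` plays the role of Uniform(0,1) and `delta_cost` the role of DeltaCost($v$) in the description above.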

Example

Step 1: Convert the formulas to CNF

0.646696    $\lnot$Smokes(x) $\lor$ Cancer(x)

1.519900    ($\lnot$Friends(x,y) $\lor$ $\lnot$Smokes(x) $\lor$ Smokes(y)) $\land$ ($\lnot$Friends(x,y) $\lor$ $\lnot$Smokes(y) $\lor$ Smokes(x))

Clauses:

  1. $\lnot$Smokes(x) $\lor$ Cancer(x)
  2. $\lnot$Friends(x,y) $\lor$ $\lnot$Smokes(x) $\lor$ Smokes(y)
  3. $\lnot$Friends(x,y) $\lor$ $\lnot$Smokes(y) $\lor$ Smokes(x)

Atoms: Smokes(x), Cancer(x), Friends(x,y)

Constants: {A, B, C}

Step 2: Propositionalize the domain

Evidence atoms keep the truth values given by the evidence;

all other atoms are assigned truth values at random.

$C$: the number of constants;          $\alpha(F_i)$: the arity of predicate $F_i$.

number of ground atoms = $\sum_{i}C^{\alpha(F_i)}$, so the number of possible worlds is $2^{\sum_i C^{\alpha(F_i)}}$: exponential growth!
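As a sanity check on this count, a small sketch enumerating the ground atoms of the running example (predicate arities as listed above):

```python
from itertools import product

constants = ["A", "B", "C"]
arity = {"Smokes": 1, "Cancer": 1, "Friends": 2}   # predicate -> arity

ground_atoms = [f"{p}({','.join(args)})"
                for p, k in arity.items()
                for args in product(constants, repeat=k)]

# 3^1 + 3^1 + 3^2 = 15 ground atoms, hence 2^15 = 32768 possible worlds
print(len(ground_atoms))   # 15
```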

Step 3: Run MaxWalkSAT on the resulting ground clauses, as sketched below.
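Continuing the sketch, the ground clauses can be built from the CNF of Step 1 and handed to `max_walksat` from above. The integer atom ids are an assumed encoding, and the conjunction's weight is split equally between the two clauses it produces, as is commonly done when a formula converts to several clauses:

```python
atom_id = {atom: i + 1 for i, atom in enumerate(ground_atoms)}   # from Step 2

clauses, weights = [], []
for x in constants:                                  # !Smokes(x) v Cancer(x)
    clauses.append([-atom_id[f"Smokes({x})"], atom_id[f"Cancer({x})"]])
    weights.append(0.646696)
for x in constants:                                  # the two friendship clauses
    for y in constants:
        clauses.append([-atom_id[f"Friends({x},{y})"],
                        -atom_id[f"Smokes({x})"], atom_id[f"Smokes({y})"]])
        clauses.append([-atom_id[f"Friends({x},{y})"],
                        -atom_id[f"Smokes({y})"], atom_id[f"Smokes({x})"]])
        weights += [1.519900 / 2, 1.519900 / 2]

state = max_walksat(clauses, weights, atoms=atom_id.values())
```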

3.2 Computing conditional probabilities

MLNs can answer arbitrary queries of the form "What is the probability that formula $F_1$ holds given that formula $F_2$ does?". If $F_1$ and $F_2$ are two formulas in first-order logic, $C$ is a finite set of constants including any constants that appear in $F_1$ or $F_2$, and $L$ is an MLN, then $$ \begin{align} P(F_1|F_2, L, C) & = P(F_1|F_2, M_{L,C}) \notag \\ & = \frac{P(F_1 \land F_2 | M_{L,C})}{P(F_2|M_{L,C})} \notag \\ & = \frac{\sum_{x \in \chi_{F_1} \cap \chi_{F_2}} P(X=x|M_{L,C})}{\sum_{x \in \chi_{F_2}} P(X=x|M_{L,C})} \tag{3.2} \end{align} $$

where $\chi_{F_i}$ is the set of worlds where $F_i$ holds, and $M_{L,C}$ is the Markov network defined by $L$ and $C$.
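On a domain this small, Equation 3.2 can be evaluated by brute force, summing the unnormalized weights of all possible worlds. Real systems never do this, but it makes the definition concrete. This sketch reuses the ground clauses from the example above and assumes `F1`/`F2` are given as Python predicates over a world:

```python
import math
from itertools import product

def world_weight(world, clauses, weights):
    """Unnormalized weight exp(sum of weights of satisfied ground clauses)."""
    return math.exp(sum(w for c, w in zip(clauses, weights)
                        if any((lit > 0) == world[abs(lit)] for lit in c)))

def conditional(F1, F2, atoms, clauses, weights):
    """P(F1 | F2) by enumerating all 2^|atoms| worlds (Equation 3.2)."""
    num = den = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if F2(world):
            p = world_weight(world, clauses, weights)
            den += p
            num += p if F1(world) else 0.0
    return num / den

# e.g. P(Cancer(A) | Smokes(A)) in the running example:
p = conditional(lambda w: w[atom_id["Cancer(A)"]],
                lambda w: w[atom_id["Smokes(A)"]],
                list(atom_id.values()), clauses, weights)
```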

Problem

MLN inference subsumes probabilistic inference, which is #P-complete, and logical inference, which is NP-complete.

Method: preprocessing the data

We focus on the case where $F_2$ is a conjunction of ground literals, which is the most frequent type in practice.

In this scenario, further efficiency can be gained by applying a generalization of knowledge-based model construction.

The basic idea is to only construct the minimal subset of the ground network required to answer the query.

This network is constructed by checking whether the atoms that the query formula directly depends on are in the evidence. If they are, the construction is complete. Those that are not are added to the network, and we in turn check the atoms they depend on. This process is repeated until all relevant atoms have been retrieved.

Markov blanket: parents + children + children's parents; the construction proceeds breadth-first (BFS).
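A sketch of this retrieval as a breadth-first traversal. The `neighbors(atom)` helper, assumed here, would return the ground atoms that share a ground clause with `atom`:

```python
from collections import deque

def build_min_network(query_atoms, evidence, neighbors):
    """Collect the minimal set of non-evidence ground atoms the query depends on."""
    network, frontier = set(), deque(query_atoms)
    while frontier:
        atom = frontier.popleft()
        if atom in network or atom in evidence:
            continue                      # evidence atoms end the expansion
        network.add(atom)                 # non-evidence atom: add it to the network
        frontier.extend(neighbors(atom))  # and in turn check the atoms it depends on
    return network
```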

Example

Once the network has been constructed, we can apply any standard inference technique for Markov networks, such as Gibbs sampling.
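For completeness, a minimal Gibbs sampler over the ground clauses in the encoding used above. The only MLN-specific point is that the conditional of an atom given its Markov blanket involves only the ground clauses containing that atom; `n_samples` and `burn_in` are illustrative parameters:

```python
import math, random

def gibbs(clauses, weights, atoms, evidence, query, n_samples=5000, burn_in=500):
    """Estimate P(query = True | evidence) by Gibbs sampling the ground network."""
    a = {v: evidence.get(v, random.random() < 0.5) for v in atoms}
    free = [v for v in atoms if v not in evidence]
    # precompute which weighted clauses mention each free atom
    touching = {v: [(c, w) for c, w in zip(clauses, weights)
                    if v in {abs(lit) for lit in c}] for v in free}
    hits = 0
    for t in range(burn_in + n_samples):
        for v in free:
            s = {}
            for val in (True, False):    # satisfied weight with v = True / False
                a[v] = val
                s[val] = sum(w for c, w in touching[v]
                             if any((lit > 0) == a[abs(lit)] for lit in c))
            p_true = 1.0 / (1.0 + math.exp(s[False] - s[True]))
            a[v] = random.random() < p_true
        if t >= burn_in and a[query]:
            hits += 1
    return hits / n_samples

# e.g. P(Cancer(A) | Smokes(A)=True) on the example network:
# gibbs(clauses, weights, atom_id.values(),
#       {atom_id["Smokes(A)"]: True}, atom_id["Cancer(A)"])
```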

Problem

One problem with using Gibbs sampling for inference in MLNs is that it breaks down in the presence of deterministic or near-deterministic dependencies. Deterministic dependencies break up the space of possible worlds into regions that are not reachable from each other, violating a basic requirement of MCMC (ergodicity). Near-deterministic dependencies greatly slow down inference by creating regions of low probability that are very difficult to traverse.

Method

MC-SAT is a slice-sampling MCMC algorithm that uses a combination of satisfiability testing and simulated annealing to sample from the slice. The advantage of using a satisfiability solver (WalkSAT) is that it efficiently finds isolated modes in the distribution, and as a result the Markov chain mixes very rapidly.

Slice sampling is an instance of a widely used approach in MCMC inference that introduces auxiliary variables, $u$, to capture the dependencies between the original variables, $x$. For example, to sample from $P(X\!=\!x) = \frac{1}{Z} \prod_k \phi_k(x_{\{k\}})$, we can define $P(X\!=\!x, U\!=\!u) = \frac{1}{Z} \prod_k I_{[0, \; \phi_k(x_{\{k\}})]}(u_k)$, where $I_{[a,b]}(\cdot)$ is the indicator function of the interval $[a,b]$. Marginalizing out $u$ recovers the original $P(X=x)$, so alternately sampling $u$ given $x$ and $x$ given $u$ (the "slice") leaves the target distribution invariant.
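To make the auxiliary-variable idea concrete, here is a generic one-dimensional slice sampler (the stepping-out and shrinkage procedure). It is not MC-SAT itself; MC-SAT replaces the uniform draw from the slice with WalkSAT-based sampling over the assignments satisfying the clauses whose indicator constraints are active:

```python
import math
import random

def slice_sample(phi, x0, width=1.0, n_samples=1000):
    """Sample from p(x) proportional to phi(x) via slice sampling."""
    samples, x = [], x0
    for _ in range(n_samples):
        u = random.uniform(0.0, phi(x))   # auxiliary variable: u | x ~ U(0, phi(x))
        lo = x - width * random.random()  # step out until the interval brackets
        hi = lo + width                   # the slice {x' : phi(x') > u}
        while phi(lo) > u:
            lo -= width
        while phi(hi) > u:
            hi += width
        while True:                       # shrink until a point lands in the slice
            x_new = random.uniform(lo, hi)
            if phi(x_new) > u:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        samples.append(x)
    return samples

# e.g. an unnormalized Gaussian:
xs = slice_sample(lambda x: math.exp(-x * x / 2), x0=0.0)
```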
