Learning LexRank: Graph-based Centrality as Salience in Text Summarization (Part 1)
(1) What are Sentence Centrality and Centroid-based Summarization?
Extractive summarization works by choosing a subset of the sentences in the original documents. This process can be viewed as identifying the most central sentences in a (multi-document) cluster that give the necessary and sufficient amount of information related to the main theme of the cluster.
The centroid of a cluster is a pseudo-document which consists of words that have tf×idf scores above a predefined threshold, where tf is the frequency of a word in the cluster, and idf values are typically computed over a much larger and similar genre data set.
In centroid-based summarization (Radev, Jing, & Budzikowska, 2000), the sentences that contain more words from the centroid of the cluster are considered as central. This is a measure of how close the sentence is to the centroid of the cluster.
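A rough sketch of this scheme in Python (assuming tokenized sentences and an idf dictionary precomputed on a larger background corpus; the function name and the threshold value are illustrative, not taken from the paper):

```python
from collections import Counter

def centroid_scores(sentences, idf, threshold=1.0):
    """Score sentences by how many centroid words they contain.

    sentences: list of token lists for one cluster
    idf:       dict mapping word -> idf value (assumed to be precomputed
               on a larger corpus of a similar genre)
    """
    # tf of each word over the whole cluster
    tf = Counter(w for sent in sentences for w in sent)
    # the centroid: words whose tf * idf exceeds the threshold
    centroid = {w for w, c in tf.items() if c * idf.get(w, 0.0) > threshold}
    # a sentence is more central the more centroid words it contains
    return [sum(1 for w in set(sent) if w in centroid) for sent in sentences]
```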
(2) Centrality-based Sentence Salience:
All of our approaches are based on the concept of prestige in social networks. A social network is a mapping of relationships between interacting entities (e.g. people, organizations, computers). Social networks are represented as graphs, where the nodes represent the entities and the links represent the relations between the nodes.
A cluster of documents can be viewed as a network of sentences that are related to each other. We hypothesize that the sentences that are similar to many of the other sentences in a cluster are more central (or salient) to the topic.
There are two points to clarify in this definition of centrality:
1. How to define similarity between two sentences.
2. How to compute the overall centrality of a sentence given its similarity to other sentences.
To define similarity, we use the bag-of-words model to represent each sentence as an N-dimensional vector, where N is the number of all possible words in the target language. For each word that occurs in a sentence, the value of the corresponding dimension in the vector representation of the sentence is the number of occurrences of the word in the sentence times the idf of the word. The similarity between two sentences is then defined by the cosine between two corresponding vectors:
$$\text{idf-modified-cosine}(x,y)=\frac{\sum_{w \in x,y}\text{tf}_{w,x}\,\text{tf}_{w,y}\,(\text{idf}_w)^2}{\sqrt{\sum_{x_i \in x}\left(\text{tf}_{x_i,x}\,\text{idf}_{x_i}\right)^2}\times\sqrt{\sum_{y_i \in y}\left(\text{tf}_{y_i,y}\,\text{idf}_{y_i}\right)^2}}$$

where $\text{tf}_{w,s}$ is the number of occurrences of the word $w$ in the sentence $s$.
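A minimal sketch of this idf-modified cosine in Python, assuming the same tokenized-sentence and idf-dictionary inputs as in the earlier snippet:

```python
import math
from collections import Counter

def idf_modified_cosine(x, y, idf):
    """Cosine similarity between sentences x and y (lists of words),
    where each dimension is weighted by tf * idf."""
    tx, ty = Counter(x), Counter(y)
    # numerator: sum over words occurring in both sentences
    num = sum(tx[w] * ty[w] * idf.get(w, 0.0) ** 2 for w in tx.keys() & ty.keys())
    # norms of the two tf*idf vectors
    nx = math.sqrt(sum((c * idf.get(w, 0.0)) ** 2 for w, c in tx.items()))
    ny = math.sqrt(sum((c * idf.get(w, 0.0)) ** 2 for w, c in ty.items()))
    return num / (nx * ny) if nx and ny else 0.0
```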
A cluster of documents may be represented by a cosine similarity matrix where each entry in the matrix is the similarity between the corresponding sentence pair.
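Building that matrix is a pairwise loop over the sentences of the cluster; this sketch reuses the idf_modified_cosine function from the previous snippet:

```python
def similarity_matrix(sentences, idf):
    """n x n matrix of idf-modified cosine similarities for one cluster."""
    n = len(sentences)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = idf_modified_cosine(sentences[i], sentences[j], idf)
            sim[i][j] = sim[j][i] = s  # the matrix is symmetric
    return sim
```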
Figure 1 shows a subset of a cluster used in DUC 2004 and the corresponding cosine similarity matrix. The sentence ID dXsY denotes the Yth sentence of the Xth document.

Figure 1: Intra-sentence cosine similarities in a subset of cluster d1003t from DUC 2004.
This matrix can also be represented as a weighted graph where each edge shows the cosine similarity between a pair of sentences (Figure 2).

Figure 2: Weighted cosine similarity graph for the cluster in Figure 1.
(3) Degree Centrality:
Since we are interested in significant similarities, we can eliminate some low values in this matrix by defining a threshold so that the cluster can be viewed as an (undirected) graph.
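A sketch of this thresholding step, which turns the similarity matrix into a binary adjacency matrix (self links are kept, matching the assumption discussed below):

```python
def adjacency_from_similarity(sim, threshold):
    """Binary adjacency matrix: 1 where the similarity exceeds the threshold.

    Self links are kept, since every sentence is trivially similar to itself.
    """
    n = len(sim)
    return [[1 if (i == j or sim[i][j] > threshold) else 0 for j in range(n)]
            for i in range(n)]
```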
Figure 3 shows the graphs that correspond to the adjacency matrices derived from Figure 1 by assuming that pairs of sentences with a similarity above 0.1, 0.2, and 0.3, respectively, are similar to each other. Note that there should also be self links for all of the nodes in the graphs, since every sentence is trivially similar to itself. Although we omit the self links for readability, the arguments in the following sections assume that they exist.

Figure 3: Similarity graphs that correspond to thresholds 0.1, 0.2, and 0.3, respectively, for the cluster in Figure 1.
A simple way of assessing sentence centrality by looking at the graphs in Figure 3 is to count the number of similar sentences for each sentence. We define the degree centrality of a sentence as the degree of the corresponding node in the similarity graph. As seen in Table 1, the choice of cosine threshold dramatically influences the interpretation of centrality. A threshold that is too low may mistakenly take weak similarities into consideration, while a threshold that is too high may lose many of the similarity relations in a cluster.
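With that adjacency matrix, degree centrality is just a row sum (a sketch; counting self links shifts every degree by one but does not change the ranking):

```python
def degree_centrality(adj):
    """Degree of each node in the (undirected) similarity graph."""
    return [sum(row) for row in adj]

# e.g. pick the most central sentence for a given threshold:
# scores = degree_centrality(adjacency_from_similarity(sim, 0.1))
# best = max(range(len(scores)), key=scores.__getitem__)
```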

Table 1: Degree centrality scores for the graphs in Figure 3. Sentence d4s1 is the most central sentence for thresholds 0.1 and 0.2.