Do Transformers Really Perform Bad for Graph Representation?

microsoft/Graphormer: This is the official implementation for "Do Transformers Really Perform Bad for Graph Representation?". (github.com)

1 Introduction

The authors find that the key problem is how to restore the graph structural information that is lost in the Transformer's self-attention layers. Unlike sequence data (NLP, speech) or grid data (CV), structural information is a property unique to graph data and plays an important role in predicting graph properties.

There have been many attempts to apply the Transformer to the graph domain, but so far the only effective way has been to replace some key modules (e.g., feature aggregation) in classic GNN variants with softmax attention [47, 7, 22, 48, 58, 43, 13].

  • [47] Graph attention networks. ICLR, 2018.
  • [7] Graph transformer for graph-to-sequence learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7464–7471, 2020.
  • [22] Heterogeneous graph transformer. In Proceedings of The Web Conference 2020, pages 2704–2710, 2020.
  • [48] Direct multi-hop attention based graph neural network. arXiv preprint arXiv:2009.14332, 2020.
  • [58] Graph-BERT: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.
  • [43] Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33, 2020.
  • [13] A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.
For each node, self-attention only computes the semantic similarity between it and the other nodes, without taking into account the structural information of the graph reflected on the nodes or the relations between node pairs.
Based on this, the researchers propose Graphormer for graph prediction tasks: a standard Transformer model equipped with three structural encodings (Centrality Encoding, Spatial Encoding, and Edge Encoding) that help Graphormer encode the structural information of graph data.
  • Centrality Encoding: captures node importance in the graph. In particular, degree centrality is used, where a learnable vector is assigned to each node according to its degree and added to the node features in the input layer.
  • Spatial Encoding: captures the structural relation between nodes.
  • Edge Encoding: incorporates edge features into the attention computation via the shortest path between each node pair.
Using the above encodings, we further prove mathematically that Graphormer has strong expressive power, since many popular GNN variants are merely special cases of it.
 

2 Graphormer

2.1 Structural Encodings in Graphormer

2.1.1 Centrality Encoding

In Graphormer, we use degree centrality, one of the standard centrality measures in the literature, as an additional signal to the neural network. Specifically, we develop a Centrality Encoding that assigns each node two real-valued embedding vectors according to its in-degree and out-degree.
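
As a concrete illustration, here is a minimal PyTorch sketch of how such a centrality encoding could look, assuming batched dense node features and precomputed integer degrees (class and argument names are illustrative, not the official implementation):

```python
import torch
import torch.nn as nn

class CentralityEncoding(nn.Module):
    """Sketch: each node gets two learnable embeddings, indexed by its
    in-degree and out-degree, added to the input node features:
        h_i^(0) = x_i + z_in[indeg(v_i)] + z_out[outdeg(v_i)]
    `max_degree` is an assumed cap; larger degrees are clamped."""
    def __init__(self, hidden_dim: int, max_degree: int = 512):
        super().__init__()
        self.in_degree_emb = nn.Embedding(max_degree, hidden_dim)
        self.out_degree_emb = nn.Embedding(max_degree, hidden_dim)

    def forward(self, x, in_degree, out_degree):
        # x: [batch, num_nodes, hidden_dim] node features
        # in_degree / out_degree: [batch, num_nodes] integer degrees
        in_degree = in_degree.clamp(max=self.in_degree_emb.num_embeddings - 1)
        out_degree = out_degree.clamp(max=self.out_degree_emb.num_embeddings - 1)
        return x + self.in_degree_emb(in_degree) + self.out_degree_emb(out_degree)
```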

2.1.2 Spatial Encoding

An advantage of the Transformer is its global receptive field.

Spatial Encoding:

In this paper, we choose φ(v_i, v_j) to be the shortest-path distance (SPD) between v_i and v_j if the two nodes are connected. If not, we set the output of φ to a special value, i.e., -1. We assign each (feasible) output value a learnable scalar, which serves as a bias term in the self-attention module. Denoting A_ij as the (i,j)-element of the Query-Key product matrix A, we have:

A_ij = (h_i W_Q)(h_j W_K)^T / √d + b_{φ(v_i, v_j)}

where b_{φ(v_i, v_j)} is the learnable scalar indexed by φ(v_i, v_j) and shared across all layers.
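
A minimal sketch of this spatial bias in PyTorch, assuming a precomputed shortest-path-distance matrix with -1 for unreachable pairs (the bucketing scheme and names are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SpatialEncoding(nn.Module):
    """Sketch: one learnable scalar per attention head for every feasible
    value of phi(v_i, v_j), added as a bias to the query-key scores.
    Bucket 0 is reserved for disconnected pairs (phi = -1)."""
    def __init__(self, num_heads: int, max_dist: int = 20):
        super().__init__()
        self.bias = nn.Embedding(max_dist + 2, num_heads)

    def forward(self, attn_scores, spd):
        # attn_scores: [batch, heads, nodes, nodes], query-key products / sqrt(d)
        # spd: [batch, nodes, nodes] shortest-path distances, -1 if unreachable
        buckets = (spd + 1).clamp(min=0, max=self.bias.num_embeddings - 1).long()
        b = self.bias(buckets)                   # [batch, nodes, nodes, heads]
        return attn_scores + b.permute(0, 3, 1, 2)
```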

2.1.3 Edge Encoding in the Attention

In many graph tasks, edges also have structural features.

In the first method, the edge features are added to the associated nodes' features [21, 29].

  • [21] Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.
  • [29] DeeperGCN: All you need to train deeper GCNs. arXiv preprint arXiv:2006.07739, 2020.

In the second method, for each node, the features of its associated edges are used together with the node features in the aggregation [15, 51, 25].

  • [15] Neural message passing for quantum chemistry. ICML, 2017.
  • [51] How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
  • [25] Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

However, such ways of using edge features only propagate the edge information to the associated nodes, which may not be an effective way to leverage edge information in the representation of the whole graph.

Graphormer instead uses a new edge encoding method: for each node pair (v_i, v_j), take the edges (e_1, e_2, ..., e_N) on the shortest path SP_ij between the two nodes, and average the dot-products of each edge's feature x_{e_n} with a learnable embedding w_n for the n-th position on the path:

c_ij = (1/N) Σ_{n=1}^{N} x_{e_n} (w_n)^T

This c_ij is added to the attention score A_ij as a bias term, so edge information can influence the attention between every node pair rather than only the edges' endpoint nodes.
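
A rough sketch of this edge encoding, under the assumption that each pair's shortest-path edge features have been padded into a dense tensor (the tensor layout and names are assumptions, not the official code):

```python
import torch
import torch.nn as nn

class EdgeEncoding(nn.Module):
    """Sketch: c_ij = (1/N) * sum_n  x_{e_n} . w_n, where w_n is a learnable
    weight vector for the n-th edge on the shortest path from v_i to v_j.
    The returned c is added to the attention scores A as a bias."""
    def __init__(self, edge_dim: int, max_path_len: int = 20):
        super().__init__()
        # one learnable vector per position along the shortest path
        self.w = nn.Parameter(torch.randn(max_path_len, edge_dim))

    def forward(self, path_edge_feats, path_lens):
        # path_edge_feats: [batch, nodes, nodes, max_path_len, edge_dim]
        #   edge features along each pair's shortest path, zero-padded
        # path_lens: [batch, nodes, nodes] true path lengths (>= 1)
        dots = torch.einsum('bijld,ld->bijl', path_edge_feats, self.w)
        c = dots.sum(dim=-1) / path_lens.clamp(min=1)  # [batch, nodes, nodes]
        return c  # broadcast over heads when adding to A_ij
```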

2.2 Implementation Details of Graphormer

Graphormer Layer:

  • MHA: multi-head self-attention
  • FFN: the feed-forward block
  • LN: layer normalization
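
These blocks are composed in a pre-LN residual pattern: h'(l) = MHA(LN(h(l-1))) + h(l-1) and h(l) = FFN(LN(h'(l))) + h'(l). Below is a simplified PyTorch sketch of one such layer; the structural attention biases from Section 2.1 are omitted for brevity, and all names are illustrative rather than the official implementation:

```python
import torch.nn as nn

class GraphormerLayer(nn.Module):
    """Sketch of one Graphormer layer with pre-LN residual blocks:
        h' = MHA(LN(h)) + h
        h  = FFN(LN(h')) + h'
    (spatial/edge attention biases not shown)."""
    def __init__(self, hidden_dim: int, num_heads: int, ffn_dim: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(hidden_dim)
        self.mha = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(hidden_dim)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_dim, ffn_dim),
            nn.GELU(),
            nn.Linear(ffn_dim, hidden_dim),
        )

    def forward(self, h):
        # h: [batch, num_nodes, hidden_dim]
        z = self.ln1(h)
        h = self.mha(z, z, z, need_weights=False)[0] + h
        h = self.ffn(self.ln2(h)) + h
        return h
```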

Special Node:

A special node VNODE is generated and connected to every node in the graph, and its spatial encodings to all nodes are each a distinct learnable scalar.
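
As a rough illustration, the virtual node can be realized by prepending one learnable token to the node sequence; this is a sketch under assumed dense batched inputs (the distinct spatial-encoding scalar for VNODE would live in the attention-bias modules sketched earlier):

```python
import torch
import torch.nn as nn

class VirtualNode(nn.Module):
    """Sketch: prepend a learnable VNODE token that attends to every node.
    After the last layer, its representation can serve as the readout for
    the whole graph."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.vnode = nn.Parameter(torch.zeros(1, 1, hidden_dim))

    def forward(self, h):
        # h: [batch, num_nodes, hidden_dim] -> [batch, 1 + num_nodes, hidden_dim]
        return torch.cat([self.vnode.expand(h.size(0), -1, -1), h], dim=1)
```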

3 Experiments

3.1 OGB Large-Scale Challenge

3.2 Graph Representation
