GraphSAGE Code Walkthrough (Part 4) - models.py
Original article; please credit the source when reposting. For the other parts, see:
GraphSAGE Code Walkthrough (Part 1) - unsupervised_train.py
GraphSAGE Code Walkthrough (Part 3) - aggregators.py
1. Classes and their inheritance hierarchy
Model
/ \
/ \
MLP GeneralizedModel
/ \
/ \
Node2VecModel SampleAndAggregate
Let's first look at how the three classes Model, GeneralizedModel, and SampleAndAggregate relate.
Model differs from GeneralizedModel in that Model's build() constructs a sequential layer model, which GeneralizedModel removes; self.outputs must therefore be assigned in the build() of each GeneralizedModel subclass.
The build() function in class Model(object) is as follows:
def build(self):
    """ Wrapper for _build() """
    with tf.variable_scope(self.name):
        self._build()

    # Build sequential layer model
    # (this sequential part is removed in GeneralizedModel's build())
    self.activations.append(self.inputs)
    for layer in self.layers:
        hidden = layer(self.activations[-1])
        self.activations.append(hidden)
    self.outputs = self.activations[-1]

    # Store model variables for easy access
    variables = tf.get_collection(
        tf.GraphKeys.GLOBAL_VARIABLES, scope=self.name)
    self.vars = {var.name: var for var in variables}

    # Build metrics
    self._loss()
    self._accuracy()

    self.opt_op = self.optimizer.minimize(self.loss)
The sequential layers work as follows: given an input, layer() returns an output, which is then fed as input to the next layer(); the result of the last layer is taken as the model output.
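This pattern can be sketched in pure Python (the toy layers `double` and `plus_one` are made up for illustration, not part of GraphSAGE):

```python
# Minimal sketch of the sequential-layer pattern in Model.build():
# each layer consumes the previous activation, and the last
# activation becomes the model output.
def run_sequential(layers, inputs):
    activations = [inputs]
    for layer in layers:
        activations.append(layer(activations[-1]))
    return activations[-1]

double = lambda x: x * 2
plus_one = lambda x: x + 1
print(run_sequential([double, plus_one], 3))  # -> 7
```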
2. class SampleAndAggregate(GeneralizedModel)
1. __init__():
(1) Where self.features comes from:
param: features      tf.get_variable() -> identity features
| |
self.features self.embeds --> At least one is not None
\ / --> Concat if both are not None
\ /
\ /
self.features
(2) self.dims:
self.dims is a list whose entries record the dimensionality of each network layer.
self.dims[0] is the number of columns of self.features: (0 if features is None else features.shape[1]) + identity_dim. (Note: features in the parentheses is the passed-in parameter, not self.features.)
Each subsequent entry is that layer's output_dim, i.e. its number of hidden units.
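To make this concrete, a sketch of how self.dims is assembled (SAGEInfo here is a simplified stand-in carrying only output_dim; the real namedtuple has more fields):

```python
from collections import namedtuple

# Simplified stand-in for the real SAGEInfo namedtuple.
SAGEInfo = namedtuple("SAGEInfo", ["output_dim"])

def build_dims(num_feature_cols, identity_dim, layer_infos):
    # dims[0]: input width = feature columns + identity embedding width
    dims = [(0 if num_feature_cols is None else num_feature_cols) + identity_dim]
    # one entry per layer: that layer's number of hidden units
    dims.extend(info.output_dim for info in layer_infos)
    return dims

print(build_dims(50, 0, [SAGEInfo(128), SAGEInfo(128)]))  # [50, 128, 128]
```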
(3) __init__() code
def __init__(self, placeholders, features, adj, degrees,
             layer_infos, concat=True, aggregator_type="mean",
             model_size="small", identity_dim=0,
             **kwargs):
    '''
    Args:
        - placeholders: Stanford TensorFlow placeholder object.
        - features: Numpy array with node features.
                    NOTE: Pass a None object to train in featureless mode (identity features for nodes)!
        - adj: Numpy array with adjacency lists (padded with random re-samples)
        - degrees: Numpy array with node degrees.
        - layer_infos: List of SAGEInfo namedtuples that describe the parameters of all
               the recursive layers. See SAGEInfo definition above.
        - concat: whether to concatenate during recursive iterations
        - aggregator_type: how to aggregate neighbor information
        - model_size: one of "small" and "big"
        - identity_dim: Set to positive int to use identity features (slow and cannot generalize, but better accuracy)
    '''
    super(SampleAndAggregate, self).__init__(**kwargs)
    if aggregator_type == "mean":
        self.aggregator_cls = MeanAggregator
    elif aggregator_type == "seq":
        self.aggregator_cls = SeqAggregator
    elif aggregator_type == "maxpool":
        self.aggregator_cls = MaxPoolingAggregator
    elif aggregator_type == "meanpool":
        self.aggregator_cls = MeanPoolingAggregator
    elif aggregator_type == "gcn":
        self.aggregator_cls = GCNAggregator
    else:
        raise Exception("Unknown aggregator: ", self.aggregator_cls)

    # get info from placeholders...
    self.inputs1 = placeholders["batch1"]
    self.inputs2 = placeholders["batch2"]
    self.model_size = model_size
    self.adj_info = adj
    if identity_dim > 0:
        self.embeds = tf.get_variable(
            "node_embeddings", [adj.get_shape().as_list()[0], identity_dim])
        # self.embeds: identity feature embeddings for the nodes
        # number of features = identity_dim
        # number of nodes = adj.get_shape().as_list()[0]
    else:
        self.embeds = None
    if features is None:
        if identity_dim == 0:
            raise Exception(
                "Must have a positive value for identity feature dimension if no input features given.")
        self.features = self.embeds
    else:
        self.features = tf.Variable(tf.constant(
            features, dtype=tf.float32), trainable=False)
        if not self.embeds is None:
            self.features = tf.concat([self.embeds, self.features], axis=1)
    self.degrees = degrees
    self.concat = concat

    self.dims = [
        (0 if features is None else features.shape[1]) + identity_dim]
    self.dims.extend(
        [layer_infos[i].output_dim for i in range(len(layer_infos))])
    self.batch_size = placeholders["batch_size"]
    self.placeholders = placeholders
    self.layer_infos = layer_infos

    self.optimizer = tf.train.AdamOptimizer(
        learning_rate=FLAGS.learning_rate)

    self.build()
2. sample(inputs, layer_infos, batch_size=None)
For a description of the sampling algorithm, see Appendix A, Algorithm 2 of the paper.

Code:
def sample(self, inputs, layer_infos, batch_size=None):
    """ Sample neighbors to be the supportive fields for multi-layer convolutions.

    Args:
        inputs: batch inputs
        batch_size: the number of inputs (different for batch inputs and negative samples).
    """
    if batch_size is None:
        batch_size = self.batch_size
    samples = [inputs]
    # size of convolution support at each layer per node
    support_size = 1
    support_sizes = [support_size]
    for k in range(len(layer_infos)):
        t = len(layer_infos) - k - 1
        support_size *= layer_infos[t].num_samples
        sampler = layer_infos[t].neigh_sampler
        node = sampler((samples[k], layer_infos[t].num_samples))
        samples.append(tf.reshape(node, [support_size * batch_size, ]))
        support_sizes.append(support_size)
    return samples, support_sizes
sampler = layer_infos[t].neigh_sampler
When sample() is called, layer_infos has already been filled in; in unsupervised_train.py, neigh_sampler is set to UniformNeighborSampler, which is defined in neigh_samplers.py as class UniformNeighborSampler(Layer).
Its purpose: given samples[k] (the nodes obtained in the previous sampling step, where samples[k] is produced by sampling the neighbors of each node in samples[k-1]), it selects the indices of num_samples neighbor nodes (the N(u) of the paper). The return value is adj_lists, i.e. the adjacency matrix truncated to num_samples columns.
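A hedged, pure-Python sketch of what this amounts to (the real op operates on a TF adjacency tensor with tf.random_shuffle; the function and variable names here are made up): shuffle each node's padded adjacency row and keep the first num_samples entries.

```python
import random

def uniform_neighbor_sample(adj_rows, ids, num_samples, rng=None):
    # adj_rows: dict of node id -> padded list of neighbor ids
    rng = rng or random.Random(0)
    sampled = []
    for node in ids:
        row = list(adj_rows[node])  # copy so the adjacency list is untouched
        rng.shuffle(row)
        sampled.append(row[:num_samples])
    return sampled

adj = {0: [1, 2, 3, 4], 1: [0, 2, 2, 0]}
print(uniform_neighbor_sample(adj, [0, 1], 2))
```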
Note the difference between support_size and num_samples:
num_samples is the number of neighbors sampled for each node u at the current depth;
support_size is the number of nodes whose information influences node u's embedding: u is influenced by its num_samples direct neighbors at the current depth, each of which was in turn influenced by num_samples neighbors at the earlier depth, and so on. Hence support_size is the running product of num_samples over all depths so far, and for batch_size input nodes the total number of support nodes is support_size * batch_size.
Finally, each support_size is appended to the support_sizes array.
sample() ultimately returns the samples array (the sampled nodes at each depth) and the support_sizes array (the number of supporting nodes at each depth).
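The support_size bookkeeping above can be sketched in pure Python (num_samples_per_layer stands in for [info.num_samples for info in layer_infos]; the name is hypothetical):

```python
def compute_support_sizes(num_samples_per_layer):
    # Walk the layers from last to first, multiplying num_samples,
    # exactly as sample() does with t = len(layer_infos) - k - 1.
    support_size = 1
    support_sizes = [support_size]
    for t in reversed(range(len(num_samples_per_layer))):
        support_size *= num_samples_per_layer[t]
        support_sizes.append(support_size)
    return support_sizes

# with the common 2-layer setting num_samples = [25, 10]:
print(compute_support_sizes([25, 10]))  # [1, 10, 250]
```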
3. def _build(self):
self.neg_samples, _, _ = (tf.nn.fixed_unigram_candidate_sampler(
    true_classes=labels,
    num_true=1,
    num_sampled=FLAGS.neg_sample_size,
    unique=False,
    range_max=len(self.degrees),
    distortion=0.75,
    unigrams=self.degrees.tolist()))
(1) tf.nn.fixed_unigram_candidate_sampler:
Samples according to a user-supplied probability distribution.
If the classes are uniformly distributed, use uniform_candidate_sampler;
if the classes are words, which are known to follow a Zipfian distribution, use log_uniform_candidate_sampler;
if the class distribution is known from statistics or another source, use nn.fixed_unigram_candidate_sampler;
if the class distribution is unknown, use tf.nn.learned_unigram_candidate_sampler.
(2) Parameters:
a. num_sampled / unique: num_sampled candidates are drawn, without replacement if unique=True or with replacement if unique=False. That is, unique=True can be viewed as sampling without replacement, unique=False as sampling with replacement.
b. distortion: uses the word2vec frequency "energy" formulation f^(3/4) / total(f^(3/4)). In word2vec the energy is counted by word frequency; in GraphSAGE it is counted by node degree, so each entry of unigrams records a node's degree.
c. unigrams: the degree of each node.
(3) Returns:
a. sampled_candidates: A tensor of type int64 and shape [num_sampled]. The sampled classes.
b. true_expected_count: A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes.
c. sampled_expected_count: A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
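As a hedged, pure-Python illustration (this is not the TF op; neg_sample and its arguments are made up for this sketch), degree-based negative sampling with distortion amounts to raising each node's degree to the 0.75 power and using the result as a sampling weight:

```python
import random

def neg_sample(degrees, num_sampled, distortion=0.75, seed=0):
    # weight_i = degree_i ** 0.75, mirroring f^(3/4) / total(f^(3/4))
    weights = [d ** distortion for d in degrees]
    rng = random.Random(seed)
    # drawn with replacement, like unique=False in the TF sampler
    return rng.choices(range(len(degrees)), weights=weights, k=num_sampled)

# high-degree nodes are sampled far more often than low-degree ones
samples = neg_sample([1, 10, 100], num_sampled=20)
print(samples)
```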
------- Additional -------
1. self.__class__.__name__.lower()
if not name:
    name = self.__class__.__name__.lower()
self.__class__.__name__.lower(): https://stackoverflow.com/questions/36367736/use-name-as-attribute
class MyClass:
    def __str__(self):
        return str(self.__class__)
>>> instance = MyClass()
>>> print(instance)
__main__.MyClass
That is because the string version of the class includes the module that it is defined in. In this case, it is defined in the module that is currently being executed, the shell, so it shows up as __main__.MyClass. If we use self.__class__.__name__, however:
class MyClass:
    def __str__(self):
        return self.__class__.__name__

instance = MyClass()
print(instance)
it outputs:
MyClass
The __name__ attribute of the class does not include the module.
Note: The __name__ attribute gives the name originally given to the class. Any copies will keep the name. For example:
class MyClass:
    def __str__(self):
        return self.__class__.__name__

SecondClass = MyClass

instance = SecondClass()
print(instance)
output:
MyClass
That is because the __name__ attribute is defined as part of the class definition. Using SecondClass = MyClass is just assigning another name to the class. It does not modify the class or its name in any way.
2. allowed_kwargs = {'name', 'logging', 'model_size'}
What do name, logging, and model_size refer to?
name: String, defines the variable scope of the layer.
logging: Boolean, switches TensorFlow histogram logging on/off
model_size: "small" or "big"; see aggregators.py: small: hidden_dim = 512; big: hidden_dim = 1024
3. *args and **kwargs in Python
https://blog.csdn.net/anhuidelinger/article/details/10011013
def foo(*args, **kwargs):
    print('args =', args)
    print('kwargs =', kwargs)
    print('---------------------------------------')

if __name__ == '__main__':
    foo(1, 2, 3, 4)
    foo(a=1, b=2, c=3)
    foo(1, 2, 3, 4, a=1, b=2, c=3)
    foo('a', 1, None, a=1, b='2', c=3)

# Output:
# args = (1, 2, 3, 4)
# kwargs = {}

# args = ()
# kwargs = {'a': 1, 'b': 2, 'c': 3}

# args = (1, 2, 3, 4)
# kwargs = {'a': 1, 'b': 2, 'c': 3}

# args = ('a', 1, None)
# kwargs = {'a': 1, 'b': '2', 'c': 3}
1. As shown, these are Python's variable-argument forms:
*args collects any number of positional arguments into a tuple;
**kwargs collects keyword arguments into a dict.
When both are used, *args must come before **kwargs in the parameter list.
A call like foo(a=1, b='2', c=3, 'a', 1, None) raises "SyntaxError: positional argument follows keyword argument".
2. 何时使用**kwargs:
Using **kwargs and default values is easy. Sometimes, however, you shouldn't be using **kwargs in the first place.
In this case, we're not really making best use of **kwargs.
class ExampleClass(object):
    def __init__(self, **kwargs):
        self.val = kwargs.get('val', "default1")
        self.val2 = kwargs.get('val2', "default2")
The above is a "why bother?" declaration. It is the same as
class ExampleClass(object):
    def __init__(self, val="default1", val2="default2"):
        self.val = val
        self.val2 = val2
When you're using **kwargs, you mean that a keyword is not just optional, but conditional. There are more complex rules than simple default values.
When you're using **kwargs, you usually mean something more like the following, where simple defaults don't apply.
class ExampleClass(object):
    def __init__(self, **kwargs):
        self.val = "default1"
        self.val2 = "default2"
        if "val" in kwargs:
            self.val = kwargs["val"]
            self.val2 = 2 * self.val
        elif "val2" in kwargs:
            self.val2 = kwargs["val2"]
            self.val = self.val2 / 2
        else:
            raise TypeError("must provide val= or val2= parameter values")
3. logging = kwargs.get('logging', False): the default value is False
https://stackoverflow.com/questions/1098549/proper-way-to-use-kwargs-in-python
You can pass a default value to get() for keys that are not in the dictionary:
self.val2 = kwargs.get('val2', "default value")
However, if you plan on using a particular argument with a particular default value, why not use named arguments in the first place?
def __init__(self, val2="default value", **kwargs):
4. tf.variable_scope()
https://blog.csdn.net/IB_H20/article/details/72936574
5. masked_softmax_cross_entropy? See metrics.py
# Cross entropy error
if self.categorical:
    self.loss += metrics.masked_softmax_cross_entropy(self.outputs, self.placeholders['labels'],
                                                      self.placeholders['labels_mask'])
def masked_logit_cross_entropy(preds, labels, mask):
    """Logit cross-entropy loss with masking."""
    loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=preds, labels=labels)
    loss = tf.reduce_sum(loss, axis=1)
    mask = tf.cast(mask, dtype=tf.float32)
    mask /= tf.maximum(tf.reduce_sum(mask), tf.constant([1.]))
    loss *= mask
    return tf.reduce_mean(loss)
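The masking arithmetic above can be sketched in pure Python (masked_mean_loss is a hypothetical name for this sketch): the mask is normalized by the number of unmasked entries, masked-out losses contribute zero, and the mean is taken over all entries.

```python
def masked_mean_loss(losses, mask):
    # mirrors: mask /= max(sum(mask), 1); loss *= mask; reduce_mean(loss)
    denom = max(sum(mask), 1.0)
    weighted = [l * (m / denom) for l, m in zip(losses, mask)]
    return sum(weighted) / len(weighted)

# only the 1st and 3rd examples count: (2.0 + 6.0) / 2 masked entries,
# then averaged over all 3 entries
print(masked_mean_loss([2.0, 4.0, 6.0], [1.0, 0.0, 1.0]))  # -> 1.3333...
```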