Graph Neural Network Framework DGL Learning 103: Message Passing Tutorial
2022-07-19 22:00:00 【wufeil】
In graph neural networks, both message passing and feature transformation can be fully user-defined. Of course, DGL also provides higher-level APIs for the common cases. (Note that this tutorial uses DGL's older message-passing APIs such as register_message_func, send and recv, which were removed in later DGL releases.)
Let's look at a simple web-page ranking model (PageRank). Every node starts with the same PV value, PV = 1/N = 0.01. In each iteration, every node first distributes its current PV value to its neighbors; the new PV value of each node is then the aggregation of the values received from its neighbors, adjusted by a damping factor. So at every iteration, the PV value of a node u is updated as:

PV(u) = (1 - d) / N + d * Σ_{v ∈ N(u)} PV(v) / D(v)

where d is the damping factor, N is the number of nodes, N(u) is the set of neighbors of node u, and D(v) is the out-degree (deg) of node v.
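To make the update rule concrete, here is a minimal pure-Python sketch of a single PageRank step on a tiny hand-built directed graph (the three-node graph and all names here are hypothetical, purely for illustration):

```python
# Toy directed graph: node -> list of nodes it links to (hypothetical example).
out_edges = {0: [1, 2], 1: [2], 2: [0]}
N_TOY = len(out_edges)
DAMP = 0.85  # damping factor d

# Every node starts with the same PV value, 1/N.
pv = {u: 1.0 / N_TOY for u in out_edges}

def pagerank_step(pv, out_edges, damp=DAMP):
    """One update: each node distributes PV(u)/D(u) to its targets, and every
    node's new value is (1-d)/N plus d times the total it received."""
    n = len(pv)
    new_pv = {u: (1 - damp) / n for u in pv}
    for u, targets in out_edges.items():
        share = pv[u] / len(targets)   # PV(u) / D(u)
        for v in targets:
            new_pv[v] += damp * share  # accumulate the damped contribution
    return new_pv

pv = pagerank_step(pv, out_edges)
```

This is the scatter form of the formula above: instead of each node summing over its in-neighbors, each source node pushes its share outward, which is exactly the message/reduce split DGL uses below.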
Goal: inspect the PV value of each node after 10 iterations.
First, import the required modules:
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import dgl
import torch
import dgl.function as fn
1. Build the graph
N = 100      # number of nodes
DAMP = 0.85  # the damping factor d in the formula
K = 10       # number of iterations

g = nx.erdos_renyi_graph(N, 0.1)  # random Erdos-Renyi graph
g = dgl.DGLGraph(g)

# visualization
nx.draw(g.to_networkx(), with_labels=True, node_size=50)
plt.show()
Initialize the PV value and out-degree of every node:
g.ndata['PV'] = torch.ones(N) / N                  # initial PV of every node is 1/N = 0.01
g.ndata['deg'] = g.out_degrees(g.nodes()).float()  # out-degree of every node
print(g.ndata)
2. Four ways of message passing and aggregation
(1) Fully user-defined approach
There are four steps:
1. Message function: send out each node's PV value
def pagerank_message_func(edges):
    """Message function.

    :param edges: a batch of edges of graph g. An edge batch exposes three
        attributes: src, dst and data, i.e. the source-node features, the
        destination-node features and the edge features. Here each source
        node sends out its PV value divided by its out-degree.
    :return: dict mapping message name to message tensor
    """
    return {'PV': edges.src['PV'] / edges.src['deg']}
2. Reduce function: aggregate the incoming PV messages at each node
def pagerank_reduce_func(nodes):
    """Reduce (aggregation/damping) function.

    :param nodes: a batch of nodes; nodes.mailbox['PV'] stacks the
        incoming messages of each node along dim 1.
    :return: dict with the new node feature
    """
    msgs = torch.sum(nodes.mailbox['PV'], dim=1)
    pv = (1 - DAMP) / N + DAMP * msgs
    return {'PV': pv}
3. Register the message function and the reduce function on the graph
# Register the message function and the reduce (aggregation/damping) function on the graph g.
g.register_message_func(pagerank_message_func)
g.register_reduce_func(pagerank_reduce_func)
4. Propagate forward
# Propagate the messages forward
def pagerank_naive(g):
    # Phase 1: send a message along every edge
    for u, v in zip(*g.edges()):
        g.send((u, v))   # the input is an edge
    # Phase 2: receive the messages and compute the new PV values
    for v in g.nodes():
        g.recv(v)        # the input is a destination node
(2) Batched approach, suitable for large graphs
Similar to (1): you still define the message function and the reduce function first and register both on the graph g. Only the fourth step differs; replace step 4 with the following:
def pagerank_batch(g):
    g.send(g.edges())
    g.recv(g.nodes())
The principle behind batching (quoting the DGL tutorial): "You might wonder if it is possible to perform reduce on all the nodes in parallel, since each node may have a different number of incoming messages and you cannot really 'stack' tensors of different lengths together. In general, DGL solves this by grouping the nodes by the number of incoming messages, and calling the reduce function once for each group."
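The quoted idea can be sketched in plain NumPy. This is a hypothetical illustration (not DGL's actual implementation): nodes are grouped by in-degree, so each group's messages stack into one rectangular array and the sum-reduce runs once per group, vectorized:

```python
import numpy as np

# Incoming messages per node (hypothetical values); note the ragged lengths.
messages = {0: [0.1], 1: [0.2, 0.3], 2: [0.4, 0.5], 3: [0.6, 0.7, 0.8]}

reduced = {}
degrees = {v: len(m) for v, m in messages.items()}
for deg in set(degrees.values()):
    # All nodes with the same in-degree form one bucket.
    bucket = [v for v in messages if degrees[v] == deg]
    stacked = np.array([messages[v] for v in bucket])  # shape (len(bucket), deg)
    sums = stacked.sum(axis=1)                         # one batched reduce per bucket
    reduced.update(dict(zip(bucket, sums)))
```

Each bucket pays one vectorized reduce call instead of one Python-level call per node, which is why batched send/recv scales to large graphs.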
(3) Using DGL's level-2 API
The first three steps are the same as in (1); only the fourth step changes, as follows:
# DGL can update the whole graph with a higher-level API (level-2 APIs)
def pagerank_level2(g):
    g.update_all()
(4) Using DGL's more efficient built-in functions end to end
# An even more efficient way: use DGL's built-in functions. This runs faster.
def pagerank_builtin(g):
    """
    1. fn.copy_src builds the message function: it forwards the source
       node's feature as the message.
    2. fn.sum is the built-in sum reducer.
    :param g: the graph
    """
    g.ndata['PV'] = g.ndata['PV'] / g.ndata['deg']
    g.update_all(message_func=fn.copy_src(src='PV', out='m'),
                 reduce_func=fn.sum(msg='m', out='m_sum'))
    g.ndata['PV'] = (1 - DAMP) / N + DAMP * g.ndata['m_sum']
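To see exactly what one built-in update computes, the copy_src + fn.sum arithmetic can be mimicked with NumPy on a hypothetical tiny edge list, without needing DGL at all:

```python
import numpy as np

# Hypothetical toy graph as an edge list (src -> dst); illustration only.
src = np.array([0, 0, 1, 2])
dst = np.array([1, 2, 2, 0])
n, damp = 3, 0.85

pv = np.full(n, 1.0 / n)                           # initial PV = 1/n
deg = np.bincount(src, minlength=n).astype(float)  # out-degrees

m = (pv / deg)[src]       # copy_src: one message PV(src)/deg(src) per edge
m_sum = np.zeros(n)
np.add.at(m_sum, dst, m)  # fn.sum: scatter-add the messages at each destination
pv = (1 - damp) / n + damp * m_sum
```

The per-edge gather followed by a scatter-add at the destinations is the same computation update_all performs, only here in two explicit array operations.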
3. Run K iterations
# Run K iterations
for k in range(K):
    pagerank_builtin(g)
print(g.ndata['PV'])
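As a sanity check on what the K iterations should converge toward, the fixed point of the update can also be computed directly as a linear system. This hypothetical NumPy sketch iterates the same toy update and compares it with the exact solution:

```python
import numpy as np

# Hypothetical toy graph (src -> dst edge list); illustration only.
src = np.array([0, 0, 1, 2])
dst = np.array([1, 2, 2, 0])
n, damp = 3, 0.85

deg = np.bincount(src, minlength=n).astype(float)
# Transition matrix: T[v, u] = 1/deg[u] for every edge u -> v.
T = np.zeros((n, n))
T[dst, src] = 1.0 / deg[src]

# The fixed point of pv = (1-d)/n + d * T @ pv, solved exactly.
exact = np.linalg.solve(np.eye(n) - damp * T, np.full(n, (1 - damp) / n))

pv = np.full(n, 1.0 / n)
for _ in range(100):  # the iteration converges geometrically, at rate d
    pv = (1 - damp) / n + damp * T @ pv
```

Because every node here has at least one out-edge, T is column-stochastic, so the PV values always sum to 1 and the iteration contracts toward the exact fixed point at rate d per step.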