WO2020206876A1 - Method and device for constructing a graph convolutional neural network that learns disentangled representations - Google Patents

Method and device for constructing a graph convolutional neural network that learns disentangled representations

Info

Publication number
WO2020206876A1
WO2020206876A1 (PCT/CN2019/098236; CN2019098236W)
Authority
WO
WIPO (PCT)
Prior art keywords
graph
neural network
convolutional neural
factors
constructing
Prior art date
Application number
PCT/CN2019/098236
Other languages
English (en)
French (fr)
Inventor
朱文武 (Wenwu Zhu)
马坚鑫 (Jianxin Ma)
崔鹏 (Peng Cui)
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Publication of WO2020206876A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
            • G06F16/90 Details of database functions independent of the retrieved data types
              • G06F16/95 Retrieval from the web
                • G06F16/953 Querying, e.g. by the use of web search engines
                  • G06F16/9536 Search customisation based on social or collaborative filtering
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
                • G06N3/047 Probabilistic or stochastic networks
              • G06N3/08 Learning methods
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
          • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
            • G06Q50/01 Social networking

Definitions

  • the present invention relates to the technical field of social network analysis, and in particular to a method and device for constructing a graph convolutional neural network that learns disentangled representations.
  • graph neural networks, represented by graph convolutional networks, are a new generation of end-to-end deep learning techniques for processing complex graph-structured data such as social networks and information networks.
  • existing graph neural networks assume by default that the formation of every edge in the graph is driven by one and the same single factor, and therefore cannot capture the diverse causes behind real-world data.
  • the present invention aims to solve, at least to some extent, one of the technical problems in the related art.
  • an object of the present invention is to provide a method for constructing a graph convolutional neural network that learns disentangled representations, which can generate representations that comprehensively and accurately describe the multiple facets of each data point in the graph.
  • another object of the present invention is to provide a device for constructing a graph convolutional neural network that learns disentangled representations.
  • one embodiment of the present invention proposes a method for constructing a graph convolutional neural network that learns disentangled representations, including: probabilistically modeling the formation process of the input graph to produce a generative model that describes multiple latent factors, each of which may cause an edge to form; performing inference in every convolutional layer with a differentiable, dynamically executed EM (Expectation-Maximization) algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and, in every convolutional layer, constructing from the neighbor nodes of the different factors a representation that describes the different facets of each node.
  • the method considers the multiple factors behind the formation of a graph and separates them to obtain a more accurate and comprehensive representation, while retaining during the separation the advantages of graph neural networks, namely support for end-to-end learning and inductive learning. After the factors are separated, representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated from the individual factors.
  • the method for constructing a graph convolutional neural network that learns disentangled representations may also have the following additional technical features:
  • it further includes: stacking multiple such convolutional layers so as to exploit a preset higher-order topological structure.
  • each facet corresponds to one factor that has been separated.
  • the input graph has a plurality of factors.
  • another embodiment of the present invention proposes a device for constructing a graph convolutional neural network that learns disentangled representations, including: a modeling module for probabilistically modeling the formation process of the input graph and producing a generative model that describes multiple latent factors, each of which may cause an edge to form; an inference module for performing inference in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and a construction module for constructing, in every convolutional layer, a representation describing the different facets of each node from the neighbor nodes of the different factors.
  • the device considers the multiple factors behind the formation of a graph and separates them to obtain a more accurate and comprehensive representation, while retaining during the separation the advantages of graph neural networks, namely support for end-to-end learning and inductive learning. After the factors are separated, representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated from the individual factors.
  • the device for constructing a graph convolutional neural network that learns disentangled representations may also have the following additional technical features:
  • it further includes: a stacking module configured to stack multiple such convolutional layers so as to exploit a preset higher-order topology.
  • each facet corresponds to one factor that has been separated.
  • the input graph has a plurality of factors.
  • Fig. 1 is a flowchart of a method for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention;
  • Fig. 2 is a flowchart of a method for constructing a graph convolutional neural network that learns disentangled representations according to a specific embodiment of the present invention;
  • Fig. 3 is a schematic structural diagram of a device for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
  • Fig. 1 is a flowchart of the method for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
  • the method for constructing a graph convolutional neural network that learns disentangled representations includes the following steps:
  • step S101: the formation process of the input graph is probabilistically modeled, producing a generative model that describes multiple latent factors, each of which may cause an edge to form.
  • the formation process of the input graph is modeled probabilistically, and the resulting generative model describes multiple latent factors that may each lead to the formation of an edge.
  • given a node in the graph and its neighbors, an inference module based on the probabilistic generative model can discover, without supervision, the latent factor driving the formation of each edge, and classify or separate the neighbors according to their corresponding factors.
  • the input graph has a plurality of factors.
  • step S102: inference is performed in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes.
  • step S103: in every convolutional layer, a representation describing the different facets of each node is constructed from the neighbor nodes of the different factors.
  • the embodiment of the present invention proposes a new graph convolutional layer that applies a factor-disentanglement technique; this layer can output, for every node, a representation that accurately and comprehensively describes its multiple facets. In other words, the graph convolutional layer applies factor disentanglement and, once the factors are separated, applies multiple graph convolution operations to process the information corresponding to each factor in parallel and independently.
  • the factor-disentanglement technique is a technique that, given a node in the graph and its neighbors, can discover without supervision the latent factors driving the formation of each edge and classify or separate the neighbors according to their corresponding factors.
  • in recommender systems, for example, more comprehensive and accurate user profiles can be generated automatically; the interactions between users, items, and other entities in a recommender system naturally form a graph, and the method can capture a user's multiple points of interest or demand more accurately and comprehensively.
  • the method of the embodiment of the present invention further includes: stacking multiple such convolutional layers so as to exploit a preset higher-order topology.
  • the embodiment of the present invention proposes a graph convolutional neural network that stacks multiple of the new graph convolutional layers described above, which can further exploit additional information such as the higher-order topological structure of the graph.
  • the embodiments of the present invention mainly address the challenges raised by attempting to discover and separate multiple factors during graph convolution, and propose targeted measures so that the improved graph convolutional neural network can output representations that describe the data points more accurately and comprehensively:
  • Challenge 1: graph data usually does not annotate the specific factor that drives the formation of an edge.
  • the embodiment of the present invention therefore proposes an unsupervised technique based on a probabilistic generative model to infer the latent factor corresponding to each edge. A second challenge is to perform this complex inference while preserving the two major advantages of graph neural networks, support for end-to-end learning and for inductive learning; to this end, the inference process is formulated as an EM algorithm that is both differentiable and dynamically executed.
  • according to the method for constructing a graph convolutional neural network that learns disentangled representations proposed in the embodiment of the present invention, the factors contributing to the formation of a graph may be plural; the multiple latent factors can be inferred without supervision and separated, after which representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated. By separating these different factors during graph convolution, representations are obtained that describe the multiple different facets of every data point more accurately and comprehensively.
  • Fig. 3 is a schematic structural diagram of the device for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
  • the device 10 for constructing a graph convolutional neural network that learns disentangled representations includes: a modeling module 100, an inference module 200, and a construction module 300.
  • the modeling module 100 is used to probabilistically model the formation process of the input graph and produce a generative model describing multiple latent factors, each of which may cause an edge to form.
  • the inference module 200 is configured to perform inference in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes.
  • the construction module 300 is used to construct, in every convolutional layer, a representation describing the different facets of each node from the neighbor nodes of the different factors.
  • the device 10 of the embodiment of the present invention can generate, from the individual factors, representations that comprehensively and accurately describe the multiple facets of every data point in the graph.
  • the device 10 of the embodiment of the present invention further includes: a stacking module.
  • the stacking module is used to stack multiple such convolutional layers so as to exploit the preset higher-order topological structure.
  • each facet corresponds to one factor that has been separated.
  • the input graph has a plurality of factors.
  • according to the device for constructing a graph convolutional neural network that learns disentangled representations proposed in the embodiment of the present invention, the factors contributing to the formation of a graph may be plural; the multiple latent factors can be inferred without supervision and separated, after which representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated. By separating these different factors during graph convolution, representations are obtained that describe the multiple different facets of every data point more accurately and comprehensively.
  • the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, features qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two or three, unless otherwise expressly and specifically defined.
  • the terms "mounted", "connected", "coupled", "fixed", and the like shall be understood in a broad sense; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly defined.
  • the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
  • a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium.
  • a first feature being "over", "above", or "on top of" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature.
  • a first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.

Abstract

The invention discloses a method and device for constructing a graph convolutional neural network that learns disentangled representations. The method includes: probabilistically modeling the formation process of an input graph to produce a generative model that describes multiple latent factors, each of which may cause an edge to form; performing inference in every convolutional layer with a differentiable, dynamically executed EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and, in every convolutional layer, constructing from the neighbor nodes of the different factors a representation describing the different facets of each node. The method can generate, from the individual factors, representations that comprehensively and accurately describe the multiple facets of every data point in the graph.

Description

Method and Device for Constructing a Graph Convolutional Neural Network that Learns Disentangled Representations
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201910277434.9, entitled "Method and device for constructing a graph convolutional neural network that learns disentangled representations", filed by Tsinghua University on April 8, 2019.
TECHNICAL FIELD
The present invention relates to the technical field of social network analysis, and in particular to a method and device for constructing a graph convolutional neural network that learns disentangled representations.
BACKGROUND
At present, graph neural networks, represented by graph convolutional networks, are a new generation of end-to-end deep learning techniques for processing complex graph-structured data such as social networks and information networks. However, existing graph neural networks assume by default that the formation of every edge in the graph is driven by one and the same single factor, and therefore cannot capture the diverse causes behind real-world data.
SUMMARY
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, one object of the present invention is to propose a method for constructing a graph convolutional neural network that learns disentangled representations; the method can generate representations that comprehensively and accurately describe the multiple facets of each data point in the graph.
Another object of the present invention is to propose a device for constructing a graph convolutional neural network that learns disentangled representations.
To achieve the above objects, an embodiment of one aspect of the present invention proposes a method for constructing a graph convolutional neural network that learns disentangled representations, including: probabilistically modeling the formation process of an input graph to produce a generative model that describes multiple latent factors, each of which may cause an edge to form; performing inference in every convolutional layer with a differentiable, dynamically executed EM (Expectation-Maximization) algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and, in every convolutional layer, constructing from the neighbor nodes of the different factors a representation describing the different facets of each node.
The method of the embodiment of the present invention considers the multiple factors behind the formation of a graph and separates them to obtain a more accurate and comprehensive representation, while retaining during the separation the advantages of graph neural networks, namely support for end-to-end learning and inductive learning. After the factors are separated, representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated from the individual factors.
In addition, the method according to the above embodiment of the present invention may further have the following additional technical features:
Further, in one embodiment of the present invention, the method further includes: stacking multiple such convolutional layers so as to exploit a preset higher-order topological structure.
Further, in one embodiment of the present invention, each facet corresponds to one factor that has been separated.
Further, in one embodiment of the present invention, the input graph has a plurality of factors.
To achieve the above objects, an embodiment of another aspect of the present invention proposes a device for constructing a graph convolutional neural network that learns disentangled representations, including: a modeling module for probabilistically modeling the formation process of an input graph and producing a generative model that describes multiple latent factors, each of which may cause an edge to form; an inference module for performing inference in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and a construction module for constructing, in every convolutional layer, a representation describing the different facets of each node from the neighbor nodes of the different factors.
The device of the embodiment of the present invention considers the multiple factors behind the formation of a graph and separates them to obtain a more accurate and comprehensive representation, while retaining during the separation the advantages of graph neural networks, namely support for end-to-end learning and inductive learning. After the factors are separated, representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated from the individual factors.
In addition, the device according to the above embodiment of the present invention may further have the following additional technical features:
Further, in one embodiment of the present invention, the device further includes: a stacking module for stacking multiple such convolutional layers so as to exploit a preset higher-order topological structure.
Further, in one embodiment of the present invention, each facet corresponds to one factor that has been separated.
Further, in one embodiment of the present invention, the input graph has a plurality of factors.
Additional aspects and advantages of the present invention will be given in part in the following description; in part they will become apparent from the description, or be learned through practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a method for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for constructing a graph convolutional neural network that learns disentangled representations according to a specific embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
DETAILED DESCRIPTION
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and shall not be construed as limiting it.
The method and device for constructing a graph convolutional neural network that learns disentangled representations proposed according to embodiments of the present invention are described below with reference to the accompanying drawings; the method is described first.
Fig. 1 is a flowchart of the method for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
As shown in Fig. 1, the method for constructing a graph convolutional neural network that learns disentangled representations includes the following steps:
In step S101, the formation process of the input graph is probabilistically modeled, producing a generative model that describes multiple latent factors, each of which may cause an edge to form.
It can be understood that, as shown in Fig. 2, the formation process of the input graph is first modeled probabilistically; the resulting generative model describes multiple latent factors that may each lead to the formation of an edge.
Specifically, given a node in the graph and its neighbors, an inference module based on the generative model can discover, without supervision, the latent factor that drove the formation of each edge, and classify or separate the neighbors according to their corresponding factors.
In one embodiment of the present invention, the input graph has a plurality of factors.
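To make the generative story concrete, the toy sketch below (our own illustration; the factor count K, dimensions, distributions, and variable names are assumptions, not taken from the patent) forms each edge by first sampling one of K latent factors and then sampling a neighbor according to node affinities under that factor:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 4, 8, 30                     # latent factors, dims per factor, nodes
z = rng.normal(size=(N, K, D))         # factor-wise node representations
z /= np.linalg.norm(z, axis=-1, keepdims=True)

def sample_edge(u):
    """Sample one neighbor of node u: pick a latent factor, then a node
    with probability proportional to its affinity to u under that factor."""
    k = int(rng.integers(K))                 # the latent cause of this edge
    logits = z[:, k] @ z[u, k]               # affinities under factor k
    logits[u] = -np.inf                      # forbid a self-loop
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return k, int(rng.choice(N, p=p))

k, v = sample_edge(0)                        # factor index and neighbor index
```

A model of this shape is what lets the inference step later ask, for every observed edge, which of the K factors most plausibly produced it.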
In step S102, inference is performed in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes.
It can be understood that, as shown in Fig. 2, in every convolutional layer, the differentiable dynamic EM algorithm is used under the established generative model to infer the factor corresponding to each neighbor of a node, and the neighbors are separated accordingly.
In step S103, in every convolutional layer, a representation describing the different facets of each node is constructed from the neighbor nodes of the different factors.
It can be understood that, as shown in Fig. 2, in every convolutional layer, a representation describing the different facets of the node is constructed from the neighbors assigned to the different factors in the previous step; each facet corresponds to one factor that has been separated.
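Steps S102 and S103 together amount to one disentangled convolution per node: a few unrolled, differentiable EM iterations softly assign each neighbor to a factor (E-step), then re-estimate one facet vector per factor from its soft cluster (M-step). The sketch below is a minimal illustration under our own assumptions (unit-norm factor-wise features, dot-product affinities, a fixed iteration count T); the patent does not fix these details:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disen_conv(z, neighbors, u, T=5):
    """One disentangled convolution for node u (illustrative sketch).

    z         : (N, K, D) unit-norm factor-wise features of all N nodes
    neighbors : indices of u's neighbors
    returns   : (K, D) output facets and (M, K) soft factor assignments
    """
    zn = z[neighbors]                                    # (M, K, D)
    c = z[u].copy()                                      # facets start from u itself
    for _ in range(T):                                   # unrolled differentiable EM
        # E-step: posterior over which factor explains each neighbor
        p = softmax(np.einsum('mkd,kd->mk', zn, c), axis=1)
        # M-step: rebuild each facet from its softly assigned neighbors
        c = z[u] + np.einsum('mk,mkd->kd', p, zn)
        c /= np.linalg.norm(c, axis=-1, keepdims=True)
    return c, p

rng = np.random.default_rng(1)
z = rng.normal(size=(10, 4, 8))
z /= np.linalg.norm(z, axis=-1, keepdims=True)
facets, assign = disen_conv(z, [1, 2, 3, 4], u=0)        # 4 facets for node 0
```

Because each facet aggregates only the neighbors routed to its factor, the K output chunks describe K different aspects of the node instead of blending them.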
Specifically, the embodiment of the present invention proposes a new graph convolutional layer that applies a factor-disentanglement technique; this layer can output, for every node, a representation that accurately and comprehensively describes its multiple facets. In other words, the graph convolutional layer of the embodiment applies factor disentanglement and, once the factors are separated, applies multiple graph convolution operations to process the information corresponding to each factor in parallel and independently.
Here, the factor-disentanglement technique is a technique that, given a node in the graph and its neighbors, can discover without supervision the latent factors driving the formation of each edge and classify or separate the neighbors according to their corresponding factors.
In concrete applications, for example, more comprehensive and accurate user profiles can be generated automatically in recommender systems; the interactions between users, items, and other entities in a recommender system naturally form a graph, and the method of the embodiment of the present invention can capture a user's multiple points of interest or demand more accurately and comprehensively.
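As a concrete and entirely hypothetical recommendation example: if a user's disentangled embedding consists of K facet vectors, one per separated interest, an item can be scored by its best-matching facet, so several distinct interests are served at once. All names and shapes below are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
K, D = 4, 8
user = rng.normal(size=(K, D))        # one vector per separated interest facet
items = rng.normal(size=(100, D))     # candidate item embeddings

scores = items @ user.T               # (100, K) affinity of each item to each facet
best_facet = scores.argmax(axis=1)    # which interest would explain a click
final = scores.max(axis=1)            # an item is relevant if ANY facet matches
top10 = np.argsort(-final)[:10]       # recommendations span multiple interests
```

With a single entangled embedding, by contrast, all interests collapse into one vector and minority interests tend to be drowned out.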
Further, in one embodiment of the present invention, the method further includes: stacking multiple such convolutional layers so as to exploit the preset higher-order topological structure.
It can be understood that the embodiment of the present invention effectively exploits the higher-order topological structure of the graph by stacking multiple convolutional layers of the kind described above.
Specifically, the embodiment of the present invention proposes a graph convolutional neural network that stacks multiple of the new graph convolutional layers described above, which can further exploit additional information such as the higher-order topological structure of the graph.
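Why stacking layers exposes higher-order topology can be seen from adjacency structure alone: one layer mixes information from 1-hop neighbors, so L stacked layers let information travel along L-hop paths. A minimal numerical check (our illustration, not from the patent):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])           # path graph 0-1-2-3
M = A + np.eye(4, dtype=int)           # self-loops: a node keeps its own state
one_hop = (M != 0)                     # who influences whom after 1 layer
two_hop = (np.linalg.matrix_power(M, 2) != 0)   # after 2 stacked layers
```

Node 0 cannot see node 2 through one layer but can through two, and still cannot see node 3; each additional layer widens the receptive field by one hop while the per-factor channels remain separate.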
In summary, the embodiments of the present invention mainly address the challenges raised by attempting to discover and separate multiple factors during graph convolution, and propose targeted measures so that the improved graph convolutional neural network can output representations that describe the data points more accurately and comprehensively:
(1) Challenge 1: graph data usually does not annotate the specific factor that drove the formation of an edge. The embodiment of the present invention therefore proposes an unsupervised technique based on a probabilistic generative model to infer the latent factor corresponding to each edge.
(2) Challenge 2: how to carry out complex inference while preserving the two major advantages of graph neural networks, namely support for end-to-end learning and support for inductive learning (extrapolating results to unseen new data points). The embodiment of the present invention therefore formulates the probabilistic inference process as an EM algorithm that is differentiable (to support end-to-end learning) and dynamically executed (to support induction).
According to the method for constructing a graph convolutional neural network that learns disentangled representations proposed in the embodiment of the present invention, the factors contributing to the formation of a graph may be plural; the multiple latent factors can be inferred without supervision and separated, after which representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated. By separating these different factors during graph convolution, representations are obtained that describe the multiple different facets of every data point in the graph more accurately and comprehensively.
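The "differentiable, dynamically executed EM" idea from Challenge 2 above can be illustrated numerically: because the unrolled E/M iterations use only smooth operations (dot products, softmax, renormalization), a scalar read-out of the assignments varies smoothly with the inputs and admits finite-difference gradients. Everything below is our own toy construction:

```python
import numpy as np

def routed_score(w, T=3):
    """Tiny routing problem: 2 neighbors, 2 factors, centers depend on w.
    Returns one soft assignment probability, a smooth function of w."""
    zn = np.array([[1.0, 0.2], [0.1, 1.0]])        # neighbor features (M=2, D=2)
    c = np.array([[w, 1.0 - w], [1.0 - w, w]])     # factor centers (K=2, D=2)
    for _ in range(T):                             # unrolled dynamic EM
        logits = zn @ c.T                          # (M, K) affinities
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)       # E-step: soft assignments
        c = p.T @ zn                               # M-step: new centers
        c /= np.linalg.norm(c, axis=1, keepdims=True)
    return float(p[0, 0])

s = routed_score(0.3)
eps = 1e-6                                          # finite-difference gradient
g = (routed_score(0.3 + eps) - routed_score(0.3 - eps)) / (2 * eps)
```

Since the loop runs afresh for whatever neighbor set it is given, the same procedure applies unchanged to nodes never seen during training, which is the inductive half of the claim.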
Next, the device for constructing a graph convolutional neural network that learns disentangled representations proposed according to embodiments of the present invention is described with reference to the accompanying drawings.
Fig. 3 is a schematic structural diagram of the device for constructing a graph convolutional neural network that learns disentangled representations according to an embodiment of the present invention.
As shown in Fig. 3, the device 10 for constructing a graph convolutional neural network that learns disentangled representations includes: a modeling module 100, an inference module 200, and a construction module 300.
The modeling module 100 is used to probabilistically model the formation process of the input graph and produce a generative model describing multiple latent factors, each of which may cause an edge to form. The inference module 200 is used to perform inference in every convolutional layer with the differentiable dynamic EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes. The construction module 300 is used to construct, in every convolutional layer, a representation describing the different facets of each node from the neighbor nodes of the different factors. The device 10 of the embodiment of the present invention can generate, from the individual factors, representations that comprehensively and accurately describe the multiple facets of every data point in the graph.
Further, in one embodiment of the present invention, the device 10 further includes a stacking module, which is used to stack multiple such convolutional layers so as to exploit the preset higher-order topological structure.
Further, in one embodiment of the present invention, each facet corresponds to one factor that has been separated.
Further, in one embodiment of the present invention, the input graph has a plurality of factors.
It should be noted that the foregoing explanations of the method embodiments also apply to the device for constructing a graph convolutional neural network that learns disentangled representations of this embodiment, and are not repeated here.
According to the device for constructing a graph convolutional neural network that learns disentangled representations proposed in the embodiment of the present invention, the factors contributing to the formation of a graph may be plural; the multiple latent factors can be inferred without supervision and separated, after which representations that comprehensively and accurately describe the multiple facets of each data point in the graph can be generated. By separating these different factors during graph convolution, representations are obtained that describe the multiple different facets of every data point in the graph more accurately and comprehensively.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the referred devices or elements must have a specific orientation or be constructed and operated in a specific orientation, and therefore shall not be construed as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, features qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two or three, unless otherwise expressly and specifically defined.
In the present invention, unless otherwise expressly specified and defined, terms such as "mounted", "connected", "coupled", and "fixed" shall be understood in a broad sense; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly defined. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the present invention, unless otherwise expressly specified and defined, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, a first feature being "over", "above", or "on top of" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature; a first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
In the description of this specification, a description referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", or the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples, provided they do not contradict one another.
Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (8)

  1. A method for constructing a graph convolutional neural network that learns disentangled representations, characterized by comprising:
    probabilistically modeling the formation process of an input graph to produce a generative model that describes multiple latent factors, each of which may cause an edge to form;
    performing inference in every convolutional layer with a differentiable, dynamically executed EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and
    in every convolutional layer, constructing from the neighbor nodes of the different factors a representation describing the different facets of each node.
  2. The method for constructing a graph convolutional neural network that learns disentangled representations according to claim 1, characterized by further comprising:
    stacking multiple said convolutional layers so as to exploit a preset higher-order topological structure.
  3. The method for constructing a graph convolutional neural network that learns disentangled representations according to claim 1, characterized in that each facet corresponds to one factor that has been separated.
  4. The method for constructing a graph convolutional neural network that learns disentangled representations according to claim 1, characterized in that the input graph has a plurality of factors.
  5. A device for constructing a graph convolutional neural network that learns disentangled representations, characterized by comprising:
    a modeling module for probabilistically modeling the formation process of an input graph and producing a generative model that describes multiple latent factors, each of which may cause an edge to form;
    an inference module for performing inference in every convolutional layer with a differentiable, dynamically executed EM algorithm under the generative model, obtaining the factor corresponding to each neighbor of every node so as to separate the neighbor nodes; and
    a construction module for constructing, in every convolutional layer, a representation describing the different facets of each node from the neighbor nodes of the different factors.
  6. The device for constructing a graph convolutional neural network that learns disentangled representations according to claim 5, characterized by further comprising:
    a stacking module for stacking multiple said convolutional layers so as to exploit a preset higher-order topological structure.
  7. The device for constructing a graph convolutional neural network that learns disentangled representations according to claim 5, characterized in that each facet corresponds to one factor that has been separated.
  8. The device for constructing a graph convolutional neural network that learns disentangled representations according to claim 5, characterized in that the input graph has a plurality of factors.
PCT/CN2019/098236 2019-04-08 2019-07-29 Method and device for constructing a graph convolutional neural network that learns disentangled representations WO2020206876A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910277434.9A CN110083778A (zh) 2019-04-08 2019-04-08 Method and device for constructing a graph convolutional neural network that learns disentangled representations
CN201910277434.9 2019-04-08

Publications (1)

Publication Number Publication Date
WO2020206876A1 true WO2020206876A1 (zh) 2020-10-15

Family

ID=67414479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098236 WO2020206876A1 (zh) 2019-04-08 2019-07-29 Method and device for constructing a graph convolutional neural network that learns disentangled representations

Country Status (2)

Country Link
CN (1) CN110083778A (zh)
WO (1) WO2020206876A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299079A (zh) * 2021-03-29 2021-08-24 东南大学 Regional intersection signal control method based on PPO and graph convolutional neural networks
CN113722603A (zh) * 2021-11-02 2021-11-30 阿里巴巴达摩院(杭州)科技有限公司 Object pushing method, product pushing method, computer terminal, and storage medium
CN115883147A (zh) * 2022-11-22 2023-03-31 浙江御安信息技术有限公司 Attacker profiling method based on a graph neural network

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889015B (zh) * 2019-10-31 2024-01-30 天津工业大学 Independent decoupled convolutional neural network representation method for graph data
CN112148998B (zh) * 2020-09-08 2021-10-26 浙江工业大学 Friend recommendation method for online social platform users based on a multi-kernel graph convolutional network
CN116127204B (zh) * 2023-04-17 2023-07-18 中国科学技术大学 Multi-view user profiling method, multi-view user profiling system, device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203511A (zh) * 2017-05-27 2017-09-26 中国矿业大学 Named-entity recognition method for web text based on neural network probabilistic disambiguation
CN108681775A (zh) * 2018-05-25 2018-10-19 厦门大学 Tree network method tested and updated via WordNet embeddings
CN109063841A (zh) * 2018-08-27 2018-12-21 北京航空航天大学 Intelligent fault-mechanism analysis method based on Bayesian networks and deep learning algorithms
CN109376769A (zh) * 2018-09-21 2019-02-22 广东技术师范学院 Information transfer method for multi-task classification based on generative adversarial neural networks
CN109582960A (zh) * 2018-11-27 2019-04-05 上海交通大学 Zero-shot learning method based on structured relational semantic embedding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866558B (zh) * 2015-05-18 2018-08-10 中国科学院计算技术研究所 Social network account mapping model training method, mapping method, and system
CN106959967B (zh) * 2016-01-12 2019-11-19 中国科学院声学研究所 Link prediction model training and link prediction method
CN106559290B (zh) * 2016-11-29 2019-09-27 北京邮电大学 Link prediction method and system based on community structure
CN106649659B (zh) * 2016-12-13 2020-09-29 重庆邮电大学 Social-network-oriented link prediction system and method
CN107332687B (zh) * 2017-05-23 2020-05-05 浙江工业大学 Link prediction method based on Bayesian estimation and common neighbors
CN107451703A (zh) * 2017-08-31 2017-12-08 杭州师范大学 Multi-task social network prediction method based on factor graph models
CN109347697B (zh) * 2018-10-10 2019-12-03 南昌航空大学 Opportunistic network link prediction method, device, and readable storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299079A (zh) * 2021-03-29 2021-08-24 东南大学 Regional intersection signal control method based on PPO and graph convolutional neural networks
CN113299079B (zh) * 2021-03-29 2022-06-10 东南大学 Regional intersection signal control method based on PPO and graph convolutional neural networks
CN113722603A (zh) * 2021-11-02 2021-11-30 阿里巴巴达摩院(杭州)科技有限公司 Object pushing method, product pushing method, computer terminal, and storage medium
CN115883147A (zh) * 2022-11-22 2023-03-31 浙江御安信息技术有限公司 Attacker profiling method based on a graph neural network
CN115883147B (zh) * 2022-11-22 2023-10-13 浙江御安信息技术有限公司 Attacker profiling method based on a graph neural network

Also Published As

Publication number Publication date
CN110083778A (zh) 2019-08-02

Similar Documents

Publication Publication Date Title
WO2020206876A1 (zh) Method and device for constructing a graph convolutional neural network that learns disentangled representations
Chen et al. Personalized federated learning with graph
Lee et al. Beyond random walk and metropolis-hastings samplers: why you should not backtrack for unbiased graph sampling
CN107220312B (zh) 一种基于共现图的兴趣点推荐方法及系统
CN102637183A (zh) 用于在社交网络中向用户推荐好友的方法和设备
Zou et al. Continuous-time distributed Nash equilibrium seeking algorithms for non-cooperative constrained games
Ma et al. Hybrid ADMM: a unifying and fast approach to decentralized optimization
CN113706326B (zh) 基于矩阵运算的移动社会网络图修改方法
CN102662964A (zh) 对用户的好友进行分组的方法和装置
Huang et al. Improving Quality of Experience in multimedia Internet of Things leveraging machine learning on big data
Haeupler et al. Discovery through gossip
Skraba et al. Sweeps over wireless sensor networks
Xing et al. Big-fed: Bilevel optimization enhanced graph-aided federated learning
CN111597276A (zh) 实体对齐方法、装置和设备
Al-Adrousy et al. A recommender system for team formation in MANET
Toyonaga et al. Virtual wireless sensor networks: Adaptive brain-inspired configuration for internet of things applications
CN111342991B (zh) 基于跨社交网络的信息传播方法
Yang et al. A game theoretic model for the formation of navigable small-world networks
Zhou Green service over Internet of Things: a theoretical analysis paradigm
US9015292B2 (en) Method, apparatus and computer program product for providing composite capability information for devices in distributed networks
Modarresi et al. Modeling technological interdependency in IoT-A multidimensional and multilayer network model for smart environments
Nag et al. Synchronized states and multistability in a random network of coupled discontinuous maps
CN106712995B (zh) 一种多跳邻居节点的获取方法和装置
Zhou et al. Green multimedia communications over Internet of Things
Mishkovski et al. Enhancing robustness and synchronizability of networks homogenizing their degree distribution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924373

Country of ref document: EP

Kind code of ref document: A1