CN112286996A - Node embedding method based on network link and node attribute information - Google Patents

Node embedding method based on network link and node attribute information

Info

Publication number
CN112286996A
Authority
CN
China
Prior art keywords: network, model, node, variable, neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011319384.5A
Other languages
Chinese (zh)
Inventor
单虹毓
杜朴风
焦鹏飞
金弟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN202011319384.5A
Publication of CN112286996A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2465 Query processing support for facilitating data mining operations in structured databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2216/00 Indexing scheme relating to additional aspects of information retrieval not explicitly covered by G06F 16/00 and subgroups
    • G06F 2216/03 Data mining

Abstract

The invention discloses a node embedding method based on network link and node attribute information, which comprises the following steps: constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module; the graph autoencoder maps the network data into a latent variable space and generates the corresponding latent-space distribution function; the prior generation model maps the latent-variable distribution to a Gaussian distribution through a normalizing flow model and generates new variables; the distributions of the latent variables and the new variables are optimized and updated based on the KL divergence between them, and the resulting latent-variable distribution functions are merged into a combined distribution function; the network-guided constraint module constrains the combined distribution function by means of Laplacian eigenmaps; the trained model is then used to process network data to obtain node feature parameters, which are combined with the network data to form network data carrying node feature parameters. The invention can obtain high-quality node representations.

Description

Node embedding method based on network link and node attribute information
Technical Field
The invention relates to the field of data mining, in particular to a node embedding method based on network link and node attribute information.
Background
Currently, network embedding is an important task in the field of data mining. It is the basis of many network analysis tasks, such as node clustering, node classification, and graph visualization. Network embedding aims to learn a low-dimensional latent representation of each node while preserving the relationships between nodes in the network. In recent years, various network embedding methods have been proposed. Topology-based methods assume that only the topological information of the network is available, and the common approach is to preserve as much of that topological information as possible; beyond topology, the attribute information of a network is also a useful source for network embedding. Many network embedding methods use both types of information to improve embedding quality, employing techniques such as random walks, matrix factorization, and deep learning. In particular, autoencoders built on deep learning, which learn an encoding of the data and reconstruct the data from that encoding, offer good scalability when dealing with large-scale networks. Since the goal of network embedding is to explore and preserve the underlying structure of the original data, and current network data with node semantic information is usually high-dimensional and complex, autoencoder-based methods find it difficult to uncover some of the deeper information in the data. To solve this problem, existing improved methods adopt a variational autoencoder: a latent variable model is introduced into the autoencoder, and the latent variables compressed by the encoder are assumed to follow a certain prior distribution whose parameters can be inferred from the observed data. However, existing variational autoencoder models typically force the latent variables to follow a fixed distribution, such as a Gaussian, while real networks usually exhibit many complex structural characteristics, for example first- and second-order proximity, higher-order proximity (such as topics and communities), and power-law degree distributions, all of which give rise to multimodal behavior; existing variational autoencoder models therefore cannot mine latent-variable information that does not conform to a fixed distribution.
Disclosure of Invention
The invention provides a node embedding method based on network link and node attribute information for solving the technical problems in the prior art.
The technical scheme adopted by the invention to solve the technical problems in the prior art is as follows: a node embedding method based on network link and node attribute information, comprising the following steps: constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module; the graph autoencoder maps network data comprising network links and node attribute information into a latent variable space and generates the corresponding distribution function of the latent variables; the prior generation model maps the latent-variable distribution to a Gaussian distribution through a normalizing flow model and generates new variables; the KL divergence between the latent-variable distribution function and the new-variable distribution function is calculated, the distributions of the latent variables and the new variables are optimized and updated based on this KL divergence to obtain a plurality of distribution functions corresponding to the latent variables, and the obtained distribution functions are merged into a combined distribution function; the network-guided constraint module constrains the combined distribution function by means of Laplacian eigenmaps; network data with known network links and node attribute information is collected into a sample set, network data samples and the corresponding adjacency matrices are extracted from the sample set, and the neural network model is trained; the trained neural network model then processes network data whose network links and node attribute information are unknown, obtaining the node feature parameters corresponding to that data, which are combined with the network data to form network data carrying node feature parameters.
Further, the method further comprises: putting the obtained node representations into a classifier for training, and visualizing the trained node embeddings together with the network data.
Further, an Adam optimizer is employed to minimize the loss function of the neural network model and optimize the parameters of the neural network model.
Further, when the neural network model is trained, its parameters are initialized randomly, the model training procedure is established using the parameter update rule obtained from the Adam optimizer, and the network data samples are fed into the neural network model for training, iterating continuously until the parameter updates converge.
Further, the method comprises the steps of:
step one, constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module, and describing the meaning of each variable in the neural network model in detail;
step two, characterizing the generative process of the neural network model according to the relationships among its modules, to obtain the KL divergence of the neural network model, which is calculated as follows:
$$\mathrm{KL}(p\,\|\,q) = \mathbb{E}_{p(u)}\big[\log p(u) - \log q(u)\big];$$
wherein u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u; p(u) is the standard Gaussian distribution of the intermediate variable u;
step three, obtaining the final loss function of the neural network model from the reconstruction error loss of the graph autoencoder, the KL divergence loss produced by the prior generation model, and the prior network regularization loss produced by the network-guided constraint module:
$$L = L_{rect} + L_{kl} + \alpha L_{la}$$
in the above formula, L is the final loss; $L_{rect}$ is the reconstruction error loss of the autoencoder; $L_{kl}$ is the KL divergence loss arising from variational inference; $L_{la}$ is the prior network regularization loss arising from the Laplacian eigenmap; $\alpha$ is a hyper-parameter;
(1) $L_{rect}$ is calculated as follows:
$$L_{rect} = \sum_{i=1}^{n}\sum_{j=1}^{n} \ell\big(a_{ij}, \hat{a}_{ij}\big)$$
wherein $\ell$ is the cross-entropy loss; $\hat{a}_{ij}$ is the value at position (i, j) of the adjacency matrix generated by the decoder; $a_{ij}$ is the value at position (i, j) of the input adjacency matrix; n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively;
(2) $L_{kl}$ is calculated as follows:
$$L_{kl} = \mathrm{KL}\big(q(u)\,\|\,p(u)\big) - \mathbb{E}_{q(u)}\left[\log\left|\det\frac{\partial z}{\partial u}\right|\right]$$
in the formula, det is the Jacobian determinant of the normalizing flow; z is the latent variable; u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u; p(u) is the standard Gaussian distribution of the intermediate variable u; KL is the KL divergence;
(3) $L_{la}$ is calculated as follows:
$$L_{la} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\big\|\tilde{z}_i - \tilde{z}_j\big\|^2$$
wherein $a_{ij}$ is the value at position (i, j) of the input adjacency matrix, $\tilde{z}_i$ denotes the i-th vector in the data generated by the normalizing flow model, $\tilde{z}_j$ denotes the j-th vector in the data generated by the normalizing flow model, n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively;
step four, using an Adam optimizer to minimize the loss function and optimize the parameters of the neural network model;
step five, collecting network data and processing it into a data set;
step six, initializing the parameters randomly, establishing the neural network model training procedure using the parameter update rule obtained in step four, extracting network data and adjacency matrices from the data set, feeding them into the neural network model for training, and iterating continuously until the parameter updates converge;
step seven, recording the obtained parameter results into the related network data, representing the network data with the obtained node representations, putting the obtained node representations into a classifier for training, and visualizing the trained node representations.
The invention has the following advantages and positive effects: by introducing a normalizing flow model, a flexible multimodal distribution is fitted, so that the distribution of the model's latent variables can approach this flexible multimodal distribution. Meanwhile, a network-guided constraint module is introduced to guide the flexible distribution toward network characteristics, improving the node representation capability. The method is built on an effective deep neural network and uses the update rules obtained from variational inference and Laplacian eigenmaps to train the model efficiently and rapidly and obtain the required model parameters. The model converges quickly, is highly scalable, and can be applied to large-scale networks. Experimental results on the training data also show that the method obtains high-quality node representations. The method has broad application prospects in fields such as social networks, information retrieval, and recommendation systems.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
For a further understanding of the contents, features, and effects of the present invention, the following embodiment is described in detail with reference to the accompanying drawings:
Referring to FIG. 1, a node embedding method based on network links and node attribute information includes: constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module; the graph autoencoder maps network data comprising network links and node attribute information into a latent variable space and generates the corresponding distribution function of the latent variables; the prior generation model maps the latent-variable distribution to a Gaussian distribution through a normalizing flow model and generates new variables; the KL divergence between the latent-variable distribution function and the new-variable distribution function is calculated, the distributions of the latent variables and the new variables are optimized and updated based on this KL divergence to obtain a plurality of distribution functions corresponding to the latent variables, and the obtained distribution functions are merged into a combined distribution function; the network-guided constraint module constrains the combined distribution function by means of Laplacian eigenmaps; network data with known network links and node attribute information is collected into a sample set, network data samples and the corresponding adjacency matrices are extracted from the sample set, and the neural network model is trained; the trained neural network model then processes network data whose network links and node attribute information are unknown, obtaining the node feature parameters corresponding to that data, which are combined with the network data to form network data carrying node feature parameters.
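To make the three-module architecture concrete, the following is a minimal PyTorch-style sketch of the graph-autoencoder half of the model; the class and function names, the layer sizes, and the choice of a one-layer GCN body with an inner-product decoder are illustrative assumptions for exposition, not the patent's reference implementation. The prior generation module (the normalizing flow) and the network-guided constraint are sketched at the corresponding steps below.

```python
# Illustrative sketch only: names, sizes, and the GCN/inner-product choices
# are assumptions, not the patented reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Maps node attributes X and normalized adjacency A to the parameters of
    the latent-space distribution q(z | X, A)."""
    def __init__(self, in_dim, hid_dim=32, lat_dim=16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))      # one graph-convolution layer
        mu = a_norm @ self.w_mu(h)           # mean of q(z)
        logvar = a_norm @ self.w_logvar(h)   # log-variance of q(z)
        return mu, logvar

def reparameterize(mu, logvar):
    """Draw z ~ q(z) with the reparameterization trick."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def decode(z):
    """Inner-product decoder: reconstruct edge probabilities from latents."""
    return torch.sigmoid(z @ z.t())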
The method may further comprise: putting the obtained node representations into a classifier for training, and visualizing the trained node embeddings together with the network data.
The method may employ an Adam optimizer to minimize the loss function of the neural network model and optimize its parameters. For the use of the Adam optimizer to minimize the loss function and optimize the model parameters, refer to the following reference: Kingma, D. P., and Ba, J., "Adam: A Method for Stochastic Optimization", ICLR, 2015.
Preferably, when the neural network model is trained, the parameters can be initialized randomly, the model training procedure established using the parameter update rule obtained from the Adam optimizer, and the network data samples fed into the neural network model for training, iterating continuously until the parameter updates converge.
The working process and working principle of the present invention are further explained by a preferred embodiment of the present invention as follows:
a node embedding method based on network link and node attribute information can comprise the following steps:
step one, a neural network model formed by a graph self-encoder, a prior generation model and a network guide constraint module is constructed, and the meaning of each variable in the neural network model is described in detail.
Step two, the generative process of the neural network model is characterized according to the relationships among its modules, to obtain the KL divergence of the neural network model, which is calculated as follows:
$$\mathrm{KL}(p\,\|\,q) = \mathbb{E}_{p(u)}\big[\log p(u) - \log q(u)\big];$$
wherein u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u, which should approach the true Gaussian distribution; p(u) is the standard Gaussian distribution of the intermediate variable u.
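As a sanity check of this formula, the KL divergence can be estimated by Monte Carlo sampling; the sketch below is purely illustrative and assumes, only for the example, that both p and q are one-dimensional Gaussians with made-up parameters.

```python
# Monte Carlo check of KL(p||q) = E_p[log p(u) - log q(u)], with illustrative
# Gaussians standing in for p(u) and q(u).
import torch
from torch.distributions import Normal

p = Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0))  # standard Gaussian p(u)
q = Normal(loc=torch.tensor(1.0), scale=torch.tensor(0.5))  # example q(u)

u = p.sample((100000,))                         # u ~ p(u)
kl_mc = (p.log_prob(u) - q.log_prob(u)).mean()  # E_p[log p(u) - log q(u)]
print(float(kl_mc))  # ~2.81; closed form: -ln 2 + (1 + 1)/(2 * 0.25) - 0.5
```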
Step three, the final loss function of the neural network model is obtained from the reconstruction error loss of the graph autoencoder, the KL divergence loss produced by the prior generation model, and the prior network regularization loss produced by the network-guided constraint module:
$$L = L_{rect} + L_{kl} + \alpha L_{la}$$
in the above formula, L is the final loss; $L_{rect}$ is the reconstruction error loss of the autoencoder; $L_{kl}$ is the KL divergence loss arising from variational inference; $L_{la}$ is the prior network regularization loss arising from the Laplacian eigenmap; $\alpha$ is a hyper-parameter.
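Before each term is detailed below, a minimal sketch of how the final loss might be assembled in code; the function name and the default value of alpha are assumptions of this sketch, and the three term functions are sketched under the corresponding items that follow.

```python
# Combine the three terms as L = L_rect + L_kl + alpha * L_la (formula above).
def total_loss(l_rect, l_kl, l_la, alpha=0.1):
    # alpha weights the prior network regularization (Laplacian eigenmap) term;
    # its value here is an illustrative placeholder, not a tuned setting.
    return l_rect + l_kl + alpha * l_la
```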
(1) $L_{rect}$ is calculated as follows:
$$L_{rect} = \sum_{i=1}^{n}\sum_{j=1}^{n} \ell\big(a_{ij}, \hat{a}_{ij}\big)$$
wherein $\ell$ is the cross-entropy loss; $\hat{a}_{ij}$ is the value at position (i, j) of the adjacency matrix generated by the decoder; $a_{ij}$ is the value at position (i, j) of the input adjacency matrix; n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively.
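A minimal sketch of this reconstruction term, assuming a dense 0/1 adjacency matrix and sigmoid decoder outputs; real adjacency matrices are sparse, so an implementation would typically re-weight the positive entries.

```python
# L_rect: element-wise cross-entropy between the input adjacency matrix a
# (0/1 entries) and the reconstructed edge probabilities a_hat, summed over i, j.
import torch.nn.functional as F

def reconstruction_loss(a_hat, a):
    return F.binary_cross_entropy(a_hat, a, reduction="sum")
```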
(2) $L_{kl}$ is calculated as follows:
$$L_{kl} = \mathrm{KL}\big(q(u)\,\|\,p(u)\big) - \mathbb{E}_{q(u)}\left[\log\left|\det\frac{\partial z}{\partial u}\right|\right]$$
in the formula, det is the Jacobian determinant of the normalizing flow; z is the latent variable; u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u; p(u) is the standard Gaussian distribution of the intermediate variable u; KL is the KL divergence.
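The patent does not fix a particular normalizing flow; as one standard choice, a planar flow layer (Rezende and Mohamed, 2015) is sketched below, since its Jacobian log-determinant, the det term above, has a closed form. All names and initializations are illustrative assumptions.

```python
# One planar-flow step u = z + w * tanh(z.v + b) and its log|det du/dz| term.
# Shown only as a standard flow layer with a cheap Jacobian, not as the
# specific flow used by the patented method.
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.v = nn.Parameter(torch.randn(dim) * 0.1)
        self.w = nn.Parameter(torch.randn(dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        lin = z @ self.v + self.b                        # (n,)
        u = z + torch.tanh(lin).unsqueeze(-1) * self.w   # (n, d)
        # log|det du/dz| = log|1 + tanh'(lin) * (w . v)|; a real implementation
        # would also constrain w . v >= -1 to guarantee invertibility.
        psi = 1.0 - torch.tanh(lin) ** 2                 # derivative of tanh
        logdet = torch.log(torch.abs(1.0 + psi * (self.w @ self.v)) + 1e-8)
        return u, logdet                                 # logdet has shape (n,)
```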
(3) $L_{la}$ is calculated as follows:
$$L_{la} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\big\|\tilde{z}_i - \tilde{z}_j\big\|^2$$
wherein $a_{ij}$ is the value at position (i, j) of the input adjacency matrix, $\tilde{z}_i$ denotes the i-th vector in the data generated by the normalizing flow model, $\tilde{z}_j$ denotes the j-th vector in the data generated by the normalizing flow model, n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively.
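A minimal sketch of this Laplacian eigenmap penalty, computed with the standard trace identity rather than the explicit double sum; dense tensors are assumed for brevity.

```python
# L_la via the identity (1/2) * sum_ij a_ij * ||u_i - u_j||^2 = trace(U^T L U),
# where L = D - A is the unnormalized graph Laplacian and U stacks the vectors
# generated by the normalizing flow.
import torch

def laplacian_loss(u, a):
    deg = torch.diag(a.sum(dim=1))   # degree matrix D
    lap = deg - a                    # graph Laplacian L = D - A
    return torch.trace(u.t() @ lap @ u)
```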
Step four, to optimize the loss function, an Adam optimizer is used to minimize the loss function and optimize the parameters of the neural network model.
Step five, network data is collected and processed into a data set; the network data may be document network data or the like, and the required content and the adjacency matrix can be extracted from the document network, for example as sketched below.
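For a document (e.g., citation) network, the adjacency matrix and node attributes might be assembled as follows; the file names, formats, and use of networkx are assumptions for illustration only.

```python
# Illustrative preprocessing: build the adjacency matrix from an edge list and
# load per-node attribute vectors. File names and formats are hypothetical.
import networkx as nx
import numpy as np

g = nx.read_edgelist("citations.txt", nodetype=int)  # hypothetical edge list file
nodes = sorted(g.nodes())
a = nx.to_numpy_array(g, nodelist=nodes)             # adjacency matrix A (n x n)
x = np.load("features.npy")                          # hypothetical (n, f) node attributes
assert x.shape[0] == a.shape[0], "one attribute row per node"
```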
initializing parameters randomly, establishing a neural network model training process by using the parameter updating rule obtained in the step four, and extracting network data and an adjacency matrix from a data set; and (5) introducing the neural network model for training, and continuously iterating until the parameter updating is converged.
Step seven, the obtained parameter results are recorded into the related network data, the network data is represented by the obtained node representations, the obtained node representations are put into a classifier for training, and the trained node representations are visualized.
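The downstream classifier and visualization of step seven might look as follows; logistic regression and t-SNE are assumed stand-ins, as the patent does not prescribe a particular classifier or projection.

```python
# Feed the learned embeddings to a classifier and project them to 2-D for
# visual inspection; scikit-learn and matplotlib are assumed tooling choices.
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def classify_and_visualize(embeddings, labels):
    clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
    print("training accuracy:", clf.score(embeddings, labels))
    xy = TSNE(n_components=2).fit_transform(embeddings)  # 2-D projection
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=5)
    plt.title("Node embeddings (t-SNE)")
    plt.show()
```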
Training with the update rules solved from the model and putting the node representations into a clustering algorithm yields more accurate clustering results; visualizing the distribution of the nodes, that is, observing the distribution of the node representations in a two-dimensional space and the community to which each node belongs, shows that the method obtains high-quality node embeddings.
Table 1 gives details of one of the selected test data sets.
Table 1: test data
Data set name    Number of nodes    Number of edges    Number of features    Number of groups
PubMed           19717              44338              500                   3
The method can obtain higher-quality node representation, and the experimental data is shown in the following table 2.
Table 2 compares the accuracy and normalized information entropy scores of the experimental results of the present invention with those of other network node representation methods:
method of producing a composite material Rate of accuracy Normalized information entropy
DeepWalk 0.684 0.265
Node2Vec 0.667 0.250
SDNE 0.416 0.158
TADW 0.574 0.201
ARVGA 0.587 0.184
VGAE 0.586 0.178
Method of the invention 0.681 0.314
The above-mentioned embodiments are only intended to illustrate the technical ideas and features of the present invention, and their purpose is to enable those skilled in the art to understand and implement the invention. The present invention is not limited to these embodiments; equivalent changes or modifications made within the spirit of the present invention fall within its scope.

Claims (5)

1. A node embedding method based on network link and node attribute information, characterized in that the method comprises the following steps: constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module; the graph autoencoder maps network data comprising network links and node attribute information into a latent variable space and generates the corresponding distribution function of the latent variables; the prior generation model maps the latent-variable distribution to a Gaussian distribution through a normalizing flow model and generates new variables; the KL divergence between the latent-variable distribution function and the new-variable distribution function is calculated, the distributions of the latent variables and the new variables are optimized and updated based on this KL divergence to obtain a plurality of distribution functions corresponding to the latent variables, and the obtained distribution functions are merged into a combined distribution function; the network-guided constraint module constrains the combined distribution function by means of Laplacian eigenmaps; network data with known network links and node attribute information is collected into a sample set, network data samples and the corresponding adjacency matrices are extracted from the sample set, and the neural network model is trained; and the trained neural network model processes network data whose network links and node attribute information are unknown, obtaining the node feature parameters corresponding to that data, which are combined with the network data to form network data carrying node feature parameters.
2. The method of claim 1, further comprising: putting the obtained node representations into a classifier for training, and visualizing the trained node embeddings together with the network data.
3. The method of claim 1, wherein an Adam optimizer is used to minimize the loss function of the neural network model and optimize the parameters of the neural network model.
4. The node embedding method based on network link and node attribute information as claimed in claim 3, wherein, when training the neural network model, the parameters are initialized randomly, the model training procedure is established using the parameter update rule obtained from the Adam optimizer, the network data samples are put into the neural network model for training, and iteration continues until the parameter updates converge.
5. The method of claim 1, wherein the method comprises the steps of:
step one, constructing a neural network model consisting of a graph autoencoder, a prior generation model, and a network-guided constraint module, and describing the meaning of each variable in the neural network model in detail;
step two, characterizing the generative process of the neural network model according to the relationships among its modules, to obtain the KL divergence of the neural network model, which is calculated as follows:
$$\mathrm{KL}(p\,\|\,q) = \mathbb{E}_{p(u)}\big[\log p(u) - \log q(u)\big];$$
wherein u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u; p(u) is the standard Gaussian distribution of the intermediate variable u;
step three, obtaining the final loss function of the neural network model from the reconstruction error loss of the graph autoencoder, the KL divergence loss produced by the prior generation model, and the prior network regularization loss produced by the network-guided constraint module:
$$L = L_{rect} + L_{kl} + \alpha L_{la}$$
in the above formula, L is the final loss; $L_{rect}$ is the reconstruction error loss of the autoencoder; $L_{kl}$ is the KL divergence loss arising from variational inference; $L_{la}$ is the prior network regularization loss arising from the Laplacian eigenmap; $\alpha$ is a hyper-parameter;
(1) $L_{rect}$ is calculated as follows:
$$L_{rect} = \sum_{i=1}^{n}\sum_{j=1}^{n} \ell\big(a_{ij}, \hat{a}_{ij}\big)$$
wherein $\ell$ is the cross-entropy loss; $\hat{a}_{ij}$ is the value at position (i, j) of the adjacency matrix generated by the decoder; $a_{ij}$ is the value at position (i, j) of the input adjacency matrix; n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively;
(2) $L_{kl}$ is calculated as follows:
$$L_{kl} = \mathrm{KL}\big(q(u)\,\|\,p(u)\big) - \mathbb{E}_{q(u)}\left[\log\left|\det\frac{\partial z}{\partial u}\right|\right]$$
in the formula, det is the Jacobian determinant of the normalizing flow; z is the latent variable; u is the intermediate-layer variable of the normalizing flow model; q(u) is the distribution of the intermediate variable u; p(u) is the standard Gaussian distribution of the intermediate variable u; KL is the KL divergence;
(3) $L_{la}$ is calculated as follows:
$$L_{la} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\big\|\tilde{z}_i - \tilde{z}_j\big\|^2$$
wherein $a_{ij}$ is the value at position (i, j) of the input adjacency matrix, $\tilde{z}_i$ denotes the i-th vector in the data generated by the normalizing flow model, $\tilde{z}_j$ denotes the j-th vector in the data generated by the normalizing flow model, n denotes the number of nodes, and i and j denote the i-th and j-th nodes respectively;
step four, using an Adam optimizer to minimize the loss function and optimize the parameters of the neural network model;
step five, collecting network data and processing it into a data set;
step six, initializing the parameters randomly, establishing the neural network model training procedure using the parameter update rule obtained in step four, extracting network data and adjacency matrices from the data set, feeding them into the neural network model for training, and iterating continuously until the parameter updates converge;
step seven, recording the obtained parameter results into the related network data, representing the network data with the obtained node representations, putting the obtained node representations into a classifier for training, and visualizing the trained node representations.
CN202011319384.5A 2020-11-23 2020-11-23 Node embedding method based on network link and node attribute information Pending CN112286996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011319384.5A CN112286996A (en) 2020-11-23 2020-11-23 Node embedding method based on network link and node attribute information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011319384.5A CN112286996A (en) 2020-11-23 2020-11-23 Node embedding method based on network link and node attribute information

Publications (1)

Publication Number Publication Date
CN112286996A (en) 2021-01-29

Family

ID=74425125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011319384.5A Pending CN112286996A (en) 2020-11-23 2020-11-23 Node embedding method based on network link and node attribute information

Country Status (1)

Country Link
CN (1) CN112286996A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297575A (en) * 2021-06-11 2021-08-24 浙江工业大学 Multi-channel graph vertical federal model defense method based on self-encoder
CN115529290A (en) * 2022-08-30 2022-12-27 中国人民解放军战略支援部队信息工程大学 IP street level positioning method and device based on graph neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805167A (en) * 2018-05-04 2018-11-13 江南大学 L aplace function constraint-based sparse depth confidence network image classification method
CN108920712A (en) * 2018-07-25 2018-11-30 中国海洋大学 The representation method and device of nodes
CN109376857A (en) * 2018-09-03 2019-02-22 上海交通大学 A kind of multi-modal depth internet startup disk method of fusion structure and attribute information
CN109753589A (en) * 2018-11-28 2019-05-14 中国科学院信息工程研究所 A kind of figure method for visualizing based on figure convolutional network
US20190180732A1 (en) * 2017-10-19 2019-06-13 Baidu Usa Llc Systems and methods for parallel wave generation in end-to-end text-to-speech
US20200134428A1 (en) * 2018-10-29 2020-04-30 Nec Laboratories America, Inc. Self-attentive attributed network embedding
CN111340187A (en) * 2020-02-18 2020-06-26 河北工业大学 Network characterization method based on counter attention mechanism

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190180732A1 (en) * 2017-10-19 2019-06-13 Baidu Usa Llc Systems and methods for parallel wave generation in end-to-end text-to-speech
CN108805167A (en) * 2018-05-04 2018-11-13 江南大学 L aplace function constraint-based sparse depth confidence network image classification method
CN108920712A (en) * 2018-07-25 2018-11-30 中国海洋大学 The representation method and device of nodes
CN109376857A (en) * 2018-09-03 2019-02-22 上海交通大学 A kind of multi-modal depth internet startup disk method of fusion structure and attribute information
US20200134428A1 (en) * 2018-10-29 2020-04-30 Nec Laboratories America, Inc. Self-attentive attributed network embedding
CN109753589A (en) * 2018-11-28 2019-05-14 中国科学院信息工程研究所 A kind of figure method for visualizing based on figure convolutional network
CN111340187A (en) * 2020-02-18 2020-06-26 河北工业大学 Network characterization method based on counter attention mechanism

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling", Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) *
HONGYAO KE et al.: "Deep Mutual Encode Model for Network Embedding From Structural Identity", IEEE Access *
LEO GUO: "A Summary of Normalizing Flows", Zhihu *
ZHOU, Xiaoxu et al.: "Network vertex representation learning methods", Journal of East China Normal University (Natural Science) *
ZHANG, Pu et al.: "Semi-supervised attributed network representation learning method", Computer Engineering and Applications *
WANG, Jie et al.: "Semi-supervised network representation learning model based on graph convolutional networks and autoencoders", Pattern Recognition and Artificial Intelligence *
BAI, Bo et al.: "Graph neural networks", Scientia Sinica Mathematica *
CHEN, Yiqi et al.: "Attributed network embedding method based on composite relation graph convolution", Journal of Computer Research and Development *
CHEN, Mengxue et al.: "Network representation learning framework based on adversarial graph convolution", Pattern Recognition and Artificial Intelligence *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297575A (en) * 2021-06-11 2021-08-24 浙江工业大学 Multi-channel graph vertical federal model defense method based on self-encoder
CN113297575B (en) * 2021-06-11 2022-05-17 浙江工业大学 Multi-channel graph vertical federal model defense method based on self-encoder
CN115529290A (en) * 2022-08-30 2022-12-27 中国人民解放军战略支援部队信息工程大学 IP street level positioning method and device based on graph neural network

Similar Documents

Publication Publication Date Title
Xu et al. Ternary compression for communication-efficient federated learning
CN110889015B (en) Independent decoupling convolutional neural network characterization method for graph data
CN111950594A (en) Unsupervised graph representation learning method and unsupervised graph representation learning device on large-scale attribute graph based on sub-graph sampling
Dai et al. Sliced iterative normalizing flows
CN109753589A Graph visualization method based on graph convolutional networks
CN113065974B (en) Link prediction method based on dynamic network representation learning
CN113159239A (en) Method for processing graph data by quantum graph convolutional neural network
Liu et al. Real-time streaming graph embedding through local actions
CN112286996A (en) Node embedding method based on network link and node attribute information
CN112417289A (en) Information intelligent recommendation method based on deep clustering
CN108764362A (en) K-means clustering methods based on neural network
CN113989544A Group discovery method based on deep graph convolutional networks
CN112270374B (en) Clustering method of mathematical expression based on SOM (sequence of events) clustering model
Lutton et al. Hölder functions and deception of genetic algorithms
Chang Latent variable modeling for generative concept representations and deep generative models
CN114265954B (en) Graph representation learning method based on position and structure information
CN116108127A (en) Document level event extraction method based on heterogeneous graph interaction and mask multi-head attention mechanism
CN115168326A (en) Hadoop big data platform distributed energy data cleaning method and system
CN114692867A (en) Network representation learning algorithm combining high-order structure and attention mechanism
CN114511060A Attribute completion and network representation method based on autoencoders and generative adversarial networks
CN103824279A (en) Image segmentation method based on organizational evolutionary cluster algorithm
CN111882441A (en) User prediction interpretation Treeshap method based on financial product recommendation scene
CN112307288A (en) User clustering method for multiple channels
Zhang et al. Color clustering using self-organizing maps
CN110851732A Attribute network semi-supervised community discovery method based on non-negative matrix tri-factorization

Legal Events

Code    Title
PB01    Publication
SE01    Entry into force of request for substantive examination
WD01    Invention patent application deemed withdrawn after publication (application publication date: 20210129)