CN114692867A - Network representation learning algorithm combining high-order structure and attention mechanism - Google Patents

Network representation learning algorithm combining high-order structure and attention mechanism

Info

Publication number
CN114692867A
CN114692867A
Authority
CN
China
Prior art keywords: node, network, matrix, data, representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210293116.3A
Other languages
Chinese (zh)
Inventor
于硕
黄华飞
丁锋
陈志奎
夏锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202210293116.3A priority Critical patent/CN114692867A/en
Publication of CN114692867A publication Critical patent/CN114692867A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking


Abstract

The invention belongs to the field of network representation learning and discloses a network representation learning algorithm combining a high-order structure with an attention mechanism. First, data preparation and preprocessing are carried out; second, the attribute features and structural features of the data are extracted with graph convolution layers; then, an attention mechanism learns the similarity between the attribute features and the structural features, and the representation of each node's neighborhood in the network is aggregated according to this similarity; next, a graph convolution operation converts the aggregated representation into a representation for the downstream task; finally, the parameters of the model are continuously updated with a loss function until the optimal model is obtained, yielding an efficient network representation. The method can be applied to spam identification, false news detection and disease prediction.

Description

Network representation learning algorithm combining high-order structure and attention mechanism
Technical Field
The invention belongs to the field of network representation learning, and relates to a network representation learning algorithm combining a high-order structure and an attention mechanism, which can be used for node classification and link prediction in a social network.
Background
Many complex systems can be represented as networks, such as social networks, biological networks, and transportation networks. Network representation learning learns a low-dimensional dense representation vector for each node and applies it to common network analysis tasks such as node classification and link prediction. These network analysis tasks have great potential in application fields such as spam identification, false news detection, and disease prediction.
Currently, the mainstream network representation learning algorithms for social networks can be divided into the following categories:
Network representation learning algorithms based on matrix factorization. For example, Yang et al. proposed TADW in 2015, which adds text features to matrix factorization when learning the network representation; Cao et al. proposed GraRep in 2015, which handles weighted networks and integrates the global structural information of the network. The main idea of matrix-factorization-based network representation learning is to decompose a matrix into a combination of several simple matrices. These simple matrices are low-dimensional, preserve the original information of the network such as the relationships between nodes, and can serve as the node representations. However, when the matrix is too large, matrix factorization requires a large amount of memory. Furthermore, matrix factorization algorithms are not suitable for supervised and semi-supervised tasks.
Network representation learning algorithms based on random walks. For example, the DeepWalk algorithm proposed by Perozzi et al. in 2014 treats a node as a word, generates random node sequences, and learns node representations with the Word2vec model; Node2vec, proposed by Grover and Leskovec in 2016, samples the network by adjusting parameters that balance breadth-first and depth-first exploration. These algorithms generate node sequences over the network structure with random walks while preserving the associations between nodes. The resulting node feature vectors let downstream tasks mine network information in a low-dimensional space, so random walks play an important role in dimensionality reduction. However, this approach depends heavily on the walk strategy, which can produce uncertain node relationships and affect the stability of the learned representations. Although some random-walk methods preserve both local and global information of the network, their parameters cannot be adjusted effectively to accommodate different types of networks.
Network representation learning algorithms based on deep learning. For example, Kipf and Welling proposed the graph convolutional network (GCN) in 2017, applying a convolution operator to network data through a first-order approximation; the graph attention network (GAT) proposed by Velickovic et al. in 2018 assigns different weights to a node's neighbors with a self-attention mechanism and effectively aggregates neighborhood features. Extending existing deep learning models to network data has become a trend, and in recent years many deep-learning-based network representation learning models, such as graph convolutional networks, graph attention networks, and graph autoencoders, have been proposed. Deep learning improves the applicability and performance of network representation learning algorithms, but it is susceptible to over-fitting and over-smoothing.
Network representation learning algorithms that incorporate high-order structure. For example, Rossi et al. proposed HONE in 2018, which uses different network motifs to learn the network representation; Xu et al. proposed MORE in 2020, which combines network motifs with attribute information and achieves good performance on downstream tasks. These high-order methods model structures such as subgraphs, which capture high-order, fine-grained structural information and thus enrich the learned representations. However, existing methods treat different kinds of high-order structural information in the same way and cannot fully exploit the high-order information specific to different types of networks.
Disclosure of Invention
Most existing network representation learning algorithms do not consider the useful information contained in high-order connectivity patterns, which limits the quality of the learned node representations. Methods that do use high-order connectivity patterns ignore how the importance of high-order structure for network representation learning varies across scenarios, and therefore cannot exploit the useful information these patterns carry.
In view of the problems in the prior art, an object of the present invention is to provide a network representation learning algorithm that combines a high-order structure with an attention mechanism. To overcome the suboptimal performance caused by ignoring the applicability of high-order structure in network representation learning, the attention mechanism adaptively learns the high-order connectivity patterns present in the network, adapting to their differences across scenarios and generating an effective network representation for downstream classification tasks. We learn the structural and attribute information of the network and then adaptively fuse them with an attention mechanism.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a network representation learning algorithm based on combination of a high-order structure and an attention mechanism comprises the steps of firstly preprocessing data to obtain an attribute information matrix and a structure information matrix of the data; secondly, extracting attribute characteristics and structural characteristics of nodes in the network data by using a graph convolution formula respectively; then, calculating the scores by using an attention mechanism and aggregating the node neighborhoods to obtain an aggregated new node representation, and converting the aggregated new node representation into a probability form which accords with a downstream classification task by using graph convolution operation; finally, continuously updating the parameters of the algorithm model by using the loss function until the optimal algorithm model is obtained and used for a downstream classification task; the method comprises the following steps:
Step (1): prepare and preprocess the data to obtain its attribute information matrix X_a and structural information matrix X_s.
1) Check whether the data set has attribute information; if so, keep it unchanged, otherwise use the identity matrix I ∈ R^{N×N} as the attribute information X_a ∈ R^{N×d_a} of the network data, where N is the number of nodes in the network data and d_a is the dimension of the attribute information;
2) For the topology information of the network, compute the 2- to 4-order motif node degree values of each node with the motif node degree formula and use them as the structural information matrix X_s ∈ R^{N×d_s} of the network data set, where d_s is the dimension of the structural information;
3) Normalize the attribute information matrix X_a and the structural information matrix X_s separately; the attribute information matrix is normalized as
X_a[i,j] ← X_a[i,j] / Σ_k X_a[i,k]
where X_a[i,j] denotes the value in the i-th row and j-th column of X_a; the structural information matrix is normalized as
X_s[i,j] ← X_s[i,j] / Σ_k X_s[i,k]
where X_s[i,j] denotes the value in the i-th row and j-th column of X_s;
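A minimal preprocessing sketch in Python follows (an assumed illustration, not the patent's code): it builds X_a and X_s for a small graph, falling back to the identity matrix when no attributes exist, uses the node degree and the triangle (M31) motif degree as example columns of X_s since the patent's full 2- to 4-order motif degree formula is not reproduced here, and row-normalizes both matrices as one possible standardization.

```python
import numpy as np

def preprocess(adj, attributes=None):
    """Build the attribute matrix X_a and a structural matrix X_s from an adjacency matrix."""
    n = adj.shape[0]
    # Attribute matrix: use the given attributes, otherwise the identity matrix I
    x_a = attributes.astype(float) if attributes is not None else np.eye(n)
    # Structural matrix (illustrative): column 0 = node degree (2-node motif),
    # column 1 = triangle (M31) motif degree; diag(A^3)/2 counts triangles through each node
    degree = adj.sum(axis=1).astype(float)
    triangles = np.linalg.matrix_power(adj, 3).diagonal() / 2.0
    x_s = np.stack([degree, triangles], axis=1)
    # Row-normalize both matrices (assumed standardization; the patent's exact formula may differ)
    x_a = x_a / np.maximum(x_a.sum(axis=1, keepdims=True), 1e-12)
    x_s = x_s / np.maximum(x_s.sum(axis=1, keepdims=True), 1e-12)
    return x_a, x_s
```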
Step (2): extract the attribute feature matrix H_a and the structural feature matrix H_s from the network data.
1) Obtain the adjacency matrix of the data, A ∈ R^{N×N}, where A_ij = 1 indicates that node i and node j are connected and A_ij = 0 that they are not; the adjacency matrix obtained after adding a self-loop to every node is Â = A + I_N, where I_N ∈ R^{N×N} is the identity matrix representing the self-loop connections; the degree matrix of the graph nodes is D ∈ R^{N×N} with D_ii = Σ_j Â_ij;
2) Extract the attribute feature matrix H_a of the network with a graph convolution layer; the graph convolution formula is:
H_a = σ(D^{-1/2} Â D^{-1/2} X_a W_a)
where σ(·) denotes a nonlinear function, here ReLU(x) = max{0, x}, and W_a are the trainable weights of this graph convolution layer.
Extract the structural feature matrix H_s of the network with a graph convolution layer; the graph convolution formula is:
H_s = σ(D^{-1/2} Â D^{-1/2} X_s W_s)
where σ(·) denotes a nonlinear function, here ReLU(x) = max{0, x}, and W_s are the trainable weights of this graph convolution layer;
Step (3): aggregate the neighborhood representations of the nodes with an attention mechanism to obtain a new node representation matrix H_att.
1) For node i, compute the attention score of each neighbor j with the cosine similarity function:
e_ij = β · cos(H_a[i,:], H_s[j,:])
where β ∈ R is a learnable scaling factor;
2) Normalize the attention scores e_ij:
α_ij = exp(e_ij) / Σ_{k∈N(i)} exp(e_ik)
where N(i) denotes the neighborhood node set of node i;
3) Aggregate the neighborhood of node i with the attention scores to obtain the new node representation H_att[i,:]; the specific operation is:
H_att[i,:] = Σ_{j∈N(i)} α_ij H_s[j,:]
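A sketch of this structural attention step (an assumed Python illustration): the score of each neighbor is the scaled cosine similarity between the node's attribute features and the neighbor's structural features, the scores are softmax-normalized over the neighborhood, and the neighbors' features are aggregated; that the aggregation uses H_s is an assumption made for illustration.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def attention_aggregate(adj, h_a, h_s, beta=1.0):
    """Aggregate each node's neighborhood with cosine-similarity attention scores."""
    n = adj.shape[0]
    h_att = np.zeros_like(h_s)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        if nbrs.size == 0:                              # isolated node: keep its own features
            h_att[i] = h_s[i]
            continue
        scores = np.array([beta * cosine(h_a[i], h_s[j]) for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                            # softmax over the neighborhood
        h_att[i] = alpha @ h_s[nbrs]                    # weighted aggregation (assumed to use H_s)
    return h_att
```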
Step (4): convert the representation into the representation matrix H_out for the downstream task with a graph convolution operation:
H_out = σ(D^{-1/2} Â D^{-1/2} H_att W_out)
where σ(·) uses ReLU(x) = max{0, x} and W_out are the trainable weights of this graph convolution layer;
Step (5): continuously update the parameters W of the model with the loss function until the optimal model is obtained.
1) Use the softmax function to convert the downstream-task node representation H_out into the class probability matrix Z:
Z_ic = exp(H_out[i,c]) / Σ_{c'} exp(H_out[i,c'])
2) The loss function used in the optimization process is the cross-entropy loss
L = - Σ_{i∈Y_L} Σ_{c=1}^{C} Y_ic ln Z_ic
where Y_L is the index set of labelled data in the training set and Y is the true label of the data, with C classes in total; the weight parameters are updated through training so that the model is continuously optimized, and after the training loss L converges the optimal algorithm model is obtained.
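The full pipeline and the optimization of steps (4) and (5) can be sketched end to end (an assumed PyTorch re-implementation for illustration, not the patent's code; the layer sizes, optimizer settings, and the use of H_s in the aggregation are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighOrderAttentionGCN(nn.Module):
    """Two GCN branches, cosine attention over the neighborhood, and an output GCN layer."""
    def __init__(self, d_a, d_s, hidden, num_classes):
        super().__init__()
        self.w_a = nn.Linear(d_a, hidden, bias=False)             # W_a
        self.w_s = nn.Linear(d_s, hidden, bias=False)             # W_s
        self.w_out = nn.Linear(hidden, num_classes, bias=False)   # W_out
        self.beta = nn.Parameter(torch.ones(1))                   # learnable scaling factor β

    def forward(self, a_norm, adj, x_a, x_s):
        # a_norm = D^-1/2 (A+I) D^-1/2; adj is assumed to contain self-loops
        h_a = F.relu(a_norm @ self.w_a(x_a))                      # attribute features H_a
        h_s = F.relu(a_norm @ self.w_s(x_s))                      # structural features H_s
        scores = self.beta * (F.normalize(h_a) @ F.normalize(h_s).t())  # cosine scores e_ij
        scores = scores.masked_fill(adj == 0, float("-inf"))      # restrict to the neighborhood
        alpha = torch.softmax(scores, dim=1)                      # attention weights α_ij
        h_att = alpha @ h_s                                       # aggregated representation H_att
        return a_norm @ self.w_out(h_att)                         # H_out logits

def train(model, a_norm, adj, x_a, x_s, labels, train_mask, epochs=200, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        out = model(a_norm, adj, x_a, x_s)
        loss = F.cross_entropy(out[train_mask], labels[train_mask])  # labelled nodes only
        loss.backward()
        opt.step()
```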
Compared with existing algorithms, the beneficial effects of the invention are as follows: the invention uses network motifs to describe the high-order structure of the network as structural information, uses an attention mechanism to effectively learn the relevance between node attributes and structure in the network, learns efficient network node representations, and shows excellent performance in downstream classification tasks. The method overcomes the suboptimal performance caused by ignoring the applicability of high-order structure in network representation learning.
Drawings
Figure 1 is the basic framework of the invention.
FIG. 2 is a schematic diagram of the structural attention mechanism of the present invention.
Detailed Description
The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.
In order to make the technical problems solved, technical solutions adopted and technical effects achieved by the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
The method comprises the following five steps: (1) preparing data and preprocessing the data; (2) extracting attribute features and structural features of the network data; (3) aggregating the node neighborhood representations using an attention mechanism; (4) converting the learned node representation into a form suitable for downstream tasks; (5) repeatedly optimizing by using a loss function until an optimal algorithm model is obtained;
the first step, data preparation and data preprocessing, so that the distribution of data is more suitable for the requirement of a task.
1) Check whether the data set has attribute information; if so, normalize the attributes, otherwise use the identity matrix I as the attribute information X_a of the network data.
2) For the topology information of the network data, compute the 2- to 4-order motif node degree values of each node with the motif node degree formula and use them as the structural information X_s of the network data set.
3) Normalize the attribute information X_a and the structural information X_s separately, as described in step (1) above.
And secondly, extracting the attribute characteristics and the structural characteristics of the network data for the subsequent calculation of the attention mechanism.
1) Extract the attribute features with a graph convolution layer: H_a = σ(D^{-1/2} Â D^{-1/2} X_a W_a).
2) Extract the structural features with a graph convolution layer: H_s = σ(D^{-1/2} Â D^{-1/2} X_s W_s).
And thirdly, aggregating the node neighborhood representation by using an attention mechanism.
1) Compute the neighborhood attention score of node i with the cosine similarity formula and a scaling factor: e_ij = β · cos(H_a[i,:], H_s[j,:]).
2) Normalize the attention scores with the softmax function: α_ij = exp(e_ij) / Σ_{k∈N(i)} exp(e_ik).
3) Aggregate each node's neighborhood with the scores computed by the structural attention mechanism: H_att[i,:] = Σ_{j∈N(i)} α_ij H_s[j,:].
And fourthly, converting the learned node representation into a form suitable for the downstream task.
The graph convolution formula H_out = σ(D^{-1/2} Â D^{-1/2} H_att W_out) transforms the node representations into a form that conforms to the downstream task.
And fifthly, repeatedly optimizing by using the loss function until an optimal algorithm model is obtained.
1) In the node classification task, convert the downstream-task node representations into class probabilities with the softmax function: Z = softmax(H_out).
2) Set the loss function of the node classification task to the cross-entropy loss L = - Σ_{i∈Y_L} Σ_{c=1}^{C} Y_ic ln Z_ic.
3) Train continuously and update the model weights; after the loss function converges, the optimal algorithm model is obtained.
In conjunction with the scheme of the present invention, the experimental analysis is as follows:
Networks such as social networks contain many high-order structures. Beyond pairwise connections, ternary and even higher-order complex connectivity patterns are widespread in social networks; they are widely used in scenarios such as anomaly detection and social or academic network analysis, and are important in fields such as spam classification and anomaly detection. Experiments and comparisons on social network data sets can therefore demonstrate the effectiveness of the present model.
(1) Introduction of network motif usage and social network data sets
A large number of high-order connectivity patterns with different structures have been found in studies of social networks, academic networks, traffic networks, and others, and these patterns play an important role in the networks. In a social network, besides pairwise connections, there are ternary closures in which three users are mutually connected, i.e., the M31 motif (Table 1), meaning that the three users know one another and form a small, closely related group. Such a connectivity pattern reflects an individual's network structure in the social network and is ubiquitous. Network motifs occur with high frequency in real networks: they appear far more often in a real network than in a random network with the same number of nodes and edges (a short numerical check of this appears after Table 1). The basic network motifs used in the experiments are shown in Table 1.
Table 1. Network motifs used in the experiments
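The claim that motifs are far more frequent in real networks than in size-matched random networks can be checked with a short script (an assumed illustration using networkx, with the karate-club graph as a stand-in for a real social network):

```python
import networkx as nx

real = nx.karate_club_graph()                       # stand-in for a real social network
rand = nx.gnm_random_graph(real.number_of_nodes(), real.number_of_edges(), seed=0)

# Count the triangle (M31) motif; each triangle is counted once per member node, so divide by 3
real_triangles = sum(nx.triangles(real).values()) // 3
rand_triangles = sum(nx.triangles(rand).values()) // 3
print(f"M31 triangles: real={real_triangles}, random={rand_triangles}")
```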
The model is evaluated on four public social network data sets, Cora, Citationv1, DBLPv7 and ACMv9, with node classification as the task. Detailed information on the data sets is shown in Table 2.
Table 2. Data set statistics

Data set      Nodes   Classes
Cora          2708    7
Citationv1    8935    5
DBLPv7        5484    5
ACMv9         9360    5
The Cora data set is a paper citation network. Nodes are published articles, edges are citation links, and each node is described by a 1433-dimensional one-hot vector whose entries indicate whether each dictionary word appears in the paper. All nodes in the network are divided into 7 classes, for a total of 2708 samples. Citationv1, DBLPv7 and ACMv9 are three citation networks extracted from the AMiner data set, drawn from Microsoft, DBLP and ACM respectively. In these data sets, each node represents an article and each edge represents a citation link between articles. Citationv1 contains 8935 nodes, DBLPv7 contains 5484 nodes, and ACMv9 contains 9360 nodes; all three data sets contain 5 classes.
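For reference, the Cora data set can be obtained through a standard loader such as the PyTorch Geometric Planetoid interface (an assumption about tooling; the patent does not state how the data were prepared, and the AMiner-derived sets would be loaded separately):

```python
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]
print(data.num_nodes, data.num_edges, dataset.num_classes)  # 2708 nodes, 7 classes
```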
(2) Comparison of experimental results with other strong methods
Table 3 compares the experimental results of this method with other strong models from the network representation learning field on the social network node classification task. MLP is a classical neural network model in deep learning that completes various complex tasks by stacking several linear and nonlinear transformations; GCN is a semi-supervised graph convolutional network model that learns the network representation by aggregating information from neighborhood nodes; SGC removes unnecessarily complex and redundant computation from GCN, accelerating computation while effectively limiting performance degradation; GAT is a graph neural network model that aggregates neighborhood node features with an attention mechanism and achieves state-of-the-art results on many tasks; MORE learns node representations by aggregating node structural information and node attribute information with simple operators.
Table 3. Node classification accuracy results on the data sets
The comparison of experimental results shows that the method uses network motifs to describe the many high-order connectivity patterns in the network as structural information, uses an attention mechanism to effectively learn the relevance between node attributes and structure in the network, learns efficient network node representations, and performs well in the downstream classification task.
The above-mentioned embodiments only express embodiments of the present invention and should not be understood as limiting the scope of the patent. It should be noted that those skilled in the art can make many variations and modifications without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.

Claims (1)

1. A network representation learning algorithm based on the combination of a high-order structure and an attention mechanism, characterized in that: first, the data are preprocessed to obtain the attribute information matrix and the structural information matrix of the data; second, the attribute features and structural features of the nodes in the network data are extracted with graph convolution formulas; then, attention scores are computed and used to aggregate each node's neighborhood, and the aggregated node representation is converted by a graph convolution operation into a probability form suited to the downstream classification task; finally, the parameters of the model are continuously updated with the loss function until the optimal model is obtained for the downstream classification task; the method comprises the following steps:
Step (1): prepare and preprocess the data to obtain its attribute information matrix X_a and structural information matrix X_s.
1) Check whether the data set has attribute information; if so, keep it unchanged, otherwise use the identity matrix I ∈ R^{N×N} as the attribute information X_a ∈ R^{N×d_a} of the network data, where N is the number of nodes in the network data and d_a is the dimension of the attribute information;
2) For the topology information of the network, compute the 2- to 4-order motif node degree values of each node with the motif node degree formula and use them as the structural information matrix X_s ∈ R^{N×d_s} of the network data set, where d_s is the dimension of the structural information;
3) Normalize the attribute information matrix X_a and the structural information matrix X_s separately; the attribute information matrix is normalized as
X_a[i,j] ← X_a[i,j] / Σ_k X_a[i,k]
where X_a[i,j] denotes the value in the i-th row and j-th column of X_a; the structural information matrix is normalized as
X_s[i,j] ← X_s[i,j] / Σ_k X_s[i,k]
where X_s[i,j] denotes the value in the i-th row and j-th column of X_s;
Step (2): extract the attribute feature matrix H_a and the structural feature matrix H_s from the network data.
1) Obtain the adjacency matrix of the data, A ∈ R^{N×N}, where A_ij = 1 indicates that node i and node j are connected and A_ij = 0 that they are not; the adjacency matrix obtained after adding a self-loop to every node is Â = A + I_N, where I_N ∈ R^{N×N} is the identity matrix representing the self-loop connections; the degree matrix of the graph nodes is D ∈ R^{N×N} with D_ii = Σ_j Â_ij;
2) Extract the attribute feature matrix H_a of the network with a graph convolution layer; the graph convolution formula is:
H_a = σ(D^{-1/2} Â D^{-1/2} X_a W_a)
where σ(·) denotes a nonlinear function, here ReLU(x) = max{0, x}, and W_a are the trainable weights of this graph convolution layer;
extract the structural feature matrix H_s of the network with a graph convolution layer; the graph convolution formula is:
H_s = σ(D^{-1/2} Â D^{-1/2} X_s W_s)
where σ(·) denotes a nonlinear function, here ReLU(x) = max{0, x}, and W_s are the trainable weights of this graph convolution layer;
Step (3): aggregate the neighborhood representations of the nodes with an attention mechanism to obtain a new node representation matrix H_att.
1) For node i, compute the attention score of each neighbor j with the cosine similarity function:
e_ij = β · cos(H_a[i,:], H_s[j,:])
where β ∈ R is a learnable scaling factor;
2) Normalize the attention scores e_ij:
α_ij = exp(e_ij) / Σ_{k∈N(i)} exp(e_ik)
where N(i) denotes the neighborhood node set of node i;
3) Aggregate the neighborhood of node i with the attention scores to obtain the new node representation H_att[i,:]; the specific operation is:
H_att[i,:] = Σ_{j∈N(i)} α_ij H_s[j,:]
Step (4): convert the representation into the representation matrix H_out for the downstream task with a graph convolution operation:
H_out = σ(D^{-1/2} Â D^{-1/2} H_att W_out)
where σ(·) uses ReLU(x) = max{0, x} and W_out are the trainable weights of this graph convolution layer;
Step (5): continuously update the parameters W of the model with the loss function until the optimal model is obtained.
1) Use the softmax function to convert the downstream-task node representation H_out into the class probability matrix Z:
Z_ic = exp(H_out[i,c]) / Σ_{c'} exp(H_out[i,c'])
2) The loss function used in the optimization process is the cross-entropy loss
L = - Σ_{i∈Y_L} Σ_{c=1}^{C} Y_ic ln Z_ic
where Y_L is the index set of labelled data in the training set and Y is the true label of the data, with C classes in total; the weight parameters are updated through training so that the model is continuously optimized, and after the training loss L converges the optimal algorithm model is obtained.
CN202210293116.3A 2022-03-24 2022-03-24 Network representation learning algorithm combining high-order structure and attention mechanism Pending CN114692867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210293116.3A CN114692867A (en) 2022-03-24 2022-03-24 Network representation learning algorithm combining high-order structure and attention mechanism

Publications (1)

Publication Number Publication Date
CN114692867A true CN114692867A (en) 2022-07-01

Family

ID=82139051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210293116.3A Pending CN114692867A (en) 2022-03-24 2022-03-24 Network representation learning algorithm combining high-order structure and attention mechanism

Country Status (1)

Country Link
CN (1) CN114692867A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115130663A (en) * 2022-08-30 2022-09-30 中国海洋大学 Heterogeneous network attribute completion method based on graph neural network and attention mechanism
CN115130663B (en) * 2022-08-30 2023-10-13 中国海洋大学 Heterogeneous network attribute completion method based on graph neural network and attention mechanism

Similar Documents

Publication Publication Date Title
Liao et al. Efficient graph generation with graph recurrent attention networks
Lu et al. Learning to pre-train graph neural networks
CN104298873B (en) A kind of attribute reduction method and state of mind appraisal procedure based on genetic algorithm and rough set
US7469237B2 (en) Method and apparatus for fractal computation
CN104601565A (en) Network intrusion detection classification method of intelligent optimization rules
CN112256870A (en) Attribute network representation learning method based on self-adaptive random walk
CN115688776B (en) Relation extraction method for Chinese financial text
Gu et al. Application of fuzzy decision tree algorithm based on mobile computing in sports fitness member management
CN112767186A (en) Social network link prediction method based on 7-subgraph topological structure
Xue et al. Classification and identification of unknown network protocols based on CNN and T-SNE
CN114692867A (en) Network representation learning algorithm combining high-order structure and attention mechanism
Fang et al. Contrastive multi-modal knowledge graph representation learning
CN110851733A (en) Community discovery and emotion interpretation method based on network topology and document content
CN114118416A (en) Variational graph automatic encoder method based on multi-task learning
CN117272195A (en) Block chain abnormal node detection method and system based on graph convolution attention network
CN116628524A (en) Community discovery method based on adaptive graph attention encoder
CN111126443A (en) Network representation learning method based on random walk
CN114265954B (en) Graph representation learning method based on position and structure information
Choong et al. Optimizing variational graph autoencoder for community detection
WO2022227957A1 (en) Graph autoencoder-based fusion subspace clustering method and system
CN115269853A (en) Heterogeneous graph neural network false news detection algorithm based on motif
CN114861450A (en) Attribute community detection method based on potential representation and graph regular nonnegative matrix decomposition
CN114595336A (en) Multi-relation semantic solution model based on Gaussian mixture model
Xie et al. L-bgnn: Layerwise trained bipartite graph neural networks
Shan et al. NF-VGA: Incorporating Normalizing Flows into Graph Variational Autoencoder for Embedding Attribute Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination