CN115131605A - Structure-aware graph contrastive learning method based on adaptive subgraphs - Google Patents

Structure-aware graph contrastive learning method based on adaptive subgraphs Download PDF

Info

Publication number
CN115131605A
CN115131605A (application number CN202210665049.3A)
Authority
CN
China
Prior art keywords
graph
motif
node
embedding
types
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210665049.3A
Other languages
Chinese (zh)
Inventor
于硕
彭寅
陈志奎
夏锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202210665049.3A priority Critical patent/CN115131605A/en
Publication of CN115131605A publication Critical patent/CN115131605A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of graph representation learning and provides a structure-aware graph contrastive learning method based on adaptive subgraphs. The method comprises a motif-based subgraph generation algorithm, a graph augmentation algorithm, a motif-based subgraph embedding algorithm, a GNN-based graph embedding algorithm and a subgraph contrastive learning framework. It helps the model better capture local semantic information in unsupervised scenarios, so that high-quality node embeddings are learned for downstream graph learning tasks such as node classification, link prediction and recommender systems. Because the encoded subgraphs are constructed from the motif information in the original graph, the damage that graph augmentation does to the semantic information of the original graph is effectively reduced; compared with node-level encoding strategies and traditional subgraph generation methods, the proposed motif-based subgraph generation captures richer semantic information.

Description

Structure-aware graph contrastive learning method based on adaptive subgraphs
Technical Field
The invention relates to the field of graph representation learning, and in particular to a structure-aware graph contrastive learning method based on adaptive subgraphs.
Background
Graph learning, one of the most important branches of deep learning, is widely used in fields such as recommender systems, biomolecules and anomaly detection. Unlike traditional grid data such as text, speech and images, graph data is non-Euclidean data with a complex structure, and traditional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are difficult to apply to graph learning tasks.
In recent years, graph neural networks (GNNs) dedicated to graph data have become the mainstream approach to graph learning, and graph convolutional networks (GCNs), graph attention networks (GATs), GraphSAGE and the like are widely used for various graph learning tasks. While GNNs have met with great success in many areas, they suffer from the drawback that supervised learning requires a large amount of labeled data. In the real world, however, the cost of data labeling keeps rising, and in special fields such as biomolecules and chemistry, labeling data requires a great deal of domain knowledge, so large amounts of labeled data are difficult to obtain. Unsupervised graph learning has therefore become one of the research hotspots in the graph learning field.
Traditional unsupervised graph learning methods mainly learn node representations by reconstructing the topological structure, e.g., Node2vec and VGAE. These methods overemphasize the proximity of the graph, resulting in poor model performance in certain scenarios. At present, graph contrastive learning (GCL) has gradually become the most representative unsupervised graph learning method; it learns prior knowledge from the data itself. Specifically, GCL contrasts different augmented views of the original data and learns node representations for downstream tasks by maximizing the mutual information between the two views.
Existing GCL methods basically operate at the node level, i.e., they contrast the differences of nodes between two augmented views. This ignores the complexity of the graph structure and struggles to capture the rich semantic information hidden in local graph structures, so model performance suffers. In addition, although some GCL methods contrast subgraphs, their subgraph construction and encoding are too simple to fully capture the semantic information hidden in the graph structure.
Disclosure of Invention
The invention solves the problem that existing graph contrastive learning methods struggle to capture the semantic information hidden in the graph topology, and learns higher-quality node embeddings so that the model performs better on downstream tasks.
The technical scheme of the invention is as follows:
A structure-aware graph contrastive learning method based on adaptive subgraphs, comprising the following steps:
Step 1, generate subgraphs;
Based on an original graph G = (V, E, A, X), where V is the node set, E the edge set, A the adjacency matrix and X the feature matrix, obtain the motif information of each node, including the motif types and their quantity distribution; construct a subgraph for each node based on the motif information by splicing together all motifs that contain the target node;
For a node v_i in the original graph, find the set of all motifs containing this node, denoted M_i = {m_1, ..., m_n}, where m_i = {v_i, v_j | v_j ∈ V, i ≠ j}; the set of all nodes contained in the subgraph of v_i is computed as:
S_i = {n_i | n_i ∈ m_j, m_j ∈ M_i}
where n_i denotes a node in the subgraph;
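The subgraph-node computation above can be sketched in Python. The data layout is a hypothetical assumption for illustration (a mapping `motifs_of` from each node to the motif instances, given as node-id sets, that contain it), not the patented implementation:

```python
# Sketch of motif-based subgraph generation.
# motifs_of[v] is assumed to hold all motif instances (node-id sets) containing v.

def subgraph_nodes(v, motifs_of):
    """Union of all motif instances containing node v: S_i = {n | n in m_j, m_j in M_i}."""
    nodes = set()
    for m in motifs_of.get(v, []):
        nodes.update(m)
    return nodes

# Toy example: node 0 appears in a triangle {0, 1, 2} and a wedge {0, 3, 4}.
motifs_of = {0: [{0, 1, 2}, {0, 3, 4}]}
print(sorted(subgraph_nodes(0, motifs_of)))  # -> [0, 1, 2, 3, 4]
```

A node with no recorded motifs simply yields an empty subgraph, which matches the set definition above.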
Step 2, based on the original graph, generate two augmented graph views using two augmentation strategies: random edge dropping and random node-feature masking. The two views must differ enough for contrast, but the augmentation must not excessively damage the core semantic information of the original input graph.
Step 2.1, randomly discard edges in the original graph;
For the original graph G = (V, E, A, X), first sample an edge-dropping mask matrix R ∈ {0, 1}^(|V|×|V|) from a Bernoulli distribution, R_ij ~ B(1 - p_r), where p_r denotes the edge-dropping probability; each edge in the graph is thus deleted with probability p_r.
Calculating a adjacency matrix of the graph enhancement view angle, wherein the calculation formula is as follows:
Figure BDA0003692636190000031
Step 2.2, randomly mask dimensions of the node features;
First sample a mask vector k ∈ {0, 1}^F (F being the feature dimension) from a Bernoulli distribution, k_i ~ B(1 - p_k), where p_k denotes the feature-masking probability;
The augmented node feature matrix is computed as:
x̃_i = x_i ∘ k
where x_i denotes the feature vector of node v_i;
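The two augmentation strategies of Step 2 (edge dropping with probability p_r, feature masking with probability p_k) can be sketched with NumPy under the Bernoulli definitions above; the function name `augment` and the symmetric-mask detail are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def augment(A, X, p_r=0.2, p_k=0.3, seed=0):
    """One augmented view: drop each edge w.p. p_r, mask each feature dimension w.p. p_k."""
    rng = np.random.default_rng(seed)
    R = rng.binomial(1, 1 - p_r, size=A.shape)     # R_ij ~ B(1 - p_r)
    R = np.triu(R, 1)
    R = R + R.T                                    # keep the mask symmetric (undirected graph)
    k = rng.binomial(1, 1 - p_k, size=X.shape[1])  # k_i ~ B(1 - p_k)
    return A * R, X * k                            # Ã = A ∘ R, x̃_i = x_i ∘ k

A = np.ones((4, 4)) - np.eye(4)   # toy complete graph on 4 nodes
X = np.ones((4, 8))
A_aug, X_aug = augment(A, X)
```

Sampling two views simply means calling `augment` twice with different seeds.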
Step 3, compute subgraph embeddings based on motif information; a motif-based subgraph aggregator is used to encode the subgraphs.
Firstly, calculating a motif embedded vector, then calculating a prototype vector of each motif type, and then aggregating all motif prototype vectors;
3.1 Compute the prototype vector m over all motifs containing node v_i, as follows:
(formula shown as an image in the original document)
3.2 A node may occur in several motif types, and each motif type may have multiple instances; group all motif information involving node v_i by motif type, then compute a prototype vector for each motif type;
M_it denotes the set of all motifs of type t that contain node v_i; the prototype vector of each type is computed as:
(formula shown as an image in the original document)
3.3 After obtaining the prototype vectors of all motif types for node v_i, aggregate them into a single vector that represents the structural semantic information around the node; the aggregation modes include average aggregation and attention aggregation, selected according to the importance of the different motif types;
When every motif type is equally important, the motif-type prototype vectors are combined by average aggregation:
m_i = (1/|M|) Σ_(t=1..|M|) M_it
where |M| denotes the number of motif types (5 in this invention).
When the importance degrees of different motif types are different or the importance degrees of the motif types are uncertain, adopting attention to aggregate prototype vectors of all the motif types;
attention aggregation the idea behind an aggregator based on attention mechanism is that different types of motif contribute differently to sub-map embedding. I.e. different motif types are of different importance for a certain node. The calculation formula of the aggregator based on the attention mechanism is as follows:
m_i = Softmax(f(M)) · M
where f(·) and Softmax(·) denote a linear function and the softmax function, respectively, and M denotes the motif prototype matrix.
Step 4, a graph encoder based on GNN is used for embedding the computing nodes;
providing two types of encoders, namely a "concat" encoder and a "place" encoder; the "concat" encoder and the "place" encoder are selected according to graph sparsity;
When the original graph is sparse, the "concat" encoder is used: a subgraph aggregation layer is added before each GNN layer, the subgraph embedding is fused with the node embedding and passed into the GNN layer, and the node embedding is then updated:
(formula shown as an image in the original document)
H^(l+1) = σ(A S^l W^l)
where Agg(·) denotes the motif prototype aggregation function, σ(·) denotes the sigmoid function, P denotes the motif information of each node, Q denotes the count of each motif type, and W^l denotes the weight parameters of the l-th GNN layer;
When the original graph is dense, the "replace" encoder is used: the subgraph embedding is used directly as the node embedding, and the GNN propagation-aggregation operation is then executed:
(formula shown as an image in the original document)
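A minimal sketch of the two encoder variants, assuming the update H^(l+1) = σ(A S^l W^l) with sigmoid activation; the exact fusion performed by the subgraph aggregation layer appears only as an image in the original, so the concatenation used here is an assumption based on the encoder's name:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gnn_layer(A, S, W):
    """One propagation step: H^{l+1} = sigma(A S W)."""
    return sigmoid(A @ S @ W)

def concat_layer(A, H, sub, W):
    """'concat' variant sketch: fuse node and subgraph embeddings by concatenation
    (hypothetical fusion; the exact layer is shown only as an image in the patent)."""
    S = np.concatenate([H, sub], axis=1)
    return gnn_layer(A, S, W)

def replace_layer(A, sub, W):
    """'replace' variant sketch: the subgraph embedding directly replaces the node embedding."""
    return gnn_layer(A, sub, W)

A = np.eye(3)                      # toy adjacency with self-loops only
H = np.ones((3, 2))                # node embeddings
sub = np.zeros((3, 2))             # subgraph embeddings
out_c = concat_layer(A, H, sub, np.ones((4, 2)))
out_r = replace_layer(A, sub, np.ones((2, 2)))
```

The "concat" variant doubles the input width of each layer's weight matrix, while the "replace" variant keeps the layer shapes of a plain GNN, which is one reason it suits dense graphs where memory is tighter.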
Step 5, a mutual-information-based contrastive module maximizes the mutual information between the two augmented views and optimizes the learned node embeddings;
After the node embeddings of the two augmented views are obtained, maximize the mutual information between the two views based on a defined contrastive objective;
For any node v_i, its embedding vector in one augmented view is denoted u_i and its embedding vector in the other augmented view is denoted z_i; (u_i, z_i) is a positive sample pair, while node pairs formed by different nodes across the two augmented views are regarded as negative sample pairs; the contrastive loss for a positive pair is defined as:
D(u_i, z_i) = -log( e^(δ(u_i, z_i)/τ) / ( Σ_k e^(δ(u_i, z_k)/τ) + Σ_(k≠i) e^(δ(u_i, u_k)/τ) ) )
δ(u_i, z_i) = μ(g_γ(u_i), g_γ(z_i))
where μ(·,·) denotes the cosine similarity function, τ denotes the temperature parameter, and g_γ(·) denotes a mapping function that projects the node embeddings into the contrast space;
Since the two augmented views are symmetric, the contrastive loss D(z_i, u_i) is obtained in the same way; the final loss function of the invention is defined as:
L = (1/(2N)) Σ_(i=1..N) [ D(u_i, z_i) + D(z_i, u_i) ]
where N denotes the total number of nodes;
After the model loss is continuously optimized, high-quality node embeddings are obtained for downstream graph learning tasks.
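The symmetric contrastive objective of Step 5 can be sketched as a GRACE-style InfoNCE loss. The exact negative-sampling scheme of the patent appears only as an image, so this NumPy version (cosine similarity as δ, temperature τ, both inter-view and intra-view negatives) is a hedged reconstruction, not the definitive formula:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nt_xent(U, Z, tau=0.5):
    """Symmetric InfoNCE-style loss over two views U, Z (GRACE-like sketch):
    for each anchor, the positive is its counterpart in the other view; negatives
    are all other nodes in both views."""
    N = U.shape[0]
    total = 0.0
    for P, Q in ((U, Z), (Z, U)):          # symmetrize over the two views
        for i in range(N):
            pos = np.exp(cosine(P[i], Q[i]) / tau)
            inter = sum(np.exp(cosine(P[i], Q[k]) / tau) for k in range(N))
            intra = sum(np.exp(cosine(P[i], P[k]) / tau) for k in range(N) if k != i)
            total += -np.log(pos / (inter + intra))
    return total / (2 * N)

U = np.random.default_rng(0).normal(size=(5, 4))  # toy embeddings for one view
```

As a sanity check, aligned views (each node paired with itself) must give a lower loss than misaligned ones, since only the positive term in the numerator changes.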
A motif refers to a structure that appears frequently in a network and carries special semantic information; 5 motif types are used in this invention.
The beneficial effects of the invention are as follows: the method helps the model better capture local semantic information in unsupervised scenarios, learning high-quality node embeddings for downstream graph learning tasks such as node classification, link prediction and recommender systems. In contrastive learning, the core of graph augmentation is to generate contrastive views with significant differences while retaining the semantic information of the original graph. The invention constructs the encoded subgraphs from the motif information in the original graph, which effectively reduces the damage that graph augmentation does to this semantic information. Compared with node-level encoding strategies and traditional subgraph generation methods, the proposed motif-based subgraph generation captures richer semantic information.
Drawings
FIG. 1 is a flow chart of the structure-aware graph contrastive learning method based on adaptive subgraphs according to the present invention.
FIG. 2 is a flow chart of generating a subgraph;
FIG. 3 is a flow chart of generating the augmented graph views;
FIG. 4 is a flow chart of computing subgraph embeddings based on motif information;
FIG. 5 is a flow chart of the contrastive module;
FIG. 6 shows the 5 motif types used in the present invention; (a)-(e) are the five types id1-id5, respectively.
Detailed Description
To further illustrate the technical effects of the invention, node classification is taken as an example to show the performance improvement achieved in a real experiment. The experimental settings are as follows:
Setting 1: two types of datasets, social networks and academic networks, are used, comprising 4 datasets in total: Polblogs, Citationv1, Computers and Photo;
Setting 2: 2-layer GNNs are used as the encoder, the node-embedding dimension is set to 128 and the learning rate to 0.001;
Setting 3: the maximum number of training epochs is set to 2000; the model is trained in an unsupervised manner, and after convergence a classifier is trained with 10% of the data, with the remaining data used as test data;
Setting 4: all experiments are run 20 times and the average is reported;
Setting 5: existing high-performance unsupervised graph learning methods are used as baselines, specifically GAE, DGI, GRACE and MVGRL.
The experiment comprises the following specific steps:
Step 1, extract the motif information of each dataset according to the 5 motif types defined by the invention;
Step 2, in each epoch, generate the augmented graph views using the two augmentation strategies defined by the invention;
Step 3, feed the two generated augmented views into the graph encoder and compute the node embeddings;
Step 4, after the node embeddings are obtained, compute the contrastive loss according to the loss function defined by the invention, back-propagate, and optimize the model parameters;
Step 5, repeat steps 2-4, stopping after at most 2000 training epochs or once the model converges;
Step 6, train a linear classifier with 10% of the labeled data;
Step 7, predict the labels of the remaining 90% of the data with the trained linear classifier and compute the prediction accuracy; the experimental results are shown in Table 1, which reports the classification accuracy of each model on the different datasets. GAE, DGI, GRACE and MVGRL are the other 4 unsupervised graph models; Polblogs, Citationv1, Computers and Photo are 4 common graph datasets.
TABLE 1 prediction accuracy under different methods
(table shown as an image in the original document)
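The 10%/90% linear-evaluation protocol of steps 6-7 can be sketched as follows; a least-squares classifier stands in for the linear classifier (whose exact form the patent does not specify), so treat `linear_eval` as an illustrative assumption:

```python
import numpy as np

def linear_eval(emb, labels, train_frac=0.1, seed=0):
    """Linear-evaluation protocol sketch: fit a least-squares linear classifier on
    train_frac of the (frozen) node embeddings and report accuracy on the rest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_tr = max(1, int(train_frac * len(labels)))
    tr, te = idx[:n_tr], idx[n_tr:]
    Y = np.eye(labels.max() + 1)[labels]                 # one-hot targets
    W, *_ = np.linalg.lstsq(emb[tr], Y[tr], rcond=None)  # least-squares fit
    pred = emb[te] @ W
    return float((pred.argmax(1) == labels[te]).mean())
```

On perfectly separable embeddings (e.g. one-hot class indicators) this protocol recovers the labels exactly, which is a useful smoke test before evaluating real learned embeddings.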
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may be modified, or some or all of their technical features replaced with equivalents, without departing from the spirit of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A structure-aware graph contrastive learning method based on adaptive subgraphs, characterized by comprising the following steps:
Step 1, generate subgraphs;
Based on an original graph G = (V, E, A, X), where V is the node set, E the edge set, A the adjacency matrix and X the feature matrix, obtain the motif information of each node, including the motif types and their quantity distribution; construct a subgraph for each node based on the motif information by splicing together all motifs that contain the target node;
For a node v_i in the original graph, find the set of all motifs containing this node, denoted M_i = {m_1, ..., m_n}, where m_i = {v_i, v_j | v_j ∈ V, i ≠ j}; the set of all nodes contained in the subgraph of v_i is computed as:
S_i = {n_i | n_i ∈ m_j, m_j ∈ M_i}
where n_i denotes a node in the subgraph;
Step 2, based on the original graph, generate two augmented graph views using two augmentation strategies: random edge dropping and random node-feature masking;
Step 2.1, randomly discard edges in the original graph;
For the original graph G = (V, E, A, X), first sample an edge-dropping mask matrix R ∈ {0, 1}^(|V|×|V|) from a Bernoulli distribution, R_ij ~ B(1 - p_r), where p_r denotes the edge-dropping probability;
The adjacency matrix of the augmented view is then computed as:
Ã = A ∘ R
where ∘ denotes the element-wise (Hadamard) product;
Step 2.2, randomly mask dimensions of the node features;
First sample a mask vector k ∈ {0, 1}^F (F being the feature dimension) from a Bernoulli distribution, k_i ~ B(1 - p_k), where p_k denotes the feature-masking probability;
The augmented node feature matrix is computed as:
x̃_i = x_i ∘ k
where x_i denotes the feature vector of node v_i;
Step 3, compute subgraph embeddings based on motif information;
First compute the motif-type embedding vectors, then compute a prototype vector for each motif type, and finally aggregate the prototype vectors of all motif types;
3.1 Compute the prototype vector m over all motif types containing node v_i, as follows:
(formula shown as an image in the original document)
3.2 Group all motif information involving node v_i by motif type, then compute a prototype vector for each motif type;
M_it denotes the set of all motifs of type t that contain node v_i; the prototype vector of each type is computed as:
(formula shown as an image in the original document)
3.3 After obtaining the prototype vectors of all motif types for node v_i, aggregate them into a single vector that represents the structural semantic information around the node; the aggregation modes include average aggregation and attention aggregation;
Step 4, a GNN-based graph encoder computes the node embeddings;
Two types of encoders are provided: a "concat" encoder and a "replace" encoder;
Step 5, a mutual-information-based contrastive module maximizes the mutual information between the two augmented views and optimizes the learned node embeddings;
After the node embeddings of the two augmented views are obtained, maximize the mutual information between the two views based on a defined contrastive objective;
For any node v_i, its embedding vector in one augmented view is denoted u_i and its embedding vector in the other augmented view is denoted z_i; (u_i, z_i) is a positive sample pair, while node pairs formed by different nodes across the two augmented views are regarded as negative sample pairs; the contrastive loss for a positive pair is defined as:
D(u_i, z_i) = -log( e^(δ(u_i, z_i)/τ) / ( Σ_k e^(δ(u_i, z_k)/τ) + Σ_(k≠i) e^(δ(u_i, u_k)/τ) ) )
δ(u_i, z_i) = μ(g_γ(u_i), g_γ(z_i))
where μ(·,·) denotes the cosine similarity function, τ denotes the temperature parameter, and g_γ(·) denotes a mapping function that projects the node embeddings into the contrast space;
Since the two augmented views are symmetric, the contrastive loss D(z_i, u_i) is obtained in the same way; the loss function is defined as:
L = (1/(2N)) Σ_(i=1..N) [ D(u_i, z_i) + D(z_i, u_i) ]
where N denotes the total number of nodes;
After the model loss is continuously optimized, high-quality node embeddings are obtained for downstream graph learning tasks.
2. The structure-aware graph contrastive learning method based on adaptive subgraphs according to claim 1, characterized in that average aggregation or attention aggregation is selected according to the importance of the different motif types;
When every motif type is equally important, the motif-type prototype vectors are combined by average aggregation:
m_i = (1/|M|) Σ_(t=1..|M|) M_it
where |M| denotes the number of motif types;
When different motif types have different or uncertain importance, attention aggregation is used over the prototype vectors of all motif types;
Attention aggregation is based on an attention-mechanism aggregator, reflecting that different motif types have different importance for a given node; the attention-based aggregator is computed as:
m_i = Softmax(f(M)) · M
where f(·) and Softmax(·) denote a linear function and the softmax function, respectively, and M denotes the motif prototype matrix.
3. The structure-aware graph contrastive learning method based on adaptive subgraphs according to claim 1 or 2, characterized in that the "concat" encoder and the "replace" encoder are selected according to graph sparsity;
When the original graph is sparse, the "concat" encoder is used: a subgraph aggregation layer is added before each GNN layer, the subgraph embedding is fused with the node embedding and passed into the GNN layer, and the node embedding is then updated:
(formula shown as an image in the original document)
H^(l+1) = σ(A S^l W^l)
where Agg(·) denotes the motif prototype aggregation function, σ(·) denotes the sigmoid function, P denotes the motif information of each node, Q denotes the count of each motif type, and W^l denotes the weight parameters of the l-th GNN layer;
When the original graph is dense, the "replace" encoder is used: the subgraph embedding is used directly as the node embedding, and the GNN propagation-aggregation operation is then executed:
(formula shown as an image in the original document)
4. The structure-aware graph contrastive learning method based on adaptive subgraphs according to claim 1, characterized in that there are 5 motif types in total.
CN202210665049.3A 2022-06-14 2022-06-14 Structure perception graph comparison learning method based on self-adaptive sub-graph Pending CN115131605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210665049.3A CN115131605A (en) 2022-06-14 2022-06-14 Structure perception graph comparison learning method based on self-adaptive sub-graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210665049.3A CN115131605A (en) 2022-06-14 2022-06-14 Structure perception graph comparison learning method based on self-adaptive sub-graph

Publications (1)

Publication Number Publication Date
CN115131605A true CN115131605A (en) 2022-09-30

Family

ID=83377996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210665049.3A Pending CN115131605A (en) 2022-06-14 2022-06-14 Structure perception graph comparison learning method based on self-adaptive sub-graph

Country Status (1)

Country Link
CN (1) CN115131605A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993433A (en) * 2023-07-14 2023-11-03 重庆邮电大学 Internet E-commerce abnormal user detection method based on big data


Similar Documents

Publication Publication Date Title
CN109948029B (en) Neural network self-adaptive depth Hash image searching method
CN111079532B (en) Video content description method based on text self-encoder
WO2023280065A1 (en) Image reconstruction method and apparatus for cross-modal communication system
CN109740106A (en) Large-scale network betweenness approximation method based on graph convolution neural network, storage device and storage medium
CN113065974A (en) Link prediction method based on dynamic network representation learning
CN114491039B (en) Primitive learning few-sample text classification method based on gradient improvement
CN114780748A (en) Priori weight enhancement-based completion method of knowledge graph
CN111931814A (en) Unsupervised anti-domain adaptation method based on intra-class structure compactness constraint
CN112417289A (en) Information intelligent recommendation method based on deep clustering
CN112686376A (en) Node representation method based on timing diagram neural network and incremental learning method
CN117272195A (en) Block chain abnormal node detection method and system based on graph convolution attention network
CN117009547A (en) Multi-mode knowledge graph completion method and device based on graph neural network and countermeasure learning
CN115525771A (en) Context data enhancement-based learning method and system for representation of few-sample knowledge graph
CN115131605A (en) Structure perception graph comparison learning method based on self-adaptive sub-graph
CN113033410B (en) Domain generalization pedestrian re-recognition method, system and medium based on automatic data enhancement
CN113987203A (en) Knowledge graph reasoning method and system based on affine transformation and bias modeling
CN117765258A (en) Large-scale point cloud semantic segmentation method based on density self-adaption and attention mechanism
CN117391816A (en) Heterogeneous graph neural network recommendation method, device and equipment
CN110717402B (en) Pedestrian re-identification method based on hierarchical optimization metric learning
CN116883751A (en) Non-supervision field self-adaptive image recognition method based on prototype network contrast learning
CN115238134A (en) Method and apparatus for generating a graph vector representation of a graph data structure
CN115965078A (en) Classification prediction model training method, classification prediction method, device and storage medium
CN115563519A (en) Federal contrast clustering learning method and system for non-independent same-distribution data
CN115115966A (en) Video scene segmentation method and device, computer equipment and storage medium
CN114861863A (en) Heterogeneous graph representation learning method based on meta-path multi-level graph attention network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination