CN116861923A - Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application - Google Patents

Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application Download PDF

Info

Publication number
CN116861923A
CN116861923A (Application CN202310864902.9A)
Authority
CN
China
Prior art keywords
view
graph
matrix
constructing
original input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310864902.9A
Other languages
Chinese (zh)
Inventor
徐博 (Xu Bo)
王锦鹏 (Wang Jinpeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Publication of CN116861923A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)

Abstract

The invention belongs to the field of computer software and provides a multi-view unsupervised graph contrast learning model construction method, together with a corresponding system, computer, storage medium and application. The model can be used to mine various types of implicit relations in networks, addressing the two difficulties encountered in implicit relation mining: the complexity of modeling relations and the lack of available labels. The method obtains and preprocesses effective data; constructs the anchor view generation module, the semantic enhancement view learning module and the structure feedback contrast learning module of the multi-view unsupervised graph contrast learning model; trains the model on the preprocessed data set; fuses the optimal graph structure obtained by training with the original graph structure and inputs the result into a graph encoder to obtain a general network representation; and predicts the probability of an implicit relation between two nodes with a link prediction method, thereby realizing the unsupervised implicit relation mining task.

Description

Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application
Technical Field
The invention belongs to the field of natural language processing, and relates to a multi-view unsupervised graph contrast learning model construction method, a system, a computer, a storage medium and an implicit relation mining application.
Background
Mining relationships between entities makes many practical applications possible. For example, in social network scenarios, knowing the relationships between users supports services such as community group buying and product recommendation; in the biomedical field, analyzing the relationships between cytokines, proteins and drugs aids the discovery of new drugs; in network security, identifying the interactions between different nodes helps detect highly aggressive anomalous nodes. However, unlike tasks that study the entities of interest themselves, such as node classification and node clustering, the connections between entities in most real-world networks carry no attribute information, which makes learning the relationships between entities difficult.
To address the above problems, researchers have turned to relation mining. Relation mining can essentially be understood as a classification task focused on predicting the characteristics of the interaction between two entities. Specifically, if the interaction characteristics of two entities lean toward some known relationship, then that relationship is considered likely to exist between the two entities. Relation mining is now widely used in many application fields, such as user profiling, recommendation systems and semantic similarity search. Although many researchers have worked on discovering and identifying relationships between entities in networks and have achieved advanced results, restricting attention to the connections that actually exist between entities, i.e., explicit relations, brings limitations and instability. In practice, real-world networks consist not only of explicit connections between entities but also of hidden relationships between them. These non-explicit, hidden relationships are referred to as implicit relations. Implicit relations are very common in real-world networks and may reflect the underlying reasons why entities establish explicit connections.
Mining implicit relations helps to understand potential links, reveal latent interaction rules, and provide more meaningful guidance for many practical applications, such as same-city business recommendation, double-blind peer review and fraudulent user detection. However, mining implicit relations faces the following problems: (1) the complexity of modeling relations. Implicit relations are more complex than explicit relations and tend to hide behind surface connections, which increases the complexity of modeling them. (2) the lack of available label information. Since implicit relations may have no real connection/edge in the network, obtaining label information—by manual annotation or purchase of a license—is expensive, may be impossible due to privacy concerns, and even then the obtained labels may not be reliable. Conventional relation mining methods therefore no longer meet researchers' needs.
Contrast learning is a self-supervised learning method that models the general characteristics of a network by letting the model learn whether unlabeled nodes are similar or different. In recent years, contrast learning has shown competitive performance in computer vision and has become increasingly popular in graph representation learning. Graph contrast learning follows the principle of mutual information (MI) maximization, i.e., pulling the representations of nodes with similar semantic information closer together while pushing the representations of irrelevant nodes apart. However, most existing unsupervised graph representation learning methods focus on downstream tasks such as node classification and node clustering, and little work explores relation mining. Therefore, to effectively model the complex interactions between entities without the guidance of external information, it is desirable to develop an unsupervised relation mining method that can accurately discover and model both explicit and implicit relations.
Disclosure of Invention
To overcome the above problems of the prior art, the invention provides a multi-view unsupervised graph contrast learning model construction method. By constructing two anchor views carrying complementary information and a self-enhancing semantic enhancement view, and maximizing the consistency between the two anchor views and the semantic enhancement view, the semantic enhancement view can integrate the complementary information of the different views and learn a deeper network representation, solving the problem that existing methods cannot mine implicit relations well under an unsupervised setting.
In a first aspect, the present invention provides the following technical solutions:
a multi-view unsupervised graph contrast learning model construction method, the method comprising:
preprocessing effective data to obtain first relation data, second relation data and a relation graph network formed by the first and second relations; assembling the relation graph network into an original input graph data set; and dividing the original input graph data set into a verification set and a test set;
constructing an original structure anchor view using the adjacency matrix of the original input graph and the feature matrix of its nodes; constructing an edge attribute anchor view using the adjacency matrix of the original input graph, the feature matrix of its nodes and the edge attributes of the network;
modeling the original input graph with a semantic feature learner to generate a fully connected semantic enhancement view adjacency matrix Ŝ; based on Ŝ, selecting for each node the k nodes most semantically similar to it and taking them as its neighbor nodes to construct a sparse semantic enhancement view adjacency matrix S; and constructing a semantic enhancement view from S;
normalizing the original structure anchor view, the edge attribute anchor view and the semantic enhancement view, ensuring that the adjacency matrices of the three views are symmetric and that every element of each adjacency matrix is non-negative; modeling the normalized original structure anchor view, edge attribute anchor view and semantic enhancement view with three weight-sharing graph convolutional neural networks to generate embedded representations of the three views;
constructing a first contrast learning loss between the original structure anchor view and the semantic enhancement view from their embedded representations; constructing a second contrast learning loss between the edge attribute anchor view and the semantic enhancement view from their embedded representations; and combining the first and second contrast learning losses into an overall multi-view unsupervised graph contrast learning loss function, which is finally used to train the multi-view unsupervised graph contrast learning model.
Preferably, the specific steps of constructing the original structure anchor view include: reading the original input graph data to construct the adjacency matrix and node feature matrix of the original input graph, and constructing the original structure anchor view V_a = (A_a, X), wherein A_a ∈ {0,1}^{n×n} is the adjacency matrix of the original input graph, X ∈ R^{n×d} is the node attribute matrix of the original input graph, n is the number of nodes, and d is the feature dimension of the nodes;
the specific steps of constructing the edge attribute anchor view include: computing the edge attribute of each edge in the original input graph from the similarity index of the two connected nodes, constructing an edge attribute matrix, and computing the adjacency matrix A_e of the original input graph with edge attributes by the formula: A_e = Sim ⊙ A_a, wherein Sim is the edge attribute matrix calculated from the similarity indexes of the connected node pairs, A_a is the adjacency matrix of the original input graph, and ⊙ denotes the Hadamard product; the edge attribute anchor view V_e = (A_e, X) is then constructed from the adjacency matrix A_e with edge attributes and the node attribute matrix X of the original input graph.
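The edge attribute construction above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes cosine similarity of node feature vectors as the "similarity index" (the patent does not fix a particular index), and the Hadamard product then keeps similarity values only where an edge actually exists.

```python
import numpy as np

def edge_attribute_adjacency(A, X):
    """Sketch of the edge-attribute anchor view: A_e = Sim ⊙ A_a, where
    Sim holds pairwise cosine similarities of node feature vectors
    (the choice of cosine similarity is an assumption)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms                      # row-normalize node features
    Sim = Xn @ Xn.T                     # pairwise cosine similarity
    return Sim * A                      # Hadamard product keeps only real edges

# Tiny path graph 0-1-2: nodes 0 and 1 share identical features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
A_e = edge_attribute_adjacency(A, X)
```

Edge (0,1) receives attribute 1 (identical features), edge (1,2) receives attribute 0 (orthogonal features), and non-edges stay zero.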
Preferably, the semantic feature learner adopts a full-graph parameterization method or a multi-layer perceptron;
when the full-graph parameterization method is used as the semantic feature learner, each element of the original input graph adjacency matrix is modeled directly by an independent parameter to generate the fully connected semantic enhancement view adjacency matrix Ŝ, without any additional input; the formula is: Ŝ = FGP_ω = σ(Ω), wherein Ω = ω is the learnable parameter matrix and σ is a nonlinear activation function; the assumption behind full-graph parameterization is that each edge exists independently in the graph;
when the multi-layer perceptron is used as the semantic feature learner, the node features of the original input graph are embedded into a shallow space by the formula: X^(l+1) = MLP_ω(X^(l)) = σ(X^(l) W^(l)), wherein ω = W is the parameter matrix and σ is a nonlinear activation function; H = X^(l+1) W^(l+1) denotes the final node embedding generated by the multi-layer perceptron, which takes the correlation and combinability of the features into account and provides more information for similarity metric learning; the fully connected semantic enhancement view adjacency matrix Ŝ is then generated from the pairwise similarities of the node embeddings by the formula: Ŝ_ij = φ(h_i, h_j), wherein φ(·,·) is a non-parametric metric function that computes pairwise similarity.
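The MLP variant of the semantic feature learner can be sketched as below. It is an illustration only: a single layer with ReLU as σ and cosine similarity as the non-parametric metric φ are assumptions, and the weight matrix W stands in for the trainable parameters ω.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_learner(X, W):
    """One-layer MLP semantic feature learner (sketch): H = sigma(X W),
    with ReLU standing in for the nonlinearity sigma."""
    return np.maximum(X @ W, 0.0)

def pairwise_similarity(H):
    """Non-parametric metric phi(.,.): cosine similarity of node
    embeddings, yielding the fully connected candidate adjacency S_hat."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    return Hn @ Hn.T

X = rng.normal(size=(5, 8))             # 5 nodes, 8-dim features
W = rng.normal(size=(8, 4))             # learnable parameters omega
S_hat = pairwise_similarity(mlp_learner(X, W))
```

S_hat is dense (one similarity per node pair) and symmetric; the sparsification step described next prunes it to the top-k neighbors per node.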
Preferably, for each node, the k nodes most semantically similar to it are selected and taken as its neighbor nodes, so as to construct the sparse semantic enhancement view adjacency matrix S, from which the semantic enhancement view is constructed. Specifically, the k-nearest-neighbor algorithm is used, with the formula:

S_ij = Ŝ_ij, if Ŝ_ij ∈ top-k(Ŝ_i); S_ij = 0, otherwise,

wherein top-k(Ŝ_i) is the set of the k largest values of the row vector Ŝ_i, Ŝ_ij is the value in row i and column j of the fully connected semantic enhancement view adjacency matrix Ŝ, and S_ij is the value in row i and column j of the sparse semantic enhancement view adjacency matrix S; the semantic enhancement view V_s = (S, X) is then constructed.
Preferably, the original structure anchor view V_a, the edge attribute anchor view V_e and the semantic enhancement view V_s are normalized to ensure that the adjacency matrix A_a of the original structure anchor view, the adjacency matrix A_e of the edge attribute anchor view and the adjacency matrix S of the semantic enhancement view are symmetric and that every element of each adjacency matrix is non-negative; the formula is:

A = NOR(SYM(σ(A))), with SYM(A) = (A + A^T)/2,

wherein SYM(·) and NOR(·) are a symmetrization function and a normalization function respectively, σ is a nonlinear activation function that maps element values into the interval [0,1], and (·)^T is the transpose operation.
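A minimal sketch of the symmetrize-then-normalize step, assuming averaging with the transpose as SYM, clipping to [0, ∞) in place of the element-wise activation σ, and the symmetric degree normalization D^{-1/2} A D^{-1/2} as NOR:

```python
import numpy as np

def symmetrize_normalize(A):
    """SYM then NOR (sketch): average A with its transpose, clip to
    non-negative values, then apply D^{-1/2} A D^{-1/2}."""
    A = (A + A.T) / 2.0                 # SYM: symmetrization
    A = np.clip(A, 0.0, None)           # every element must be non-negative
    d = A.sum(axis=1)                   # node degrees
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A_norm = symmetrize_normalize(A)
```

The asymmetric input becomes symmetric, and each entry is scaled by the inverse square roots of its endpoints' degrees.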
Preferably, the embedded representations of the three views are as follows:

original structure anchor view: Z_a = σ(Ã_a X W^(l));
edge attribute anchor view: Z_e = σ(Ã_e X W^(l));
semantic enhancement view: Z_s = σ(S̃ X W^(l));

wherein Ã_a = D_a^{-1/2}(A_a + I)D_a^{-1/2}, Ã_e = D_e^{-1/2}(A_e + I)D_e^{-1/2} and S̃ = D_s^{-1/2}(S + I)D_s^{-1/2} are the results of applying the symmetric normalization operation to the adjacency matrices A_a, A_e and S of the three views, X is the node attribute matrix of the original input graph, W^(l) is the learnable weight matrix of the l-th hidden layer, D_a, D_e and D_s are the degree matrices of (A_a + I), (A_e + I) and (S + I) respectively, and I is the identity matrix.
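One propagation step of the weight-sharing graph convolutional encoder can be sketched as below; the same function, called with A_a, A_e or S and the same weight matrix W, produces the three view embeddings. ReLU for σ is an assumption.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step (sketch):
    sigma(D^{-1/2}(A + I)D^{-1/2} H W), with ReLU as sigma."""
    A_hat = A + np.eye(A.shape[0])      # add self-loops: A + I
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)     # D^{-1/2} of (A + I)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(1)
A = np.array([[0., 1.], [1., 0.]])      # stand-in for A_a, A_e or S
X = rng.normal(size=(2, 3))             # node attribute matrix
W = rng.normal(size=(3, 4))             # W shared across the three views
Z = gcn_layer(A, X, W)
```

Because W is shared, the three views are embedded into a common space, which is what makes the cross-view contrast losses below meaningful.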
Preferably, the first contrast learning loss L_1 is calculated as:

L_1 = −(1/2N) Σ_{i=1}^{N} [ log( exp(sim(z_i^a, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^a, z_j^s)/θ) ) + log( exp(sim(z_i^s, z_i^a)/θ) / Σ_{j=1}^{N} exp(sim(z_i^s, z_j^a)/θ) ) ];

the second contrast learning loss L_2 takes the same form between the edge attribute anchor view and the semantic enhancement view:

L_2 = −(1/2N) Σ_{i=1}^{N} [ log( exp(sim(z_i^e, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^e, z_j^s)/θ) ) + log( exp(sim(z_i^s, z_i^e)/θ) / Σ_{j=1}^{N} exp(sim(z_i^s, z_j^e)/θ) ) ];

wherein sim(·,·) is a similarity function that calculates the similarity between two node representations, z_i^a, z_i^e and z_i^s are the embedded representations of node i in the original structure anchor view, the edge attribute anchor view and the semantic enhancement view respectively, N is the total number of nodes in the original input graph, and θ is the temperature factor controlling the concentration level of the distribution;

the overall multi-view unsupervised graph contrast learning loss function L is calculated as:

L = μ L_1 + (1 − μ) L_2,

wherein μ is a non-negative tuning parameter that weighs the importance of the different contrast losses;
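The symmetric contrast loss between an anchor view and the semantic enhancement view can be sketched as a standard NT-Xent computation; cosine similarity for sim(·,·) and θ = 0.5 are assumptions. The same node in the two views is the positive pair, and all other cross-view nodes serve as negatives.

```python
import numpy as np

def ntxent_loss(Z1, Z2, theta=0.5):
    """Symmetric NT-Xent contrast loss between two view embeddings
    (sketch of L_1 / L_2): diagonal entries are positive pairs."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = np.exp(Z1 @ Z2.T / theta)     # exp of scaled cosine similarities
    l12 = -np.log(np.diag(sim) / sim.sum(axis=1))  # view 1 -> view 2
    l21 = -np.log(np.diag(sim) / sim.sum(axis=0))  # view 2 -> view 1
    return float((l12 + l21).mean() / 2.0)

rng = np.random.default_rng(2)
Z_anchor = rng.normal(size=(6, 4))                 # N = 6 nodes
loss_same = ntxent_loss(Z_anchor, Z_anchor)        # identical views
loss_rand = ntxent_loss(Z_anchor, rng.normal(size=(6, 4)))
```

When the two views coincide, each positive pair has maximal similarity, so the loss is bounded above by log N; the overall objective would then combine two such losses with the weight μ.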
Preferably, the training is performed on the basis of the overall multi-view unsupervised graph contrast learning loss function: the overall loss function L is minimized with an adaptive moment estimation (Adam) optimizer, and the model weight parameters are updated by back propagation, finally yielding the multi-view unsupervised graph contrast learning model.
Preferably, the method further comprises a structure feedback mechanism that slowly updates the original structure anchor view V_a and the edge attribute anchor view V_e according to the self-enhanced semantic enhancement view V_s, preventing the noise of the original graph structure from being inherited during learning; the implementation is:

A_a = ξ A_a + (1 − ξ) S
A_e = ξ A_e + (1 − ξ) S

wherein ξ ∈ [0,1] is the decay rate, which adjusts the speed at which the anchor views V_a and V_e are updated.
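The feedback update is a simple exponential-moving-average blend of each anchor adjacency with the learned semantic structure, as this sketch shows:

```python
import numpy as np

def structure_feedback(A_view, S, xi=0.9):
    """Structure feedback (sketch): the anchor adjacency drifts slowly
    toward the learned semantic structure S; the decay rate xi controls
    the update speed (xi close to 1 means slower updates)."""
    return xi * A_view + (1.0 - xi) * S

A_a = np.eye(3)                 # toy anchor adjacency
S = np.ones((3, 3)) / 3.0       # toy learned semantic structure
A_a_new = structure_feedback(A_a, S, xi=0.9)
```

After one update, each anchor entry keeps 90% of its old value and absorbs 10% of the corresponding entry of S, so structural corrections accumulate gradually instead of overwriting the original topology at once.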
In a second aspect, the present invention provides a multi-view unsupervised graph-contrast learning model building system, including:
the data preprocessing module is used for preprocessing effective data to obtain first relation data, second relation data and a relation graph network formed by the first and second relations, assembling the relation graph network into an original input graph data set, and dividing the original input graph data set into a verification set and a test set;
the anchor view generation module is used for constructing an original structure anchor view from the adjacency matrix of the original input graph and the feature matrix of its nodes, and constructing an edge attribute anchor view from the adjacency matrix of the original input graph, the feature matrix of its nodes and the edge attributes of the network;
the semantic enhancement view learning module is used for modeling the original input graph with a semantic feature learner to generate a fully connected semantic enhancement view adjacency matrix Ŝ, selecting on the basis of Ŝ the k nodes most semantically similar to each node and taking them as its neighbor nodes to construct a sparse semantic enhancement view adjacency matrix S, and constructing the semantic enhancement view from S;
The contrast learning module or the structural feedback contrast learning module:
the contrast learning module is used for normalizing the original structure anchor view, the edge attribute anchor view and the semantic enhancement view, ensuring that the adjacency matrices of the three views are symmetric and every element of each adjacency matrix is non-negative; modeling the normalized views with three weight-sharing graph convolutional neural networks to generate embedded representations of the three views; constructing a first contrast learning loss between the original structure anchor view and the semantic enhancement view from their embedded representations; constructing a second contrast learning loss between the edge attribute anchor view and the semantic enhancement view from their embedded representations; and combining the first and second contrast learning losses into an overall multi-view unsupervised graph contrast learning loss function with which the multi-view unsupervised graph contrast learning model is finally trained;
the structure feedback contrast learning module builds on the contrast learning module by adding a structure feedback mechanism that slowly updates the original structure anchor view V_a and the edge attribute anchor view V_e according to the self-enhanced semantic enhancement view V_s, preventing the noise of the original graph structure from being inherited during learning.
In a third aspect, the invention provides a technical solution in which the multi-view unsupervised graph contrast learning model constructed by the above method performs implicit relation mining. An implicit relation is a potential relation between entities in a network that largely reflects the cause of an explicit relation, where the cause covers various interactions such as a common interest, being relatives or colleagues, or belonging to the same city or institution. An attribute network is represented as G = (V, E, X), where V and E denote the node and edge sets and X ∈ R^{n×d} is the node feature matrix (n is the number of nodes, d is the feature dimension of a node, and row i, x_i, is the feature vector of node v_i). An implicit relation R_im is inferred from the entity features X, external knowledge K and the explicit edges E. The unsupervised implicit relation mining problem is defined as mining the implicit relation R_im between entities v_i and v_j in the network G without the guidance of label information. Specifically, given an attribute network G = (A, X) with a noisy graph structure, the objective of unsupervised implicit relation mining is to optimize an optimal semantic enhancement view adjacency matrix S ∈ [0,1]^{n×n}, where S is generally constructed according to the similarity of node semantic features and optimized by contrast learning to better express the dependency relationships between nodes. Inference is then performed on the optimized network G' = (S, X) to predict whether the implicit relation R_im exists between entities v_i and v_j.
In the multi-view unsupervised graph contrast learning model construction method, the first relation data and the second relation data are explicit relation data and implicit relation data respectively; the semantic enhancement view adjacency matrix corresponding to the obtained model, i.e., the optimal semantic enhancement view adjacency matrix S, is fused with the original adjacency matrix A_a and input into the graph encoder to obtain the final embedded representation Z of all network nodes; a link prediction method is then used to predict the probability that a candidate edge exists under the implicit relation R_im, and the model's performance in mining implicit relations is checked on the test set.
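The inference stage described above can be sketched as follows. This is an illustration under stated assumptions: the fusion of S and A_a is modeled as a simple convex combination (the patent only says "fused"), the graph encoder as one GCN-style propagation, and the link predictor as a sigmoid over the dot product of the two node embeddings.

```python
import numpy as np

def predict_implicit_links(S, A_a, X, W, alpha=0.5):
    """Sketch of inference: fuse the learned structure S with the original
    adjacency A_a (convex combination, an assumption), propagate once to
    get node embeddings Z, and score a candidate edge (i, j) with a
    sigmoid over the embedding dot product."""
    A = alpha * S + (1.0 - alpha) * A_a + np.eye(len(A_a))  # fuse + self-loops
    d = A.sum(axis=1)
    D = np.diag(d ** -0.5)
    Z = np.maximum(D @ A @ D @ X @ W, 0.0)   # GCN-style embeddings

    def score(i, j):
        return 1.0 / (1.0 + np.exp(-Z[i] @ Z[j]))  # link probability
    return Z, score

rng = np.random.default_rng(3)
S = np.ones((4, 4)) * 0.2                 # toy learned semantic structure
A_a = np.array([[0., 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
Z, score = predict_implicit_links(S, A_a,
                                  rng.normal(size=(4, 6)),
                                  rng.normal(size=(6, 4)))
p = score(0, 3)                           # implicit-link probability for (0, 3)
```

The score for a non-adjacent pair such as (0, 3) is the kind of quantity that would be thresholded or ranked on the test set to evaluate implicit relation mining.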
In a fourth aspect, the invention provides a computer comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above-mentioned multi-view unsupervised graph contrast learning model construction method when executing the computer program.
In a fifth aspect, the invention provides a storage medium storing a computer program which, when executed by a processor, implements the above-mentioned multi-view unsupervised graph contrast learning model construction method.
The invention has the following beneficial effects: the invention provides a multi-view unsupervised graph contrast learning model construction method, and the model constructed by the method is used for implicit relation mining. It addresses the complexity of modeling relations and the lack of available labels in implicit relation mining, effectively modeling the complex interaction behavior between entities and accurately identifying the implicit relations in a network without the guidance of external information. Two anchor views carrying complementary information are constructed from the original topology and the edge attribute information, and a contrast learning method maximizes the consistency between the anchor views and the learnable view, so that an optimal graph structure is learned without the guidance of external information (i.e., labels); the method achieves high accuracy in unsupervised implicit relation prediction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a framework diagram of implicit relation mining based on the multi-view unsupervised graph contrast learning model construction method.
FIG. 2 is a schematic diagram of the multi-view unsupervised graph contrast learning model construction system of an embodiment.
FIG. 3 shows experimental results for contrast loss tuning parameters μ of different magnitudes in the embodiments.
Detailed Description
The technical scheme of the invention will be further described with reference to specific embodiments and drawings.
As shown in FIG. 1, the embodiment of the invention discloses an application of implicit relation mining based on the multi-view unsupervised graph contrast learning model construction method, comprising the following steps:
Step one, effective data are acquired from the microblog social network and the Microsoft Academic Graph to construct a microblog same-city relation data set and a Microsoft academic teacher-student relation data set, which are preprocessed and each divided into a verification set and a test set;
the pretreatment comprises the following specific processes:
(1) For the microblog same-city relation data set: a python crawler is used to crawl the one-way follow relations of microblog users. An initial user is set, and the follow list of the initial user and the follow lists of the users it contains are crawled iteratively. For users whose follow list contains fewer than 200 users, all follow information is crawled; for users whose follow list contains more than 200 users, the first 200 follow entries are crawled, obtaining the one-way follow relations of about 8 million users. The mutual-follow information is obtained by filtering the one-way follow relations of the microblog users, implemented as follows:
(a) Exchange the ids of the two users in each follow entry and append the result to the original data;
(b) Screen the data obtained in (a); the screening rule is that if a duplicate entry appears in the whole data set, the two corresponding users follow each other;
(c) Keep only one copy of each mutual-follow user pair screened out in (b), and delete the entries with no duplicates;
A network is built from the mutual-follow relations between users obtained in (c), taking mutual follow as the explicit relation. Users' personal information (such as institution, verification and profile) is taken as user attribute information, and users' geographic location information is used to determine same-city relations between users: if the geographic locations of the two users in a mutual-follow entry are identical, the two users are considered to have a same-city relation, which is taken as the implicit relation. Users with missing geographic location information, together with their mutual-follow entries, are removed, yielding a microblog same-city relation data set containing 34,438 mutual-follow pairs, 10,195 users and 16,428 same-city pairs. The data set is divided 6:4 into a verification set and a test set for later model testing;
(2) For the Microsoft academic teacher-student relation data set: scholar attributes are collected from the Microsoft Academic Graph, including the scholar's academic age, affiliation information (institution name, network identifier, official website, wiki page), number of publications, publication information (publication DOI, topic, year, co-authors, publishing information, citation record) and co-authorship relations, from which an academic collaboration network is constructed, taking co-authorship as the explicit relation. 8,853 real teacher-student pairs in the field of computer science from 2000 to 2010 are collected from the Academic Family Tree (AFT) to determine the real teacher-student relations between scholars, taking the teacher-student relation as the implicit relation. The data are cleaned, and the Microsoft academic teacher-student relation data set is obtained by matching scholar names between the Microsoft Academic Graph and the AFT, implemented as follows:
(a) Disambiguating author names; the Microsoft Academic dataset does not contain unique author identifiers, so a name disambiguation step is required to overcome name duplication. All authors are first separated, each author occurrence in the records is treated as a unique author, and duplicate identities are then reduced by iteratively merging authors. Two authors with the same name are considered the same person if at least one of the following criteria is met: 1. the two authors have cited each other at least once; 2. the two authors share at least one co-author; 3. the two authors share at least one affiliation. The name disambiguation process ends when there are no author pairs left to merge;
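The iterative merging loop can be sketched as follows; the record layout (`name`, `coauthors`, `affils`) is hypothetical, and only two of the three merge criteria are shown (the mutual-citation check is handled analogously):

```python
def disambiguate(records):
    """Iteratively merge same-name author records until no pair qualifies.

    records: list of dicts with keys 'name' (str), 'coauthors' (set),
    'affils' (set) -- an illustrative, not authoritative, data layout.
    Two same-name records merge if they share a coauthor or an affiliation.
    """
    merged = True
    while merged:                         # repeat until a full pass makes no merge
        merged = False
        for i in range(len(records)):
            for j in range(i + 1, len(records)):
                a, b = records[i], records[j]
                if a['name'] == b['name'] and (
                        a['coauthors'] & b['coauthors'] or a['affils'] & b['affils']):
                    a['coauthors'] |= b['coauthors']   # fold b's evidence into a
                    a['affils'] |= b['affils']
                    del records[j]
                    merged = True
                    break
            if merged:
                break                      # restart the scan on the shrunken list
    return records
```

Note that merging is transitive across passes: two records with no direct overlap can still end up merged through an intermediate record, which is why the loop runs to a fixed point.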
(b) Matching scholar names between the Microsoft Academic Graph (MAG) and the Academic Family Tree (AFT); the name of each student is represented by a regular expression and matched in the MAG dataset, and all matching results are added to a dictionary as entries of the form "scholar name in AFT": "scholar name in MAG", where "scholar name in AFT" is the dictionary's primary key; a collection is created to store all information of the matched students. Teacher names are matched in the same way among each matched student's co-authors, and if the teacher can be matched among the student's co-authors, the two co-authors are defined as a ground-truth teacher-student pair, ensuring that the teacher-student pairs matched in the MAG dataset are genuine. This yields a Microsoft Academic teacher-student relationship dataset containing 8,282 co-authorship pairs, 7,872 scholars and 2,787 teacher-student pairs. The dataset is split 6:4 into a validation set and a test set for later model testing;
Step two, constructing the anchor view generation module and the semantic enhancement view learning module for multi-view unsupervised graph contrast learning, which specifically comprises the following steps:
(1) Constructing the original structure anchor view; the topological structure of the original input graph and the feature matrix of its nodes are used to construct the original structure anchor view G_a = (A, X), where A ∈ ℝ^(n×n) is the adjacency matrix of the original input graph, X ∈ ℝ^(n×d) is the node attribute matrix of the original input graph, n is the number of nodes, and d is the feature dimension of the nodes;
(2) Constructing the edge attribute anchor view; the topological structure of the original input graph, the feature matrix of its nodes and the edge attribute matrix of the original input graph are used to construct the edge attribute anchor view. The edge attribute of each edge in the original input graph is computed from similarity indices between its two connected nodes; the invention considers the following six similarity indices:

Degree of node v_i: the number of edges connected to node v_i;

Neighborhood degree of node v_i: the number of edges connected to the neighbors of node v_i;

Common neighbors: the number of common neighbors of node v_i and node v_j (nodes connected to both);

Adamic-Adar index: common neighbors with weighting information added;

Jaccard index: used for comparing differences between finite sample sets;

Preferential attachment index: describes the preferential attachment similarity between two nodes;
The above six similarity indices are used to construct the edge attribute matrix of the original input graph and to compute the original input graph adjacency matrix with edge attributes A_e = Sim ⊙ A, where Sim is the edge attribute matrix computed from the similarity indices of the two connected nodes, A ∈ ℝ^(n×n) is the adjacency matrix of the original input graph, and ⊙ denotes the Hadamard product. The adjacency matrix with edge attributes A_e and the node attribute matrix X of the original input graph are used to construct the edge attribute anchor view G_e = (A_e, X);
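As an illustration of the Hadamard-product construction A_e = Sim ⊙ A, the following sketch weights each existing edge with a single index (Jaccard); the real edge attribute matrix combines all six indices in the same way:

```python
import numpy as np

def edge_attribute_adjacency(A: np.ndarray) -> np.ndarray:
    """Weight each existing edge by the Jaccard similarity of its endpoints.

    Jaccard(i, j) = |N(i) ∩ N(j)| / |N(i) ∪ N(j)| -- one of the six indices
    listed above, used here as a stand-in for the full Sim matrix.
    """
    n = A.shape[0]
    neighbors = [set(np.flatnonzero(A[i])) for i in range(n)]
    Sim = np.zeros_like(A, dtype=float)
    for i, j in zip(*np.nonzero(A)):       # only pairs that share an edge
        union = neighbors[i] | neighbors[j]
        if union:
            Sim[i, j] = len(neighbors[i] & neighbors[j]) / len(union)
    return Sim * A                          # Hadamard product: zero off the edges

# A triangle (0-1-2) plus a pendant node 3 attached to node 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_e = edge_attribute_adjacency(A)
```

On this toy graph the edge (0, 1) gets weight 1/3 (one shared neighbor out of three in the union), while the pendant edge (2, 3) gets weight 0, since its endpoints share no neighbors.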
(3) Constructing the semantic enhancement view; when the full-graph parameterization method is used as the semantic feature learner, each element of the original input graph adjacency matrix is modeled directly by an independent parameter, generating a fully connected semantic enhancement view adjacency matrix without any additional input: S̃ = σ(Ω), where S̃ ∈ ℝ^(n×n) is the fully connected semantic enhancement view adjacency matrix, Ω is a parameter matrix, and σ is a nonlinear activation function.
Since in most real networks the attribute correlation between nodes is critical for learning a general representation of the network, a multi-layer perceptron is considered as another semantic feature learner, embedding the features of the nodes of the original input graph into a shallow space: X^(l+1) = MLP_ω(X^(l)) = σ(X^(l) W^(l)), where W^(l) is the parameter matrix of layer l, σ is a nonlinear activation function, and H = X^(l+1) W^(l+1) denotes the final node embeddings generated by the multi-layer perceptron. The fully connected semantic enhancement view adjacency matrix is then generated from the pairwise similarity of the node embeddings: S̃_ij = φ(H_i, H_j), where φ(·) is a non-parametric metric function for computing pairwise similarity (e.g. cosine similarity or Minkowski distance); a suitable semantic feature learner can be selected according to the real-world network at hand. A k-nearest-neighbor (kNN) algorithm is used to select the k nodes semantically most similar to each node and take them as its neighbor nodes, constructing a sparse semantic enhancement view adjacency matrix S; this prevents the fully connected matrix S̃ from masking important features of the network and from consuming excessive computing resources. The sparsification formula is:

S_ij = S̃_ij if S̃_ij ∈ top-k(S̃_i), and S_ij = 0 otherwise,

where top-k(S̃_i) is the set of the k largest values of the row vector S̃_i. The sparse matrix S is used to construct the semantic enhancement view G_s = (S, X). For the microblog co-city relationship dataset and the Microsoft Academic teacher-student relationship dataset, k is set to 5 and 20 respectively, and cosine similarity is adopted as the similarity function; to preserve gradient flow, the full-graph parameterized learner does not perform the sparsification operation;
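A minimal sketch of the MLP-based learner followed by kNN sparsification, assuming a single-layer perceptron with a hypothetical weight matrix W, ReLU as σ, and cosine similarity as φ (matching the similarity function chosen for both datasets):

```python
import numpy as np

def semantic_view(X: np.ndarray, W: np.ndarray, k: int) -> np.ndarray:
    """Build a sparse semantic-view adjacency matrix S from node features.

    X: (n, d) node attribute matrix; W: (d, d') illustrative weight matrix.
    """
    H = np.maximum(X @ W, 0.0)                     # one-layer MLP, sigma = ReLU
    Hn = H / np.clip(np.linalg.norm(H, axis=1, keepdims=True), 1e-12, None)
    S_full = Hn @ Hn.T                             # phi = cosine similarity
    S = np.zeros_like(S_full)
    for i in range(S_full.shape[0]):               # keep the k largest per row
        topk = np.argsort(S_full[i])[-k:]
        S[i, topk] = S_full[i, topk]
    return S

rng = np.random.default_rng(0)
X = rng.random((5, 3)) + 0.1      # strictly positive features -> positive cosines
S = semantic_view(X, np.eye(3), k=2)  # identity "weights", purely for illustration
```

Each row of S keeps exactly k nonzero entries, so the resulting graph is a k-nearest-neighbor graph in the learned embedding space (note it is not symmetric until the normalization step of step three).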
Step three, constructing the structural feedback contrast learning module of the multi-view unsupervised graph contrast learning model, which specifically comprises the following steps:
(1) To ensure that the adjacency matrices of the three views are undirected and non-negative, a normalization operation is performed on the original structure anchor view G_a, the edge attribute anchor view G_e and the semantic enhancement view G_s:

A ← NOR(SYM(σ(A))), SYM(A) = (A + A^T)/2, for A ∈ {A_a, A_e, S}

where SYM(·) and NOR(·) are the symmetrization and normalization functions respectively, σ is a nonlinear activation function mapping element values to the [0,1] interval, and (·)^T is the transpose operation; the nonlinear activation function σ uses the ELU function for the full-graph parameterized learner, to prevent gradient vanishing, and the ReLU function for the multi-layer perceptron learner. In the specific implementation, simple data augmentation is applied to the three views using two common augmentation modes, feature masking and edge deletion, with the feature masking probability set to 0.7 and the edge deletion probability set to 0.8;
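One concrete reading of this post-processing step, assuming SYM averages a matrix with its transpose and NOR is symmetric degree normalization D^(-1/2) A D^(-1/2) (the patent names the functions; these particular formulas are a standard choice, not confirmed by the text):

```python
import numpy as np

def sym(A: np.ndarray) -> np.ndarray:
    """Symmetrize so the graph is undirected."""
    return (A + A.T) / 2.0

def nor(A: np.ndarray) -> np.ndarray:
    """Symmetric degree normalization D^{-1/2} A D^{-1/2} (safe for zero rows)."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def postprocess(A: np.ndarray) -> np.ndarray:
    """NOR(SYM(sigma(A))); ReLU stands in for the [0,1]-mapping activation."""
    return nor(sym(np.maximum(A, 0.0)))
```

The result is guaranteed symmetric and element-wise non-negative, which is exactly the property the contrastive module requires of all three views.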
(2) The three views are modeled by a weight-sharing graph convolutional neural network:

Z_v = σ(D_v^(-1/2)(A_v + I)D_v^(-1/2) X W^(l)), v ∈ {a, e, s}

where D_a^(-1/2)(A_a + I)D_a^(-1/2), D_e^(-1/2)(A_e + I)D_e^(-1/2) and D_s^(-1/2)(S + I)D_s^(-1/2) are the symmetric normalization operations on the adjacency matrices A_a, A_e and S of the three views, X is the node attribute matrix, W^(l) is the learnable weight matrix of the l-th hidden layer, and D_a, D_e and D_s are the degree matrices of (A_a + I), (A_e + I) and (S + I) respectively. The invention uses a 2-layer GCN model as the encoder and adds a multi-layer perceptron layer after the graph convolutional neural network to map the node representations into another latent space; the hidden layer dimension of the GCN is set to 512, the output layer dimension to 64, and the dropout rate to 0.5;
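A single GCN propagation step, applied with the same weight matrix W to each of the three views (that is the weight sharing), can be sketched as follows, with ReLU standing in for σ:

```python
import numpy as np

def gcn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN step: ReLU(D^{-1/2}(A+I)D^{-1/2} X W).

    Self-loops (A + I) guarantee every degree is >= 1, so the
    inverse square root is always well defined.
    """
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Calling `gcn_layer` with the same `W` on A_a, A_e and S yields the three view embeddings Z_a, Z_e and Z_s; stacking two such layers gives the 2-layer encoder described above.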
(3) Constructing the contrast learning loss L_as between the original structure anchor view G_a and the semantic enhancement view G_s, calculated as:

L_as = -(1/N) Σ_{i=1}^{N} log [ exp(sim(z_i^a, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^a, z_j^s)/θ) ]

where z_i^a and z_i^s are the embedded representations of node v_i in Z_a and Z_s, sim(·) is a similarity function computing the similarity between two node representations, N is the total number of nodes in the original input graph, and θ is the temperature factor controlling the concentration level of the distribution;
(4) Constructing the contrast learning loss L_es between the edge attribute anchor view G_e and the semantic enhancement view G_s, calculated analogously:

L_es = -(1/N) Σ_{i=1}^{N} log [ exp(sim(z_i^e, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^e, z_j^s)/θ) ]

Combining L_as and L_es gives the multi-view unsupervised graph contrast learning objective function:

L = μ L_as + (1 − μ) L_es

where μ is a non-negative tuning parameter measuring the importance of the different contrast losses: μ = 1 means that only the original structure anchor view and the semantic enhancement view are considered, and μ = 0 means that contrast learning is performed only between the edge attribute anchor view and the semantic enhancement view. For the microblog co-city relationship dataset and the Microsoft Academic teacher-student relationship dataset, μ is set to 0.4 and 0.1 respectively;
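The two InfoNCE-style losses and their μ-weighted combination can be sketched as follows; this is a one-directional variant with cosine similarity as sim(·), shown for illustration rather than as the patent's exact formulation:

```python
import numpy as np

def info_nce(Z1: np.ndarray, Z2: np.ndarray, theta: float = 0.5) -> float:
    """Node-level InfoNCE: the positive for node i in view 1 is node i in view 2."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / theta                          # cosine sim / temperature
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))        # -log softmax on the diagonal

def total_loss(Za, Ze, Zs, mu: float = 0.4, theta: float = 0.5) -> float:
    """L = mu * L_as + (1 - mu) * L_es, as in the objective above."""
    return mu * info_nce(Za, Zs, theta) + (1 - mu) * info_nce(Ze, Zs, theta)

rng = np.random.default_rng(1)
Z = rng.standard_normal((6, 4))   # toy embeddings for 6 nodes
l_same = info_nce(Z, Z)
```

Setting mu=1.0 in `total_loss` recovers the anchor-vs-semantic loss alone, mirroring the role of μ described above.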
(5) Designing the structural feedback mechanism; the main idea of the structural feedback mechanism is to slowly update the anchor view structures according to the self-enhanced semantic enhancement view, instead of keeping the anchor views G_a and G_e unchanged, which prevents noise of the original graph structure from being inherited during learning and allows the semantic enhancement view to learn complementary information from the original topology graph, the edge attribute graph and the semantic feature graph. The structural feedback mechanism is implemented as:
A a =ξA a +(1-ξ)S
A e =ξA e +(1-ξ)S
where ξ ∈ [0,1] is a decay rate adjusting the update speed of the anchor views G_a and G_e. During model training, the anchor views are updated once every δ iterations; for the microblog co-city relationship dataset and the Microsoft Academic teacher-student relationship dataset the optimal parameter combinations are {ξ = 0.9999, δ = 0} and {ξ = 0.9999, δ = 10} respectively;
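The two update rules above amount to an exponential-moving-average step of both anchor adjacency matrices toward the learned semantic structure S; a minimal sketch:

```python
import numpy as np

def feedback_update(A_a: np.ndarray, A_e: np.ndarray, S: np.ndarray,
                    xi: float = 0.9999):
    """Slow EMA-style structural feedback: A <- xi*A + (1-xi)*S.

    With xi close to 1 the anchor structures drift only slightly per update,
    which is what keeps the contrastive targets stable during training.
    """
    A_a = xi * A_a + (1 - xi) * S
    A_e = xi * A_e + (1 - xi) * S
    return A_a, A_e
```

In the training loop this would be called once every δ iterations, with ξ = 0.9999 as reported for both datasets.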
Step four, training the overall multi-view unsupervised graph contrast learning model; the model of this embodiment is trained for 4000 epochs, using the adaptive moment estimation (Adam) optimizer to minimize the model loss L, with the learning rate set to 0.01, and updating the model weight parameters by back-propagation. Because the invention targets implicit relation mining in an unsupervised scenario, there is no training set: during model training, all nodes and edges in the dataset are fed into the network together, finally yielding the multi-view unsupervised graph contrast learning model for implicit relation mining and the optimal semantic enhancement view adjacency matrix S;
Step five, testing the performance of the model on the implicit relation mining task; on the basis of step four, the optimal semantic enhancement view adjacency matrix S and the original adjacency matrix A_a are fused and fed into the graph encoder to obtain the final embedding matrix Z of all nodes, and a conventional link prediction method is applied to Z to predict the probability that a candidate edge (v_h, v_t) exists under the implicit relation R_im, where v_h denotes the head entity and v_t the tail entity of the link triple. When predicting the existence probability, the final node embedding matrix Z is projected onto a d-dimensional vector, and the scoring function Γ(v_h, R_im, v_t) is computed as:

Γ(v_h, R_im, v_t) = Z_h Z_t^T

where Z_h is the row vector of Z corresponding to the head entity v_h and Z_t is the row vector of Z corresponding to the tail entity v_t. The probability that the candidate edge (v_h, v_t) exists under the implicit relation R_im is then defined as:

P(v_h, R_im, v_t) = σ(Γ(v_h, R_im, v_t))

where σ(·) is a sigmoid-type function expressing the predicted existence probability of the candidate triple (v_h, R_im, v_t). Experimental results were obtained by testing on the test sets of the microblog co-city relationship dataset and the Microsoft Academic teacher-student relationship dataset with the above method, comparing this embodiment with the latest unsupervised graph representation learning methods GMI (graph mutual information), GCA (graph contrastive learning with adaptive augmentation), SUGRL (simple unsupervised graph representation learning) and SUBLIME (structure-guided contrastive learning framework), giving the results in Table 1; experiments with non-negative tuning parameters μ of different sizes give the results shown in Fig. 3:
Table 1 comparison of experimental results of different algorithms
The model provided by this embodiment obtains results superior to GMI, GCA, SUGRL and SUBLIME on the same datasets: it is 8.8% higher than GCA on the microblog co-city relationship dataset and 3.85% higher than GMI on the Microsoft Academic teacher-student relationship dataset, showing the model's good unsupervised implicit relation mining performance. As can be seen from Fig. 3, model performance varies with the non-negative contrast loss tuning parameter μ, and neither μ = 0 nor μ = 1 obtains the optimal result, i.e. both node attribute information and network structure information are important for improving the implicit relation mining task. However, the importance of the different contrast losses differs across datasets: for the Microsoft Academic teacher-student relationship dataset the contrast loss between the edge attribute anchor view and the semantic enhancement view is more important, i.e. the network topology benefits teacher-student relation mining, while for the microblog relationship dataset node attributes and network topology are of roughly equal importance for mining users' co-city relations.
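The step-five scoring can be sketched with an inner-product scorer followed by a sigmoid; the d-dimensional projection is omitted here for brevity, so this is a simplified stand-in rather than the patent's exact Γ:

```python
import numpy as np

def edge_probability(Z: np.ndarray, h: int, t: int) -> float:
    """Probability that candidate edge (v_h, v_t) exists under the implicit
    relation: sigmoid of the inner product of the two node embeddings.

    Z: final node embedding matrix; Z[h] and Z[t] are the head/tail rows.
    """
    score = Z[h] @ Z[t]                 # Gamma(v_h, R_im, v_t) = Z_h . Z_t
    return 1.0 / (1.0 + np.exp(-score)) # sigma maps the score into (0, 1)
```

Ranking candidate pairs by this probability is how the test-set implicit relations (co-city pairs, teacher-student pairs) are scored against the baselines in Table 1.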

Claims (10)

1. The method for constructing the multi-view unsupervised graph contrast learning model is characterized by comprising the following steps of:
Preprocessing effective data, obtaining first relation data, second relation data and a relation diagram network formed by the first relation and the second relation, collecting the relation diagram network into an original input diagram data set, and dividing the original input diagram data set into a verification set and a test set;
constructing an original structure anchoring view by utilizing an adjacent matrix of the original input diagram and a feature matrix of a node in the original input diagram; constructing an edge attribute anchoring view by utilizing an adjacent matrix of the original input diagram, a feature matrix of nodes in the original input diagram and the edge attribute of the network;
modeling the original input graph with a semantic feature learner to generate a fully connected semantic enhancement view adjacency matrix S̃; selecting, on the basis of the fully connected semantic enhancement view adjacency matrix S̃, the first k nodes semantically most similar to each node, and constructing a sparse semantic enhancement view adjacency matrix S with the k nodes as neighbor nodes; constructing the semantic enhancement view using the sparse semantic enhancement view adjacency matrix S;
normalizing the original structure anchor view, the edge attribute anchor view, and the semantic enhancement view, and ensuring that adjacency matrices in the three views are symmetric and that each element in an adjacency matrix is non-negative; modeling the normalized original structure anchoring view, the edge attribute anchoring view and the semantic enhancement view by using three weight-sharing graph convolution neural networks to generate embedded representations of the three views;
constructing a first contrast learning loss between the original structure anchor view and the semantic enhancement view from the embedded representation of the original structure anchor view and the embedded representation of the semantic enhancement view; constructing a second contrast learning loss between the edge attribute anchor view and the semantic enhancement view from the embedded representation of the edge attribute anchor view and the embedded representation of the semantic enhancement view; and combining the first contrast learning loss and the second contrast learning loss, finally training with the multi-view unsupervised graph contrast learning overall loss function to obtain the multi-view unsupervised graph contrast learning model.
2. The method for constructing a multi-view unsupervised graph contrast learning model according to claim 1, wherein the specific step of constructing the original structure anchor view comprises: reading the original input graph data to construct the adjacency matrix and node feature matrix of the original input graph, and constructing the original structure anchor view G_a = (A, X), where A ∈ ℝ^(n×n) is the adjacency matrix of the original input graph, X ∈ ℝ^(n×d) is the node attribute matrix of the original input graph, n is the number of nodes, and d is the feature dimension of the nodes;

the specific step of constructing the edge attribute anchor view comprises: computing the edge attribute of each edge in the original input graph from similarity indices of the two connected nodes, constructing the edge attribute matrix, and computing the original input graph adjacency matrix with edge attributes A_e = Sim ⊙ A, where Sim is the edge attribute matrix computed from the similarity indices of the two connected nodes, A ∈ ℝ^(n×n) is the adjacency matrix of the original input graph, and ⊙ denotes the Hadamard product; the adjacency matrix with edge attributes A_e and the node attribute matrix X of the original input graph are used to construct the edge attribute anchor view G_e = (A_e, X).
3. The multi-view unsupervised graph contrast learning model construction method according to claim 2, wherein the semantic feature learner adopts a full-graph parameterization method or a multi-layer perceptron;

when the full-graph parameterization method is used as the semantic feature learner, each element of the original input graph adjacency matrix is modeled directly by an independent parameter to generate the fully connected semantic enhancement view adjacency matrix S̃ = σ(Ω), where Ω is a parameter matrix and σ is a nonlinear activation function;

when the multi-layer perceptron is used as the semantic feature learner, the node features of the original input graph are embedded into a shallow space: X^(l+1) = MLP_ω(X^(l)) = σ(X^(l) W^(l)), where W^(l) is the parameter matrix of layer l and σ is a nonlinear activation function, and H = X^(l+1) W^(l+1) denotes the final node embeddings generated by the multi-layer perceptron; the fully connected semantic enhancement view adjacency matrix is then generated from the pairwise similarity of the node embeddings: S̃_ij = φ(H_i, H_j), where φ(·) is a non-parametric metric function for computing pairwise similarities.
4. The method for constructing a multi-view unsupervised graph contrast learning model according to claim 3, wherein

the first k nodes semantically most similar to each node are selected and taken as its neighbor nodes to construct the sparse semantic enhancement view adjacency matrix S, and the semantic enhancement view is constructed using the sparse semantic enhancement view adjacency matrix S, specifically: using the k-nearest-neighbor algorithm,

S_ij = S̃_ij if S̃_ij ∈ top-k(S̃_i), and S_ij = 0 otherwise,

where top-k(S̃_i) is the set of the k largest values of the row vector S̃_i, S̃_ij is the value at row i, column j of the fully connected semantic enhancement view adjacency matrix S̃, and S_ij is the value at row i, column j of the sparse semantic enhancement view adjacency matrix S; the semantic enhancement view G_s = (S, X) is constructed; the normalization of the original structure anchor view G_a, the edge attribute anchor view G_e and the semantic enhancement view G_s guarantees that the adjacency matrix A_a of the original structure anchor view, the adjacency matrix A_e of the edge attribute anchor view and the adjacency matrix S of the semantic enhancement view are symmetric and that each element of each adjacency matrix is non-negative:

A ← NOR(SYM(σ(A))), SYM(A) = (A + A^T)/2, for A ∈ {A_a, A_e, S}

where SYM(·) and NOR(·) are the symmetrization and normalization functions respectively, σ is a nonlinear activation function mapping element values to the [0,1] interval, and (·)^T is the transpose operation;
the embedded representations of the three views are as follows:

original structure anchor view: Z_a = σ(D_a^(-1/2)(A_a + I)D_a^(-1/2) X W^(l))

edge attribute anchor view: Z_e = σ(D_e^(-1/2)(A_e + I)D_e^(-1/2) X W^(l))

semantic enhancement view: Z_s = σ(D_s^(-1/2)(S + I)D_s^(-1/2) X W^(l))

where D_a^(-1/2)(A_a + I)D_a^(-1/2), D_e^(-1/2)(A_e + I)D_e^(-1/2) and D_s^(-1/2)(S + I)D_s^(-1/2) are the symmetric normalization operations on the adjacency matrices A_a, A_e and S of the three views, X is the node attribute matrix of the original input graph, W^(l) is the learnable weight matrix of the l-th hidden layer, and D_a, D_e and D_s are the degree matrices of (A_a + I), (A_e + I) and (S + I) respectively, I being the identity matrix.
5. The method for constructing a multi-view unsupervised graph contrast learning model according to claim 4, wherein the first contrast learning loss L_as is calculated as:

L_as = -(1/N) Σ_{i=1}^{N} log [ exp(sim(z_i^a, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^a, z_j^s)/θ) ]

the second contrast learning loss L_es is calculated as:

L_es = -(1/N) Σ_{i=1}^{N} log [ exp(sim(z_i^e, z_i^s)/θ) / Σ_{j=1}^{N} exp(sim(z_i^e, z_j^s)/θ) ]

where sim(·) is a similarity function computing the similarity between two node representations, N is the total number of nodes in the original input graph, and θ is the temperature factor controlling the concentration level of the distribution;

the multi-view unsupervised graph contrast learning total loss function L is calculated as:

L = μ L_as + (1 − μ) L_es

where μ is a non-negative tuning parameter for measuring the importance of the different contrast losses;

the training minimizes the total loss function L with an adaptive moment estimation optimizer, on the basis of the multi-view unsupervised graph contrast learning total loss function, and updates the model weight parameters by a back-propagation method, finally obtaining the multi-view unsupervised graph contrast learning model.
6. The method for constructing a multi-view unsupervised graph contrast learning model according to claim 2, further comprising a structural feedback mechanism for slowly updating the original structure anchor view G_a and the edge attribute anchor view G_e according to the self-enhanced semantic enhancement view G_s, preventing noise of the original graph structure from being inherited during learning, implemented as:
A a =ξA a +(1-ξ)S
A e =ξA e +(1-ξ)S
where ξ ∈ [0,1] is the decay rate, adjusting the update speed of the anchor views G_a and G_e.
7. A multi-view unsupervised graph contrast learning model building system, comprising:
the data preprocessing module is used for preprocessing the effective data, acquiring first relation data, second relation data and a relation graph network formed by the first relation and the second relation, gathering the relation graph network into an original input graph data set, and dividing the original input graph data set into a verification set and a test set;
the anchoring view generation module is used for constructing an original structure anchoring view through the adjacent matrix of the original input diagram and the feature matrix of the nodes in the original input diagram; constructing an edge attribute anchoring view through an adjacent matrix of the original input diagram, a feature matrix of nodes in the original input diagram and the edge attribute of the network;
the semantic enhancement view learning module, for modeling the original input graph with a semantic feature learner to generate a fully connected semantic enhancement view adjacency matrix S̃; selecting, on the basis of the fully connected semantic enhancement view adjacency matrix S̃, the first k nodes semantically most similar to each node, and constructing a sparse semantic enhancement view adjacency matrix S with the k nodes as neighbor nodes; and constructing the semantic enhancement view using the sparse semantic enhancement view adjacency matrix S;
the contrast learning module or the structural feedback contrast learning module:
the contrast learning module is used for normalizing the original structure anchoring view, the edge attribute anchoring view and the semantic enhancement view, and ensuring that an adjacent matrix in the three views is symmetrical and each element in the adjacent matrix is non-negative; modeling the normalized original structure anchoring view, the edge attribute anchoring view and the semantic enhancement view by using three weight-sharing graph convolution neural networks to generate embedded representations of the three views; constructing a first contrast learning penalty between the original structure anchor view and the semantically enhanced view from the embedded representation of the original structure anchor view and the embedded representation of the semantically enhanced view; constructing a second contrast learning penalty between the edge attribute anchor view and the semantic enhancement view according to the embedded representation of the edge attribute anchor view and the embedded representation of the semantic enhancement view; combining the first comparison learning loss and the second comparison learning loss, and finally training to obtain a multi-view unsupervised graph comparison learning model by using a multi-view unsupervised graph comparison learning overall loss function;
the structural feedback contrast learning module is the contrast learning module with an added structural feedback mechanism, for slowly updating the original structure anchor view G_a and the edge attribute anchor view G_e according to the self-enhanced semantic enhancement view G_s, preventing noise of the original graph structure from being inherited during learning.
8. Application of the multi-view unsupervised graph contrast learning model construction method according to any one of claims 1 to 6 to implicit relation mining, wherein the implicit relation is a potential relation between entities in a network, and in the multi-view unsupervised graph contrast learning model construction method the first relation data and the second relation data are explicit relation data and implicit relation data respectively; the semantic enhancement view adjacency matrix corresponding to the finally obtained multi-view unsupervised graph contrast learning model, namely the optimal semantic enhancement view adjacency matrix S, and the original adjacency matrix A_a are fused and fed into the graph encoder to obtain the final embedded representation Z of all network nodes, a link prediction method is used to predict the probability that candidate edges exist under the implicit relation R_im, and the model's performance at mining implicit relations is checked on the test set.
9. A computer comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-view unsupervised graph contrast learning model construction method according to any one of claims 1 to 6 when executing the computer program.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-view unsupervised graph contrast learning model construction method according to any one of claims 1 to 6.
CN202310864902.9A 2023-04-04 2023-07-14 Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application Pending CN116861923A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310355408X 2023-04-04
CN202310355408 2023-04-04

Publications (1)

Publication Number Publication Date
CN116861923A true CN116861923A (en) 2023-10-10

Family

ID=88235601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310864902.9A Pending CN116861923A (en) 2023-04-04 2023-07-14 Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application

Country Status (1)

Country Link
CN (1) CN116861923A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117829683A (en) * 2024-03-04 2024-04-05 国网山东省电力公司信息通信公司 Electric power Internet of things data quality analysis method and system based on graph comparison learning

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633478A (en) * 2020-12-31 2021-04-09 天津大学 Construction of graph convolution network learning model based on ontology semantics
CN112784918A (en) * 2021-02-01 2021-05-11 中国科学院自动化研究所 Node identification method, system and device based on unsupervised graph representation learning
CN113591879A (en) * 2021-07-22 2021-11-02 大连理工大学 Deep multi-view clustering method, network, device and storage medium based on self-supervision learning
CN113989582A (en) * 2021-08-26 2022-01-28 中国科学院信息工程研究所 Self-supervision visual model pre-training method based on dense semantic comparison
CN114419396A (en) * 2022-01-20 2022-04-29 江苏大学 Semantic level picture decoupling and generation optimization method
CN115082142A (en) * 2022-05-10 2022-09-20 华南理工大学 Recommendation method, device and medium based on heterogeneous relational graph neural network
US20220350998A1 (en) * 2021-04-30 2022-11-03 International Business Machines Corporation Multi-Modal Learning Based Intelligent Enhancement of Post Optical Character Recognition Error Correction
CN115481682A (en) * 2022-09-11 2022-12-16 北京工业大学 Graph classification training method based on supervised contrast learning and structure inference
CN115661457A (en) * 2022-10-31 2023-01-31 大连理工大学 Small sample semantic segmentation method based on network motif graph representation learning
US20230153627A1 (en) * 2020-04-07 2023-05-18 Koninklijke Philips N.V. Training a convolutional neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BINHUI XIE et al.: "SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation", IEEE, vol. 45, no. 7, 17 January 2023 (2023-01-17), page 9004 *
MENG Chunyun: "Research on Semantic-Level Representation Learning Methods and Their Applications", China Masters' Theses Full-text Database, Information Science & Technology series, no. 3, 15 March 2023 (2023-03-15), pages 138 - 383 *
DONG Xiwei: "Semi-Supervised Multi-View Image Classification Based on Local Manifold Reconstruction", Computer Engineering and Applications, vol. 52, no. 18, 15 September 2016 (2016-09-15), page 24 *
YANG Changzheng: "Research on Return Loss and Mutual-Inductance Coupling of Rumor-Refuting Information in Social Networks", Journal of Intelligence, vol. 42, no. 5, 10 May 2023 (2023-05-10), page 102 *
GAO Zixian: "Simulated Face Generation Based on Deep Semantic Relationship Inference", China Masters' Theses Full-text Database, Information Science & Technology series, no. 2, 15 February 2023 (2023-02-15), pages 138 - 1560 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117829683A (en) * 2024-03-04 2024-04-05 国网山东省电力公司信息通信公司 Electric power Internet of things data quality analysis method and system based on graph comparison learning

Similar Documents

Publication Publication Date Title
WO2023000574A1 (en) Model training method, apparatus and device, and readable storage medium
Shen et al. Causally regularized learning with agnostic data selection bias
US9965717B2 (en) Learning image representation by distilling from multi-task networks
CN112529168A (en) GCN-based attribute multilayer network representation learning method
CN111382283B (en) Resource category label labeling method and device, computer equipment and storage medium
Ghasemi et al. User embedding for expert finding in community question answering
CN109992784B (en) Heterogeneous network construction and distance measurement method fusing multi-mode information
CN113806582B (en) Image retrieval method, image retrieval device, electronic equipment and storage medium
CN109284414B (en) Cross-modal content retrieval method and system based on semantic preservation
CN113918834B (en) Graph convolution collaborative filtering recommendation method fusing social relations
Lin et al. Deep unsupervised hashing with latent semantic components
CN116861923A (en) Multi-view unsupervised graph contrast learning model construction method, system, computer, storage medium and application
Yin et al. An Anomaly Detection Model Based On Deep Auto-Encoder and Capsule Graph Convolution via Sparrow Search Algorithm in 6G Internet-of-Everything
Area et al. Analysis of Bayes, neural network and tree classifier of classification technique in data mining using WEKA
Wang et al. Link prediction in heterogeneous collaboration networks
CN109933720B (en) Dynamic recommendation method based on user interest adaptive evolution
WO2020147259A1 (en) User portait method and apparatus, readable storage medium, and terminal device
Ding et al. User identification across multiple social networks based on naive Bayes model
Surekha et al. Digital misinformation and fake news detection using WoT integration with Asian social networks fusion based feature extraction with text and image classification by machine learning architectures
CN117349494A (en) Graph classification method, system, medium and equipment for space graph convolution neural network
CN117435685A (en) Document retrieval method, document retrieval device, computer equipment, storage medium and product
CN116467666A (en) Graph anomaly detection method and system based on integrated learning and active learning
CN111144453A (en) Method and equipment for constructing multi-model fusion calculation model and method and equipment for identifying website data
CN115344794A (en) Scenic spot recommendation method based on knowledge map semantic embedding
CN112307343B (en) Cross-E-book city user alignment method based on double-layer iterative compensation and full-face representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination