CN114817568A - Knowledge hypergraph link prediction method combining attention mechanism and convolutional neural network - Google Patents
Info
- Publication number
- CN114817568A (application CN202210475730.1A)
- Authority
- CN
- China
- Prior art keywords
- vector
- entity
- tuple
- initial
- entities
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
The invention discloses a knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network, which comprises the following steps: S1, loading the knowledge hypergraph to be complemented to obtain entities and relations; S2, initializing the loaded entities and relations to obtain initial entity embedding vectors and initial relation embedding vectors; S3, inputting the initial entity embedding vectors and initial relation embedding vectors into an ACLP model in tuple form for training; S4, processing the initial relation embedding vector to obtain a processed relation attention vector; S5, processing the initial entity embedding vector to obtain a processed entity projection embedding vector; and S6, scoring the processed tuples through a preset scoring module to obtain a prediction result, judging whether the scoring result of a tuple is correct, and adding correct tuples to the knowledge hypergraph to complete it. The invention enables the processed tuples to contain more information and improves the link prediction accuracy.
Description
Technical Field
The invention relates to the technical field of knowledge hypergraphs, in particular to a knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network.
Background
The knowledge hypergraph is a knowledge graph with a hypergraph structure: by introducing hyperedge relations it can represent relations among multiple entities in the real world, and it is a generalization of the knowledge graph. A knowledge hypergraph consists of entities and hyper-relations, has the node and hyperedge characteristics of a hypergraph, and can be used to record objects and relations in the real world. However, existing knowledge hypergraphs are generally considered incomplete, because the facts of the real world are intricate and difficult to store exhaustively. To make an incomplete knowledge hypergraph as complete as possible, it needs to be complemented. Link prediction aims to predict unknown tuples from the relations and entities already present in the knowledge hypergraph so as to complete it; link prediction can therefore alleviate the incompleteness problem of the knowledge hypergraph.
Existing knowledge hypergraph link prediction largely uses methods based on embedded representation models. Their advantage is that a complex data structure can be mapped to a Euclidean space and converted into a vectorized representation, so that association relations are easier to discover and reasoning can be completed. For different tasks, the vectorized representation obtained by an embedded representation model can be fed to a neural network, which deeply learns the structural and semantic features in the knowledge hypergraph so that missing relations and entities can be effectively predicted. However, the processing in conventional embedded-representation methods targets only the entities and ignores the multivariate relation, so the multivariate relation receives only an initial embedding and carries no further information, which restricts the performance of the algorithm.
Disclosure of Invention
In order to overcome the defects of the above technology, the invention provides a knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network.
Interpretation of terms:
1. ACLP: Attention and Convolution network Link Prediction.
2. ResidualNet: residual network.
3. MLP: Multilayer Perceptron.
4. MRR: Mean Reciprocal Rank.
The invention adopts an improved attention mechanism module to enrich the information of the multivariate relation embedding vector: the information in the entities is added to the relation embedding vector in proportion to its weight, yielding a processed relation attention vector that contains more information. In addition, adjacent-entity information is added to the convolution kernel used for extracting entity features, so that the extracted vector contains information about the number of adjacent entities in the same tuple. To avoid losing too much initial entity information during training, the entity projection vector is summed with the corresponding initial entity embedding vector before participating in the link prediction scoring. Finally, a residual network and a multilayer perceptron are used for optimization, further improving the link prediction accuracy.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
a knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network is used for carrying out reasoning prediction on unknown tuples in a knowledge hypergraph and at least comprises the following steps:
s1, loading the knowledge hypergraph to be complemented to obtain entities and relations in the knowledge hypergraph;
s2, initializing the entities and the relations loaded and obtained in the step S1 to obtain initial entity embedded vectors and initial relation embedded vectors;
s3, inputting the initial entity embedding vector and the initial relation embedding vector obtained in the step S2 into an ACLP model in a tuple form for training, wherein the ACLP model at least comprises an attention mechanism module and a convolutional neural network module;
s4, processing the initial relation embedding vector obtained in the step S2 through the attention mechanism module in the step S3, and adding the information of the entities in the tuples into the relation embedding vector in proportion to the importance degree of the entities to the relation to obtain a processed relation attention vector;
s5, performing feature extraction on the initial entity embedded vector obtained in the step S2 through the convolutional neural network module in the step S3, and adding the information of the adjacent number of the entities in the tuple to a convolutional kernel in the convolutional neural network module to obtain a processed entity projection embedded vector;
s6, scoring the processed tuples through a preset scoring module to obtain a prediction result, and judging whether the scoring result of the tuples is correct according to the evaluation index: if the tuple is correct, adding the correct tuple into the knowledge hypergraph to complement the knowledge hypergraph, and if the tuple is wrong, discarding the wrong tuple;
wherein the processed tuple comprises a processed relationship vector and a processed entity vector.
Further, let the knowledge hypergraph be a graph consisting of vertices and hyperedges, written as:
KHG = {V, E}
in the above formula, V = {v_1, v_2, …, v_|V|} represents the set of entities in the KHG, and |V| represents the number of entities contained in the KHG; E = {e_1, e_2, …, e_|E|} represents the set of relationships between entities, i.e. the set of hyperedges, and |E| represents the number of hyperedges contained in the KHG; any hyperedge e corresponds to a tuple T = e(v_1, v_2, …, v_|e|), T ∈ τ, where |e| represents the number of entities contained in the hyperedge e, i.e. the arity of e, and τ represents the set of all tuples of the ideal complete target knowledge hypergraph KHG.
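As a concrete illustration of this definition, the following minimal Python sketch (all names hypothetical, not part of the patent) stores a knowledge hypergraph as a set of variable-arity tuples T = e(v_1, …, v_|e|):

```python
# A minimal, hypothetical container for KHG = {V, E}: entities V, hyperedge
# labels E, and observed tuples (a subset of tau) of the form e(v_1, ..., v_|e|).
class KnowledgeHypergraph:
    def __init__(self):
        self.entities = set()    # V
        self.relations = set()   # E
        self.tuples = set()      # observed subset of tau

    def add_tuple(self, relation, *entities):
        """Record one tuple; the arity |e| may differ between tuples."""
        self.relations.add(relation)
        self.entities.update(entities)
        self.tuples.add((relation,) + tuple(entities))

kh = KnowledgeHypergraph()
kh.add_tuple("flight", "Beijing", "Shanghai", "CA1858")    # arity 3
kh.add_tuple("locatedIn", "Shanghai", "China")             # arity 2
print(len(kh.entities), len(kh.relations), len(kh.tuples))  # 4 2 2
```

Because a hyperedge may connect more than two entities, the same container covers binary knowledge-graph triples as a special case.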
Further, step S4 specifically includes:
the input of the attention mechanism module is the initial relationship embedding vector r_i ∈ R^{d_e} of the relation e_i in the tuple and the corresponding set of initial entity embedding vectors V_i ∈ R^{|e_i|×d_v}, where d_e denotes the dimension of the relation e when initialized as a vector and may be predefined, V_i is the matrix of all entity vectors in relation e_i, |e_i| denotes the number of entities the relation e_i contains, and d_v denotes the dimension of an entity v when initialized as a vector;
first, the initial relationship embedding vector r_i and each initial entity embedding vector v_j in V_i are concatenated; the concatenated vector is linearly mapped and then processed by the LeakyReLU nonlinear function, yielding a projection vector c_i that contains the information of both the initial entity embedding vectors and the initial relationship embedding vector. The calculation process is shown in formula (1):
c_{i,j} = LeakyReLU(W_c · concat(r_i, v_j))   (1)
in the above formula, c_i ∈ R^{|e_i|} denotes the projection vector, W_c ∈ R^{1×(d_e+d_v)} denotes the mapping matrix, and concat denotes the concatenation operation;
the projection vector c_i is processed by softmax to obtain the weight vector β_i between the initial relationship embedding vector r_i and the set of initial entity embedding vectors V_i. The calculation process of softmax is shown in formula (2):
β_{i,j} = exp(c_{i,j}) / Σ_k exp(c_{i,k})   (2)
in the above formula, softmax denotes the flexible maximum transfer function, exp(c_{i,j}) denotes taking e to the power c_{i,j}, and β_{i,j} denotes the j-th entry of β_i;
the relation attention vector a_i is obtained by adding up the products of β_{i,j} and the corresponding entity vectors v_j, as shown in formula (3):
a_i = r_i + Σ_j β_{i,j} · v_j   (3)
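The attention computation of formulas (1)–(3) can be sketched in NumPy as follows; the concrete dimensions, the single-row mapping matrix, and the use of d_e = d_v (so the weighted entity sum can be added to the relation vector) are assumptions for illustration, not the patent's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_e = d_v = 8                        # assumed dims; equal so formula (3) type-checks
arity = 3                            # |e_i|: number of entities in the tuple
r = rng.normal(size=d_e)             # initial relation embedding vector
V = rng.normal(size=(arity, d_v))    # initial entity embedding vectors

W_c = rng.normal(size=(1, d_e + d_v))  # mapping matrix: one attention score per entity

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

# Formula (1): concatenate relation and entity, linearly map, apply LeakyReLU
c = np.array([leaky_relu(W_c @ np.concatenate([r, V[j]]))[0] for j in range(arity)])

# Formula (2): softmax over the entities of the tuple gives the weight vector
beta = np.exp(c) / np.exp(c).sum()

# Formula (3): add the weighted entity information to the relation embedding
r_att = r + (beta[:, None] * V).sum(axis=0)

print(beta.round(3), r_att.shape)
```

The weights sum to 1, so each entity contributes to the relation attention vector in proportion to its importance to the relation.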
further, step S5 specifically includes:
first, the convolutional neural network module embeds the vector with the initial entityAs an input to the process, the process may,using convolution kernels containing tuple location informationExtracting initial entity embedded vectorsThe method of (a), wherein,then using the parameter neb i To convolution kernelAdding information of the number of adjacent entities so thatThe extracted features are changed according to the number of adjacent entities, and convolution embedded vectors are obtained after convolution processingThe calculation process is shown in formula (4):
in the above formula, the first and second carbon atoms are,the jth row in the convolution kernel representing the ith position in the tuple,R l representing a convolution kernelL represents the convolution kernel length;
to derive a complete mapping vectorEmbedding vectors into the obtained convolutionPerforming a concatenation operation and a linear mapping:
in the above formula, the first and second carbon atoms are,a linear mapping matrix is represented that is, representing a mapping matrixQ denotes the size of the feature map, q ═ 1 + d-l/s); after a plurality of vectors are connected in series into a single vector, the dimension is increased, and a linear mapping matrix is usedMapping nq-dimensional vector to d v A vector of dimensions;
embedding an initial entity into a vectorAdding the transformed mapping vectorCalculating to obtain entity projection embedded vectorThe calculation process is shown in formula (6):
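Formulas (4)–(6) can be sketched as a strided 1-D convolution followed by concatenation and linear mapping; the kernel sizes, the stride, and treating neb_i as a scalar multiplier are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d_v, l, s, n = 8, 3, 1, 2           # embedding dim, kernel length, stride, kernel count
q = 1 + (d_v - l) // s              # feature-map size, as in formula (5)

v = rng.normal(size=d_v)            # initial entity embedding vector
omega = rng.normal(size=(n, l))     # position-specific convolution kernels
neb = 2                             # assumed count of adjacent entities in the tuple

# Formula (4): strided 1-D convolution of v with each kernel row, scaled by neb
feat = np.stack([
    neb * np.array([v[k * s:k * s + l] @ omega[j] for k in range(q)])
    for j in range(n)
])                                  # shape (n, q)

# Formula (5): concatenate the feature maps and map back to d_v dimensions
W_m = rng.normal(size=(d_v, n * q))
m = W_m @ feat.reshape(-1)

# Formula (6): keep the initial entity information by summing with v
v_proj = v + m
print(feat.shape, v_proj.shape)  # (2, 6) (8,)
```

The final sum in formula (6) is what prevents excessive loss of the initial entity information during training.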
further, the ACLP model further comprises an optimization module comprising at least a residual network.
Further, before step S6, the entity projection embedding vector obtained from the convolutional neural network module is processed by a residual network, which specifically includes the following steps:
the residual function F(x) of the residual network adopts a convolutional neural network, and the process of the whole residual network is shown in formula (7):
h_i = δ(F(v̂_i) + v̂_i)   (7)
in the above formula, h_i denotes the entity residual vector, δ denotes the ReLU activation function, the convolution kernel at the i-th position in the tuple belongs to R^{n×l}, where n denotes the number of convolution kernels at the position and l denotes the convolution kernel length, l and n being predefined; the mapping result F(x) and v̂_i are vectors of the same dimension.
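A minimal sketch of the residual step in formula (7), with a single linear layer standing in for the convolutional residual function F(x) (an assumption made for brevity; the dimension d is likewise assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
x = rng.normal(size=d)              # entity projection embedding vector

W = rng.normal(size=(d, d)) * 0.1   # stand-in residual mapping F, same dimension as x

def residual_block(x):
    # Formula (7): h = ReLU(F(x) + x); the skip connection keeps a direct
    # gradient path open and mitigates the vanishing-gradient problem.
    return np.maximum(W @ x + x, 0.0)

h = residual_block(x)
print(h.shape)  # (8,)
```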
Further, the optimization module further comprises a multilayer perceptron.
Further, before step S6, the entity residual vector h_i is processed by the multilayer perceptron, which specifically includes the following steps:
the multilayer perceptron takes the entity residual vector h_i obtained from formula (7) as the input layer vector; the input layer vector is connected to the output through weights, and the information propagation process of the multilayer perceptron is expressed mathematically in formula (8):
h^x = δ_x(W^x · h^{x−1} + b^x)   (8)
in the above formula, h^x denotes the entity perception vector output by the x-th layer of neurons, W^x denotes the transformation parameters from layer x−1 to layer x, W^x ∈ R^{D_x×D_{x−1}}, where D_x denotes the dimension of the x-th layer; b^x denotes the bias parameter of the x-th layer; δ_x denotes the activation function of the x-th layer.
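Formula (8) applied layer by layer can be sketched as follows; the layer sizes and the tanh activation are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
dims = [8, 16, 8]                   # assumed layer dimensions D_0, D_1, D_2

layers = [(rng.normal(size=(dims[x], dims[x - 1])) * 0.1, np.zeros(dims[x]))
          for x in range(1, len(dims))]

def mlp(h):
    # Formula (8): h^x = delta_x(W^x h^(x-1) + b^x), applied layer by layer
    for W, b in layers:
        h = np.tanh(W @ h + b)      # tanh as an example activation delta_x
    return h

h0 = rng.normal(size=dims[0])       # entity residual vector as the input layer
out = mlp(h0)
print(out.shape)  # (8,)
```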
Further, in step S6, scoring is performed on the processed tuples through a preset scoring module, which specifically includes the following steps:
the entity perception vectors ĥ_j are obtained after the initial relationship embedding vector has been processed in step S4 and the initial entity embedding vector has been processed in step S5 and optimized by the optimization module. The tuple T is scored by the inner product between the relation attention vector a_e and all entity perception vectors within the tuple, as shown in formula (9):
φ(T) = Σ_k a_e[k] · Π_{j=1}^{|e|} ĥ_j[k]   (9)
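Reading the inner product between the relation vector and all entity vectors as a generalized (multilinear) product — an assumption, since the original formula is not reproduced here — the scoring function can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)
d, arity = 8, 3
a_e = rng.normal(size=d)            # relation attention vector
H = rng.normal(size=(arity, d))     # entity perception vectors of the tuple

def score(a_e, H):
    # Sketch of formula (9): element-wise product of the relation vector
    # with all entity vectors, summed over the embedding dimensions.
    return float(np.sum(a_e * np.prod(H, axis=0)))

print(score(a_e, H))
```

A higher score indicates the tuple is more likely to be a true fact of the knowledge hypergraph.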
further, in step S6, determining whether the processed tuple is correct includes the following steps:
replacing an entity v of a tuple T when making a prediction i Creating a set of negative tuples G for arbitrary n entities neg(T) Is marked as T ', T' is belonged to G neg(T) (ii) a Scoring the tuple T' by adopting a formula (9), and according to the height of the score, G neg(T) The tuples in (1) are sorted in ascending order to obtainTuple T is in G neg(T) Rank of (1); according to different rank calculation methods, adopting any one evaluation method of MRR or Hit @ n;
MRR stands for mean reciprocal rank, calculate G neg(T) The reciprocal and mean of rank of the medium tuple T'; the MRR calculation formula is shown in formula (10):
in the above formula, Σ represents the pair G neg(T) The reciprocal of the medium tuple rank is subjected to traversal summation, and the effect is better when the MRR value is larger;
hit @ n represents a type of evaluation method, and the calculation formula thereof is shown in formula (11):
if rank is not less than n, T' is regarded as positive tuple, n is 1, 3 or 10, num represents the number of positive tuples; the greater the Hit @ n, the better the effect.
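The two evaluation metrics follow directly from formulas (10) and (11); the rank list below is hypothetical:

```python
def mrr(ranks):
    # Formula (10): mean of the reciprocal ranks; larger is better.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hit_at_n(ranks, n):
    # Formula (11): fraction of test tuples ranked within the top n.
    num = sum(1 for r in ranks if r <= n)
    return num / len(ranks)

ranks = [1, 3, 2, 10, 50]           # hypothetical ranks of the true tuples
print(round(mrr(ranks), 4))         # 0.3907
print(hit_at_n(ranks, 1),           # 0.2
      hit_at_n(ranks, 3),           # 0.6
      hit_at_n(ranks, 10))          # 0.8
```

Because a perfectly ranked true tuple contributes 1/1, MRR rewards placing true tuples at the very top far more than Hit@10 does.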
The invention has the beneficial effects that:
compared with the traditional knowledge hypergraph link prediction method, the method for processing the multi-element relation in the knowledge hypergraph mainly uses the attention mechanism module to add the entity information in the element group to the relation embedding vector, so that the processed relation attention vector is obtained and contains more information. And the number information of the adjacent entities in the tuple is added into the used convolutional neural network module, so that more information in the tuple can be extracted when the convolutional neural network module extracts the entity characteristics. Furthermore, the ACLP model is optimized, and a residual error network is used for processing the vector passing through the convolutional neural network module, so that the problem of gradient disappearance is relieved, the learning can be continuously performed, and the learning loss value is reduced. In addition, in order to enhance the nonlinear learning capability of the model, a multilayer perceptron is added behind the residual error network, so that the model can learn more features, and the accuracy of the model in knowledge hypergraph link prediction is improved.
Drawings
FIG. 1 is a flow chart of a knowledge hypergraph link prediction method that combines the attention mechanism with a convolutional neural network, as described in the present invention.
FIG. 2 is a flow diagram of relationship and entity information in the rich knowledge hypergraph of the present invention.
FIG. 3 is a block diagram of the structure of the modules used in the knowledge hypergraph link prediction method of the joint attention mechanism and convolutional neural network described in the present invention.
FIG. 4 is a schematic diagram of the ACLP model of the present invention.
FIG. 5 is a schematic diagram of the calculation process of the relationship attention vector of the present invention.
FIG. 6 is a process diagram for computing the entity projection embedding vector of the present invention.
Detailed Description
In order to facilitate a better understanding of the invention for those skilled in the art, the invention will be described in further detail with reference to the accompanying drawings and specific examples, which are given by way of illustration only and do not limit the scope of the invention.
As shown in FIG. 1, the invention mainly combines an attention mechanism and a convolutional neural network, so that the relation embedding vector can contain more information and the convolutional neural network can extract more entity embedding features, realizing high-precision reasoning on the relations and entities in the knowledge hypergraph. FIG. 2 is a flow chart of enriching the information contained in the relations and entities of the knowledge hypergraph: the invention obtains the entity information and relation information from the knowledge hypergraph to be complemented, then uses the convolutional neural network module to extract the features in the entities and the attention mechanism module to learn the information of the entities near the relation in the same tuple into the relation vector, so that both the entity vector and the relation vector contain more information, and the subsequent scoring module can more effectively judge from this rich information whether an input tuple is correct or wrong. If a tuple is judged to be wrong, it is discarded; if it is judged to be correct, it is added to the knowledge hypergraph to complete it. In addition, after the relation embedding vector and the entity embedding vector are processed, the optimization module is applied: a residual network alleviates the vanishing-gradient problem of the method, and a multilayer perceptron enhances its nonlinear learning capability, further improving the accuracy of link prediction in the knowledge hypergraph.
FIG. 3 is a block diagram illustrating the structure of the modules used in the knowledge hypergraph link prediction method of the joint attention mechanism and convolutional neural network of the invention. The most important parts are the convolutional neural network module and the attention mechanism module, which process the loaded data to obtain the entity projection embedding vector and the relation attention vector containing richer information. The optimization module comprises a residual network and a multilayer perceptron and is used to further enhance the knowledge hypergraph link prediction effect, making the prediction result more accurate. The scoring module scores the processed relation attention vector and entity projection embedding vector and judges whether the tuple to which the entity and the relation belong is correct; a wrong tuple is discarded, and a correct tuple is added to the knowledge hypergraph to complete it.
The schematic diagram of the theory of the ACLP model of the invention is shown in FIG. 4, and the ACLP model mainly comprises three steps: (1) generating a relation attention vector, wherein the calculation process of the relation attention vector is shown in FIG. 5; (2) the calculation process of the entity projection embedding vector is shown in FIG. 6; (3) and (4) tuple scoring. In FIGS. 5 and 6, concat represents the concatenation operation and project represents the linear mapping.
Before specifically describing the knowledge hypergraph link prediction method combining the attention mechanism and the convolutional neural network, the definition of the knowledge hypergraph is given first. Let the knowledge hypergraph be a graph consisting of vertices and hyperedges, written as:
KHG={V,E}
in the above formula, V = {v_1, v_2, …, v_|V|} denotes the set of entities in KHG, and |V| denotes the number of entities contained in the KHG; E = {e_1, e_2, …, e_|E|} denotes the set of relationships between entities, i.e. the set of hyperedges, and |E| denotes the number of hyperedges contained in the KHG; any hyperedge e corresponds to a tuple T = e(v_1, v_2, …, v_|e|), T ∈ τ, where |e| denotes the number of entities contained in the hyperedge e, i.e. the arity of e, and τ denotes the set of all tuples of the ideal complete target knowledge hypergraph KHG.
The method for predicting the knowledge hypergraph link of the joint attention mechanism and the convolutional neural network is used for performing inference prediction on unknown tuples in the knowledge hypergraph, and comprises the following steps as shown in fig. 1 to 6:
and step S1, loading the knowledge hypergraph to be complemented to obtain the entities and the relations in the knowledge hypergraph. Specifically, the supergraph used in this embodiment is stored in a text form, and the supergraph is loaded to the ACLP model in a tuple form for processing through a data loading function, the same superedges or entities may exist between tuples, and through these same superedges and entities, a link is formed between tuples, so that a whole supergraph is formed, which contains rich semantic information and can reflect facts contained in reality.
And step S2, initializing the entities and the relations obtained by loading in the step S1 to obtain initial entity embedded vectors and initial relation embedded vectors.
After the knowledge hypergraph is loaded, the entities and relationships in it need to be initialized and converted into embedding vectors. The specific initialization is similar to word embedding: a word matrix is obtained according to the number of words and the defined dimensions, and the word matrix is multiplied by a randomly initialized embedding matrix to obtain word embedding vectors. Likewise, the method initializes an entity matrix and a relation matrix analogous to the word matrix according to the entity information and the relation information, and multiplies them by randomly initialized matrices to obtain the initial entity embedding vectors and the initial relation embedding vectors. The entities and relations of the knowledge hypergraph are thus embedded into a continuous vector space, which preserves the structural information of the knowledge hypergraph while facilitating calculation; converting a complex data structure into a vectorized representation through embedded representation also brings convenience to subsequent work. When knowledge hypergraph inference is performed, the embedded representation of entities and relations maps the relational information hidden in the graph structure to a Euclidean space, so that relations that were originally difficult to discover become obvious; knowledge hypergraph link prediction that reasons with these embedding vectors can therefore complete the inference task better and predict the missing entities and relations.
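The initialization described here — a one-hot "word matrix" multiplied by a randomly initialized embedding matrix — reduces to a row lookup; the following sketch uses assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
num_entities, num_relations = 100, 10   # hypothetical counts obtained from loading
d_v, d_e = 32, 32                       # predefined embedding dimensions

# Randomly initialized embedding matrices: row i holds the initial vector
# of entity i (respectively relation i).
entity_emb = rng.normal(scale=0.1, size=(num_entities, d_v))
relation_emb = rng.normal(scale=0.1, size=(num_relations, d_e))

# Multiplying a one-hot row of the "entity matrix" by the embedding matrix
# simply selects that row, i.e. an embedding lookup.
one_hot = np.zeros(num_entities)
one_hot[7] = 1.0
v7 = one_hot @ entity_emb
print(v7.shape, np.allclose(v7, entity_emb[7]))  # (32,) True
```

This equivalence is why frameworks implement the initialization as an embedding table rather than an explicit matrix product.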
And S3, inputting the initial entity embedded vector and the initial relation embedded vector obtained in the step S2 into an ACLP model in a tuple form for training, wherein the ACLP model comprises an attention mechanism module, a convolutional neural network module and an optimization module.
Specifically, the attention mechanism module is configured to process the initial relationship embedding vector obtained in step S2, the convolutional neural network module is configured to process the initial entity embedding vector obtained in step S2, and the optimization module comprises a residual network and a multilayer perceptron: the residual network processes the entity projection embedding vector produced by the convolutional neural network module, and the multilayer perceptron processes the entity residual vector.
The whole ACLP model mainly applies different processing to the initial entity embedding vectors and initial relationship embedding vectors in the tuples of the knowledge hypergraph; the aim of this processing is to ensure that the initial entity embedding vector and the initial relationship embedding vector finally contain more information beneficial to link prediction. The implementation of these three modules is specifically described below.
Step S4, the attention mechanism module in step S3 processes the initial relationship embedding vector obtained in step S2, and adds the information of the entities in the tuples to the relationship embedding vector in proportion to the importance of the entities in the tuples to obtain the processed relationship attention vector.
In this embodiment, when the attention mechanism module is used to process the initial relationship embedded vector obtained in step S2, attention is paidThe relation e in the tuple is input in the semantic mechanism module i Embedded vector of initial relationshipAnd corresponding initial entity embedding vector setsWherein the content of the first and second substances, representing a vectorI ≦ e ≦ d, 1 ≦ i ≦ e | e The dimension representing the relationship e when initialized as a vector, may be predefined, is a relation e i A matrix of all the entity vectors in (a),representing a vectorDimension of, | e i I represents the relationship e i Including the number of entities, d v Represents the dimension of the entity v when initialized as a vector;
first, the initial relation embedding vector $\mathbf{r}_i$ and the initial entity embedding vector set $\mathbf{V}_i$ are concatenated; the concatenated vector is then linearly mapped and passed through a LeakyReLU nonlinear function, yielding a projection vector $\mathbf{b}_i$ that contains the information of both the initial entity embedding vector set and the initial relation embedding vector. The calculation is shown in formula (1):

$$\mathbf{b}_i = \mathrm{LeakyReLU}\left(\mathbf{W}_1 \cdot \mathrm{concat}(\mathbf{r}_i, \mathbf{V}_i)\right) \tag{1}$$

In the above formula, $\mathbf{b}_i \in \mathbb{R}^{|e_i|}$ denotes the projection vector, $\mathbf{W}_1$ denotes a learnable mapping matrix, and concat denotes the concatenation operation;
the projection vector $\mathbf{b}_i$ is then processed by softmax to obtain the weights $\alpha_{ij}$ between the initial relation embedding vector $\mathbf{r}_i$ and the initial entity embedding vector set $\mathbf{V}_i$. The softmax calculation is shown in formula (2):

$$\alpha_{ij} = \frac{\exp(b_{ij})}{\sum_{k=1}^{|e_i|} \exp(b_{ik})} \tag{2}$$

In the above formula, softmax denotes the flexible maximum transfer (normalized exponential) function, $\exp(\cdot)$ denotes taking $e$ to the given power, and $b_{ij}$ denotes the $j$th row of the vector $\mathbf{b}_i$;
the relation attention vector $\mathbf{r}'_i$ is obtained by adding the sum of the products of the weights $\alpha_{ij}$ and the entity vectors $\mathbf{v}_j$ to the relation embedding vector, as shown in formula (3):

$$\mathbf{r}'_i = \mathbf{r}_i + \sum_{j=1}^{|e_i|} \alpha_{ij}\,\mathbf{v}_j \tag{3}$$
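The three steps above can be sketched in a few lines of NumPy. The dimensions, the weight matrix $\mathbf{W}_1$, and all variable names are illustrative assumptions for a 3-ary tuple, not the trained parameters of the ACLP model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_e = d_v = 8                        # relation and entity dims kept equal so they can be added
V = rng.normal(size=(3, d_v))        # initial entity embeddings of a 3-ary tuple
r = rng.normal(size=d_e)             # initial relation embedding
W1 = rng.normal(size=(1, d_e + d_v)) # mapping matrix: one attention score per entity

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

# formula (1): project concat(relation, entity) through the linear map and LeakyReLU
b = np.array([leaky_relu(W1 @ np.concatenate([r, v]))[0] for v in V])
# formula (2): softmax turns the scores into weights that sum to 1
alpha = np.exp(b) / np.exp(b).sum()
# formula (3): add entity information to the relation in proportion to the weights
r_att = r + alpha @ V
```

The weighted sum `alpha @ V` is what injects "the information of the entities in the tuple in proportion to their importance" into the relation vector.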
Step S5: feature extraction is performed on the initial entity embedding vector obtained in step S2 by the convolutional neural network module of step S3, and the information on the number of adjacent entities in the tuple is added to the convolution kernel of the module, to obtain the processed entity projection embedded vector.
An entity of the knowledge hypergraph can appear at different positions of several n-ary relations at the same time, and different positions imply different numbers and features of adjacent entities within the same tuple, so features are extracted according to the position of the entity $v_i$ in the tuple to obtain the convolution embedded vector. In this embodiment, the convolutional neural network module first takes the initial entity embedding vector $\mathbf{v}_i$ as input and uses a convolution kernel $\boldsymbol{\omega}_i$ containing tuple position information to extract the features of $\mathbf{v}_i$; the parameter $neb_i$ then adds the information on the number of adjacent entities to the kernel $\boldsymbol{\omega}_i$, so that the extracted features vary with the number of adjacent entities. The convolution embedded vector $\mathbf{c}_i$ obtained after the convolution is computed as shown in formula (4):

$$\mathbf{c}_i^{j} = \mathrm{conv}\left(\mathbf{v}_i,\; neb_i \cdot \boldsymbol{\omega}_i^{j}\right) \tag{4}$$

In the above formula, $\boldsymbol{\omega}_i^{j} \in \mathbb{R}^{l}$ denotes the $j$th row of the convolution kernel at the $i$th position in the tuple, and $l$ denotes the convolution kernel length;
to derive the complete mapping vector $\mathbf{m}_i$, the obtained convolution embedded vectors are concatenated and linearly mapped, as shown in formula (5):

$$\mathbf{m}_i = \mathbf{W}_2 \cdot \mathrm{concat}\left(\mathbf{c}_i^{1}, \ldots, \mathbf{c}_i^{n}\right) \tag{5}$$

In the above formula, $\mathbf{W}_2 \in \mathbb{R}^{d_v \times nq}$ denotes a linear mapping matrix and $q$ denotes the size of the feature map, $q = (d - l)/s + 1$, where $s$ is the convolution stride. Concatenating several vectors into a single vector increases the dimension, so the linear mapping matrix $\mathbf{W}_2$ maps the $nq$-dimensional vector back to a $d_v$-dimensional vector;
the entity projection embedded vector $\hat{\mathbf{v}}_i$ is obtained by adding the initial entity embedding vector $\mathbf{v}_i$ and the transformed mapping vector $\mathbf{m}_i$, as shown in formula (6):

$$\hat{\mathbf{v}}_i = \mathbf{v}_i + \mathbf{m}_i \tag{6}$$
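The position-aware convolution of formulas (4) to (6) can be sketched as follows; the kernel sizes, the stride, and the reading of $neb_i$ as a simple scaling of the kernel are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d_v, l, n = 8, 3, 2                 # entity dim, kernel length, kernels per position
v = rng.normal(size=d_v)            # initial entity embedding
omega = rng.normal(size=(n, l))     # position-specific convolution kernels
neb = 2                             # number of adjacent entities (assumed kernel scale)
s = 1                               # stride
q = (d_v - l) // s + 1              # feature-map size, q = (d - l)/s + 1

# formula (4): 1-D convolution with neighbour-count-adjusted kernels
c = np.array([[(neb * omega[k]) @ v[j:j + l] for j in range(q)]
              for k in range(n)])
# formula (5): concatenate the n feature maps and map back to d_v dimensions
W2 = rng.normal(size=(d_v, n * q))
m = W2 @ c.reshape(-1)
# formula (6): add the mapping vector to the initial embedding
v_proj = v + m
```

With $d_v=8$, $l=3$, $s=1$ the feature-map size is $q=6$, so the concatenated vector has $nq=12$ dimensions before being mapped back to $d_v=8$.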
Step S6: the vectors processed by the attention mechanism module and the convolutional neural network module are further processed by the optimization module.
The optimization module comprises a residual network (ResidualNet) and a multilayer perceptron. The entity projection embedded vector output by the convolutional neural network module is first processed by the residual network to obtain the entity residual vector; the skip connection adds a constant term to the otherwise shrinking gradient, thereby mitigating gradient vanishing. Then, to increase the nonlinear learning capability of the model, the multilayer perceptron continues to process the entity residual vector to obtain the entity perception vector. The specific steps are as follows:
(1) The entity projection embedded vector obtained from the convolutional neural network module is processed by the residual network, specifically as follows:
the residual function $F(x)$ of the residual network is a convolutional neural network, and the whole residual network is computed as shown in formula (7):

$$\mathbf{v}^{res}_i = \delta\left(F(\hat{\mathbf{v}}_i) + \hat{\mathbf{v}}_i\right) \tag{7}$$

In the above formula, $\mathbf{v}^{res}_i$ denotes the entity residual vector, $\delta$ denotes the ReLU activation function, and the convolution kernel at the $i$th position in the tuple belongs to $\mathbb{R}^{n \times l}$, where $n$ denotes the number of convolution kernels at that position and $l$ denotes the kernel length. The mapping result $F(x)$ and $\hat{\mathbf{v}}_i$ must be vectors of the same dimension.
When the two dimensions differ, a matrix $\mathbf{W}_s$ can be used to linearly map $\hat{\mathbf{v}}_i$ so that the dimensions match; the mapped calculation is shown in formula (7-1):

$$\mathbf{v}^{res}_i = \delta\left(F(\hat{\mathbf{v}}_i) + \mathbf{W}_s\,\hat{\mathbf{v}}_i\right) \tag{7\text{-}1}$$
The number of network layers in $F(x)$ can be chosen flexibly, as long as at least two layers are used; a single-layer network is not chosen because with a single layer formula (7) degenerates into little more than a linear layer and offers no advantage over other networks. Accordingly, in this embodiment $F(x)$ is a two-layer convolutional neural network.
The residual network restores the learning gradient for the model through a skip connection: after multiple layers of learning, the entity projection embedded vector is fed to the model again, so the entity residual vector largely retains the features and structural information of the nodes in the original hypergraph. The model therefore always carries the original information of the knowledge hypergraph during continued learning, the original gradient is restored, and the gradient vanishing problem is effectively alleviated.
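A minimal sketch of this skip connection, with a two-layer network (random weights, illustrative shapes) standing in for the residual function $F(x)$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
x = rng.normal(size=d)               # entity projection embedded vector
W_a = rng.normal(size=(d, d)) * 0.1  # stand-ins for the two layers of F(x)
W_b = rng.normal(size=(d, d)) * 0.1

relu = lambda z: np.maximum(z, 0.0)  # the delta activation in formula (7)

def F(x):
    # two-layer residual function, as chosen in this embodiment
    return W_b @ relu(W_a @ x)

# formula (7): the skip connection adds the original signal back before activation
x_res = relu(F(x) + x)
```

Because `x` is added back untouched, the gradient of `x_res` with respect to `x` always contains an identity term, which is the "constant added to the changing gradient" described above.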
(2) To further enhance the nonlinear learning capability of the model, the invention uses a multilayer perceptron to continue processing the entity residual vector.
The multilayer perceptron is a model for nonlinear mapping between input and output vectors. It takes the entity residual vector obtained from formula (7) as the input-layer vector, which is connected to the output signal through weights; the information propagation of the multilayer perceptron is expressed mathematically as formula (8):

$$\mathbf{h}^{x} = \delta_x\left(\mathbf{W}^{x}\,\mathbf{h}^{x-1} + \mathbf{b}^{x}\right) \tag{8}$$

In the above formula, $\mathbf{h}^{x}$ denotes the entity perception vector output by the $x$th layer of neurons, $\mathbf{W}^{x}$ denotes the transformation parameters from layer $x-1$ to layer $x$, $\mathbf{b}^{x}$ denotes the bias parameter of the $x$th layer, and $\delta_x$ denotes the activation function of the $x$th layer.
When a multilayer perceptron is used, the number of neuron layers must be strictly controlled: too many layers give the model an excessively strong learning capability and cause overfitting. Experiments found the training effect to be best with four layers of neurons, so the multilayer perceptron used in the invention uses two layers of neurons as hidden layers, and the information propagation of the four-layer perceptron is expressed mathematically as formula (8-1):

$$\mathbf{v}^{mlp}_i = \delta_4\left(\mathbf{W}^{4}\,\delta_3\left(\mathbf{W}^{3}\,\delta_2\left(\mathbf{W}^{2}\,\mathbf{v}^{res}_i + \mathbf{b}^{2}\right) + \mathbf{b}^{3}\right) + \mathbf{b}^{4}\right) \tag{8\text{-}1}$$
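A four-layer perceptron (input layer, two hidden layers, output layer) amounts to applying formula (8) three times. A sketch with illustrative shapes, random weights, and ReLU as every layer's activation:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
h = rng.normal(size=d)               # entity residual vector (input layer)
relu = lambda z: np.maximum(z, 0.0)

# three weight transforms connect the four layers of neurons
for _ in range(3):
    W = rng.normal(size=(d, d)) * 0.1  # layer transformation parameters
    b = np.zeros(d)                    # layer bias
    h = relu(W @ h + b)                # formula (8) for one layer

v_mlp = h                            # entity perception vector
```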
Step S7: the processed tuple is scored by a preset scoring module to obtain the prediction result, and whether the scoring result of the tuple is correct is judged according to the evaluation index: if the tuple is correct, it is added into the hypergraph to complete the hypergraph; if the tuple is wrong, it is discarded. The processed tuple comprises a processed relation vector and a processed entity vector; after optimization by the optimization module, these are the relation attention vector and the entity perception vectors.
The processed tuple is scored by the preset scoring module, specifically as follows: after the initial relation embedding vector and the initial entity embedding vectors are processed and optimized by the optimization module, the entity perception vectors are obtained; the tuple $T$ is then scored by the inner product between the relation attention vector $\mathbf{r}'$ and the perception vectors of all entities within the tuple, as shown in formula (9):

$$\phi(T) = \left\langle \mathbf{r}',\, \mathbf{v}^{p}_1,\, \ldots,\, \mathbf{v}^{p}_{|e|} \right\rangle = \sum_{k=1}^{d_v} r'_k \prod_{j=1}^{|e|} v^{p}_{j,k} \tag{9}$$
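Reading formula (9) as the generalized inner product commonly used in n-ary link prediction (an assumed reading; the original gives only the prose description), the scoring step can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_ent = 8, 3
r_att = rng.normal(size=d)            # relation attention vector
V_mlp = rng.normal(size=(n_ent, d))   # entity perception vectors of the tuple

# formula (9): elementwise product over all entity vectors, weighted by the
# relation vector and summed over dimensions, giving one scalar score
score = float(np.sum(r_att * np.prod(V_mlp, axis=0)))
```

A higher score indicates the tuple is more likely to be a correct link.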
then, judging whether the processed tuple is correct, specifically comprising the following steps:
When making a prediction, the entity $v_i$ of the tuple $T$ is replaced by arbitrary entities to create a set of negative tuples $G_{neg(T)}$, whose elements are denoted $T'$, $T' \in G_{neg(T)}$. Each tuple $T'$ is scored with formula (9), and the tuples in $G_{neg(T)}$ are sorted in descending order of score to obtain the rank of the tuple $T$ in $G_{neg(T)}$. Depending on how the rank is used, either of the MRR or Hit@n evaluation methods can be adopted; both are used in the specific experiments to check and ensure the accuracy of the results.
MRR stands for mean reciprocal rank: the reciprocals of the ranks of the tuples $T'$ in $G_{neg(T)}$ are averaged, as shown in formula (10):

$$\mathrm{MRR} = \frac{1}{\left|G_{neg(T)}\right|} \sum_{T' \in G_{neg(T)}} \frac{1}{rank(T')} \tag{10}$$

In the above formula, $\Sigma$ denotes traversal summation of the reciprocal ranks over $G_{neg(T)}$; the larger the MRR value, the better the effect;
Hit@n denotes a class of evaluation methods, calculated as shown in formula (11):

$$\mathrm{Hit@}n = \frac{num}{\left|G_{neg(T)}\right|} \tag{11}$$

If $rank$ is not greater than $n$, $T'$ is regarded as a positive tuple, with $n$ taken as 1, 3 or 10, and $num$ denotes the number of positive tuples; the larger the Hit@n, the better the effect.
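Both metrics reduce to simple computations over the ranks; a sketch in which the `ranks` list is purely illustrative:

```python
# MRR averages the reciprocal ranks; Hit@n counts the fraction of ranks
# that land within the top n, as described above.
def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hit_at_n(ranks, n):
    return sum(1 for r in ranks if r <= n) / len(ranks)

ranks = [1, 2, 5, 10]                # illustrative ranks of correct tuples
print(round(mrr(ranks), 3))          # (1 + 0.5 + 0.2 + 0.1)/4 = 0.45
print(hit_at_n(ranks, 3))            # 2 of the 4 ranks are <= 3, so 0.5
```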
The foregoing merely illustrates the principles and preferred embodiments of the invention and many variations and modifications may be made by those skilled in the art in light of the foregoing description, which are within the scope of the invention.
Claims (10)
1. A knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network is characterized by being used for reasoning and predicting unknown tuples in a knowledge hypergraph and at least comprising the following steps of:
S1, loading the knowledge hypergraph to be complemented to obtain the entities and relations in the knowledge hypergraph;
S2, initializing the entities and relations loaded in step S1 to obtain initial entity embedding vectors and initial relation embedding vectors;
S3, inputting the initial entity embedding vectors and initial relation embedding vectors obtained in step S2 into an ACLP model in tuple form for training, wherein the ACLP model at least comprises an attention mechanism module and a convolutional neural network module;
S4, processing the initial relation embedding vector obtained in step S2 through the attention mechanism module of step S3, and adding the information of the entities in the tuple to the relation embedding vector in proportion to the importance of the entities to the relation, to obtain the processed relation attention vector;
S5, performing feature extraction on the initial entity embedding vector obtained in step S2 through the convolutional neural network module of step S3, and adding the information on the number of adjacent entities in the tuple to the convolution kernel of the convolutional neural network module, to obtain the processed entity projection embedded vector;
S6, scoring the processed tuple through a preset scoring module to obtain the prediction result, and judging whether the scoring result of the tuple is correct according to the evaluation index: if the tuple is correct, adding the correct tuple into the knowledge hypergraph to complement it; if the tuple is wrong, discarding the wrong tuple;
wherein the processed tuple comprises a processed relationship vector and a processed entity vector.
2. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 1, wherein the knowledge hypergraph is defined as a graph consisting of vertices and hyperedges, written as:
KHG={V,E}
In the above formula, $V = \{v_1, v_2, \ldots, v_{|V|}\}$ denotes the set of entities in the KHG and $|V|$ denotes the number of entities contained in the KHG; $E = \{e_1, e_2, \ldots, e_{|E|}\}$ denotes the set of relationships between entities, i.e. the set of hyperedges, and $|E|$ denotes the number of hyperedges contained in the KHG; any hyperedge $e$ corresponds to a tuple $T = (v_1, v_2, \ldots, v_{|e|})$, $T \in \tau$, where $|e|$ denotes the number of entities contained in the hyperedge $e$, i.e. its arity, and $\tau$ denotes the set of all tuples of the ideal complete target knowledge hypergraph KHG.
3. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 2, wherein step S4 specifically comprises:
the inputs to the attention mechanism module are the initial relation embedding vector $\mathbf{r}_i \in \mathbb{R}^{d_e}$ of the relation $e_i$ in the tuple and the corresponding set of initial entity embedding vectors $\mathbf{V}_i \in \mathbb{R}^{|e_i| \times d_v}$, where $d_e$ denotes the dimension of the relation $e$ when initialized as a vector, $1 \le i \le |E|$, $\mathbf{V}_i$ is the matrix of all entity vectors in the relation $e_i$, $|e_i|$ denotes the number of entities contained in relation $e_i$, and $d_v$ denotes the dimension of the entity $v$ when initialized as a vector;
first, the initial relation embedding vector $\mathbf{r}_i$ and the initial entity embedding vector set $\mathbf{V}_i$ are concatenated; the concatenated vector is then linearly mapped and passed through a LeakyReLU nonlinear function, yielding a projection vector $\mathbf{b}_i$ that contains the information of both the initial entity embedding vector set and the initial relation embedding vector, as shown in formula (1):

$$\mathbf{b}_i = \mathrm{LeakyReLU}\left(\mathbf{W}_1 \cdot \mathrm{concat}(\mathbf{r}_i, \mathbf{V}_i)\right) \tag{1}$$

In the above formula, $\mathbf{b}_i \in \mathbb{R}^{|e_i|}$ denotes the projection vector, $\mathbf{W}_1$ denotes a learnable mapping matrix, and concat denotes the concatenation operation;
the projection vector $\mathbf{b}_i$ is processed by softmax to obtain the weights $\alpha_{ij}$ between the initial relation embedding vector $\mathbf{r}_i$ and the initial entity embedding vector set $\mathbf{V}_i$, as shown in formula (2):

$$\alpha_{ij} = \frac{\exp(b_{ij})}{\sum_{k=1}^{|e_i|} \exp(b_{ik})} \tag{2}$$

In the above formula, softmax denotes the flexible maximum transfer (normalized exponential) function, $\exp(\cdot)$ denotes taking $e$ to the given power, and $b_{ij}$ denotes the $j$th row of the vector $\mathbf{b}_i$;
the relation attention vector $\mathbf{r}'_i$ is obtained by adding the sum of the products of the weights $\alpha_{ij}$ and the entity vectors $\mathbf{v}_j$ to the relation embedding vector, as shown in formula (3):

$$\mathbf{r}'_i = \mathbf{r}_i + \sum_{j=1}^{|e_i|} \alpha_{ij}\,\mathbf{v}_j \tag{3}$$
4. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 3, wherein step S5 specifically comprises:
first, the convolutional neural network module takes the initial entity embedding vector $\mathbf{v}_i$ as input and uses a convolution kernel $\boldsymbol{\omega}_i$ containing tuple position information to extract the features of $\mathbf{v}_i$; the parameter $neb_i$ then adds the information on the number of adjacent entities to the kernel $\boldsymbol{\omega}_i$, so that the extracted features vary with the number of adjacent entities, and the convolution embedded vector $\mathbf{c}_i$ is obtained after the convolution, as shown in formula (4):

$$\mathbf{c}_i^{j} = \mathrm{conv}\left(\mathbf{v}_i,\; neb_i \cdot \boldsymbol{\omega}_i^{j}\right) \tag{4}$$

In the above formula, $\boldsymbol{\omega}_i^{j} \in \mathbb{R}^{l}$ denotes the $j$th row of the convolution kernel at the $i$th position in the tuple, and $l$ denotes the convolution kernel length;
to derive the complete mapping vector $\mathbf{m}_i$, the obtained convolution embedded vectors are concatenated and linearly mapped, as shown in formula (5):

$$\mathbf{m}_i = \mathbf{W}_2 \cdot \mathrm{concat}\left(\mathbf{c}_i^{1}, \ldots, \mathbf{c}_i^{n}\right) \tag{5}$$

In the above formula, $\mathbf{W}_2 \in \mathbb{R}^{d_v \times nq}$ denotes a linear mapping matrix and $q$ denotes the size of the feature map, $q = (d - l)/s + 1$, where $s$ is the convolution stride; concatenating several vectors into a single vector increases the dimension, so the linear mapping matrix $\mathbf{W}_2$ maps the $nq$-dimensional vector back to a $d_v$-dimensional vector;
the entity projection embedded vector $\hat{\mathbf{v}}_i$ is obtained by adding the initial entity embedding vector $\mathbf{v}_i$ and the transformed mapping vector $\mathbf{m}_i$, as shown in formula (6):

$$\hat{\mathbf{v}}_i = \mathbf{v}_i + \mathbf{m}_i \tag{6}$$
5. the method of knowledge hypergraph link prediction combining an attention mechanism with a convolutional neural network as claimed in claim 4, wherein the ACLP model further comprises an optimization module comprising at least a residual network.
6. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 5, wherein before step S6, the entity projection embedded vector obtained from the convolutional neural network module is processed by the residual network, specifically comprising the following steps:
the residual function $F(x)$ of the residual network is a convolutional neural network, and the whole residual network is computed as shown in formula (7):

$$\mathbf{v}^{res}_i = \delta\left(F(\hat{\mathbf{v}}_i) + \hat{\mathbf{v}}_i\right) \tag{7}$$

In the above formula, $\mathbf{v}^{res}_i$ denotes the entity residual vector, $\delta$ denotes the ReLU activation function, and the convolution kernel at the $i$th position in the tuple belongs to $\mathbb{R}^{n \times l}$, where $n$ denotes the number of convolution kernels at that position and $l$ denotes the kernel length; the mapping result $F(x)$ and $\hat{\mathbf{v}}_i$ are vectors of the same dimension.
7. The method of knowledge hypergraph link prediction combining an attention mechanism with a convolutional neural network as claimed in claim 6, wherein the optimization module further comprises a multi-layered perceptron.
8. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 7, wherein before step S6, the entity residual vector is processed by the multilayer perceptron, specifically comprising the following steps:
the multilayer perceptron takes the entity residual vector obtained from formula (7) as the input-layer vector, which is connected to the output signal through weights; the information propagation of the multilayer perceptron is expressed mathematically as formula (8):

$$\mathbf{h}^{x} = \delta_x\left(\mathbf{W}^{x}\,\mathbf{h}^{x-1} + \mathbf{b}^{x}\right) \tag{8}$$

In the above formula, $\mathbf{h}^{x}$ denotes the entity perception vector output by the $x$th layer of neurons, $\mathbf{W}^{x}$ denotes the transformation parameters from layer $x-1$ to layer $x$, $\mathbf{b}^{x}$ denotes the bias parameter of the $x$th layer, and $\delta_x$ denotes the activation function of the $x$th layer.
9. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 8, wherein in step S6, the processed tuple is scored by a preset scoring module, specifically comprising the following steps:
after the initial relation embedding vector is processed in step S4 and the initial entity embedding vectors are processed in step S5 and optimized by the optimization module, the entity perception vectors are obtained; the tuple $T$ is scored by the inner product between the relation attention vector $\mathbf{r}'$ and the perception vectors of all entities within the tuple, as shown in formula (9):

$$\phi(T) = \left\langle \mathbf{r}',\, \mathbf{v}^{p}_1,\, \ldots,\, \mathbf{v}^{p}_{|e|} \right\rangle = \sum_{k=1}^{d_v} r'_k \prod_{j=1}^{|e|} v^{p}_{j,k} \tag{9}$$
10. The knowledge hypergraph link prediction method combining an attention mechanism and a convolutional neural network as claimed in claim 9, wherein in step S6, judging whether the processed tuple is correct specifically comprises the following steps: when making a prediction, the entity $v_i$ of the tuple $T$ is replaced by arbitrary entities to create a set of negative tuples $G_{neg(T)}$, whose elements are denoted $T'$, $T' \in G_{neg(T)}$; each tuple $T'$ is scored with formula (9), and the tuples in $G_{neg(T)}$ are sorted in descending order of score to obtain the rank of the tuple $T$ in $G_{neg(T)}$; depending on how the rank is used, either of the MRR or Hit@n evaluation methods is adopted;
MRR stands for mean reciprocal rank: the reciprocals of the ranks of the tuples $T'$ in $G_{neg(T)}$ are averaged, as shown in formula (10):

$$\mathrm{MRR} = \frac{1}{\left|G_{neg(T)}\right|} \sum_{T' \in G_{neg(T)}} \frac{1}{rank(T')} \tag{10}$$

In the above formula, $\Sigma$ denotes traversal summation of the reciprocal ranks over $G_{neg(T)}$; the larger the MRR value, the better the effect;
Hit@n denotes a class of evaluation methods, calculated as shown in formula (11):

$$\mathrm{Hit@}n = \frac{num}{\left|G_{neg(T)}\right|} \tag{11}$$

If $rank$ is not greater than $n$, $T'$ is regarded as a positive tuple, with $n$ taken as 1, 3 or 10, and $num$ denotes the number of positive tuples; the larger the Hit@n, the better the effect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210475730.1A CN114817568B (en) | 2022-04-29 | Knowledge hypergraph link prediction method combining attention mechanism and convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114817568A true CN114817568A (en) | 2022-07-29 |
CN114817568B CN114817568B (en) | 2024-05-10 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115757806A (en) * | 2022-09-21 | 2023-03-07 | 清华大学 | Hyper-relation knowledge graph embedding method and device, electronic equipment and storage medium |
CN116186295A (en) * | 2023-04-28 | 2023-05-30 | 湖南工商大学 | Attention-based knowledge graph link prediction method, attention-based knowledge graph link prediction device, attention-based knowledge graph link prediction equipment and attention-based knowledge graph link prediction medium |
CN116579425A (en) * | 2023-07-13 | 2023-08-11 | 北京邮电大学 | Super-relationship knowledge graph completion method based on global and local level attention |
CN117314266A (en) * | 2023-11-30 | 2023-12-29 | 贵州大学 | Novel intelligent scientific and technological talent evaluation method based on hypergraph attention mechanism |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020140386A1 (en) * | 2019-01-02 | 2020-07-09 | 平安科技(深圳)有限公司 | Textcnn-based knowledge extraction method and apparatus, and computer device and storage medium |
US20200226472A1 (en) * | 2019-01-10 | 2020-07-16 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for a supra-fusion graph attention model for multi-layered embeddings and deep learning applications |
CN112417219A (en) * | 2020-11-16 | 2021-02-26 | 吉林大学 | Hyper-graph convolution-based hyper-edge link prediction method |
CN112613602A (en) * | 2020-12-25 | 2021-04-06 | 神行太保智能科技(苏州)有限公司 | Recommendation method and system based on knowledge-aware hypergraph neural network |
CN112883200A (en) * | 2021-03-15 | 2021-06-01 | 重庆大学 | Link prediction method for knowledge graph completion |
CN113051440A (en) * | 2021-04-12 | 2021-06-29 | 北京理工大学 | Link prediction method and system based on hypergraph structure |
US20210216881A1 (en) * | 2020-01-10 | 2021-07-15 | Accenture Global Solutions Limited | System for Multi-Task Distribution Learning With Numeric-Aware Knowledge Graphs |
CN113792768A (en) * | 2021-08-27 | 2021-12-14 | 清华大学 | Hypergraph neural network classification method and device |
CN113962358A (en) * | 2021-09-29 | 2022-01-21 | 西安交通大学 | Information diffusion prediction method based on time sequence hypergraph attention neural network |
Non-Patent Citations (1)
Title |
---|
Wang Weimei; Shi Yimin; Li Guanyu: "Improved Capsule Network Method for Knowledge Graph Completion", Computer Engineering, No. 08, 31 December 2020 (2020-12-31) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |