CN116757283A - Knowledge graph link prediction method - Google Patents
- Publication number
- CN116757283A (application CN202310749432.1A)
- Authority
- CN
- China
- Legal status: Pending (the status is an assumption by Google, not a legal conclusion)
Classifications
- G06N5/025 — Computing arrangements using knowledge-based models; knowledge engineering; extracting rules from data
- G06F16/367 — Information retrieval; creation of semantic tools; ontology
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
Abstract
The invention discloses a knowledge graph link prediction method comprising the following steps: pre-train the entities and relations with a TransE model to obtain their initialized embedded representations; construct a training set and a test set from these initialized embeddings, feed the training-set data into a neural network model based on a convolutional neural network and a self-attention mechanism, compute the loss function value of the training set on this model, and store the optimized model parameters; then, using the optimized parameters, score and rank the candidate triples of the test set to predict missing entities or relations, completing link prediction over the knowledge graph. Oriented to the knowledge graph link prediction task, the invention combines a convolutional neural network with a self-attention mechanism to fully mine the interaction information within triples, thereby improving the model's link prediction capability.
Description
Technical Field
The invention relates to the technical field of knowledge graph link prediction, in particular to a knowledge graph link prediction method.
Background
The knowledge graph models entities and the relations among them in the form of a graph. It offers strong common-sense understanding and reasoning capability, is widely applied in fields such as search engines, intelligent question answering, and intelligent healthcare, and is therefore an important focus for researchers. An incomplete knowledge graph limits its development and application in artificial intelligence, so knowledge graph link prediction is key to improving downstream task performance. Knowledge graph embedding serves as the basis for link prediction methods, which fall into three categories: translation-based, tensor-decomposition-based, and neural-network-based.
Translation-based methods typically treat a relation as a translation, or transfer, from the head entity to the tail entity, and score triples by a distance computation to perform knowledge graph link prediction. They have few parameters and are simple to compute, but in general cannot learn complex relation types. Tensor-decomposition-based methods factorize the knowledge graph tensor into the product of an entity matrix, a relation tensor, and the transpose of the entity matrix to realize link prediction. They can be fully expressive, but require a certain mathematical foundation and are not easy to apply to large-scale knowledge graphs. Neural-network-based methods model entities and relations with a neural network model to obtain embedded representations and then perform link prediction; they can model complex relation types and obtain high-quality, expressive embeddings. Convolutional neural networks have achieved a number of outstanding results on the knowledge graph link prediction task: simple and efficient convolution operations capture rich interaction information between entities and relations, mine the latent semantic information of triples, and yield high-quality entity and relation embeddings, thereby improving link prediction.
However, when existing convolutional-neural-network methods for knowledge graph link prediction learn the interaction information between entities and relations, they consider only the interaction features at the same dimension of the triple embeddings and ignore the association information across different dimensions of the entity and relation embeddings, which degrades the performance of knowledge graph link prediction.
Disclosure of Invention
To address this deficiency of the prior art, the knowledge graph link prediction method provided by the invention solves the problem that existing convolutional-neural-network methods consider only the same-dimension interaction features of the triple embeddings and ignore the cross-dimension association information between entities and relations, which degrades link prediction performance.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: a knowledge graph link prediction method comprises the following steps:
s1: pre-training the entity and the relation by using a TransE model to obtain an initialized embedded representation of the entity and the relation;
s2: constructing a training set and a testing set based on the initialized entity and the embedded representation of the relation, inputting data of the training set into a neural network model based on a convolutional neural network and a self-attention mechanism for training, calculating a loss function value of the training set on the neural network model based on the convolutional neural network and the self-attention mechanism, and storing optimized model parameters;
s3: based on the optimized model parameters, scoring and sorting the candidate triplets of the test set, predicting missing entities or relations, and finishing the link prediction of the knowledge graph.
The beneficial effect of the above scheme is that the method uses a convolutional neural network and a self-attention mechanism to perform knowledge graph link prediction, fully mining the interaction information within triples and improving the model's prediction capability. It thereby solves the problem that existing convolutional-neural-network methods consider only same-dimension interaction features of the triple embeddings, ignore cross-dimension association information between entities and relations, and consequently suffer degraded link prediction performance.
Further, constructing the training set in S2 includes the following sub-steps:
s2-1: inputting an initial training set, an entity set, a relation set, training rounds, batch training numbers, a negative sampling rate, a learning rate and regularization parameters into a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-2: taking samples with the same number as the batch training number from the initial training set as a positive sample set of the iterative training, and obtaining a negative sample set of the iterative training according to a negative sampling rate;
s2-3: a training set for iterative training is constructed from the positive and negative sample sets.
The beneficial effect of this further scheme is that the parameters are input for pre-training with the TransE model to obtain the initial embedded representations of entities and relations, and a training set is constructed from the sample sets for subsequent model training.
Further, the step S2 further comprises the following sub-steps:
s2-4: inputting each triplet in the training set into a neural network model based on a convolutional neural network and a self-attention mechanism, and calculating a score of each triplet;
s2-5: based on the triplet score, calculating a loss function value of the training set on a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-6: performing iterative updates with an Adam optimizer to obtain the optimized model parameters that minimize the total sample loss function value, together with the updated vector representations of entities and relations, and storing the optimized model parameters.
The beneficial effect of this further scheme is that the optimized model parameters minimizing the total sample loss function value are obtained and stored for predicting missing entities or relations.
Further, each triplet score is calculated in S2-4, comprising the following sub-steps:
s2-4-1: Splicing the initialized embedded representations of the entity and the relation, representing the initial triplet embedding matrix X as

X = [h, r, t] ∈ ℝ^(k×3)

where h, r and t are the embedding vectors of the triplet, ℝ^(k×3) denotes the spliced matrix of k rows and 3 columns, and k is the dimension of the initialized entity and relation embeddings;
s2-4-2: Generating the three matrices of the self-attention mechanism with three mapping matrices, namely the query matrix Q, the key matrix K and the value matrix V;
s2-4-3: performing matrix multiplication operation on the transpose of the query matrix Q and the key matrix K, and performing normalization processing to obtain an attention distribution matrix;
s2-4-4: Performing a matrix product of the attention distribution matrix and the value matrix V to obtain an output matrix X′, in which each row vector is a weighted sum of all the initial row vectors, so that information from different dimensions of the triplet is fused;
s2-4-5: Splicing the initial triplet embedding matrix X and the output matrix X′ obtained by the self-attention mechanism into a matrix [X, X′] of k rows and 6 columns as the input of the subsequent convolutional layer;
s2-4-6: Convolving the matrix [X, X′] with τ convolution kernels of size 1×3 to obtain τ feature matrices of size k×2;
s2-4-7: Splicing the feature matrices after a nonlinear activation function and performing a dot product with a weight matrix w to obtain the triplet score, with the formula

f_r(h, t) = concat(g([X, X′] ∗ Ω)) · w

where f_r(h, t) is the scoring function, concat denotes splicing, ∗ denotes the convolution operation, g is the activation function, and Ω is the set of convolution kernels.
The beneficial effect of this further scheme is that computing the triplet scores makes it possible to calculate the total loss function value of the samples.
Further, the loss function value in S2-5 is calculated with a logistic regression loss function L to which a regularization term is added, with the formula

L = Σ_{(h,r,t) ∈ T⁺ ∪ T⁻} log(1 + exp(−y_(h,r,t) · f_r(h, t))) + (λ/2)‖w‖₂²

where y_(h,r,t) is the label of the positive or negative sample, T⁺ is the positive sample set, T⁻ is the negative sample set, ∪ denotes the union, exp is the exponential function with base e, λ is the regularization coefficient, and ‖w‖₂² denotes the squared two-norm.
The beneficial effect of this further scheme is that the loss function value is calculated with a logistic regression loss function that includes a regularization term; the loss values of all positive and negative samples are computed and then optimized by gradient descent to obtain the optimal model parameters and the final embedded representations of entities and relations.
Further, in S3, negative sampling is performed on the triples of the test set to obtain a number of corresponding candidate triples; the trained scoring function f_r(h, t) scores the candidate triples, which are arranged in descending order of score, and the top-ranked entity or relation is taken as the missing one, completing the link prediction of the knowledge graph.
The beneficial effect of this further scheme is that the candidate triples of the test set are scored and sorted, and the top-ranked triples are selected as the true triples, realizing link prediction of entities and relations.
Drawings
FIG. 1 is a flowchart of a knowledge graph link prediction method.
Fig. 2 is a schematic diagram of TransE, a classical translation-based model.
Fig. 3 is a schematic diagram of a model framework of a knowledge graph link prediction method.
Detailed Description
The invention will be further described with reference to the drawings and specific examples.
As shown in fig. 1, a knowledge graph link prediction method includes the following steps:
s1: pre-training the entity and the relation by using a TransE model to obtain an initialized embedded representation of the entity and the relation;
s2: constructing a training set and a testing set based on the initialized entity and the embedded representation of the relation, inputting data of the training set into a neural network model based on a convolutional neural network and a self-attention mechanism for training, calculating a loss function value of the training set on the neural network model based on the convolutional neural network and the self-attention mechanism, and storing optimized model parameters;
s3: and (3) based on the optimized model parameters, scoring and sorting the candidate triplets of the test set, predicting missing entities or relations, and finishing the link prediction of the knowledge graph.
In one embodiment of the invention, to learn more accurate vector representations of entities and relations, the entities and relations are first pre-trained with the TransE model to obtain their initial embedded representations. As shown in fig. 2, a schematic diagram of this classical translation-based model, both entities and relations are modeled as vectors in the same space, and a relation is viewed as a translation between vectors, so that the vector sum of the head entity and the relation is close to the vector representation of the tail entity. The scoring function of TransE is therefore defined as a distance value, enabling model training.
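As a sketch of this pre-training step, the TransE score can be written as the negative distance between h + r and t. This is a minimal illustration with toy NumPy vectors; the embedding values and the use of the L2 norm are assumptions for the example, not details taken from the patent.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE models a relation as a translation: h + r should be close
    to t for a true triple.  The score is the negative L2 distance, so a
    higher score means a more plausible triple."""
    return -np.linalg.norm(h + r - t, ord=2)

# Toy 4-dimensional embeddings (illustrative values only).
h = np.array([0.1, 0.2, 0.3, 0.4])
r = np.array([0.5, 0.1, 0.0, 0.2])
t_true = h + r + 0.01          # a tail that almost satisfies h + r ≈ t
t_false = np.array([2.0, 2.0, 2.0, 2.0])

assert transe_score(h, r, t_true) > transe_score(h, r, t_false)
```

During pre-training, embeddings would be adjusted so that true triples score higher than corrupted ones; the initialized vectors are then handed to the CNN-and-attention model.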
The training set construction in S2 includes the following sub-steps:
s2-1: inputting an initial training set, an entity set, a relation set, training rounds, batch training numbers, a negative sampling rate, a learning rate and regularization parameters into a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-2: taking samples with the same number as the batch training number from the initial training set as a positive sample set of the iterative training, and obtaining a negative sample set of the iterative training according to a negative sampling rate;
s2-3: a training set for iterative training is constructed from the positive and negative sample sets.
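The three sub-steps above can be sketched as follows. This is a hedged illustration: the function name, the uniform head-or-tail corruption, and the fixed random seed are assumptions made for the example, not details from the patent.

```python
import random

def build_batch(train_triples, entities, batch_size, neg_rate, rng=random.Random(0)):
    """Draw a batch of positive triples and corrupt the head or tail of
    each one `neg_rate` times to form the negative sample set."""
    positives = rng.sample(train_triples, batch_size)
    negatives = []
    for (h, r, t) in positives:
        for _ in range(neg_rate):
            if rng.random() < 0.5:                # corrupt the head ...
                corrupt = (rng.choice(entities), r, t)
            else:                                  # ... or the tail
                corrupt = (h, r, rng.choice(entities))
            negatives.append(corrupt)
    return positives, negatives

triples = [("a", "likes", "b"), ("b", "likes", "c"), ("c", "knows", "a")]
pos, neg = build_batch(triples, ["a", "b", "c", "d"], batch_size=2, neg_rate=3)
assert len(pos) == 2 and len(neg) == 6
```

The positive and negative sets together form the training set for one iteration, matching s2-2 and s2-3.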
S2 also includes the following sub-steps:
s2-4: inputting each triplet in the training set into a neural network model based on a convolutional neural network and a self-attention mechanism, and calculating a score of each triplet;
s2-5: based on the triplet score, calculating a loss function value of the training set on a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-6: performing iterative updates with an Adam optimizer to obtain the optimized model parameters that minimize the total sample loss function value, together with the updated vector representations of entities and relations, and storing the optimized model parameters.
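The Adam update of s2-6 can be sketched in isolation; these are the standard Adam equations, while the toy objective, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, bias
    correction, then a scaled gradient step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 from x = 1; the parameter should approach 0.
theta = np.array([1.0]); m = np.zeros(1); v = np.zeros(1)
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
assert abs(theta[0]) < 0.2
```

In the method, `theta` would hold the model parameters and the entity and relation embeddings, updated against the gradient of the regularized loss.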
As shown in FIG. 3, each triplet score is calculated in S2-4, comprising the sub-steps of:
s2-4-1: Splicing the initialized embedded representations of the entity and the relation, representing the initial triplet embedding matrix X as

X = [h, r, t] ∈ ℝ^(k×3)

where h, r and t are the embedding vectors of the triplet, ℝ^(k×3) denotes the spliced matrix of k rows and 3 columns, and k is the dimension of the initialized entity and relation embeddings;
s2-4-2: Generating the three matrices of the self-attention mechanism with three mapping matrices, namely the query matrix Q, the key matrix K and the value matrix V;
s2-4-3: performing matrix multiplication operation on the transpose of the query matrix Q and the key matrix K, and performing normalization processing to obtain an attention distribution matrix;
s2-4-4: Performing a matrix product of the attention distribution matrix and the value matrix V to obtain an output matrix X′, in which each row vector is a weighted sum of all the initial row vectors, so that information from different dimensions of the triplet is fused;
s2-4-5: Splicing the initial triplet embedding matrix X and the output matrix X′ obtained by the self-attention mechanism into a matrix [X, X′] of k rows and 6 columns as the input of the subsequent convolutional layer;
s2-4-6: Convolving the matrix [X, X′] with τ convolution kernels of size 1×3 to obtain τ feature matrices of size k×2;
s2-4-7: Splicing the feature matrices after a nonlinear activation function and performing a dot product with a weight matrix w to obtain the triplet score, with the formula

f_r(h, t) = concat(g([X, X′] ∗ Ω)) · w

where f_r(h, t) is the scoring function, concat denotes splicing, ∗ denotes the convolution operation, g is the activation function, and Ω is the set of convolution kernels.
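A minimal NumPy sketch of steps s2-4-1 through s2-4-7 follows. The random initialization, the softmax normalization, the ReLU activation, and reading the 1×3 kernels with stride 3 (so that each feature matrix comes out k×2, matching the shapes stated above) are all assumptions made for the example, not details fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
k, tau = 8, 4                                    # embedding dim, number of kernels

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def score_triple(h, r, t, Wq, Wk, Wv, kernels, w):
    X = np.stack([h, r, t], axis=1)              # k x 3: columns are h, r, t
    Q, K, V = X @ Wq, X @ Wk, X @ Wv             # mapped matrices, each k x 3
    A = softmax(Q @ K.T / np.sqrt(3))            # k x k attention distribution
    X_out = A @ V                                # each row mixes all k dimensions
    Xc = np.concatenate([X, X_out], axis=1)      # k x 6 convolution input
    feats = []
    for ker in kernels:                          # length-3 kernels, stride 3
        fm = np.stack([Xc[:, 0:3] @ ker, Xc[:, 3:6] @ ker], axis=1)  # k x 2
        feats.append(np.maximum(fm, 0.0))        # ReLU activation
    return float(np.concatenate(feats, axis=None) @ w)  # dot product with w

Wq, Wk, Wv = (rng.normal(size=(3, 3)) for _ in range(3))
kernels = rng.normal(size=(tau, 3))
w = rng.normal(size=2 * k * tau)
s = score_triple(rng.normal(size=k), rng.normal(size=k), rng.normal(size=k),
                 Wq, Wk, Wv, kernels, w)
assert np.isfinite(s)
```

In training, `Wq`, `Wk`, `Wv`, the kernels, and `w` would be learned jointly with the embeddings rather than drawn at random.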
The loss function value in S2-5 is calculated with a logistic regression loss function L to which a regularization term is added, with the formula

L = Σ_{(h,r,t) ∈ T⁺ ∪ T⁻} log(1 + exp(−y_(h,r,t) · f_r(h, t))) + (λ/2)‖w‖₂²

where y_(h,r,t) is the label of the positive or negative sample, T⁺ is the positive sample set, T⁻ is the negative sample set, ∪ denotes the union, exp is the exponential function with base e, λ is the regularization coefficient, and ‖w‖₂² denotes the squared two-norm.
The goal of model training is to minimize the loss function value so that correct triples receive higher scores and incorrect triples receive lower scores, better distinguishing positive from negative samples; a logistic regression loss function with an added regularization term is used here.
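The regularized logistic regression loss described above can be sketched as follows; the sign convention y ∈ {+1, −1} and the one-half factor on the regularizer are assumptions made for the example.

```python
import numpy as np

def logistic_loss(scores, labels, w, lam):
    """Regularized logistic loss over positive (label +1) and negative
    (label -1) triples: sum of log(1 + exp(-y * f)) over the batch,
    plus (lam / 2) * ||w||_2^2 on the dot-product weights."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, float)
    data_term = np.sum(np.log1p(np.exp(-labels * scores)))
    return data_term + 0.5 * lam * float(np.dot(w, w))

w = np.array([0.5, -0.25])
# A confident positive (high score, label +1) and a confident negative
# (low score, label -1) yield a small loss; swapped scores yield a large one.
good = logistic_loss([5.0, -5.0], [1, -1], w, lam=0.01)
bad = logistic_loss([-5.0, 5.0], [1, -1], w, lam=0.01)
assert good < bad
```

Minimizing this quantity over batches (with Adam, as in s2-6) pushes positive-triple scores up and negative-triple scores down.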
In S3, negative sampling is performed on the triples of the test set to obtain a number of different candidate triples; the trained scoring function f_r(h, t) scores the candidate triples, which are arranged in descending order of score, and the top-ranked entity or relation is taken as the missing one, completing the link prediction of the knowledge graph.
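The scoring-and-ranking step can be illustrated with a toy helper; the triples, the hard-coded scores, and the function names here are invented for the example, standing in for the trained scoring function.

```python
def rank_candidates(candidates, true_triple, score_fn):
    """Score every candidate triple, sort in descending order of score,
    and return the 1-based rank of the true triple (a smaller rank
    means better link prediction)."""
    ordered = sorted(candidates, key=score_fn, reverse=True)
    return ordered.index(true_triple) + 1

# Toy scorer: pretend the model strongly prefers the true tail "paris".
scores = {("france", "capital", "paris"): 0.9,
          ("france", "capital", "rome"): 0.4,
          ("france", "capital", "berlin"): 0.1}
rank = rank_candidates(list(scores), ("france", "capital", "paris"),
                       lambda tr: scores[tr])
assert rank == 1
```

Ranks computed this way are also what standard link prediction metrics such as mean rank and Hits@N are built from.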
The invention uses a self-attention mechanism to fully learn the association information across different dimensions of the triple embeddings, reducing information loss; it uses a convolutional neural network to learn the interaction information between entities and relations, and combines it with the information learned by the self-attention mechanism to extract rich high-order features within triples, achieving better link prediction and benefiting downstream artificial intelligence tasks.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations based on the teachings of the present disclosure without departing from the spirit of the invention, and such modifications and combinations remain within the scope of the invention.
Claims (6)
1. A knowledge graph link prediction method, characterized by comprising the following steps:
s1: pre-training the entity and the relation by using a TransE model to obtain an initialized embedded representation of the entity and the relation;
s2: constructing a training set and a testing set based on the initialized entity and the embedded representation of the relation, inputting data of the training set into a neural network model based on a convolutional neural network and a self-attention mechanism for training, calculating a loss function value of the training set on the neural network model based on the convolutional neural network and the self-attention mechanism, and storing optimized model parameters;
s3: based on the optimized model parameters, scoring and sorting the candidate triplets of the test set, predicting missing entities or relations, and finishing the link prediction of the knowledge graph.
2. The knowledge-graph link prediction method according to claim 1, wherein constructing the training set in S2 includes the following sub-steps:
s2-1: inputting an initial training set, an entity set, a relation set, training rounds, batch training numbers, a negative sampling rate, a learning rate and regularization parameters into a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-2: taking samples with the same number as the batch training number from the initial training set as a positive sample set of the iterative training, and obtaining a negative sample set of the iterative training according to a negative sampling rate;
s2-3: a training set for iterative training is constructed from the positive and negative sample sets.
3. The knowledge-graph link prediction method according to claim 2, wherein the step S2 further comprises the following sub-steps:
s2-4: inputting each triplet in the training set into a neural network model based on a convolutional neural network and a self-attention mechanism, and calculating a score of each triplet;
s2-5: based on the triplet score, calculating a loss function value of the training set on a neural network model based on a convolutional neural network and a self-attention mechanism;
s2-6: performing iterative updates with an Adam optimizer to obtain the optimized model parameters that minimize the total sample loss function value, together with the updated vector representations of entities and relations, and storing the optimized model parameters.
4. A knowledge-graph link prediction method according to claim 3, wherein the calculating of each triplet score in S2-4 comprises the following sub-steps:
s2-4-1: Splicing the initialized embedded representations of the entity and the relation, representing the initial triplet embedding matrix X as

X = [h, r, t] ∈ ℝ^(k×3)

where h, r and t are the embedding vectors of the triplet, ℝ^(k×3) denotes the spliced matrix of k rows and 3 columns, and k is the dimension of the initialized entity and relation embeddings;
s2-4-2: Generating the three matrices of the self-attention mechanism with three mapping matrices, namely the query matrix Q, the key matrix K and the value matrix V;
s2-4-3: performing matrix multiplication operation on the transpose of the query matrix Q and the key matrix K, and performing normalization processing to obtain an attention distribution matrix;
s2-4-4: Performing a matrix product of the attention distribution matrix and the value matrix V to obtain an output matrix X′, in which each row vector is a weighted sum of all the initial row vectors, so that information from different dimensions of the triplet is fused;
s2-4-5: Splicing the initial triplet embedding matrix X and the output matrix X′ obtained by the self-attention mechanism into a matrix [X, X′] of k rows and 6 columns as the input of the subsequent convolutional layer;
s2-4-6: Convolving the matrix [X, X′] with τ convolution kernels of size 1×3 to obtain τ feature matrices of size k×2;
s2-4-7: Splicing the feature matrices after a nonlinear activation function and performing a dot product with a weight matrix w to obtain the triplet score, with the formula

f_r(h, t) = concat(g([X, X′] ∗ Ω)) · w

where f_r(h, t) is the scoring function, concat denotes splicing, ∗ denotes the convolution operation, g is the activation function, and Ω is the set of convolution kernels.
5. The knowledge-graph link prediction method according to claim 4, wherein the loss function value in S2-5 is calculated with a logistic regression loss function L to which a regularization term is added, with the formula

L = Σ_{(h,r,t) ∈ T⁺ ∪ T⁻} log(1 + exp(−y_(h,r,t) · f_r(h, t))) + (λ/2)‖w‖₂²

where y_(h,r,t) is the label of the positive or negative sample, T⁺ is the positive sample set, T⁻ is the negative sample set, ∪ denotes the union, exp is the exponential function with base e, λ is the regularization coefficient, and ‖w‖₂² denotes the squared two-norm.
6. The knowledge-graph link prediction method according to claim 5, wherein in S3, negative sampling is performed on the triples of the test set to obtain a number of different candidate triples; the trained scoring function f_r(h, t) scores the candidate triples, which are arranged in descending order of score, and the top-ranked entity or relation is taken as the missing one, completing the link prediction of the knowledge graph.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310749432.1A | 2023-06-21 | 2023-06-21 | Knowledge graph link prediction method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116757283A | 2023-09-15 |
Family
ID=87947526

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310749432.1A | Knowledge graph link prediction method | 2023-06-21 | 2023-06-21 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116757283A |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117891955A | 2024-01-17 | 2024-04-16 | 哈尔滨工业大学 | Knowledge graph link prediction method based on multi-scale attention network |
| CN118036812A | 2024-02-29 | 2024-05-14 | 中电普信(北京)科技发展有限公司 | Battlefield win-lose prediction method based on dynamic knowledge graph |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |