CN115408536A - Knowledge graph complementing method based on context information fusion - Google Patents

Knowledge graph completion method based on context information fusion

Info

Publication number
CN115408536A
CN115408536A (application CN202211031111.XA)
Authority
CN
China
Prior art keywords
entity
context
embedding
context information
entity relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211031111.XA
Other languages
Chinese (zh)
Inventor
马战川
张立和
孔雨秋
陈思龙
尹宝才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202211031111.XA priority Critical patent/CN115408536A/en
Publication of CN115408536A publication Critical patent/CN115408536A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3347 Query execution using vector based model

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical field of natural language processing, and in particular relates to a knowledge graph completion method based on context information fusion. The invention is the first to process the context information of a knowledge graph with 3D convolution, and it introduces a head-tail dual relation to solve the encoding problem of complex relations. First, an entity-relation encoding module encodes an input entity-relation pair to obtain entity-relation features; next, a context encoding module encodes the input context information to obtain context features; the entity-relation features and context features are then fed into a feature fusion module to obtain a query vector; finally, the similarity between the query vector and each candidate tail-entity vector is computed to score the candidate entities. The method uses 3D convolution to extract features from the context structure information and fuses them into the entity-relation features, improving the accuracy of knowledge graph completion on several common data sets.

Description

Knowledge graph completion method based on context information fusion
Technical Field
The invention belongs to the technical field of knowledge graphs, and in particular relates to a knowledge graph completion method based on context information fusion.
Background
In recent years, the knowledge graph field has attracted much attention. Knowledge graphs (KGs) have been successfully applied to many problems in artificial intelligence, such as question answering and information retrieval. A knowledge graph contains triples <head entity, relation, tail entity>, denoted <h, r, t>, which are a useful resource for many natural language processing tasks, particularly information retrieval applications such as semantic search and question answering. However, even large knowledge graphs containing billions of triples are still incomplete, i.e., many valid triples are missing. Much research effort has therefore focused on the knowledge graph completion task, which aims to predict the missing parts of knowledge graph triples.
Distance-based knowledge graph completion models typically define a scoring function of the form h + r ≈ t that measures the plausibility of a given triple. For example, the TransE model directly treats the embedding space as a translation space; the TransH model models each relation as a translation operation on a hyperplane; the TransR model models entities and relations in separate spaces, namely an entity space and multiple relation spaces; the TransD model represents each entity or relation with two vectors; the TranSparse model addresses the heterogeneity and imbalance in knowledge graphs; the PTransE model integrates relation paths into TransE; the ITransF model uses a sparse attention mechanism to discover hidden relation concepts and transfer knowledge through concept sharing. These distance-based completion models generally train quickly and are parameter-efficient.
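For illustration, the h + r ≈ t scoring idea behind this family of models can be sketched in a few lines (illustrative code, not part of the patent):

```python
# Illustrative sketch of a TransE-style score ||h + r - t|| for a triple,
# using plain Python lists as embeddings; lower scores mean more plausible.
import math

def transe_score(h, r, t):
    """L2 distance between h + r and t."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# A triple whose embeddings satisfy h + r = t exactly scores 0.
h, r, t = [1.0, 2.0], [2.0, 3.0], [3.0, 5.0]
print(transe_score(h, r, t))  # → 0.0
```

Training then pushes scores of observed triples down and scores of corrupted triples up.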
Recently, researchers have begun to explore knowledge graph embedding (KGE) methods based on convolutional neural networks, with good results. For example, the ConvE model applies 2D convolution over embeddings and multiple layers of nonlinear features to model the knowledge graph; the ConvKB model also uses a convolutional neural network for KGE; ConMask uses relation-dependent content masking, a fully convolutional neural network and semantic averaging to extract relation-dependent embeddings from the textual features of entities and relations; RSNs study path-level knowledge graph embedding and propose recurrent skipping networks that learn relation paths with a sequence model; KG-BERT integrates the BERT model into a KGE model; the CoKE model uses a Transformer and extends ConvE by increasing interactions through feature alignment, feature reshaping and circular convolution.
As graph convolutional networks (GCNs) receive more and more attention, how to exploit context structure information has become an important research topic. R-GCN processes the context information of each entity with a GCN-based approach; the A2N model uses an attention mechanism to encode neighbor context structure information; the CompGCN model jointly embeds nodes and relations into a relation graph.
However, the above methods have three problems: (1) distance-based methods struggle to encode complex relations; (2) methods based on convolutional neural networks fail to balance model complexity and representation capability; and (3) existing models cannot exploit context information efficiently.
Disclosure of Invention
To solve these problems, the invention provides a knowledge graph completion method based on context information fusion. The algorithm first uses an entity-relation encoding module to encode an input entity-relation pair, obtaining an entity-relation feature vector; second, a context encoding module encodes the input context structure information, obtaining a context feature matrix; the entity-relation feature vector and the context feature matrix are then fed into a feature fusion module to obtain a query vector; finally, similarity scores between the query vector and the candidate tail-entity vectors are computed, and the tail entity with the highest score is the predicted result.
To achieve this purpose, the technical scheme of the invention is as follows:
A knowledge graph completion method based on context information fusion specifically comprises the following steps:
Step S1: data preprocessing. From the triple data in the data set, construct the input data of the algorithm, including entity-relation pairs, context information pairs and a candidate entity list;
Further, the step S1 specifically comprises:
Step S11: the head entities h and tail entities t appearing in the data set form a set E, i.e., the candidate entity list;
Step S12: for a given triple <h, r, t>, take the head entity h and the relation r to form an entity-relation pair p: <h, r>;
Step S13: construct context structure information for each entity-relation pair. For a given entity-relation pair p: <h, r>, find in the data set the set c of all entity-relation pairs that share the entity h or the relation r with p; c is the list of context information pairs for p, and the set C formed by all c is the set of context information pairs for all entity-relation pairs.
Step S2: input the entity-relation pair into the entity-relation encoding module to obtain entity-relation features;
Further, the step S2 specifically comprises:
Step S21: initialize and embed the entity-relation pair to obtain an entity-relation matrix;
Step S22: input the entity-relation matrix into several different 2D convolutional networks for feature extraction, obtaining entity-relation features at multiple scales; then concatenate these features to obtain an initial entity-relation feature;
Step S23: map the initial entity-relation feature through a fully connected layer to change its embedding dimension, obtaining the entity-relation feature.
Step S3: input a group of context information pairs into the context encoding module to obtain a set of context features;
Further, the step S3 specifically comprises:
Step S31: initialize and embed the group of context information pairs corresponding to the entity-relation pair, obtaining multiple context embeddings;
Step S32: concatenate the context embeddings in sequence to obtain the overall context embedding;
Step S33: input the context embedding into a 3D convolutional network for feature extraction, obtaining a set of context features.
Step S4: input the entity-relation features and the set of context features into the feature fusion module to obtain a query vector;
Further, the step S4 specifically comprises:
Step S41: input the entity-relation features and the set of context features into a Transformer network to obtain an initial query vector;
Step S42: input the initial query vector into a multilayer perceptron to obtain the query vector.
Step S5: compute the similarity between the query vector and the candidate entities to obtain a probability distribution over the candidate entities.
Further, the step S5 specifically comprises:
Step S51: initialize and embed the candidate entity list to obtain candidate entity embeddings;
Step S52: compute the similarity score between the query vector and each candidate entity embedding based on cosine similarity; then compute the probability distribution over the candidate entities with a sigmoid function.
Step S6: train the whole algorithm with the loss function so that its predictions fit the correct results. The resulting model is the tool of the knowledge graph completion method.
The invention has the following beneficial effects:
(1) We innovatively introduce 3D convolution to encode context structure information. 3D convolution has greater representation capability than 2D convolution and can process sequence data; it not only processes context structure information efficiently but also enables information interaction during encoding.
(2) Compared with other models based on convolutional neural networks, the head-tail dual relation solves the encoding problem of complex relations, fusing the distance-based and convolution-based approaches to some extent so that a convolutional model can also encode complex relations. Compared with other neural-network-based models, our model has fewer parameters and a shorter training time.
(3) We innovatively propose to fuse the context structure information with a Transformer structure, because the Transformer excels at processing sequence data and at information interaction and fusion between data; we therefore further process and fuse the context structure information with the Transformer.
Drawings
FIG. 1 is an overall block diagram of the design of the present invention;
FIG. 2 is a specific process of extracting entity relationship features by the entity relationship encoding module according to the present invention;
FIG. 3 is a detailed process of extracting context features by the context encoding module in the present invention;
FIG. 4 is a detailed process of the feature fusion module fusing entity relationship features and context features in the present invention;
FIG. 5 is a specific process of obtaining the probability distribution of the candidate entities through similarity calculation in the present invention.
Detailed description of the preferred embodiment
The technical solution of the present invention is further described below with reference to specific embodiments and the accompanying drawings.
A knowledge graph completion method based on context information fusion comprises the following steps:
Step S1: data preprocessing. From the triple data in the data set, construct the input data of the algorithm, including entity-relation pairs, context information pairs and a candidate entity list.
The step S1 specifically comprises the following steps:
Step S11: the head entities h and tail entities t appearing in the data set form a set E, i.e., the candidate entity list;
Step S12: for a given triple <h, r, t>, take the head entity h and the relation r to form an entity-relation pair p: <h, r>;
Step S13: construct context structure information for each entity-relation pair. For a given entity-relation pair p: <h, r>, find in the data set the set c of all entity-relation pairs that share the entity h or the relation r with p; c is the list of context information pairs for p, and the set C formed by all c is the set of context information pairs for all entity-relation pairs.
Step S2: as shown in FIG. 2, input the entity-relation pair into the entity-relation encoding module to obtain the entity-relation feature.
The step S2 specifically comprises the following steps:
step S21: initializing and embedding the entity relation to p to obtain entity embedding E h ∈R 1×d And relation embedding E r ∈R 1×d Wherein d is the embedding dimension of the knowledge-graph representation; then E is h And E r Splicing and remodeling to obtain an entity relation matrix
Figure BDA0003817197430000066
Wherein d is 1 And d 2 Respectively the width and height of the entity relationship matrix, satisfies the condition d 1 ×d 2 =2d;
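The concatenate-and-reshape step can be sketched as follows (a minimal illustration of the shapes given in the description; the 2D convolutions applied afterwards are omitted, and the toy values are illustrative):

```python
# Sketch of Step S21: concatenate entity and relation embeddings of length d,
# then reshape the 2d values into a d1 x d2 matrix with d1 * d2 = 2d.
def build_entity_relation_matrix(E_h, E_r, d1, d2):
    flat = list(E_h) + list(E_r)          # concatenation, length 2d
    assert d1 * d2 == len(flat), "requires d1 * d2 = 2d"
    return [flat[i * d2:(i + 1) * d2] for i in range(d1)]  # row-major reshape

M_p = build_entity_relation_matrix([1, 2, 3, 4], [5, 6, 7, 8], d1=2, d2=4)
print(M_p)  # → [[1, 2, 3, 4], [5, 6, 7, 8]]
```

The resulting matrix has the 2D layout that a convolutional encoder expects.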
Step S22: input the entity-relation matrix $M_p$ into 3 different 2D convolutional networks for feature extraction, obtaining 3 entity-relation features $v_p^1, v_p^2, v_p^3$; then concatenate the three features to obtain the initial entity-relation feature $f_p = [v_p^1; v_p^2; v_p^3] \in \mathbb{R}^{1\times 3d}$, where $[\,;\,]$ denotes concatenation;
Step S23: input $f_p$ into a fully connected layer to change the embedding dimension, obtaining the entity-relation feature $F_p \in \mathbb{R}^{1\times d}$.
Step S3: as shown in FIG. 3, input the context information pairs corresponding to the entity-relation pair into the context encoding module to obtain a set of context features.
The step S3 specifically comprises the following steps:
step S31: processing the context information pair c corresponding to the entity relationship pair p according to the mode described in the step S21 to obtain n context embedded matrixes
Figure BDA0003817197430000063
Wherein n represents the number of context information pairs, j belongs to [1,n ]]Representing the jth pair of context information, d 1 And d 2 Width and height of context embedding, respectively, satisfying the condition d 1 ×d 2 =2d, d is the embedding dimension of the knowledge-graph representation;
step S32: embedding n contexts into V c Stitching together for contextual embedding
Figure BDA0003817197430000064
I.e. define
Figure BDA0003817197430000065
Wherein [;]representing a splicing operation;
step S33: embedding context into M c Inputting the data into a 3D convolution network for feature extraction to obtain n context features
Figure BDA0003817197430000071
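The stacking that prepares the input of the 3D convolution can be sketched as follows (shapes assumed from the description; the 3D convolution itself is omitted):

```python
# Sketch of Steps S31-S32: each of the n context pairs is embedded as a flat
# vector of length 2d, reshaped to d1 x d2, and the n matrices are stacked
# into an n x d1 x d2 tensor that a 3D convolution can consume.
def stack_contexts(flat_embeddings, d1, d2):
    tensor = []
    for v in flat_embeddings:             # one flat vector per context pair
        assert d1 * d2 == len(v), "each context embedding must have 2d values"
        tensor.append([v[i * d2:(i + 1) * d2] for i in range(d1)])
    return tensor                         # nested list, shape n x d1 x d2

M_c = stack_contexts([[1, 2, 3, 4], [5, 6, 7, 8]], d1=2, d2=2)
print(M_c)  # → [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```

Treating the n contexts as the depth dimension is what lets the 3D kernels mix information across neighboring context pairs.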
Step S4: as shown in FIG. 4, input the entity-relation feature and the context features into the feature fusion module to obtain the query vector.
Step S41: input the entity-relation feature $F_p$ and the n context features $F_c^j$ into a Transformer network to obtain the initial query vector $q \in \mathbb{R}^{1\times d}$, where the Transformer network is a stack of 6 identical Transformer layers, each being an encoder-only Transformer module;
Step S42: input the initial query vector q into a multilayer perceptron to obtain the query vector $Q \in \mathbb{R}^{1\times d}$, where the multilayer perceptron consists of a fully connected layer, a ReLU layer and a fully connected layer.
Step S5: compute the similarity between the query vector and the candidate entities to obtain the probability distribution over the candidate entities.
The step S5 specifically comprises the following steps:
step S51: initializing and embedding the candidate entity list E to obtain the candidate entity embedding E E in R 1×d D is the embedding dimension of the knowledge graph representation;
step S52: calculating e-similarity score s belonging to R of the query vector Q and the embedding of the candidate entity based on cosine similarity, namely
Figure BDA0003817197430000073
Wherein [. ]]Represents a matrix point multiplication operation, | x | represents the modulus of the vector x; then the probability p (t) of the candidate entity is obtained through sigmoid function calculation i |h,r)=sigmoid(s(t i H, r)), where i ∈ [1, | E]Representing the ith candidate entity, | E | represents the number of candidate entities, s (t) i H, r) represents the candidate entity t i The similarity score of p (t) i H, r) represents a candidate entity t i The probability of correctness.
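The scoring step can be sketched with plain Python (a minimal illustration; the real model operates on learned d-dimensional embeddings, and the 2-dimensional toy vectors here are illustrative):

```python
# Sketch of Step S52: cosine similarity between the query vector and each
# candidate embedding, squashed into a probability with the sigmoid function.
import math

def cosine(q, e):
    dot = sum(a * b for a, b in zip(q, e))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(q) * norm(e))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

Q = [1.0, 0.0]
candidates = [[1.0, 0.0], [0.0, 1.0]]     # the first candidate matches Q exactly
probs = [sigmoid(cosine(Q, e)) for e in candidates]
print(probs)  # matching candidate scores sigmoid(1) ≈ 0.731, the other 0.5
```

Because sigmoid is monotonic, ranking candidates by probability is the same as ranking them by cosine similarity.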
Step S6: train the whole algorithm and optimize its parameters by minimizing the loss function
$L = -\frac{1}{|E|}\sum_{i=1}^{|E|}\left[t_i \log p(t_i \mid h, r) + (1 - t_i)\log\bigl(1 - p(t_i \mid h, r)\bigr)\right]$
where t is the candidate entity label vector ($t_i = 1$ marks the correct candidate entity, $t_i = 0$ an incorrect one), $|E|$ is the number of candidate entities, and $p(t_i \mid h, r)$ is the probability that entity $t_i$ is correct.
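Read as binary cross-entropy over the |E| candidates, which is an assumed reconstruction since the original formula image did not survive extraction, the loss can be sketched as:

```python
# Binary cross-entropy over the candidate list; labels mark the correct
# entity (1) versus incorrect ones (0), probs are the p(t_i | h, r) values.
import math

def bce_loss(labels, probs):
    n = len(labels)
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(labels, probs)) / n

# Confident, correct predictions give a small loss.
print(bce_loss([1, 0], [0.9, 0.1]))  # ≈ 0.105
```

Minimizing this pushes the probability of the correct entity toward 1 and of the incorrect ones toward 0.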
The algorithm steps were carried out on the same data sets as the baseline, and several groups of comparison experiments were conducted for performance comparison, proving the algorithm superior to the ConvE method published at AAAI in 2018. Some of the comparison experiments are as follows:
On the FB15k-237 data set, our algorithm exceeds this method by 0.018 in the probability that the correct result ranks first (Hits@1: 0.237 for ConvE, 0.255 for this invention), by 0.022 in Hits@3 (0.356 for ConvE, 0.378 for this invention), and by 0.025 in Hits@10 (0.501 for ConvE, 0.526 for this invention).
On the NELL-995 data set, our algorithm improves the mean rank (MR) of the correct results by 714 (1941 for ConvE, 1227 for this invention), Hits@1 by 0.021 (0.446 for ConvE, 0.467 for this invention), Hits@3 by 0.025 (0.558 for ConvE, 0.583 for this invention), and Hits@10 by 0.046 (0.621 for ConvE, 0.667 for this invention).
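The metrics quoted in these comparisons can be computed as follows (a standard sketch; the ranks shown are hypothetical):

```python
# Mean Rank (MR) averages the rank of the correct entity over queries;
# Hits@k is the fraction of queries where it ranks in the top k.
def mean_rank(ranks):
    return sum(ranks) / len(ranks)

def hits_at_k(ranks, k):
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2]       # hypothetical ranks of the correct entities
print(mean_rank(ranks))     # → 4.5
print(hits_at_k(ranks, 3))  # → 0.75
```

Lower MR and higher Hits@k indicate better completion quality.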
Comparison experiments on different data sets show that the proposed knowledge graph completion method based on context information fusion outperforms the other baseline algorithms, demonstrating the effectiveness of the algorithm.

Claims (10)

1. A knowledge graph completion method based on context information fusion, characterized by comprising the following steps:
step S1: data preprocessing; from the triple data in the data set, construct the input data of the algorithm, including entity-relation pairs, context information pairs and a candidate entity list;
step S2: input the entity-relation pair into an entity-relation encoding module to obtain an entity-relation feature;
step S21: initialize and embed the entity-relation pair to obtain an entity-relation matrix;
step S22: input the entity-relation matrix into several different 2D convolutional networks for feature extraction, obtaining entity-relation features at multiple scales, then concatenate these features to obtain an initial entity-relation feature;
step S23: map the initial entity-relation feature through a fully connected layer to change its embedding dimension, obtaining the entity-relation feature;
step S3: input a group of context information pairs into a context encoding module to obtain a set of context features;
step S31: initialize and embed the group of context information pairs corresponding to the entity-relation pair, obtaining multiple context embeddings;
step S32: concatenate the context embeddings in sequence to obtain the overall context embedding;
step S33: input the context embedding into a 3D convolutional network for feature extraction, obtaining a set of context features;
step S4: input the entity-relation feature and the set of context features into a feature fusion module to obtain a query vector;
step S41: input the entity-relation feature and the context features into a Transformer network to obtain an initial query vector;
step S42: input the initial query vector into a multilayer perceptron to obtain the query vector;
step S5: compute the similarity between the query vector and the candidate entities to obtain a probability distribution over the candidate entities;
step S51: initialize and embed the candidate entity list to obtain candidate entity embeddings;
step S52: compute the similarity score between the query vector and each candidate entity embedding based on cosine similarity; then compute the probability distribution over the candidate entities with a sigmoid function;
step S6: train the whole algorithm by minimizing the overall loss function so that the predicted results fit the correct results; the resulting model is the tool of the knowledge graph completion method.
2. The knowledge graph completion method based on context information fusion according to claim 1, wherein the step S1 specifically comprises:
step S11: the head entities h and tail entities t appearing in the data set form a set E, i.e., the candidate entity list;
step S12: for a given triple <h, r, t>, take the head entity h and the relation r to form an entity-relation pair p: <h, r>;
step S13: construct context structure information for each entity-relation pair; for a given entity-relation pair p: <h, r>, find in the data set the set c of all entity-relation pairs that share the entity h or the relation r with p; c is the list of context information pairs for p, and the set C formed by all c is the set of context information pairs for all entity-relation pairs.
3. The knowledge graph completion method based on context information fusion according to claim 1 or 2, wherein the step S2 specifically comprises:
step S21: initialize and embed the entity-relation pair p, obtaining the entity embedding $E_h \in \mathbb{R}^{1\times d}$ and the relation embedding $E_r \in \mathbb{R}^{1\times d}$, where d is the embedding dimension of the knowledge graph representation; then concatenate and reshape $E_h$ and $E_r$ to obtain the entity-relation matrix $M_p \in \mathbb{R}^{d_1\times d_2}$, where $d_1$ and $d_2$ are the width and height of the matrix and satisfy $d_1 \times d_2 = 2d$;
step S22: input the entity-relation matrix $M_p$ into 3 different 2D convolutional networks for feature extraction, obtaining 3 entity-relation features $v_p^1, v_p^2, v_p^3$; then concatenate the three features to obtain the initial entity-relation feature $f_p = [v_p^1; v_p^2; v_p^3] \in \mathbb{R}^{1\times 3d}$, where $[\,;\,]$ denotes concatenation;
step S23: input $f_p$ into a fully connected layer to change the embedding dimension, obtaining the entity-relation feature $F_p \in \mathbb{R}^{1\times d}$.
4. The knowledge graph completion method based on context information fusion according to claim 1 or 2, wherein the step S3 specifically comprises:
step S31: process the context information pairs c corresponding to the entity-relation pair p in the manner described in step S21, obtaining n context embedding matrices $V_c^j \in \mathbb{R}^{d_1\times d_2}$, where n is the number of context information pairs, $j \in [1, n]$ indexes the j-th context information pair, $d_1$ and $d_2$ are the width and height of each context embedding and satisfy $d_1 \times d_2 = 2d$, and d is the embedding dimension of the knowledge graph representation;
step S32: stack the n context embeddings $V_c^j$ together to obtain the context embedding $M_c = [V_c^1; V_c^2; \dots; V_c^n] \in \mathbb{R}^{n\times d_1\times d_2}$, where $[\,;\,]$ denotes the concatenation operation;
step S33: input the context embedding $M_c$ into a 3D convolutional network for feature extraction, obtaining n context features $F_c^j \in \mathbb{R}^{1\times d}$.
5. The knowledge graph completion method based on context information fusion according to claim 3, wherein the step S3 specifically comprises:
step S31: process the context information pairs c corresponding to the entity-relation pair p in the manner described in step S21, obtaining n context embedding matrices $V_c^j \in \mathbb{R}^{d_1\times d_2}$, where n is the number of context information pairs, $j \in [1, n]$ indexes the j-th context information pair, $d_1$ and $d_2$ are the width and height of each context embedding and satisfy $d_1 \times d_2 = 2d$, and d is the embedding dimension of the knowledge graph representation;
step S32: stack the n context embeddings $V_c^j$ together to obtain the context embedding $M_c = [V_c^1; V_c^2; \dots; V_c^n] \in \mathbb{R}^{n\times d_1\times d_2}$, where $[\,;\,]$ denotes the concatenation operation;
step S33: input the context embedding $M_c$ into a 3D convolutional network for feature extraction, obtaining n context features $F_c^j \in \mathbb{R}^{1\times d}$.
6. The knowledge graph completion method based on context information fusion according to claim 1, 2 or 5, wherein the step S4 specifically comprises:
step S41: input the entity-relation feature $F_p$ and the n context features $F_c^j$ into a Transformer network to obtain the initial query vector $q \in \mathbb{R}^{1\times d}$, where the Transformer network is a stack of 6 identical Transformer layers, each being an encoder-only Transformer module;
step S42: input the initial query vector q into a multilayer perceptron to obtain the query vector $Q \in \mathbb{R}^{1\times d}$, where the multilayer perceptron consists of a fully connected layer, a ReLU layer and a fully connected layer.
7. The knowledge graph completion method based on context information fusion according to claim 3, wherein the step S4 specifically comprises:
step S41: input the entity-relation feature $F_p$ and the n context features $F_c^j$ into a Transformer network to obtain the initial query vector $q \in \mathbb{R}^{1\times d}$, where the Transformer network is a stack of 6 identical Transformer layers, each being an encoder-only Transformer module;
step S42: input the initial query vector q into a multilayer perceptron to obtain the query vector $Q \in \mathbb{R}^{1\times d}$, where the multilayer perceptron consists of a fully connected layer, a ReLU layer and a fully connected layer.
8. The knowledge graph completion method based on context information fusion according to claim 4, wherein the step S4 specifically comprises:
step S41: input the entity-relation feature $F_p$ and the n context features $F_c^j$ into a Transformer network to obtain the initial query vector $q \in \mathbb{R}^{1\times d}$, where the Transformer network is a stack of 6 identical Transformer layers, each being an encoder-only Transformer module;
step S42: input the initial query vector q into a multilayer perceptron to obtain the query vector $Q \in \mathbb{R}^{1\times d}$, where the multilayer perceptron consists of a fully connected layer, a ReLU layer and a fully connected layer.
9. The knowledge graph completion method based on context information fusion according to claim 1, 2, 5, 7 or 8, wherein the step S5 specifically comprises:
step S51: initialize and embed the candidate entity list E, obtaining candidate entity embeddings $e \in \mathbb{R}^{1\times d}$, where d is the embedding dimension of the knowledge graph representation;
step S52: compute the similarity score $s \in \mathbb{R}$ between the query vector Q and each candidate entity embedding e based on cosine similarity, i.e. $s(t_i \mid h, r) = \frac{Q \cdot e_i}{|Q|\,|e_i|}$, where $\cdot$ denotes the dot product and $|x|$ denotes the modulus of the vector x; then obtain the probability of each candidate entity through the sigmoid function, $p(t_i \mid h, r) = \mathrm{sigmoid}(s(t_i \mid h, r))$, where $i \in [1, |E|]$ indexes the i-th candidate entity, $|E|$ is the number of candidate entities, $s(t_i \mid h, r)$ is the similarity score of candidate entity $t_i$, and $p(t_i \mid h, r)$ is the probability that candidate entity $t_i$ is correctly classified.
10. The knowledge graph completion method based on context information fusion according to claim 1, 2, 5, 7 or 8, wherein step S5 specifically comprises:
step S51: initializing and embedding the candidate entity list E to obtain a candidate entity embedding e ∈ R^{1×d}, where d is the embedding dimension of the knowledge graph representation;
step S52: calculating the similarity score s ∈ R between the query vector Q and the candidate entity embedding e based on cosine similarity, namely

s(t_i, h, r) = [Q · e] / (|Q| |e|)

where [·] represents the matrix dot-product operation and |x| represents the modulus of the vector x; the probability of each candidate entity is then obtained through the sigmoid function as p(t_i | h, r) = sigmoid(s(t_i, h, r)), where i ∈ [1, |E|] indexes the i-th candidate entity, |E| represents the number of candidate entities, s(t_i, h, r) represents the score of candidate entity t_i, and p(t_i | h, r) represents the probability that candidate entity t_i is classified correctly.
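Steps S51–S52 amount to a cosine-similarity score pushed through a sigmoid. A minimal NumPy sketch (variable names are illustrative; here the candidate embeddings are stacked into one |E| × d matrix so all scores are computed at once):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def candidate_probs(Q, E):
    """Sketch of step S5: score the query vector Q (1 x d) against each
    candidate entity embedding (rows of E, |E| x d) by cosine similarity,
    then map each score through a sigmoid to get a probability that the
    candidate is the correct tail entity."""
    scores = (E @ Q.ravel()) / (np.linalg.norm(E, axis=1) * np.linalg.norm(Q))
    return sigmoid(scores)  # one probability per candidate entity
```

Because cosine similarity lies in [-1, 1], the resulting probabilities fall in roughly [0.27, 0.73]; a candidate whose embedding is orthogonal to Q scores exactly 0.5.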
CN202211031111.XA 2022-08-26 2022-08-26 Knowledge graph complementing method based on context information fusion Pending CN115408536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211031111.XA CN115408536A (en) 2022-08-26 2022-08-26 Knowledge graph complementing method based on context information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211031111.XA CN115408536A (en) 2022-08-26 2022-08-26 Knowledge graph complementing method based on context information fusion

Publications (1)

Publication Number Publication Date
CN115408536A true CN115408536A (en) 2022-11-29

Family

ID=84161770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211031111.XA Pending CN115408536A (en) 2022-08-26 2022-08-26 Knowledge graph complementing method based on context information fusion

Country Status (1)

Country Link
CN (1) CN115408536A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116186350A (en) * 2023-04-23 2023-05-30 浙江大学 Power transmission line engineering searching method and device based on knowledge graph and topic text
CN116186350B (en) * 2023-04-23 2023-07-25 浙江大学 Power transmission line engineering searching method and device based on knowledge graph and topic text

Similar Documents

Publication Publication Date Title
CN111310438B (en) Chinese sentence semantic intelligent matching method and device based on multi-granularity fusion model
CN110298037B (en) Convolutional neural network matching text recognition method based on enhanced attention mechanism
CN110222140B (en) Cross-modal retrieval method based on counterstudy and asymmetric hash
WO2022057669A1 (en) Method for pre-training knowledge graph on the basis of structured context information
CN112232925A (en) Method for carrying out personalized recommendation on commodities by fusing knowledge maps
CN110083770B (en) Sequence recommendation method based on deeper feature level self-attention network
Gao et al. The joint method of triple attention and novel loss function for entity relation extraction in small data-driven computational social systems
CN112818676A (en) Medical entity relationship joint extraction method
CN113628059B (en) Associated user identification method and device based on multi-layer diagram attention network
CN114817568B (en) Knowledge hypergraph link prediction method combining attention mechanism and convolutional neural network
CN112860930B (en) Text-to-commodity image retrieval method based on hierarchical similarity learning
CN115526236A (en) Text network graph classification method based on multi-modal comparative learning
CN111461175A (en) Label recommendation model construction method and device of self-attention and cooperative attention mechanism
CN114332519A (en) Image description generation method based on external triple and abstract relation
CN116932722A (en) Cross-modal data fusion-based medical visual question-answering method and system
CN115331075A (en) Countermeasures type multi-modal pre-training method for enhancing knowledge of multi-modal scene graph
CN115408536A (en) Knowledge graph complementing method based on context information fusion
Xu et al. Idhashgan: deep hashing with generative adversarial nets for incomplete data retrieval
Zhang et al. TS-GCN: Aspect-level sentiment classification model for consumer reviews
CN115640418B (en) Cross-domain multi-view target website retrieval method and device based on residual semantic consistency
CN111241326A (en) Image visual relation referring and positioning method based on attention pyramid network
CN116955650A (en) Information retrieval optimization method and system based on small sample knowledge graph completion
CN115408605A (en) Neural network recommendation method and system based on side information and attention mechanism
CN112364192A (en) Zero sample Hash retrieval method based on ensemble learning
CN117408247B (en) Intelligent manufacturing triplet extraction method based on relational pointer network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination