CN113360664B - Knowledge graph complementing method - Google Patents
- Publication number: CN113360664B (application CN202110599781.0A)
- Authority
- CN
- China
- Prior art keywords
- entity
- relationship
- relation
- knowledge graph
- path
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/367 — Ontology (under G06F16/36, creation of semantic tools, e.g. ontology or thesauri; G06F16/30, information retrieval of unstructured textual data; G06F16/00, information retrieval and database structures)
- G06F40/211 — Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars (under G06F40/205, parsing; G06F40/20, natural language analysis; G06F40/00, handling natural language data)
- G06F40/295 — Named entity recognition (under G06F40/289, phrasal analysis; G06F40/279, recognition of textual entities)
- G06F40/30 — Semantic analysis
- G06N3/044 — Recurrent networks, e.g. Hopfield networks (under G06N3/04, architecture; G06N3/02, neural networks; G06N3/00, computing arrangements based on biological models)
- G06N3/08 — Learning methods
Abstract
The invention discloses a knowledge graph completion method. First, a knowledge graph is downloaded and a text description of each relation is obtained; a vector initialization of each relation is then computed by text embedding, and these initializations are loaded into the downloaded knowledge graph to obtain a new knowledge graph. A user then provides a triple to be completed; its head and tail entities are input into an MSNN, and the context information and relation path features of the entities are extracted by two parallel sub-networks in the MSNN. Finally, the missing relation is inferred from the context information and the relation path features and supplemented into the original knowledge graph.
Description
Technical Field
The invention belongs to the technical field of knowledge graphs, and particularly relates to a knowledge graph completion method.
Background
With the rapid development of Internet technology and the historical wave of artificial intelligence, big data brings convenience to people's lives but also poses serious challenges. Faced with multi-source heterogeneous data, effectively extracting the required information from massive structured and unstructured data is an important problem. The concept of the knowledge graph offers a feasible solution: it lowers the barrier of human-computer interaction and lets computers store and exploit human knowledge, making them more intelligent and automated. Google first proposed the concept of the Knowledge Graph (KG) in 2012. By definition, a knowledge graph is essentially a semantic network composed of nodes and edges, intended to describe the relationships between concepts in the real world. From the graph perspective, entities in the knowledge graph correspond to nodes and relations correspond to edges. An entity may be any object in the objective world, such as a person, animal, place, city, or country. Formally, a knowledge graph is usually organized as triples; each triple (h, r, t) can be regarded as a fact, where h, r and t denote the head entity, the relation and the tail entity, respectively.
At present, the knowledge graphs built by industry and academia are large in scale, rich in content, and span many domains. The Wikidata repository has over 18 million entities and 1632 relations, with roughly 66 million triple facts; the YAGO knowledge graph has more than 447 million triples. Yet despite this very large scale in both fact counts and entity relations, the content of these knowledge bases remains incomplete and very sparse, with a large number of undiscovered facts. For example, the Freebase knowledge base contains information on over 3 million person entities, but about 70% of them lack place-of-birth and nationality information; in the DBpedia knowledge base, about 60% of researchers have no recorded research direction. Such sparsity and incompleteness make it hard to meet practical needs. For example, a search engine built directly on the DBpedia knowledge base cannot display the corresponding research field when a user searches for such a scientist.
Faced with missing content in knowledge bases, supplementing the missing information by manual reasoning is inefficient and consumes large amounts of manpower and money; for knowledge bases with hundreds of millions of facts, manual effort is a drop in the bucket. How to mine implicit information from the existing facts in a knowledge graph, supplement the missing content, and alleviate the problems of data sparsity and incompleteness has therefore become a focus of current research. From the task-definition perspective, knowledge graph reasoning can be divided into two categories: link prediction and relation prediction. Link prediction predicts the tail entity given a head entity and a relation, or the head entity given a tail entity and a relation; relation prediction aims to predict the relation between a given head and tail entity. For example, given the two known triple facts (University of Electronic Science and Technology of China, located in, Chengdu) and (Chengdu, province, Sichuan Province), knowledge reasoning can infer the fact (University of Electronic Science and Technology of China, located in, Sichuan Province).
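The two-hop inference in the example above can be sketched as a path lookup over a toy triple store. The entity names, relation names, and the "enumerate length-2 paths" rule below are illustrative assumptions for the example, not part of the claimed method:

```python
# Toy triple store: the two known facts from the example above.
triples = {
    ("UESTC", "located_in", "Chengdu"),
    ("Chengdu", "province", "Sichuan"),
}

def infer_two_hop(triples, head, tail):
    """Return all length-2 relation paths connecting head to tail."""
    paths = []
    for h1, r1, t1 in triples:
        if h1 != head:
            continue
        for h2, r2, t2 in triples:
            # Chain: head --r1--> intermediate --r2--> tail.
            if h2 == t1 and t2 == tail:
                paths.append((r1, r2))
    return paths

print(infer_two_hop(triples, "UESTC", "Sichuan"))  # [('located_in', 'province')]
```

The recovered path ("located_in", "province") is exactly the evidence from which a relation prediction model would infer the missing fact (UESTC, located in, Sichuan).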
With the continuous development of deep learning, deep neural networks represented by convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been widely applied to computer vision and natural language processing (NLP) and have attracted the attention of many researchers. A series of deep-learning-based knowledge reasoning models has emerged, the ConvE model being the most typical. Much research effort has been devoted to such models, but most of the work has concentrated on the link prediction problem; since relation reasoning has received comparatively little study, it is the main focus of this work.
Most traditional methods model the knowledge graph with single-dimensional semantic information: ConvE models individual triples, and RSN uses only path semantics, so the expressive capacity of these models is limited.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provide a knowledge graph completion method that uses an MSNN to extract, in parallel, the context information of an entity's neighborhood and the relation path information between entities, and organically combines these two dimensions of information to infer an entity's missing relation.
In order to achieve the above object, the present invention provides a knowledge graph completion method, characterized by comprising the following steps:
(1) acquiring a knowledge graph;
downloading a knowledge graph KG, wherein the knowledge graph comprises triples, each triple comprising a head entity h, a relation r and a tail entity t; the i-th triple is denoted (h_i, r_i, t_i), where i = 1, 2, … is the triple index;
obtaining the text description T_i of each relation r_i, where T_i includes textual descriptions of the entity types and of the relation itself;
(2) initializing a relation vector based on text embedding;
(2.1) inputting each text description T_i into a pre-trained ELMo model to obtain a word vector for each word of the text;
(2.2) weighting and summing the word vectors of the words in each text with the SIF (Smooth Inverse Frequency) algorithm to obtain the sentence embedding x_i of each text description T_i;
(2.3) stacking the sentence embeddings x_i as rows into a text embedding matrix; decomposing this matrix by truncated singular value decomposition and taking the singular vector corresponding to the largest singular value, denoted u;
(2.4) computing the common semantics of each sentence embedding x_i as u u^T x_i, where the superscript T denotes transposition;
(2.5) subtracting the common semantics from each sentence embedding x_i to obtain the filtered sentence embedding x_i': x_i' = x_i − u u^T x_i;
(2.6) reducing the dimension of each filtered sentence embedding x_i' by principal component analysis (PCA) to obtain the vector initialization of each relation r_i, denoted r_i*;
(2.7) loading the r_i* into the knowledge graph KG downloaded in step (1); the resulting knowledge graph is denoted KG';
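Steps (2.2)–(2.6) can be sketched in NumPy as follows, assuming the SIF weighting a/(a + p(w)) of Arora et al. for step (2.2); the weight constant a and all dimensions are illustrative assumptions:

```python
import numpy as np

def sif_embed(word_vecs, freqs, a=1e-3):
    """Step (2.2): SIF-weighted average of word vectors, weight a/(a+p(w))."""
    w = a / (a + np.asarray(freqs))              # one weight per word
    return w @ np.asarray(word_vecs) / len(freqs)

def remove_common_component(X):
    """Steps (2.3)-(2.5): subtract each row's projection on the top
    singular vector u, i.e. x_i' = x_i - u u^T x_i."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    u = vt[0]                                    # singular vector of largest singular value
    return X - X @ np.outer(u, u)

def pca_reduce(X, k):
    """Step (2.6): project centered rows onto the top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

After `remove_common_component`, every row is orthogonal to u, so the shared "common semantics" no longer dominates the relation initializations.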
(3) the user provides a triple to be completed, comprising a head entity h* and a tail entity t*, where the relation r* is to be completed;
(4) inputting the head entity h* and the tail entity t* into the MSNN and extracting the entities' context information and relation path features through two parallel sub-networks in the MSNN;
(4.1) extracting the context information C(h*, t*) of h* and t* from KG' with a convolutional information propagation network;
(4.1.1) setting the maximum iteration number K and initializing the current iteration number j = 1;
(4.1.2) inputting the head entity h* and the tail entity t* into the convolutional information propagation network and computing their representations after the j-th iteration, where λ1 denotes the number of relations associated with the head entity h*, λ2 denotes the number of relations associated with the tail entity t*, w1, w2 and b are learnable parameters, and σ(·) is an activation function;
(4.1.3) judging whether the current iteration number j has reached the maximum K; if so, outputting the final semantic representations of the head and tail entities after the K-th iteration; otherwise setting j = j + 1 and returning to step (4.1.2);
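A minimal sketch of the iterative propagation in steps (4.1.1)–(4.1.3). The exact update formula is not reproduced in this text, so the rule below — mixing the entity vector with the sum of its associated relation vectors, normalized by the relation count λ, through σ = tanh — is an assumption; only the roles of λ1, λ2, w1, w2, b and σ come from the description:

```python
import numpy as np

def propagate(h, t, rel_h, rel_t, w1, w2, b, K):
    """Assumed update: x <- tanh(w1*x + (w2/lambda) * sum(relation vectors) + b),
    iterated K times for both the head (h) and tail (t) representations."""
    lam1, lam2 = len(rel_h), len(rel_t)  # lambda1, lambda2: relation counts
    for _ in range(K):
        h = np.tanh(w1 * h + (w2 / lam1) * np.sum(rel_h, axis=0) + b)
        t = np.tanh(w1 * t + (w2 / lam2) * np.sum(rel_t, axis=0) + b)
    return h, t
```

The λ-normalization here is learnable-weight-scaled rather than a plain average, mirroring the "learnable weight" advantage the description claims over conventional propagation functions.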
(4.2) extracting the set of relation path features of h* and t* from KG' with a bidirectional recurrent neural network;
(4.2.1) constructing the set of possible relation paths between the head entity h* and the tail entity t* with a random walk algorithm, Π = [π_1, π_2, …, π_k, …, π_L], where the k-th relation path is π_k = {r_1, r_2, …, r_l} and r_l denotes the l-th relation;
(4.2.2) encoding each relation path π_k with a bidirectional recurrent neural network (BiGRU);
(4.2.2.1) inputting the relation path π_k into the BiGRU, computing the forward hidden state through the forward GRU network and the backward hidden state through the backward GRU network;
(4.2.2.2) concatenating the forward and backward hidden states and applying a linear transformation to the concatenation to obtain the encoded relation path p_k, where W is the linear transformation matrix;
(4.2.2.3) repeating steps (4.2.2.1)–(4.2.2.2) to obtain all encoded relation paths, forming the encoded relation path feature set P = [p_1, p_2, …, p_k, …, p_L];
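A NumPy sketch of the BiGRU path encoding in step (4.2.2), with randomly initialized (untrained) gate weights; the dimensions and initialization are assumptions, but the gate equations are the standard GRU:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGRU:
    """Minimal GRU cell; weights are random here purely for illustration."""
    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the update (z), reset (r) and candidate gates.
        self.W = rng.normal(0, 0.1, (3, d_hid, d_in))
        self.U = rng.normal(0, 0.1, (3, d_hid, d_hid))
        self.b = np.zeros((3, d_hid))

    def run(self, seq):
        h = np.zeros(self.U.shape[1])
        for x in seq:
            z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])
            r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])
            c = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
            h = (1 - z) * h + z * c
        return h

def encode_path(path_vecs, fwd, bwd, W_lin):
    """Step (4.2.2.2): concatenate final forward/backward states,
    then apply the linear transform W to obtain p_k."""
    h_f = fwd.run(path_vecs)        # forward GRU over the path
    h_b = bwd.run(path_vecs[::-1])  # backward GRU over the reversed path
    return W_lin @ np.concatenate([h_f, h_b])
```

Each relation path π_k (a sequence of relation vectors) is thus mapped to a single fixed-size feature p_k, and step (4.2.2.3) simply collects these into P.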
(5) Aggregating the relationship path features by using an attention mechanism;
(5.1) combining the context information C(h*, t*) with an attention mechanism to compute the weight of each relation path;
(5.2) weighting and summing all relation paths by their weights α_k to obtain the aggregated relation path feature;
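The attention aggregation of step (5) might look as follows, assuming dot-product scoring between each encoded path p_k and the context vector (the scoring function itself is an assumption, since the description does not specify it):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def aggregate_paths(P, c):
    """Score each encoded path p_k against context c, softmax-normalize
    into weights alpha_k (step 5.1), and return the weighted sum (step 5.2)."""
    P = np.asarray(P)
    alpha = softmax(P @ c)
    return alpha @ P, alpha
```

Paths whose encodings align with the entity-pair context receive larger α_k, which is how "important semantics" are highlighted in the aggregated feature.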
(6) inferring the missing relation r* from the context information C(h*, t*) and the relation path feature;
(6.1) semantically fusing the context information and the relation paths: adding the context information C(h*, t*) and the relation path feature to obtain the fused feature χ(h*, t*);
(6.2) taking χ(h*, t*) as the input of a softmax function, whose output is the missing relation r* between the head entity h* and the tail entity t*;
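Steps (6.1)–(6.2) can be sketched as addition fusion followed by a softmax over candidate relations; scoring χ against each relation embedding by dot product is an assumption (the description only specifies the addition and the softmax):

```python
import numpy as np

def predict_relation(context, path_feat, R):
    """Step (6.1): chi = C(h*, t*) + aggregated path feature.
    Step (6.2): softmax over scores against candidate relation embeddings R;
    the argmax index identifies the predicted missing relation r*."""
    chi = context + path_feat          # additive semantic fusion
    scores = np.asarray(R) @ chi       # assumed dot-product scorer
    e = np.exp(scores - scores.max())
    probs = e / e.sum()
    return int(np.argmax(probs)), probs
```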
(7) adding the head entity h*, the tail entity t* and the relation r* to the knowledge graph as a completed triple.
The above object of the invention is achieved as follows:
the invention relates to a knowledge graph complementing method, which comprises the steps of firstly downloading a knowledge graph and obtaining text description of each relation, then obtaining vector initialization of each relation based on a text embedding mode, and then inputting the vector initialization of each relation into the downloaded knowledge graph to obtain a new knowledge graph; a user provides a triple to be complemented, a head entity and a tail entity of the triple are input into an MSNN, and context information and relationship path characteristics of the entities are respectively extracted through two parallel sub-networks in the MSNN; and finally, deducing the missing relationship according to the context information and the relationship path characteristics, and supplementing the missing relationship into the original knowledge graph.
Meanwhile, the knowledge graph completion method of the invention has the following beneficial effects:
(1) the context information of entities is extracted from the knowledge graph with a convolutional information propagation network; compared with the averaging operation of the propagation function in a conventional information propagation network, the weights of entity relations are computed in a learnable manner, making the weight computation more flexible.
(2) compared with conventional random initialization, the method introduces external text descriptions and obtains the vector initialization of each relation from the ELMo model based on text embedding.
(3) the invention computes the weight of each relation path with an attention mechanism, realizing feature aggregation of the relation paths and further highlighting important semantics.
Drawings
FIG. 1 is a flow chart of a knowledge-graph completion method of the present invention;
FIG. 2 is a block diagram of one embodiment of a knowledge-graph;
FIG. 3 is a textual description diagram of a relationship;
FIG. 4 is a flow diagram of relationship vector initialization based on text embedding;
FIG. 5 is a flow chart of extracting the context information and relation path features of an entity through the MSNN.
Detailed Description
The following describes specific embodiments of the present invention with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
Examples
FIG. 1 is a flow chart of a knowledge-graph completion method of the present invention;
in this embodiment, as shown in fig. 1, the knowledge-graph completion method of the present invention includes the following steps:
s1, acquiring a knowledge graph;
downloading a knowledge graph KG; as shown in FIG. 2, the knowledge graph is composed of triples, each comprising a head entity h, a relation r and a tail entity t, where the i-th triple is denoted (h_i, r_i, t_i) and i = 1, 2, … is the triple index. In this embodiment, as shown in FIG. 2, take the triple (Larry Ellison, leader, Oracle) as an example: the head entity h is Larry Ellison, the relation r is leader, and the tail entity t is Oracle.
obtaining the text description T_i of each relation r_i, where T_i includes textual descriptions of the entity types and of the relation itself; in this embodiment, FIG. 3 shows the text descriptions for the example in which the head entity h is Larry Ellison, the relation r is leader, and the tail entity t is Oracle.
S2, initializing a relation vector based on text embedding, wherein the specific flow is shown in FIG. 4;
S2.1, inputting each text description T_i into a pre-trained ELMo model to obtain a word vector for each word of the text;
S2.2, weighting and summing the word vectors of the words in each text with the SIF (Smooth Inverse Frequency) algorithm to obtain the sentence embedding x_i of each text description T_i;
S2.3, stacking the sentence embeddings x_i as rows into a text embedding matrix; decomposing this matrix by truncated singular value decomposition and taking the singular vector corresponding to the largest singular value, denoted u;
S2.4, computing the common semantics of each sentence embedding x_i as u u^T x_i, where the superscript T denotes transposition;
S2.5, subtracting the common semantics from each sentence embedding x_i to obtain the filtered sentence embedding x_i': x_i' = x_i − u u^T x_i;
S2.6, reducing the dimension of each filtered sentence embedding x_i' by principal component analysis (PCA) to obtain the vector initialization of each relation r_i, denoted r_i*;
S2.7, loading the r_i* into the knowledge graph KG downloaded in step S1; the resulting knowledge graph is denoted KG';
S3, the user provides the triple to be completed, comprising a head entity h* and a tail entity t*, where the relation r* is to be completed; in the present embodiment, the user-provided head entity h* and tail entity t* are Larry Ellison and Oracle, respectively.
S4, as shown in FIG. 5, inputting the head entity h* and the tail entity t* into the MSNN and extracting the entities' context information and relation path features through two parallel sub-networks in the MSNN;
S4.1, extracting the context information C(h*, t*) of h* and t* from KG' with a convolutional information propagation network;
S4.1.1, setting the maximum iteration number K and initializing the current iteration number j = 1;
S4.1.2, inputting the head entity h* and the tail entity t* into the convolutional information propagation network and computing their representations after the j-th iteration, where λ1 denotes the number of relations associated with the head entity h*, λ2 denotes the number of relations associated with the tail entity t*, w1, w2 and b are learnable parameters, and σ(·) is an activation function;
S4.1.3, judging whether the current iteration number j has reached the maximum K; if so, outputting the final semantic representations of the head and tail entities after the K-th iteration; otherwise setting j = j + 1 and returning to step S4.1.2;
S4.2, extracting the set of relation path features of h* and t* from KG' with a bidirectional recurrent neural network;
S4.2.1, constructing the set of possible relation paths between the head entity h* and the tail entity t* with a random walk algorithm, Π = [π_1, π_2, …, π_k, …, π_L], where the k-th relation path is π_k = {r_1, r_2, …, r_l} and r_l denotes the l-th relation;
S4.2.2, encoding each relation path π_k with a bidirectional recurrent neural network (BiGRU);
S4.2.2.1, inputting the relation path π_k into the BiGRU, computing the forward hidden state through the forward GRU network and the backward hidden state through the backward GRU network;
S4.2.2.2, concatenating the forward and backward hidden states and applying a linear transformation to the concatenation to obtain the encoded relation path p_k, where W is the linear transformation matrix;
S4.2.2.3, repeating steps S4.2.2.1–S4.2.2.2 to obtain all encoded relation paths, forming the encoded relation path feature set P = [p_1, p_2, …, p_k, …, p_L];
S5, aggregating the relation path characteristics by using an attention mechanism;
S5.1, combining the context information C(h*, t*) with an attention mechanism to compute the weight of each relation path;
S5.2, weighting and summing all relation paths by their weights α_k to obtain the aggregated relation path feature;
S6, inferring the missing relation r* from the context information C(h*, t*) and the relation path feature;
S6.1, semantically fusing the context information and the relation paths: adding the context information C(h*, t*) and the relation path feature to obtain the fused feature χ(h*, t*);
S6.2, taking χ(h*, t*) as the input of a softmax function, whose output is the missing relation r* between the head entity h* and the tail entity t*;
S7, adding the head entity h*, the tail entity t* and the relation r* to the knowledge graph as a completed triple.
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand the invention, the invention is not limited to the scope of these embodiments. Various changes remaining within the spirit and scope of the invention as defined in the appended claims will be apparent to those skilled in the art, and all inventions making use of the inventive concept are protected.
Claims (1)
1. A knowledge graph completion method, characterized by comprising the following steps:
(1) acquiring a knowledge graph;
downloading a knowledge graph KG, wherein the knowledge graph comprises triples, each triple comprising a head entity h, a relation r and a tail entity t; the i-th triple is denoted (h_i, r_i, t_i), where i = 1, 2, … is the triple index;
obtaining the text description T_i of each relation r_i, where T_i includes textual descriptions of the entity types and of the relation itself;
(2) initializing a relation vector based on text embedding;
(2.1) inputting each text description T_i into a pre-trained ELMo model to obtain a word vector for each word of the text;
(2.2) weighting and summing the word vectors of the words in each text with the SIF (Smooth Inverse Frequency) algorithm to obtain the sentence embedding x_i of each text description T_i;
(2.3) stacking the sentence embeddings x_i as rows into a text embedding matrix; decomposing this matrix by truncated singular value decomposition and taking the singular vector corresponding to the largest singular value, denoted u;
(2.4) computing the common semantics of each sentence embedding x_i as u u^T x_i, where the superscript T denotes transposition;
(2.5) subtracting the common semantics from each sentence embedding x_i to obtain the filtered sentence embedding x_i': x_i' = x_i − u u^T x_i;
(2.6) reducing the dimension of each filtered sentence embedding x_i' by principal component analysis (PCA) to obtain the vector initialization of each relation r_i, denoted r_i*;
(2.7) loading the r_i* into the knowledge graph KG downloaded in step (1); the resulting knowledge graph is denoted KG';
(3) the user provides a triple to be completed, comprising a head entity h* and a tail entity t*, where the relation r* is to be completed;
(4) inputting the head entity h* and the tail entity t* into the MSNN and extracting the entities' context information and relation path features through two parallel sub-networks in the MSNN;
(4.1) extracting the context information C(h*, t*) of h* and t* from KG' with a convolutional information propagation network;
(4.1.1) setting the maximum iteration number K and initializing the current iteration number j = 1;
(4.1.2) inputting the head entity h* and the tail entity t* into the convolutional information propagation network and computing their representations after the j-th iteration, where λ1 denotes the number of relations associated with the head entity h*, λ2 denotes the number of relations associated with the tail entity t*, w1, w2 and b are learnable parameters, and σ(·) is an activation function;
(4.1.3) judging whether the current iteration number j has reached the maximum K; if so, outputting the final semantic representations of the head and tail entities after the K-th iteration; otherwise setting j = j + 1 and returning to step (4.1.2);
(4.2) extracting the set of relation path features of h* and t* from KG' with a bidirectional recurrent neural network;
(4.2.1) constructing the set of possible relation paths between the head entity h* and the tail entity t* with a random walk algorithm, Π = [π_1, π_2, …, π_k, …, π_L], where the k-th relation path is π_k = {r_1, r_2, …, r_l} and r_l denotes the l-th relation;
(4.2.2) encoding each relation path π_k with a bidirectional recurrent neural network (BiGRU);
(4.2.2.1) inputting the relation path π_k into the BiGRU, computing the forward hidden state through the forward GRU network and the backward hidden state through the backward GRU network;
(4.2.2.2) concatenating the forward and backward hidden states and applying a linear transformation to the concatenation to obtain the encoded relation path p_k, where W is the linear transformation matrix;
(4.2.2.3) repeating steps (4.2.2.1)–(4.2.2.2) to obtain all encoded relation paths, forming the encoded relation path feature set P = [p_1, p_2, …, p_k, …, p_L];
(5) Aggregating the relationship path features by using an attention mechanism;
(5.1) combining the context information C(h*, t*) with an attention mechanism to compute the weight of each relation path;
(5.2) weighting and summing all relation paths by their weights α_k to obtain the aggregated relation path feature;
(6) inferring the missing relation r* from the context information C(h*, t*) and the relation path feature;
(6.1) semantically fusing the context information and the relation paths: adding the context information C(h*, t*) and the relation path feature to obtain the fused feature χ(h*, t*);
(6.2) taking χ(h*, t*) as the input of a softmax function, whose output is the missing relation r* between the head entity h* and the tail entity t*;
(7) adding the head entity h*, the tail entity t* and the relation r* to the knowledge graph as a completed triple.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110599781.0A (CN113360664B) (en) | 2021-05-31 | 2021-05-31 | Knowledge graph complementing method |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110599781.0A (CN113360664B) (en) | 2021-05-31 | 2021-05-31 | Knowledge graph complementing method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113360664A (en) | 2021-09-07 |
| CN113360664B (en) | 2022-03-25 |
Family
ID=77528327
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110599781.0A | CN113360664B (Active) | 2021-05-31 | 2021-05-31 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN113360664B (en) |
Families Citing this family (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN115391553B (en) * | 2022-08-23 | 2023-10-13 | Northwestern Polytechnical University | Method for automatically searching time sequence knowledge graph completion model |
| CN115391563B (en) * | 2022-09-01 | 2024-02-06 | Guangdong University of Technology | Knowledge graph link prediction method based on multi-source heterogeneous data fusion |
Citations (7)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN109977234A (en) * | 2019-03-28 | 2019-07-05 | Harbin Engineering University | Knowledge graph completion method based on subject keyword filtering |
| CN110347847A (en) * | 2019-07-22 | 2019-10-18 | Southwest Jiaotong University | Neural-network-based knowledge graph completion method |
| CN111026875A (en) * | 2019-11-26 | 2020-04-17 | Renmin University of China | Knowledge graph completion method based on entity descriptions and relation paths |
| CN112000815A (en) * | 2020-10-28 | 2020-11-27 | iFLYTEK (Suzhou) Technology Co., Ltd. | Knowledge graph completion method and device, electronic equipment and storage medium |
| CN112035672A (en) * | 2020-07-23 | 2020-12-04 | Shenzhen Technology University | Knowledge graph completion method, device, equipment and storage medium |
| CN112463987A (en) * | 2020-12-09 | 2021-03-09 | Beijing Preparatory Office of the Chinese Garden Museum | Chinese classical garden knowledge graph completion and cognitive reasoning method |
| KR102234850B1 (en) * | 2019-11-15 | 2021-04-02 | Soongsil University Industry-Academia Cooperation Foundation | Method and apparatus for complementing knowledge based on relation network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170124497A1 (en) * | 2015-10-28 | 2017-05-04 | Fractal Industries, Inc. | System for automated capture and analysis of business information for reliable business venture outcome prediction |
CN112015868B (en) * | 2020-09-07 | 2022-08-26 | Chongqing University of Posts and Telecommunications | Question-answering method based on knowledge graph completion |
CN112288091B (en) * | 2020-10-30 | 2023-03-21 | Southwest China Institute of Electronic Technology (10th Research Institute of China Electronics Technology Group Corporation) | Knowledge inference method based on multi-modal knowledge graph |
- 2021-05-31: CN application CN202110599781.0A filed; granted as patent CN113360664B (status: Active)
Non-Patent Citations (3)
Title |
---|
Yucong Duan; "Existence Computation: Revelation on Entity vs. Relationship for Relationship Defined Everything of Semantics"; 2019 20th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing; 2019-12-19; pp. 1-6 * |
Ni Liwang; "Research on Knowledge Graph Completion Technology Based on Deep Learning"; China Master's Theses Full-text Database (Information Science and Technology); 2020-07-15; I138-1478 * |
Sun Wei et al.; "Graph Attention Matrix Completion Based on Knowledge Graph Context"; Computer Engineering and Applications; 2021-04-20; pp. 1-14 * |
Also Published As
Publication number | Publication date |
---|---|
CN113360664A (en) | 2021-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | A survey on evolutionary neural architecture search | |
CN108108854B (en) | Urban road network link prediction method, system and storage medium | |
CN110083705A (en) | A kind of multi-hop attention depth model, method, storage medium and terminal for target emotional semantic classification | |
CN113360664B (en) | Knowledge graph complementing method | |
Quilodrán-Casas et al. | Digital twins based on bidirectional LSTM and GAN for modelling the COVID-19 pandemic | |
CN111046187B (en) | Sample knowledge graph relation learning method and system based on confrontation type attention mechanism | |
CN111753054B (en) | Machine reading inference method based on graph neural network | |
CN112529166A (en) | Fusion neuron model, neural network structure, training and reasoning method, storage medium and device | |
CN111400452B (en) | Text information classification processing method, electronic device and computer readable storage medium | |
CN115099219B (en) | Aspect-level emotion analysis method based on enhancement graph convolutional neural network | |
CN113065649B (en) | Complex network topology graph representation learning method, prediction method and server | |
CN115659234A (en) | Heterogeneous graph representation learning method integrating text attributes | |
CN114565053A (en) | Deep heterogeneous map embedding model based on feature fusion | |
CN114692602A (en) | Drawing convolution network relation extraction method guided by syntactic information attention | |
CN115952424A (en) | Graph convolution neural network clustering method based on multi-view structure | |
CN116028604A (en) | Answer selection method and system based on knowledge enhancement graph convolution network | |
Lyu et al. | A survey of model compression strategies for object detection | |
CN114969078A (en) | Method for updating expert research interest of federated learning through real-time online prediction | |
CN117874238A (en) | Method, system, equipment and storage medium for analyzing emotion of online public opinion text | |
CN116469155A (en) | Complex action recognition method and device based on learnable Markov logic network | |
CN117743597A (en) | Method, system, equipment and medium for detecting social robots in social network | |
CN117172376A (en) | Graph structure link prediction method and system of generated graph neural network based on network reconstruction | |
CN112015890A (en) | Movie scenario abstract generation method and device | |
CN116737943A (en) | News field-oriented time sequence knowledge graph link prediction method | |
Xia et al. | Efficient synthesis of compact deep neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||