CN113836318A - Dynamic knowledge graph completion method and device and electronic equipment - Google Patents


Publication number
CN113836318A
Authority
CN
China
Prior art keywords
entity
embedding
time step
dynamic
structured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111131711.9A
Other languages
Chinese (zh)
Inventor
李直旭
陈志刚
何莹
曹思远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Intelligent Voice Innovation Development Co ltd
Iflytek Suzhou Technology Co Ltd
Original Assignee
Hefei Intelligent Voice Innovation Development Co ltd
Iflytek Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Intelligent Voice Innovation Development Co ltd, Iflytek Suzhou Technology Co Ltd filed Critical Hefei Intelligent Voice Innovation Development Co ltd
Priority to CN202111131711.9A
Publication of CN113836318A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/38 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/383 Retrieval using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a dynamic knowledge graph completion method and device and electronic equipment. The dynamic knowledge graph completion method comprises the following steps: acquiring the known entity, the known relation, and the known time information in a quadruple relational expression of the dynamic knowledge graph to be completed; obtaining a final entity embedding corresponding to the known time information according to the known entity and the known relation; and predicting the probability that the final entity embedding corresponds to the missing entity. Obtaining the final entity embedding corresponding to the known time information comprises: taking the known entity, the known relation, and a first time step corresponding to the known time information as the input of a structure encoder to obtain a first structured entity embedding of the first time step, and taking the first structured entity embedding as the final entity embedding. By representing the multi-hop structural information of the dynamic knowledge graph through structured entity embeddings, the method and device fully mine neighborhood information and improve the completion rate of the dynamic knowledge graph.

Description

Dynamic knowledge graph completion method and device and electronic equipment
Technical Field
The invention relates to the technical field of computers, and in particular to a dynamic knowledge graph completion method and device and electronic equipment.
Background
In recent years, knowledge graphs have received wide attention from academia as a form of structured human knowledge. According to existing research, knowledge graphs can be divided into static knowledge graphs and dynamic knowledge graphs. A dynamic knowledge graph contains a large amount of dynamic factual knowledge. However, dynamic knowledge graphs still suffer from incompleteness. Reasoning over and completing the dynamic factual knowledge missing from a dynamic knowledge graph is a challenging task, and is important for applications such as event prediction, social network analysis, and recommendation systems.
A static knowledge graph represents facts as triples (subject, predicate, object), such as (Fu Mingxia, participates_in, diving competition), while a dynamic knowledge graph associates each triple with a timestamp, such as (Fu Mingxia, participates_in, diving competition, 2000). A dynamic knowledge graph is considered to consist of discrete timestamps, which means that it can be represented as a series of static knowledge-graph snapshots; the task of reasoning about and completing the missing facts in such a graph is called Temporal Knowledge Graph Completion (TKGC).
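As a minimal illustration of the snapshot view described above, the timestamped quadruple facts of a dynamic knowledge graph can be grouped by timestamp into static snapshots. The following Python sketch uses hypothetical facts; the entity and relation names are illustrative only:

```python
from collections import defaultdict

def build_snapshots(quadruples):
    """Group (subject, relation, object, timestamp) facts into static snapshots."""
    snapshots = defaultdict(set)
    for s, r, o, t in quadruples:
        snapshots[t].add((s, r, o))
    return dict(snapshots)

facts = [
    ("Fu Mingxia", "won", "3m springboard", 1996),
    ("Fu Mingxia", "participated_in", "Sydney Olympics", 2000),
    ("Fu Mingxia", "won", "3m springboard", 2000),
]
snaps = build_snapshots(facts)
# snaps[2000] is the static snapshot G^(2000) containing two triples
```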
Knowledge Graph Embedding (KGE) is a precondition of and support for Knowledge Graph Completion (KGC); it aims to map entities and relations to a low-dimensional vector space, thereby representing the semantic information of the entities and relations. Traditional knowledge graph embedding methods ignore the known time information and are not competent for knowledge reasoning tasks involving the time dimension. To solve this problem, researchers at home and abroad have recently begun to encode the known time information into knowledge graph representations to improve the performance of knowledge graph inference and completion. Such a knowledge graph representation containing known temporal information may be referred to as a Temporal Knowledge Graph Embedding (TKGE), used for inference over and completion of the dynamic knowledge graph. However, most existing dynamic knowledge graph embedding methods simply embed the known time information into the knowledge representation. These methods are preliminary: they consider only the known time information while ignoring the topological structure information of the graph, so there is large room for progress in the comprehensive modeling of temporal and structural information.
Current work on dynamic knowledge graph embedding is mainly concerned with how to embed the known time information into the knowledge representation. The earliest work proposed first learning the temporal order between relations (e.g., wasBornIn → wonPrize → diedIn), and then incorporating these relation orders as constraints in the knowledge graph representation stage, without directly incorporating the known time information into the learned representations. TTransE proposes several ways of representing the known time information: concatenating the known time information with the relation; representing time in the same vector space as the entities and relations, with each time point having a separate representation; and using time points as coefficients that influence the representation of the triple relation. HyTE partitions the dynamic knowledge graph into multiple static sub-graphs, each corresponding to a timestamp, then projects the entities and relations of each sub-graph onto a timestamp-specific hyperplane, jointly learning the hyperplane (normal vector) representations and the knowledge-graph element representations over time. However, it performs poorly when the number of timestamps is large, and cannot generalize to new timestamps. TA-TransE decomposes a given timestamp into a sequence of time tokens, concatenates the relation token and temporal modifier tokens (such as "since" or "until") with the timestamp sequence, and feeds the concatenation into an LSTM to obtain a predicate-sequence representation. DE-SimplE considers that some characteristics of an entity are fixed while others may change over time, and therefore proposes diachronic embedding functions over time to control the entity's characteristic representation at different points in time.
Existing work on dynamic knowledge graph completion mainly focuses on improving the representation of the known time information, studying time-dependent scoring functions, and combining them with static knowledge graph representation methods to score the plausibility of missing facts, thereby accomplishing the dynamic knowledge graph completion task. Although these methods can effectively complete missing dynamic factual knowledge, they do not consider the multi-hop structural information in the dynamic knowledge graph, and their mining of neighborhood information and completion performance are poor.
In addition, existing methods lack the ability to answer queries using temporal facts in nearby knowledge-graph snapshots. A fact such as (Fu Mingxia, won, 3m springboard, 1996) or (Fu Mingxia, participated_in, Sydney Olympics, 2000) helps to answer the tail-entity query (Fu Mingxia, won, ?, 2000).
Disclosure of Invention
In view of the foregoing, the present invention is directed to a dynamic knowledge graph completion method, device, and electronic equipment, and accordingly provides a computer-readable storage medium. By representing the multi-hop structural information of the dynamic knowledge graph through structured entity embeddings, neighborhood information can be sufficiently mined and the completion rate of the dynamic knowledge graph can be improved.
The technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a dynamic knowledge graph completion method, including:
acquiring the known entity, the known relation, and the known time information in a quadruple relational expression of the dynamic knowledge graph to be completed, wherein the known entity comprises a head entity or a tail entity;
obtaining final entity embedding corresponding to the known time information according to the known entity and the known relation;
predicting the probability that the final entity embedding corresponds to the missing entity;
wherein obtaining the final entity embedding corresponding to the known time information comprises:
taking a first time step corresponding to the known entity, the known relationship and the known time information as an input of a structure encoder, and obtaining a first structured entity embedding of the first time step;
embedding the first structured entity as the final entity embedding.
In one possible implementation manner, obtaining the final entity embedding corresponding to the known time information further includes:
taking the last time step of the known entity in the active state before the first time step as a second time step;
using the first structured entity embedding and the first dynamic entity embedding of the second time step as input of a time encoder, obtaining the second dynamic entity embedding of the first time step as final dynamic entity embedding of the first time step;
and,
the final dynamic entity embedding is taken as the final entity embedding.
In one possible implementation manner, obtaining the final entity embedding corresponding to the known time information further includes:
taking the last time step of the known entity in the active state before the first time step as a second time step;
using the known entity, the known relationship, and the second time step as inputs to a structural encoder, obtaining a second structured entity embedding for the second time step;
obtaining a third structured entity embedding for the first time step by combining the first structured entity embedding and the second structured entity embedding through decay theory;
using the third structured entity embedding and the first dynamic entity embedding at the second time step as input of a time encoder, obtaining the third dynamic entity embedding at the first time step as a final dynamic entity embedding at the first time step;
and,
the final dynamic entity embedding is taken as the final entity embedding.
In one possible implementation manner, obtaining the final entity embedding corresponding to the known time information further includes:
obtaining a synthetic entity embedding as the final entity embedding according to the first structured entity embedding and the final dynamic entity embedding by using a gating mechanism.
In one possible implementation, the structural encoder includes a first training model of a multi-relationship based messaging neural network.
In one possible implementation, the time encoder includes a second training model based on a recurrent neural network.
In one possible implementation manner, obtaining the second dynamic entity embedding of the first time step includes:
calculating a first decay rate of the first dynamic entity embedding;
calculating a fourth dynamic entity embedding of the second time step according to the first decay rate and the first dynamic entity embedding;
and taking the first structured entity embedding and the fourth dynamic entity embedding as the input of the time encoder to obtain the second dynamic entity embedding of the first time step.
In one possible implementation, obtaining the third structured entity embedding includes:
calculating a second decay rate of the known entity between the second time step and the first time step, the second decay rate serving as a first weight of the second structured entity embedding in the third structured entity embedding;
and calculating a weighted sum of the first structured entity embedding and the second structured entity embedding using the first weight, as the third structured entity embedding.
In one possible implementation, obtaining the third structured entity embedding includes:
calculating a second decay rate of the known entity between the second time step and the first time step, the second decay rate serving as a first weight of the second structured entity embedding in the third structured entity embedding;
taking the nearest time step after the first time step at which the known entity is in an active state as a third time step;
taking the known entity, the known relation, and the third time step as the input of the structure encoder, and acquiring a fourth structured entity embedding of the third time step;
calculating a third decay rate of the known entity between the third time step and the first time step, and a fourth decay rate of the known entity within the first time step, the third decay rate serving as a second weight of the fourth structured entity embedding in the third structured entity embedding, and the fourth decay rate serving as a third weight of the first structured entity embedding in the third structured entity embedding;
and calculating a weighted sum of the first, second, and fourth structured entity embeddings with their corresponding weights, as the third structured entity embedding.
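The weighted combination described in the steps above can be sketched as follows. The exponential decay form and all parameter values are assumptions for illustration, not the patent's exact formulas:

```python
import math

def decay(dt, lam=0.5, b=0.0):
    """Decay rate exp(-max(0, lam*dt + b)): nearer time steps get larger weights."""
    return math.exp(-max(0.0, lam * dt + b))

def three_way_combine(x_t, x_past, x_future, t, t_minus, t_plus):
    """Hypothetical weighted sum over the current, past, and future structured
    embeddings, each scaled by a decay rate relative to time step t."""
    w_now = decay(0)            # distance 0 gives weight 1 with the defaults above
    w_past = decay(t - t_minus)
    w_future = decay(t_plus - t)
    return [w_now * c + w_past * p + w_future * f
            for c, p, f in zip(x_t, x_past, x_future)]
```

With these defaults, an embedding from a snapshot two steps away contributes less than one from an adjacent snapshot, matching the intuition that older (or more distant) structural information should be down-weighted.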
In one possible implementation, obtaining the synthetic entity embedding includes:
respectively calculating the frequencies, over a time window, of the known entity, the known relation, and the (known entity, known relation) pair, wherein the time window contains the time point or time period corresponding to the known time information;
forming, from the frequencies of the known entity, the known relation, and the (known entity, known relation) pair, a frequency vector of the missing entity of the dynamic knowledge graph to be completed;
obtaining, according to the frequency vector, a fourth weight of the first structured entity embedding in the synthetic entity embedding;
and calculating a weighted sum of the first structured entity embedding and the final dynamic entity embedding according to the fourth weight, as the synthetic entity embedding.
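A minimal sketch of the frequency-based gating described above, assuming a simple sigmoid gate over the frequency vector; the gate's functional form and parameters are hypothetical:

```python
import math

def frequency_vector(quads, s, r, window):
    """Count occurrences of the entity, the relation, and the (entity, relation)
    pair inside a time window -- a hypothetical form of the frequency vector."""
    lo, hi = window
    in_win = [q for q in quads if lo <= q[3] <= hi]
    f_s = sum(1 for q in in_win if q[0] == s)
    f_r = sum(1 for q in in_win if q[1] == r)
    f_sr = sum(1 for q in in_win if q[0] == s and q[1] == r)
    return [f_s, f_r, f_sr]

def gate_weight(freq, w, b):
    """Sigmoid gate mapping the frequency vector to a scalar weight in (0, 1)."""
    z = sum(wi * fi for wi, fi in zip(w, freq)) + b
    return 1.0 / (1.0 + math.exp(-z))

def combine(x_struct, z_dyn, alpha):
    """Weighted sum of structured and dynamic embeddings (the synthetic embedding)."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(x_struct, z_dyn)]

quads = [("e1", "r1", "e2", 3), ("e1", "r2", "e3", 4), ("e2", "r1", "e1", 4)]
fv = frequency_vector(quads, "e1", "r1", window=(3, 4))
# fv counts e1 as subject (2), r1 as relation (2), and the pair (e1, r1) (1)
```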
In a second aspect, the present invention further provides a dynamic knowledge graph completion device, which includes a task receiving module, a final entity embedding obtaining module, and a prediction module;
the task receiving module is used for acquiring the known entity, the known relation, and the known time information in a quadruple relational expression of the dynamic knowledge graph to be completed, wherein the known entity comprises a head entity or a tail entity;
the final entity embedding obtaining module is used for obtaining final entity embedding corresponding to the known time information according to the known entity and the known relation;
the prediction module is used for predicting the probability that the final entity is embedded as a missing entity;
the final entity embedding obtaining module includes a structure encoder, and the structure encoder is configured to take the known entity, the known relation, and the first time step corresponding to the known time information as input, obtain a first structured entity embedding of the first time step, and use the first structured entity embedding as the final entity embedding.
In one possible implementation manner, the final entity embedding obtaining module further includes a time step determining submodule and a time encoder;
the time step determination submodule is used for taking the last time step of the known entity in the active state before the first time step as a second time step;
the time encoder is configured to obtain a second dynamic entity embedding of the first time step as a final dynamic entity embedding of the first time step by using the first structured entity embedding and the first dynamic entity embedding of the second time step as input of the time encoder, and to use the final dynamic entity embedding as the final entity embedding.
In one possible implementation, the structure encoder is further configured to obtain, using decay theory, a third structured entity embedding for the first time step by combining the first structured entity embedding and the second structured entity embedding of the second time step;
the time encoder is further configured to obtain a third dynamic entity embedding for the first time step using the third structured entity embedding and the first dynamic entity embedding for the second time step, as a final dynamic entity embedding for the first time step, and to embed the final dynamic entity as the final entity embedding.
In one possible implementation manner, the final entity embedding obtaining module further includes a gate unit, and the gate unit is configured to obtain a synthetic entity embedding according to the first structured entity embedding and the final dynamic entity embedding, as the final entity embedding.
In a third aspect, the present invention further provides an electronic device, including:
one or more processors, memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the dynamic knowledge-graph completion method described above.
In a fourth aspect, the present invention also provides a computer-readable storage medium in which a computer program is stored; when run on a computer, the program causes the computer to perform the above-mentioned dynamic knowledge graph completion method.
In a fifth aspect, the present invention also provides a computer program product for performing the method of the first aspect or any possible implementation manner of the first aspect, when the computer program product is executed by a computer.
In a possible design of the fifth aspect, the relevant program related to the product may be stored in whole or in part on a memory packaged with the processor, or may be stored in part or in whole on a storage medium not packaged with the processor.
The concept of the invention is as follows: when completing the dynamic knowledge graph, a structure encoder is adopted to capture the topological structure information in the knowledge graph and screen the neighborhood information of entities, and the knowledge graph is completed based on the screening result, thereby improving the structuring and completion rate of the dynamic knowledge graph. The invention also considers temporal facts in nearby knowledge-graph snapshots, using a time encoder to integrate entity representation information across time steps, which optimizes the topological structure information of entities and provides a solid foundation for accurate completion. In addition, the invention eliminates the adverse effects of temporal variability and temporal sparsity on dynamic knowledge graph completion.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the dynamic knowledge graph completion method provided by the present invention;
FIG. 2 is a flow chart of a preferred embodiment of obtaining the final entity embedding provided by the present invention;
FIG. 3 is a flow chart of obtaining the third structured entity embedding of the first time step provided by the present invention;
FIG. 4 is a block diagram of the dynamic knowledge graph completion device provided by the present invention;
FIG. 5 is a block diagram of the final entity embedding obtaining module provided by the present invention;
FIG. 6 is a schematic diagram of the dynamic knowledge graph completion device corresponding to FIG. 2;
FIG. 7 is a schematic structural diagram of an embodiment of the electronic device provided by the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
First, the dynamic knowledge graph completion task is described as follows:

Given a dynamic knowledge graph $\mathcal{G} = \{G^{(0)}, G^{(1)}, \ldots, G^{(T)}\}$, where $G^{(t)} = (E, R, D^{(t)})$ and $t \in \{0, 1, \ldots, T\}$, $G^{(t)}$ is the knowledge-graph snapshot at time step $t$. $E$ and $R$ respectively denote the union of the entity sets and the union of the relation sets of the knowledge-graph snapshots at all time steps, and both are known. $D^{(t)}$ denotes the set of triples $(s, r, o)$ contained in the knowledge-graph snapshot at time step $t$, where $s \in E$, $o \in E$, $r \in R$. Let $\hat{D}^{(t)}$ denote the set of triples that are correct at time step $t$, i.e., $D^{(t)} \subseteq \hat{D}^{(t)}$. The dynamic knowledge graph completion problem is: given a head entity query $(?, r, o, t)$ or a tail entity query $(s, r, ?, t)$ whose completed triple satisfies $(s, r, o) \in \hat{D}^{(t)}$ but $(s, r, o) \notin D^{(t)}$, rank the candidate head or tail entities.
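A toy instance of this formulation, with made-up entity and relation identifiers: the observed snapshot lacks a fact that belongs to the set of correct triples, and the completion task is to recover it.

```python
# Hypothetical toy instance: D_t is the observed snapshot at time step t,
# D_hat_t additionally contains the missing (correct) fact.
D_t = {("s1", "r1", "o1"), ("s2", "r1", "o2")}
D_hat_t = D_t | {("s1", "r1", "o2")}

missing = D_hat_t - D_t
# the tail-entity query (s1, r1, ?, t) should rank o2 highest
```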
In view of the foregoing core concept, the present invention provides at least one embodiment of a dynamic knowledge-graph completion method, as shown in fig. 1, which may include the following steps:
s110: and acquiring known entities, known relations and known time information in the four-tuple relational expression of the dynamic knowledge graph to be complemented, wherein the known entities comprise a head entity or a tail entity.
For example, in the above-mentioned head entity query (. In the above-mentioned tail entity query (s, r,.
The present invention is described below with reference to the tail entity query (s, r,.
S120: and obtaining final entity embedding corresponding to the known time information according to the known entity and the known relation.
Specifically, final entity embedding is obtained using at least one training model. This section is further explained in the following.
S130: predicting a probability that the final entity is embedded as a missing entity.
Specifically, as one embodiment, the probability is predicted using a scoring function of the trained model.
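The patent does not fix a particular scoring function; the sketch below uses a DistMult-style trilinear score purely to illustrate how candidate entities could be ranked against a query embedding:

```python
def distmult_score(e_s, w_r, e_o):
    """Trilinear score <e_s, w_r, e_o> = sum_k e_s[k] * w_r[k] * e_o[k]."""
    return sum(a * b * c for a, b, c in zip(e_s, w_r, e_o))

def rank_tails(e_s, w_r, candidates):
    """Rank candidate tail-entity embeddings by descending score."""
    scored = [(name, distmult_score(e_s, w_r, e)) for name, e in candidates.items()]
    return sorted(scored, key=lambda p: -p[1])

cands = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
ranking = rank_tails([1.0, 0.0], [1.0, 1.0], cands)
# "a" scores 1.0 and "b" scores 0.0, so "a" ranks first
```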
As an embodiment, in step S120, obtaining the final entity embedding corresponding to the known time information includes:
s1201: and taking the known entity, the known relation and a first time step t corresponding to the known time information as the input of a structure encoder, and obtaining the first structured entity embedding of the first time step. And predicting the probability of embedding the first structured entity as the final entity embedding.
The fabric coder is used for taking a snapshot G based on the dynamic knowledge-graph in each time step(t)Structured entity embedding is generated.
In particular, as one embodiment, the structure encoder comprises at least a first training model based on a multi-relational message passing neural network. As an example, the first training model is a Relational Graph Convolutional Network (RGCN) model; please refer to FIG. 6.
Based on an RGCN model with $L$ layers, the known entity, the known relation, and the first time step $t$ corresponding to the known time information are taken as the input of the structure encoder, and the output of the model is the hidden-layer embedding $h_{s,t}^{(L)}$ of the $L$-th layer at the first time step $t$, which is taken as the first structured entity embedding $x_{s,t}$. This hidden-layer embedding summarizes the neighborhood information of the $L$-hop multi-hop structure within the dynamic knowledge-graph snapshot $G^{(t)}$.

The hidden-layer embedding of the $(l+1)$-th layer is

$$h_i^{(l+1)} = \sigma\left(\sum_{r \in R} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_s^{(l)} h_i^{(l)}\right)$$

where $h_i^{(0)} = W_0 u_i$ is the hidden-layer embedding output by layer 0 at the first time step $t$, $W_0$ is the entity embedding matrix, $u_i$ is the one-hot embedding of the node entity $e_i$, $W_r^{(l)}$ and $W_s^{(l)}$ are the relation transformation matrix and the head entity transformation matrix of the $l$-th layer of the training model, $N_i^r$ is the set of neighborhood entities of the node entity $e_i$ connected by relation $r$, and its size $c_{i,r} = |N_i^r|$ serves as a normalization constant for averaging the neighborhood node information.

Thus, the first structured entity embedding at the first time step is $x_{s,t} = h_{s,t}^{(L)}$.
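A minimal NumPy sketch of one such relation-aware message-passing layer in the spirit of the RGCN structure encoder above, assuming mean aggregation and a ReLU activation; the weights and toy graph are illustrative assumptions:

```python
import numpy as np

def rgcn_layer(h, neighbors, W_rel, W_self):
    """One RGCN-style layer:
    h_i' = ReLU( sum_r sum_{j in N_i^r} (1/|N_i^r|) W_r h_j + W_s h_i ).
    `neighbors[r][i]` lists node i's neighbors under relation r."""
    out = h @ W_self.T                 # self-loop term W_s h_i for every node
    for r, adj in neighbors.items():
        for i, nbrs in adj.items():
            if nbrs:
                msg = sum(h[j] for j in nbrs) / len(nbrs)  # mean over N_i^r
                out[i] = out[i] + W_rel[r] @ msg
    return np.maximum(out, 0.0)        # ReLU activation

# two nodes, one relation: node 0 receives a message from node 1
h0 = np.eye(2)
out = rgcn_layer(h0, {"r": {0: [1]}}, {"r": np.eye(2)}, np.eye(2))
```

With identity weight matrices, node 0's output is its own embedding plus the averaged message from node 1, which makes the aggregation easy to check by hand.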
It should be understood that any other multi-relational graph encoder built on a multi-relation-based message passing neural network may be adopted as the first training model, such as CompGCN, EdgeGAT, etc.
In this embodiment, the structural properties of the multi-relation-based message passing neural network are used to capture the $L$-hop topological structure information in the knowledge graph, so that the neighborhood information of the entity is screened according to the known relations, and the knowledge graph is completed on the basis of the screening result, improving the structuring and completion rate of the dynamic knowledge graph.
Neither the existing methods nor the embodiment above exploit temporal facts in the knowledge-graph snapshots around the known time information of a query to answer that query. For example, the fact (Fu Mingxia, won, 3m springboard, 1996) or (Fu Mingxia, participated_in, Sydney Olympics, 2000) helps to answer the query (Fu Mingxia, won, ?, 2000).
In view of the above, in a preferred embodiment, obtaining the final entity embedding corresponding to the known time information further comprises:

S1202: taking the last time step before the first time step $t$ at which the known entity is in the active state as the second time step $t^-$.

S1203: taking the first structured entity embedding $x_{s,t}$ and the first dynamic entity embedding $z_{s,t^-}$ of the second time step $t^-$ as the input of a time encoder, and obtaining the second dynamic entity embedding $z_{s,t}$ of the first time step $t$.

In the preferred embodiment, the second dynamic entity embedding $z_{s,t}$ of the first time step $t$ is taken as the final dynamic entity embedding of the first time step $t$, and the final dynamic entity embedding is taken as the final entity embedding, as shown in FIG. 2.

The time encoder is used to integrate structured entity embeddings across time steps, so that temporal facts in knowledge-graph snapshots near the known time information of a query can be used to answer the query. In particular, the time encoder comprises a second training model based on a recurrent neural network (e.g., Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), etc.).

The preferred embodiment is described below by taking GRU as an example.
In one possible embodiment, obtaining the second dynamic entity embedding $z_{s,t}$ of the first time step comprises:

S12031: calculating the first decay rate $\delta_{t^-}$ of the first dynamic entity embedding:

$$\delta_{t^-} = \exp\left(-\max\left(0, \lambda_z (t - t^-) + b_z\right)\right)$$

where $\lambda_z$ and $b_z$ are learnable parameters, $t^- \in \{t-\tau, \ldots, t-1\}$, and $\tau$ is the number of knowledge-graph snapshots input to the model in time; see FIG. 6.

S12032: calculating the fourth dynamic entity embedding $\tilde{z}_{s,t^-}$ of the second time step according to the first decay rate $\delta_{t^-}$ and the first dynamic entity embedding $z_{s,t^-}$:

$$\tilde{z}_{s,t^-} = \delta_{t^-} \cdot z_{s,t^-}$$

When $t^- \in \{t-\tau, \ldots, t-1\}$, $z_{s,t^-}$ is a non-zero vector.

S12033: taking the first structured entity embedding $x_{s,t}$ and the fourth dynamic entity embedding $\tilde{z}_{s,t^-}$ as the input of the time encoder, and obtaining the second dynamic entity embedding of the first time step $t$:

$$z_{s,t} = \mathrm{GRU}\left(x_{s,t}, \tilde{z}_{s,t^-}\right)$$
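A minimal NumPy sketch of a decay-then-GRU update in the spirit of the time encoder above; the exponential decay form, the GRU parameterization, and all numeric values are illustrative assumptions:

```python
import numpy as np

def decay(dt, lam, b):
    """Decay rate exp(-max(0, lam*dt + b)); equals 1 at dt = 0 when b = 0."""
    return float(np.exp(-max(0.0, lam * dt + b)))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """A standard GRU update; here x plays the role of the structured
    embedding and h the decayed previous dynamic embedding."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(Wz @ x + Uz @ h)               # update gate
    r = sig(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

d = 2
rng = np.random.default_rng(0)
Wz, Uz, Wr, Ur, Wh, Uh = (rng.standard_normal((d, d)) * 0.1 for _ in range(6))
z_prev = np.ones(d)
z_decayed = decay(3, 0.5, 0.0) * z_prev    # dampen the stale history first
z_new = gru_cell(np.ones(d), z_decayed, Wz, Uz, Wr, Ur, Wh, Uh)
```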
In the preferred embodiment, the second training model may visit all time steps during training. Although there is missing data at each time step, all (incomplete) knowledge-graph snapshot information $D^{(t)}$ is available during the training process.
In the preferred embodiment, information from particular time steps prior to the first time step $t$ is integrated, so that the entity query can reference information from past time steps.
Although the above embodiments address data structuring and the integration of information at nearby time steps well, such dynamic knowledge-graph data also suffers from temporal sparsity: at each time step, only a small fraction of entities are active in the knowledge-graph snapshot (an entity is active at the time step corresponding to a knowledge-graph snapshot if it has at least one neighboring entity in that snapshot). Existing knowledge-graph completion methods generally assign the same embedding to an inactive entity at different time steps, which cannot fully represent time-sensitive characteristics.
Based on the above problem, the present application proposes a preferred embodiment. On the basis of step S1202, obtaining the final entity embedding corresponding to the known time information further includes:
S1204: taking the known entity, the known relationship, and the second time step t⁻ as input to the structure encoder to obtain the second structured entity embedding x_{s,t⁻} of the second time step t⁻.

S1205: using imputation, combining the first structured entity embedding x_{s,t} and the second structured entity embedding x_{s,t⁻} to obtain the third structured entity embedding x̃_{s,t} of the first time step t.
As a possible implementation, as shown in fig. 3, obtaining the third structured entity embedding includes:

S12051: calculating a second decay rate γ^x_{s,t⁻} of the known entity between the second time step t⁻ and the first time step t:

γ^x_{s,t⁻} = exp(−max(0, λ_x (t − t⁻) + b_x))

wherein λ_x and b_x are learnable parameters. The second decay rate γ^x_{s,t⁻} serves as the first weight of the second structured entity embedding x_{s,t⁻} in the third structured entity embedding x̃_{s,t}.

S12052: calculating a weighted sum of the first structured entity embedding x_{s,t} and the second structured entity embedding x_{s,t⁻} using the first weight, as the third structured entity embedding x̃_{s,t}:

x̃_{s,t} = x_{s,t} + γ^x_{s,t⁻} ⊙ x_{s,t⁻}
This embodiment addresses the temporal sparsity caused by entities that are inactive in the knowledge-graph snapshots before the first time step t. On this basis, in a preferred embodiment, obtaining the third structured entity embedding further comprises:
After S12051, S12053 is executed: taking the nearest time step after the first time step t at which the known entity is in the active state as the third time step t⁺.

S12054: taking the known entity, the known relationship, and the third time step t⁺ as input to the structure encoder to obtain the fourth structured entity embedding x_{s,t⁺} of the third time step t⁺.

S12055: calculating a third decay rate γ^x_{s,t⁺} of the known entity between the third time step and the first time step, and a fourth decay rate γ^x_{s,t} of the known entity within the first time step:

γ^x_{s,t⁺} = exp(−max(0, λ_x (t⁺ − t) + b_x))

γ^x_{s,t} = exp(−max(0, b_x))

The third decay rate γ^x_{s,t⁺} serves as the second weight of the fourth structured entity embedding x_{s,t⁺} in the third structured entity embedding x̃_{s,t}, and the fourth decay rate γ^x_{s,t} serves as the third weight of the first structured entity embedding x_{s,t} in x̃_{s,t}.

S12056: calculating a weighted sum of the first structured entity embedding x_{s,t}, the second structured entity embedding x_{s,t⁻}, and the fourth structured entity embedding x_{s,t⁺} with their corresponding decay rates, as the third structured entity embedding x̃_{s,t}:

x̃_{s,t} = γ^x_{s,t} ⊙ x_{s,t} + γ^x_{s,t⁻} ⊙ x_{s,t⁻} + γ^x_{s,t⁺} ⊙ x_{s,t⁺}

where γ^x_{s,t⁻} is the second decay rate computed in S12051.
The preferred embodiment alleviates the temporal sparsity caused by entities that are inactive in the knowledge-graph snapshots at both past and future time steps.
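The bidirectional imputation of S12051 to S12056 can be sketched as follows. This sketch assumes shared decay parameters λ_x and b_x for past, present, and future, zero elapsed time within the first time step, and an unnormalized weighted sum; these details, along with the function names, are assumptions, since the patent figures omit the exact formulas.

```python
import numpy as np

def decay_rate(delta_t, lam_x, b_x):
    # gamma = exp(-max(0, lambda_x * delta_t + b_x))
    return np.exp(-np.maximum(0.0, lam_x * delta_t + b_x))

def impute_structured(x_t, x_past, x_future, t, t_minus, t_plus, lam_x, b_x):
    """Combine the current, last-active (t-), and next-active (t+)
    structured embeddings of an inactive entity into x_tilde."""
    g_past = decay_rate(t - t_minus, lam_x, b_x)    # second decay rate (first weight)
    g_future = decay_rate(t_plus - t, lam_x, b_x)   # third decay rate (second weight)
    g_now = decay_rate(0, lam_x, b_x)               # fourth decay rate (third weight)
    return g_now * x_t + g_past * x_past + g_future * x_future

x_t = np.zeros(4)          # inactive at t: e.g. a default structured embedding
x_past = np.ones(4)        # second structured entity embedding x_{s,t-}
x_future = 2 * np.ones(4)  # fourth structured entity embedding x_{s,t+}
x_tilde = impute_structured(x_t, x_past, x_future, t=10, t_minus=7, t_plus=12,
                            lam_x=0.5, b_x=0.0)
```

Snapshots closer in time to the first time step receive a larger weight, so the nearer future snapshot (gap 2) dominates the older past snapshot (gap 3) in this example.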
S1206: taking the third structured entity embedding x̃_{s,t} and the first dynamic entity embedding z_{s,t⁻} of the second time step as input to the time encoder to obtain the third dynamic entity embedding z′_{s,t} of the first time step:

z′_{s,t} = GRU(x̃_{s,t}, γ^z_{s,t⁻} ⊙ z_{s,t⁻})

where γ^z_{s,t⁻} is the first decay rate computed in S12031.
In the preferred embodiment, the third dynamic entity embedding z′_{s,t} is taken as the final dynamic entity embedding of the first time step, and the final dynamic entity embedding is taken as the final entity embedding.
The preferred embodiment uses imputation to combine the stale representations of inactive entities with their temporal representations, uses a GRU model to reflect the past and future effects of inactive entities on the structured entity embedding, and uses a Bi-GRU model to optimize the structured entity embedding.
Existing dynamic knowledge-graph completion methods ignore the influence of temporal variability. Temporal variability refers to the fact that, in a real-world dynamic knowledge graph, the model has access to different amounts of reference temporal information in nearby knowledge-graph snapshots when answering different queries, and this information carries different weights due to the constraints imposed by the specific entities and relationships in each query. For example, in one sports-event dataset, during 1996 to 2000 the quadruples containing one head entity-relationship pair far outnumbered those containing another.
In view of the above, in a preferred embodiment, obtaining the final entity embedding corresponding to the known time information further comprises:

S1207: obtaining, using a gating mechanism, a comprehensive entity embedding as the final entity embedding according to the first structured entity embedding and the final dynamic entity embedding.

Fig. 2 shows the overall flow of obtaining the final entity embedding corresponding to the known time information in the preferred embodiment.
An entity's embedding also depends on how much dynamic knowledge the entity has participated in at the most recent time steps. Based on this, the preferred embodiment uses a frequency-based gating mechanism to fuse the structured entity embedding output by the structure encoder with the dynamic entity embedding output by the time encoder in a frequency-dependent manner. To make entities sensitive to their position in the quadruple, a distinction is made between the query type (head-entity or tail-entity query) and the entity position (whether the known entity is the head entity or the tail entity of the queried fact).
The term "pattern" is defined as a non-empty subset of the quadruple (s, r, o, t); the number of facts exhibiting a given pattern within a time window is defined as the temporal frequency of that pattern. For example, the temporal frequency of the pattern (vominxia, attended, olympic) is the number of quadruples (vominxia, attended, olympic, t0), where t0 lies in the time window (e.g., from 2000 to 2014).
As one possible implementation, obtaining the comprehensive entity embedding includes:

S12071: separately calculating, over a time window, the frequencies of the known entity, the known relationship, and the combination of the known entity and the known relationship, wherein the time window contains the time point or time period corresponding to the known time information.
Based on the above, the temporal pattern frequencies (TPFs) associated with a quadruple (s, r, o, t) include:

(1) the head entity frequency f_s;
(2) the tail entity frequency f_o;
(3) the relation frequency f_r;
(4) the head entity-relation frequency f_{s,r};
(5) the relation-tail entity frequency f_{r,o}.
S12072: the frequencies of the known entity, the known relationship, and their combination form the frequency vector of the missing entity corresponding to the dynamic knowledge graph to be completed.
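Counting the temporal pattern frequencies over a time window can be sketched as follows; the quadruple data, window bounds, and function name are illustrative assumptions.

```python
def temporal_pattern_frequencies(quads, s, r, window):
    """Count, within the window [lo, hi], the facts matching the head-entity,
    relation, and head-relation patterns of a tail-entity query (s, r, ?, t)."""
    lo, hi = window
    in_win = [(h, rel, o) for (h, rel, o, t) in quads if lo <= t <= hi]
    f_s = sum(1 for (h, _, _) in in_win if h == s)                 # head entity frequency
    f_r = sum(1 for (_, rel, _) in in_win if rel == r)             # relation frequency
    f_sr = sum(1 for (h, rel, _) in in_win if (h, rel) == (s, r))  # head-relation frequency
    return [f_s, f_r, f_sr]

quads = [
    ("A", "attended", "X", 2001),
    ("A", "attended", "Y", 2005),
    ("A", "won", "X", 2003),
    ("B", "attended", "X", 2010),
    ("A", "attended", "Z", 1999),   # outside the window, not counted
]
F_s = temporal_pattern_frequencies(quads, "A", "attended", (2000, 2014))
# F_s == [3, 3, 2]
```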
Without loss of generality, the preferred embodiment is illustrated with a tail-entity query (s, r, ?, t), where the goal is to predict the missing tail entity of the quadruple. When answering the tail-entity query, the model can only access the frequency vector F_s = [f_s, f_r, f_{s,r}].
S12073: obtaining, from the frequency vector, a fourth weight α_os of the first structured entity embedding in the comprehensive entity embedding.

As a possible embodiment, α_os is learned through a double-layer neural network, i.e., α_os = MLP_os(F_s), with α_os ∈ [0, 1].

S12074: calculating a weighted sum of the first structured entity embedding x_{s,t} and the final dynamic entity embedding z_{s,t} according to the fourth weight α_os, as the comprehensive entity embedding.
Specifically, using the frequency vector F_s, a gate is defined over the comprehensive entity embedding ē_{s,t} of the tail-entity query:

ē_{s,t} = α_os ⊙ x_{s,t} + (1 − α_os) ⊙ z_{s,t}
Understandably, replacing the second dynamic entity embedding z_{s,t} with the third dynamic entity embedding z′_{s,t} better addresses the temporal variability problem in tail-entity queries.
Understandably, for a head-entity query the frequency vector F_o = [f_o, f_r, f_{r,o}] and a gate over the comprehensive entity embedding ē_{o,t} of the head entity are defined analogously:

ē_{o,t} = α_oo ⊙ x_{o,t} + (1 − α_oo) ⊙ z_{o,t}

wherein α_oo is the weight of the first structured entity embedding of the head entity in the comprehensive entity embedding; x_{o,t} is the first structured entity embedding of the head entity, obtained in the same way as the first structured entity embedding x_{s,t} of the tail entity; and z_{o,t} is the second dynamic entity embedding of the head entity, obtained in the same way as the second dynamic entity embedding z_{s,t} of the tail entity.
Understandably, replacing the second dynamic entity embedding z_{o,t} of the head entity with the third dynamic entity embedding z′_{o,t} of the head entity better addresses the temporal variability problem in head-entity queries.
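The frequency-based gate of S12073 and S12074 can be sketched as follows. The hidden-layer size, the tanh/sigmoid activations, and the names are assumptions: the patent states only that the weight is produced by a double-layer network and lies in [0, 1].

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gate_alpha(F, W1, b1, w2, b2):
    # Double-layer MLP: frequency vector -> scalar gate weight alpha in [0, 1].
    h = np.tanh(W1 @ F + b1)
    return float(sigmoid(w2 @ h + b2))

def fuse(x_t, z_t, alpha):
    # Comprehensive entity embedding: gate between structured and dynamic parts.
    return alpha * x_t + (1.0 - alpha) * z_t

rng = np.random.default_rng(1)
F_s = np.array([3.0, 3.0, 2.0])             # frequency vector [f_s, f_r, f_{s,r}]
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0
alpha = gate_alpha(F_s, W1, b1, w2, b2)
e_t = fuse(np.ones(4), np.zeros(4), alpha)  # with x_t = 1s and z_t = 0s, e_t == alpha
```

A large alpha favors the structured (snapshot) embedding; a small alpha favors the dynamic (temporal) embedding.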
For step S130, let p̂ denote the probability (i.e., the score) that the queried final entity embedding forms a valid quadruple with the given tail entity or head entity, and let DEC denote any suitable static knowledge-graph decoding function, e.g., TransE, HyTE, TA-TransE, DE-SimplE, etc. The score of a quadruple is defined as:

score(s, r, o, t) = DEC(ē_{s,t}, w_r, ē_{o,t})

wherein ē_{s,t} and ē_{o,t} denote the final entity embeddings of the head entity and the tail entity, obtained through step S120, and w_r denotes the learnable embedding of the relationship, obtained by an existing word embedding method.
To train the model using this scoring function, the model parameters are learned by mini-batch gradient-based optimization. For each triple η = (s, r, o) ∈ D^(t), a set of negative-example entities N_η is sampled. Without loss of generality, the cross-entropy loss over the negative examples in a tail-entity query is:

L_obj = − Σ_{(s,r,o,t)} log [ exp(score(s, r, o, t)) / Σ_{o′ ∈ N_η ∪ {o}} exp(score(s, r, o′, t)) ]
Thus, the overall training loss of the model is the sum of the two query losses: L = L_sub + L_obj.
It will be appreciated that the loss function for the head-entity query takes the same form over its negative samples.
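The per-quadruple tail-query loss can be sketched as a softmax cross-entropy over the true tail entity and the sampled negatives; the scores below are illustrative.

```python
import numpy as np

def tail_query_loss(pos_score, neg_scores):
    """-log softmax probability of the true tail entity among the
    sampled negative entities (cross-entropy for one quadruple)."""
    scores = np.concatenate(([pos_score], neg_scores))
    scores = scores - scores.max()   # shift for numerical stability
    return float(-scores[0] + np.log(np.exp(scores).sum()))

# One positive quadruple score and three sampled negative scores:
loss = tail_query_loss(2.0, np.array([-1.0, 0.5, 0.0]))
```

The loss approaches zero as the positive score dominates the negatives, which is the intended training signal.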
Corresponding to the above embodiments and preferred solutions, the present invention further provides an embodiment of a dynamic knowledge graph completing apparatus, as shown in fig. 4, which may specifically include a task receiving module 410, a final entity embedding obtaining module 420, and a predicting module 430;
the task receiving module 410 is configured to obtain known entities, known relationships, and known time information in a quadruple relational expression of a dynamic knowledge graph to be complemented, where the known entities include a head entity or a tail entity.
The final entity embedding obtaining module 420 is configured to obtain a final entity embedding corresponding to the known time information according to the known entity and the known relationship.
The prediction module 430 is used to predict the probability that the final entity is embedded as a missing entity.
In one possible implementation manner, as shown in fig. 5, the final entity embedding obtaining module 420 includes a structure encoder 4201, where the structure encoder 4201 is configured to obtain the first structured entity embedding of the first time step by using the known entity, the known relationship, and the first time step corresponding to the known time information. In one possible implementation, the first structured entity embedding is taken as the final entity embedding.
In one possible implementation, the structure encoder 4201 is further configured to obtain a second structured entity embedding for a second time step using the known entity and the second time step.
In one possible implementation, the structure encoder 4201 is further configured to obtain a fourth structured entity embedding for a third time step using the known entity and the third time step.
In one possible implementation, as shown in fig. 5, the final entity embedding obtaining module 420 further includes a time step determining submodule 4202 and a time encoder 4203.
The time step determination submodule 4202 is configured to use a last time step of the known entity in the active state before the first time step as a second time step.
The time encoder 4203 is configured to obtain the second dynamic entity embedding for the first time step using the first structured entity embedding and the first dynamic entity embedding for the second time step as input to the time encoder.
In one possible implementation, the second dynamic entity embedding is used as the final dynamic entity embedding of the first time step, and the final dynamic entity embedding is used as the final entity embedding.
In one possible implementation, the structure encoder 4201 is further configured to obtain, using attribution theory, a third structured entity embedding for the first time step in combination with a second structured entity embedding for the second time step; the time encoder 4203 is further configured to obtain a third dynamic entity embedding for the first time step using the third structured entity embedding and the first dynamic entity embedding for the second time step.
In one possible implementation, the third dynamic entity embedding is taken as the final dynamic entity embedding of the first time step, and the final dynamic entity embedding is taken as the final entity embedding.
In one possible implementation manner, as shown in fig. 5, the final entity embedding obtaining module 420 further includes a gating unit 4204, where the gating unit 4204 is configured to obtain a comprehensive entity embedding according to the first structured entity embedding and the final dynamic entity embedding, as the final entity embedding.
It should be understood that the division of the components of the dynamic knowledge-graph complementing device shown in fig. 4 is only a logical division, and the actual implementation can be wholly or partially integrated into one physical entity or physically separated. And these components may all be implemented in software invoked by a processing element; or may be implemented entirely in hardware; and part of the components can be realized in the form of calling by the processing element in software, and part of the components can be realized in the form of hardware. For example, a certain module may be a separate processing element, or may be integrated into a certain chip of the electronic device. Other components are implemented similarly. In addition, all or part of the components can be integrated together or can be independently realized. In implementation, each step of the above method or each component above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above components may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, these components may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
In view of the foregoing examples and their preferred embodiments, it will be appreciated by those skilled in the art that, in practice, the invention may be implemented in a variety of forms, which are schematically illustrated below:
(1) an electronic device, which may comprise:
one or more processors, memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the apparatus, cause the apparatus to perform the steps/functions of the foregoing embodiments or equivalent implementations.
Fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present invention, where the electronic device may be an electronic device or a circuit device built into an electronic device. The electronic device can be a PC, a server, an intelligent terminal (a mobile phone, a tablet, a watch, glasses, etc.), an intelligent television, a teller machine, a robot, an unmanned aerial vehicle, an ICV, an intelligent (automobile) vehicle, a vehicle-mounted device, etc. The present embodiment does not limit the specific form of the electronic device.
As shown in particular in fig. 7, the electronic device 900 includes a processor 910 and a memory 930. Wherein, the processor 910 and the memory 930 can communicate with each other and transmit control and/or data signals through the internal connection path, the memory 930 is used for storing computer programs, and the processor 910 is used for calling and running the computer programs from the memory 930. The processor 910 and the memory 930 may be combined into a single processing device, or more generally, separate components, and the processor 910 is configured to execute the program code stored in the memory 930 to implement the functions described above. In particular implementations, the memory 930 may be integrated with the processor 910 or may be separate from the processor 910.
In addition, to further enhance the functionality of the electronic device 900, the device 900 may further include one or more of an input unit 960, a display unit 970, an audio circuit 980, a camera 990, a sensor 901, and the like, which may further include a speaker 982, a microphone 984, and the like. The display unit 970 may include a display screen, among others.
Further, the electronic device 900 may also include a power supply 950 for providing power to various devices or circuits within the device 900.
It should be understood that the electronic device 900 shown in fig. 7 is capable of implementing the processes of the methods provided by the foregoing embodiments. The operations and/or functions of the various components of the device 900 may each be configured to implement the corresponding flow in the above-described method embodiments; reference is made to the foregoing description of the method and apparatus embodiments, and a detailed description is omitted here as appropriate to avoid redundancy.
It should be understood that the processor 910 in the electronic device 900 shown in fig. 7 may be a system-on-chip (SOC), and the processor 910 may include a Central Processing Unit (CPU) and may further include other types of processors, as described below.
In summary, various portions of the processors or processing units within the processor 910 may cooperate to implement the foregoing method flows, and corresponding software programs for the various portions of the processors or processing units may be stored in the memory 930.
(2) A readable storage medium, on which a computer program or the above-mentioned apparatus is stored, which, when executed, causes the computer to perform the steps/functions of the above-mentioned embodiments or equivalent implementations.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on this understanding, the parts of the technical solution of the present invention that substantially contribute to the prior art may be embodied in the form of a software product, as described below.
(3) A computer program product (which may include the above apparatus) which, when run on a terminal device, causes the terminal device to perform the dynamic knowledge-graph complementing method of the preceding embodiments or equivalent implementations.
From the above description of the embodiments, it is clear to those skilled in the art that all or part of the steps in the above implementation method can be implemented by software plus a necessary general hardware platform. With this understanding, the above-described computer program products may include, but are not limited to, refer to APP; continuing on, the aforementioned device/terminal may be a computer device (e.g., a mobile phone, a PC terminal, a cloud platform, a server cluster, or a network communication device such as a media gateway). Moreover, the hardware structure of the computer device may further specifically include: at least one processor, at least one communication interface, at least one memory, and at least one communication bus; the processor, the communication interface and the memory can all complete mutual communication through the communication bus. The processor may be a central Processing unit CPU, a DSP, a microcontroller, or a digital Signal processor, and may further include a GPU, an embedded Neural Network Processor (NPU), and an Image Signal Processing (ISP), and may further include a specific integrated circuit ASIC, or one or more integrated circuits configured to implement the embodiments of the present invention, and the processor may have a function of operating one or more software programs, and the software programs may be stored in a storage medium such as a memory; and the aforementioned memory/storage media may comprise: non-volatile memories (non-volatile memories) such as non-removable magnetic disks, U-disks, removable hard disks, optical disks, etc., and Read-Only memories (ROM), Random Access Memories (RAM), etc.
In the embodiments of the present invention, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, and means that there may be three relationships, for example, a and/or B, and may mean that a exists alone, a and B exist simultaneously, and B exists alone. Wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" and similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, and c may represent: a, b, c, a and b, a and c, b and c or a and b and c, wherein a, b and c can be single or multiple.
Those of skill in the art will appreciate that the various modules, elements, and method steps described in the embodiments disclosed in this specification can be implemented as electronic hardware, combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In addition, the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other. In particular, for embodiments of devices, apparatuses, etc., since they are substantially similar to the method embodiments, reference may be made to some of the descriptions of the method embodiments for their relevant points. The above-described embodiments of devices, apparatuses, etc. are merely illustrative, and modules, units, etc. described as separate components may or may not be physically separate, and may be located in one place or distributed in multiple places, for example, on nodes of a system network. Some or all of the modules and units can be selected according to actual needs to achieve the purpose of the above-mentioned embodiment. Can be understood and carried out by those skilled in the art without inventive effort.
The structure, features and effects of the present invention have been described in detail with reference to the embodiments shown in the drawings, but the above embodiments are merely preferred embodiments of the present invention, and it should be understood that technical features related to the above embodiments and preferred modes thereof can be reasonably combined and configured into various equivalent schemes by those skilled in the art without departing from and changing the design idea and technical effects of the present invention; therefore, the invention is not limited to the embodiments shown in the drawings, and all the modifications and equivalent embodiments that can be made according to the idea of the invention are within the scope of the invention as long as they are not beyond the spirit of the description and the drawings.

Claims (16)

1. A dynamic knowledge graph complementing method is characterized by comprising the following steps:
acquiring known entities, known relations and known time information in a four-tuple relational expression of a dynamic knowledge graph to be complemented, wherein the known entities comprise a head entity or a tail entity;
obtaining final entity embedding corresponding to the known time information according to the known entity and the known relation;
predicting a probability that the final entity is embedded as a missing entity;
wherein obtaining the end entity embedding corresponding to the known time information comprises:
taking a first time step corresponding to the known entity, the known relationship and the known time information as an input of a structure encoder, and obtaining a first structured entity embedding of the first time step;
embedding the first structured entity as the final entity embedding.
2. The dynamic knowledge-graph complementing method of claim 1, wherein obtaining final entity embedding corresponding to the known temporal information further comprises:
taking the last time step of the known entity in the active state before the first time step as a second time step;
using the first structured entity embedding and the first dynamic entity embedding of the second time step as input of a time encoder, obtaining the second dynamic entity embedding of the first time step as final dynamic entity embedding of the first time step;
and taking the final dynamic entity embedding as the final entity embedding.
3. The dynamic knowledge-graph complementing method of claim 1, wherein obtaining final entity embedding corresponding to the known temporal information further comprises:
taking the last time step of the known entity in the active state before the first time step as a second time step;
using the known entity, the known relationship, and the second time step as inputs to a structural encoder, obtaining a second structured entity embedding for the second time step;
obtaining a third structured entity embedding for the first time step using imputation in combination with the first structured entity embedding and the second structured entity embedding;
using the third structured entity embedding and the first dynamic entity embedding at the second time step as input of a time encoder, obtaining the third dynamic entity embedding at the first time step as a final dynamic entity embedding at the first time step;
and taking the final dynamic entity embedding as the final entity embedding.
4. The dynamic knowledge-graph complementing method of claim 2 or 3, wherein obtaining final entity embedding corresponding to the known time information further comprises:
obtaining a synthetic entity embedding as the final entity embedding according to the first structured entity embedding and the final dynamic entity embedding by using a gating mechanism.
5. The dynamic knowledge-graph completion method of claim 1, wherein the structure encoder comprises a first training model based on a multi-relational message-passing neural network.
6. The dynamic knowledge-graph complementing method of claim 2 or 3, wherein the time encoder comprises a second training model based on a recurrent neural network.
7. The dynamic knowledge-graph complementing method of claim 6, wherein obtaining a second dynamic entity embedding for the first time step comprises:
calculating a first decay rate of the first dynamic entity embedding;
a fourth dynamic entity embedding that calculates the second time step as a function of the first decay rate and the first dynamic entity embedding;
and taking the first structured entity embedding and the fourth dynamic entity embedding as the input of the time encoder to obtain the second dynamic entity embedding of the first time step.
8. The dynamic knowledge-graph complementing method of claim 3, wherein obtaining the third structured entity embedding comprises:
calculating a second decay rate of the known entity between the second time step and the first time step, the second decay rate serving as a first weight of the second structured entity embedding in the third structured entity embedding;
and calculating a weighted sum by using the first weight, the first structured entity embedding and the second structured entity embedding to serve as a third structured entity embedding.
9. The dynamic knowledge-graph complementing method of claim 3, wherein obtaining the third structured entity embedding comprises:
calculating a second decay rate of the known entity between the second time step and the first time step, the second decay rate serving as a first weight of the second structured entity embedding in the third structured entity embedding;
taking the nearest time step after the first time step at which the known entity is in an active state as a third time step;
taking the known entity, the known relationship and the third time step as input of a structure encoder, and acquiring a fourth structured entity embedding of the third time step;
calculating a third decay rate of the known entity between the third time step and the first time step, and a fourth decay rate of the known entity within the first time step, the third decay rate serving as a second weight of the fourth structured entity embedding in the third structured entity embedding, and the fourth decay rate serving as a third weight of the first structured entity embedding in the third structured entity embedding;
calculating a weighted sum using the first, second, and fourth structured entity embeddings and corresponding decay rates as the third structured entity embeddings.
10. The dynamic knowledge graph completion method of claim 4, wherein obtaining the synthetic entity embedding comprises:
separately calculating the frequencies of the known entity, the known relationship, and the co-occurrence of the known entity and the known relationship over a time window, wherein the time window comprises the time point or time period corresponding to the known time information;
forming the frequencies of the known entity, the known relationship, and their co-occurrence into a frequency vector of the missing entity corresponding to the dynamic knowledge graph to be completed;
obtaining a fourth weight of the first structured entity embedding in the synthetic entity embedding according to the frequency vector;
and calculating a weighted sum as the synthetic entity embedding according to the fourth weight, the first structured entity embedding, and the final dynamic entity embedding.
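One plausible realization of claim 10's frequency-derived weight is a sigmoid gate over a linear map of the frequency vector. The linear map (`w_gate`, `b_gate`) and the sigmoid are assumptions made here; the claim only requires a fourth weight derived from the frequency vector:

```python
import numpy as np

def synthetic_entity_embedding(e_struct, e_dyn, freq_vec, w_gate, b_gate):
    # freq_vec holds the frequencies of the known entity, the known
    # relationship, and their co-occurrence over the time window.
    # Sigmoid gate in (0, 1) derived from the frequency vector.
    g = 1.0 / (1.0 + np.exp(-(np.dot(w_gate, freq_vec) + b_gate)))
    # Weighted sum of the structured and final dynamic embeddings.
    return g * e_struct + (1.0 - g) * e_dyn
```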
11. A dynamic knowledge graph completion device, comprising a task receiving module, a final entity embedding obtaining module, and a prediction module;
the task receiving module is configured to acquire a known entity, a known relationship, and known time information in a quadruple relational expression of a dynamic knowledge graph to be completed, wherein the known entity comprises a head entity or a tail entity;
the final entity embedding obtaining module is configured to obtain a final entity embedding corresponding to the known time information according to the known entity and the known relationship;
the prediction module is configured to predict the probability that the final entity embedding is the missing entity;
the final entity embedding obtaining module comprises a structure encoder, and the structure encoder is configured to obtain a first structured entity embedding of a first time step corresponding to the known time information by taking the known entity, the known relationship, and the first time step as input, and to use the first structured entity embedding as the final entity embedding.
12. The dynamic knowledge graph completion device of claim 11, wherein the final entity embedding obtaining module further comprises a time step determination submodule and a time encoder;
the time step determination submodule is configured to take the last time step before the first time step at which the known entity is in an active state as a second time step;
the time encoder is configured to obtain a second dynamic entity embedding of the first time step by taking the first structured entity embedding and the first dynamic entity embedding of the second time step as input, to use the second dynamic entity embedding as the final dynamic entity embedding of the first time step, and to use the final dynamic entity embedding as the final entity embedding.
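The time encoder above updates a dynamic embedding from the structured embedding at the current step and the dynamic embedding at the last active step. A recurrent cell such as a GRU is a common choice for this kind of update; the patent does not name a specific cell, so the following is an illustrative assumption:

```python
import numpy as np

def gru_time_encoder(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    # x: first structured entity embedding (current, first time step)
    # h_prev: first dynamic entity embedding (second time step)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    # Second dynamic entity embedding of the first time step.
    return (1.0 - z) * h_prev + z * h_cand
```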
13. The dynamic knowledge graph completion device of claim 12, wherein the structure encoder is further configured to obtain a third structured entity embedding of the first time step by combining the first structured entity embedding and a second structured entity embedding of the second time step using attribution theory;
the time encoder is further configured to obtain a third dynamic entity embedding of the first time step by using the third structured entity embedding and the first dynamic entity embedding of the second time step, to use the third dynamic entity embedding as the final dynamic entity embedding of the first time step, and to use the final dynamic entity embedding as the final entity embedding.
14. The dynamic knowledge graph completion device of claim 12 or 13, wherein the final entity embedding obtaining module further comprises a gating unit configured to obtain a synthetic entity embedding as the final entity embedding according to the first structured entity embedding and the final dynamic entity embedding.
15. An electronic device, comprising:
one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the dynamic knowledge graph completion method of any one of claims 1-10.
16. A computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the dynamic knowledge graph completion method of any one of claims 1-10.
CN202111131711.9A 2021-09-26 2021-09-26 Dynamic knowledge graph completion method and device and electronic equipment Pending CN113836318A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111131711.9A CN113836318A (en) 2021-09-26 2021-09-26 Dynamic knowledge graph completion method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN113836318A true CN113836318A (en) 2021-12-24

Family

ID=78970272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111131711.9A Pending CN113836318A (en) 2021-09-26 2021-09-26 Dynamic knowledge graph completion method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113836318A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956254A (en) * 2019-11-12 2020-04-03 Zhejiang University of Technology Case reasoning method based on dynamic knowledge representation learning
KR20200117690A (en) * 2019-04-05 2020-10-14 Yonsei University Industry-Academic Cooperation Foundation Method and Apparatus for Completing Knowledge Graph Based on Convolutional Learning Using Multi-Hop Neighborhoods
CN111881219A (en) * 2020-05-19 2020-11-03 Hangzhou Zhongao Technology Co., Ltd. Dynamic knowledge graph completion method and device, electronic equipment and storage medium
CN112559757A (en) * 2020-11-12 2021-03-26 National University of Defense Technology Time sequence knowledge graph completion method and system
CN113190654A (en) * 2021-05-08 2021-07-30 Beijing University of Technology Knowledge graph complementing method based on entity joint embedding and probability model


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114218405A (en) * 2022-02-14 2022-03-22 iFlytek (Suzhou) Technology Co., Ltd. Knowledge extraction method, related device, electronic equipment and medium
CN114218405B (en) * 2022-02-14 2022-08-16 iFlytek (Suzhou) Technology Co., Ltd. Knowledge extraction method, related device, electronic equipment and medium
CN114547273A (en) * 2022-03-18 2022-05-27 iFlytek (Suzhou) Technology Co., Ltd. Question answering method and related device, electronic equipment and storage medium
CN114547273B (en) * 2022-03-18 2022-08-16 iFlytek (Suzhou) Technology Co., Ltd. Question answering method and related device, electronic equipment and storage medium
CN115238100A (en) * 2022-09-21 2022-10-25 iFlytek (Suzhou) Technology Co., Ltd. Entity alignment method, device, equipment and computer readable storage medium
CN115599927A (en) * 2022-11-08 2023-01-13 Harbin Institute of Technology (Shenzhen) (Harbin Institute of Technology Shenzhen Institute of Science and Technology Innovation) (CN) Timing sequence knowledge graph completion method and system based on metric learning

Similar Documents

Publication Publication Date Title
CN113836318A (en) Dynamic knowledge graph completion method and device and electronic equipment
US11037072B2 (en) Scalable complex event processing with probabilistic machine learning models to predict subsequent geolocations
WO2021258967A1 (en) Neural network training method and device, and data acquisition method and device
US20190221187A1 (en) System, apparatus and methods for adaptive data transport and optimization of application execution
WO2022161202A1 (en) Multimedia resource classification model training method and multimedia resource recommendation method
KR20210073569A (en) Method, apparatus, device and storage medium for training image semantic segmentation network
Ng et al. Reputation-aware hedonic coalition formation for efficient serverless hierarchical federated learning
CN116244513B (en) Random group POI recommendation method, system, equipment and storage medium
EP3685266A1 (en) Power state control of a mobile device
Chowdhury et al. Mobile Crowd‐Sensing for Smart Cities
CN113887704A (en) Traffic information prediction method, device, equipment and storage medium
CN116562399A (en) Model training method and device with end Bian Yun cooperated
WO2023000261A1 (en) Regional traffic prediction method and device
CN110188123A (en) User matching method and equipment
CN114417174A (en) Content recommendation method, device, equipment and computer storage medium
CN116630630B (en) Semantic segmentation method, semantic segmentation device, computer equipment and computer readable storage medium
CN110209704A (en) User matching method and equipment
CN114900435B (en) Connection relation prediction method and related equipment
CN113448876B (en) Service testing method, device, computer equipment and storage medium
CN112364258B (en) Recommendation method and system based on map, storage medium and electronic equipment
CN113244629A (en) Lost account recall method and device, storage medium and electronic equipment
Xue et al. Urban population density estimation based on spatio‐temporal trajectories
CN114254738A (en) Double-layer evolvable dynamic graph convolution neural network model construction method and application
CN103645889B (en) Dynamic software self-adaption generating method
CN116811895B (en) Vehicle running speed determination model processing method and vehicle running speed determination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination