CN113190654A - Knowledge graph completion method based on entity joint embedding and a probability model - Google Patents

Knowledge graph completion method based on entity joint embedding and a probability model

Info

Publication number
CN113190654A
CN113190654A (application CN202110502522.1A)
Authority
CN
China
Prior art keywords
entity
graph
vector
probability
embedding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110502522.1A
Other languages
Chinese (zh)
Inventor
Zhu Cui (竹翠)
Liu Luwei (刘露蔚)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-05-08
Publication date: 2021-07-30
Application filed by Beijing University of Technology
Priority to CN202110502522.1A
Publication of CN113190654A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3346 Query execution using probabilistic model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a knowledge graph completion method based on entity joint embedding and a probability model. The method uses the powerful feature-aggregation capability of a graph convolutional network to integrate the knowledge graph, as the topological structure and node features of graph data, into the representation learning of entities. Through dynamic embedding, an entity is learned to exhibit the characteristics of different states at different times: its features are divided into two parts, one changing dynamically over time and the other retaining static attributes. The probability model performs deep extraction of semantic information from the triples with a convolutional neural network, reconstructs the triples in vector space, measures their validity, and outputs a probability vector to predict missing entities. The output of entity joint embedding serves as the input of the probability model, effectively solving the completion problem of missing entities in a dynamically changing knowledge graph.

Description

Knowledge graph completion method based on entity joint embedding and a probability model
Technical Field
The invention relates to the technical field of knowledge engineering, in particular to a knowledge graph completion method based on entity joint embedding and a probability model.
Background
A knowledge graph organizes the unstructured, disordered data of the internet into canonical knowledge resources in an intuitive and clear manner, so that the knowledge can be further reasoned over and learned from. However, owing to the shortcomings of manual construction methods and relation-extraction techniques, existing large-scale knowledge graphs are often incomplete. A knowledge graph is generally expressed in the form of triples (head entity, relation, tail entity), and the study of predicting or deducing new factual relations from the knowledge already in the graph is called knowledge graph completion. At present, most methods are based on knowledge graph representation: they map entities and relations into a low-dimensional dense vector space and design a scoring function that measures the validity of triples; on this basis, entities and relations are predicted and inferred. Concretely, the methods fall into two classes. The first is the distributed-representation approach, which builds the scoring function from simple vector or matrix operations such as vector inner products or matrix multiplications. Such fast, shallow methods are simple to implement but learn only a small number of feature expressions. The second is the deep-learning approach, which uses deep network architectures such as convolutional neural networks and graph convolutional networks to extract deep features of entities and relations, and can therefore learn richer feature expressions. However, most of these methods attend only to static knowledge graph completion, neglecting time as an important component of the knowledge graph, and their effect on completing dynamically changing knowledge graphs remains to be improved.
The triple relations in a knowledge graph are all established at specific times, so the factual relations change constantly. To complete a knowledge graph with timestamps, a completion method capable of time-oriented reasoning must be designed, modeling the entity from multiple angles so as to improve the completion of missing entities in the knowledge graph.
Disclosure of Invention
In order to realize the completion of dynamic knowledge graphs and improve the accuracy of entity prediction, the invention provides a knowledge graph completion method based on entity joint embedding and a probability model. When modeling the triples, the probability model serves as the scoring function that evaluates whether a triple is correct. It extracts the semantic features of the triples with a deep network architecture such as a convolutional neural network and outputs predictions of entities. A traditional scoring function must construct negative samples (erroneous triples) by randomly replacing the correct entity in a triple; this random replacement cannot learn negative samples well and easily increases the uncertainty of the model. The probability model instead outputs a probability vector, matching the computed vector against all entities to predict the entity, thereby avoiding the construction of negative samples. The method models the entity from several levels and learns richer feature expressions of the knowledge graph, so that dynamic knowledge graphs can be completed better.
In order to achieve this purpose, the invention adopts the following technical scheme. First, using a graph convolutional network, the knowledge graph is regarded as an undirected graph, entities as nodes, and relations as edges between nodes, and the feature representation of each node is updated by aggregating information from its neighbor nodes. Then, through dynamic embedding of entities, the occurrence time of each triple is merged into the representation learning of the entity: the feature vector of the entity is divided into static features and temporal features, where the temporal features change dynamically over time and the static features remain unchanged. The results of the two modules are combined by a mean operation to give the joint embedded learning result of the entity. Finally, the joint embedding of the entities is fed as input into the probability model, which models the relations with a convolutional neural network, reconstructs the triples, and outputs a probability vector used to predict the missing entities.
A knowledge graph complementing method based on entity joint embedding and a probability model comprises the following steps:
Step 1, input a triple set and initialize the entity and relation vectors.
Step 2, use a GCN to learn the feature information of the knowledge graph as a graph structure.
Step 3, use dynamic embedding of entities to learn the dynamically changing semantic features of each entity along the time dimension.
Step 4, fuse the results of step 2 and step 3 with a mean operation, averaging the two learned features to obtain the joint embedded representation of the entity.
Step 5, the joint embedding of entities yields their vector representations, which serve as the input of the probability model; a probability vector is output through the convolutional layer and a fully connected mapping to predict and complete the knowledge graph.
The method models the entity independently from the graph structure and the time dimension, thereby learning a more comprehensive feature expression of the entity; on this basis, the result is fed into the probability model, which uses a convolutional neural network to make predictions on the knowledge graph. Compared with the conventional practice of randomly initializing entity vectors, this inference from known features means the model learns not from scratch but from known information, which further improves the effect of knowledge graph completion.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic illustration of feature embedding of the graph structure;
FIG. 3 is a schematic diagram of dynamic embedding of entities;
FIG. 4 is a schematic diagram of entity federation embedding;
FIG. 5 is a schematic diagram of a probabilistic model;
Detailed Description
The invention will be described in detail below with reference to the model diagrams shown in the figures.
The flow chart of the method is shown in FIG. 1: a triple set is input, joint embedding learning is performed on the entities of the triples, and the result is used as the input of the probability model for prediction. The method specifically comprises the following steps:
Step 1, input a triple set and initialize the entity and relation vectors.
The knowledge graph is stored in the form of triples, which must be converted into a computer-recognizable form for subsequent prediction and inference. Entities and relations therefore require representation learning; before learning, the relation and entity vectors are randomly initialized.
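As a minimal illustrative sketch (the patent does not prescribe an initialization scheme or dimensions), step 1 could look as follows in Python; the uniform bound, the dimension k = 200, and the entity/relation counts are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_embeddings(num_entities, num_relations, k):
    """Randomly initialize entity and relation vectors (step 1).

    The uniform bound 6/sqrt(k) is a common heuristic, assumed here;
    the patent only requires random initialization before learning.
    """
    bound = 6.0 / np.sqrt(k)
    E = rng.uniform(-bound, bound, size=(num_entities, k))   # entity vectors
    R = rng.uniform(-bound, bound, size=(num_relations, k))  # relation vectors
    return E, R

# Toy usage with assumed sizes.
E, R = init_embeddings(num_entities=1000, num_relations=50, k=200)
```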
Step 2, use a GCN to learn the feature information of the knowledge graph as a graph structure.
In order to learn the knowledge graph as a structural feature of a graph, graph-structure feature embedding of the entities is learned with a weighted graph convolutional network (WGCN). The knowledge graph is abstracted into an undirected graph, entities into nodes, and relations into edges. FIG. 2 illustrates the learning of the central node under a two-layer graph convolution, which encodes the central node using the features of the surrounding nodes. This is expressed by the following equation:

$$h_i^{l+1} = \sigma\Big(\sum_{j \in N_i} \alpha_t^l\, g\big(h_j^l\big) + g\big(h_i^l\big)\Big), \qquad g\big(h_j^l\big) = h_j^l W^l$$

where $h_i^l$ denotes the input vector of the central node $V_i$ at layer $l$, and $h_i^{l+1}$ the output vector of node $V_i$ at layer $l$. $\alpha_t$ is the weight of each relation type, a learnable parameter of the algorithm; $\alpha_t^l$ is specifically the weight of relation type $t$ at layer $l$, with $1 \le t \le T$ and $T$ the number of relation types. $N_i$ is the set of neighbors of node $V_i$. $g(\cdot)$, the aggregation function, expresses how node $V_i$ aggregates the features of its neighbor nodes $V_j$; $h_j^l$ is the input feature vector of node $V_j$ at layer $l$, and $W^l$ is the linear transformation matrix of layer $l$. Aggregation amounts to accumulating the neighbor features onto the node's own features.
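To make the aggregation concrete, here is a minimal NumPy sketch of one WGCN layer following the equation above; representing the graph as a list of (i, j, t) edges and using tanh for the activation σ are assumptions:

```python
import numpy as np

def wgcn_layer(H, edges, alpha, W):
    """One weighted graph-convolution layer (step 2).

    H     : (num_nodes, k) input node features h^l
    edges : list of (i, j, t) undirected edges between nodes i and j
            with relation type t
    alpha : (T,) learnable weight per relation type, alpha_t^l
    W     : (k, k) linear transformation matrix of this layer, W^l
    """
    agg = H @ W                                # self features g(h_i^l)
    for i, j, t in edges:
        agg[i] += alpha[t] * (H[j] @ W)        # accumulate weighted neighbors
        agg[j] += alpha[t] * (H[i] @ W)        # undirected graph: both directions
    return np.tanh(agg)                        # sigma: assumed activation
```

Stacking two such layers gives the two-layer convolution illustrated in FIG. 2.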
Step 3, use dynamic embedding of entities to learn the dynamically changing semantic features of each entity along the time dimension.
When completing the knowledge graph, different entity features need to be learned for triples at different times, so as to adapt to ever-changing factual relations. Taking advantage of time-aware embedding, the feature representation of an entity is therefore divided into temporal features and static features: the temporal features vary with the evolution of events, while the static features preserve the fixed attributes of the entity. As shown in the entity dynamic embedding module of FIG. 3, the input triples carry specific timestamps in year-month-day form. When the features of an entity are learned, the time at which the relation occurs is integrated into the temporal components (drawn in the figure as green, red, and light-blue solid circles), while the static features (yellow solid circles) represent what remains unchanged over time. The process is as follows:

$$e_{(h,\tau)}[i] = \begin{cases} a_h[i]\,\sigma\big(w_h[i]\,\tau + b_h[i]\big), & 1 \le i \le \gamma k \\ a_h[i], & \gamma k < i \le k \end{cases}$$

where $e_{(h,\tau)}$ is the feature-vector representation of the head entity $h$ at time $\tau$, $e_{(h,\tau)}[i]$ is its $i$-th component, and $a_h$, $w_h$, $b_h$ are learnable parameters of the algorithm. $k$ is the embedding dimension of the entity, and $\gamma \in [0,1]$, the proportion of temporal features, is a hyper-parameter of the model. The first $\gamma k$ feature elements of $e_{(h,\tau)}$ capture the temporal features of the entity, and the remaining $(1-\gamma)k$ elements capture its static features. The parameter $a_h$ controls the importance of each feature, and $w_h$, as the weight vector of time, controls the importance of time. Through this process the features of an entity at any time can be learned, and hence accurate predictions can be made at any time.
Step 4, fuse the results of step 2 and step 3 with a mean operation, averaging the two learned features to obtain the joint embedded representation of the entity, as shown in FIG. 4.
Step 5, the joint embedding of entities yields their vector representations, which serve as the input of the probability model; a probability vector is output through the convolutional layer and a fully connected mapping to predict and complete the knowledge graph.
The preceding modules perform joint embedding learning of the entities. Next, the relations and the triple structure must be modeled. Traditional methods either apply matrix operations directly to the head-entity, tail-entity, and relation vectors or convolve over the triples. During training, correct triples are pushed to score as high as possible and incorrect triples as low as possible, which requires constructing erroneous triples by hand, generally by randomly replacing the head or tail entity of a correct triple. Randomly replacing the correct entity increases the uncertainty of the model. For example, to build a negative sample for the correct triple (China, capital, Beijing), 'Beijing' is replaced by a random choice among all entities, and this randomness may substitute entities of little relevance such as 'Zhang San', 'doctor', or 'sports'. Since such replaced triples are obviously untrue, the discrimination task becomes exceptionally easy; for a harder erroneous triple such as (China, capital, Shanghai), the model then struggles to judge correctly, making it difficult to complete the missing data. Using a probability model avoids this problem.
The schematic diagram of the probability model is shown in FIG. 5; it consists of three layers. At the input layer, since entities and relations share the embedding dimension $k$, the entity vector and relation vector can be stacked into a matrix as the input of the probability model. For a triple $(h, r, t)$, the vector representation $e_h \in \mathbb{R}^k$ is obtained through the joint embedding learning of the entity, and the relation vector $e_r \in \mathbb{R}^k$ is obtained by random initialization. Let the input matrix be $X = [e_h, e_r] \in \mathbb{R}^{k \times 2}$. On the convolutional layer, a convolution kernel $\omega$ is convolved repeatedly over the input matrix to obtain a feature map $N = [n_1, n_2, n_3, \ldots, n_k] \in \mathbb{R}^k$. With $X_{i:j}$ denoting rows $i$ to $j$ of $X$, each $n_i$ is given by

$$n_i = \sigma(\omega \cdot X_{i:j} + b)$$

where $\sigma(\cdot)$ is the activation function and $b$ is an offset value; the kernel $\omega$ is multiplied linearly with the input matrix $X$. All feature maps are spliced together into a single vector $v \in \mathbb{R}^{sk}$, where $s$ is the number of convolution kernels, $s \in \{1, 3, 5\}$. At the output layer, the vector $v$ is mapped to dimension $k$ through a full connection, multiplied with the entity matrix, and passed through a sigmoid function to obtain a probability vector $P_h$ of size $|\varepsilon|$, where $|\varepsilon|$ is the number of entities. Each entry of $P_h$ represents the probability that the entity at the corresponding position is the correct tail entity. The resulting probability vector $P_h$ is summarized by the formula

$$P_h = \sigma\big(f\big(\mathrm{concat}\big(g([e_h, e_r] * M)\big)\big)\big)$$

where $M$ is the set of convolution kernels, $\mathrm{concat}$ denotes the concatenation operation, $*$ denotes the convolution operation, and $f$ is the nonlinear fully connected mapping that transforms the spliced vector $v \in \mathbb{R}^{sk}$ into $\mathbb{R}^k$.
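A compact sketch of this forward pass, assuming the reconstruction above: e_h and e_r are stacked into a k×2 matrix, convolved with s kernels, the feature maps spliced, mapped to dimension k by a full connection f, and scored against all entities through a sigmoid. The padding scheme, the sigmoid for the convolution activation σ, and the tanh nonlinearity for f are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def probability_model(e_h, e_r, kernels, biases, W_fc, E):
    """Probability-model forward pass (step 5).

    e_h, e_r : (k,) head-entity and relation vectors
    kernels  : list of (s_w, 2) convolution kernels omega
    biases   : list of scalar offsets b, one per kernel
    W_fc     : (len(kernels) * k, k) full-connection mapping f
    E        : (num_entities, k) entity embedding matrix
    Returns the probability vector P_h of size num_entities.
    """
    k = e_h.shape[0]
    X = np.stack([e_h, e_r], axis=1)            # input matrix X in R^{k x 2}
    maps = []
    for omega, b in zip(kernels, biases):
        s_w = omega.shape[0]
        Xp = np.pad(X, ((s_w // 2, s_w - 1 - s_w // 2), (0, 0)))  # keep k rows
        n = np.array([sigmoid(np.sum(omega * Xp[i:i + s_w]) + b)
                      for i in range(k)])       # n_i = sigma(omega . X_{i:j} + b)
        maps.append(n)
    v = np.concatenate(maps)                    # splice feature maps, length s*k
    z = np.tanh(v @ W_fc)                       # f: nonlinear full connection to R^k
    return sigmoid(E @ z)                       # P_h: one probability per entity
```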
Finally, all modules are trained by minimizing a binary cross-entropy loss function:

$$\mathcal{L} = -\frac{1}{|\varepsilon|}\sum_{i=0}^{|\varepsilon|-1}\Big(t_i\log p_i + (1-t_i)\log(1-p_i)\Big)$$

where $|\varepsilon|$ is the number of entities and $p_i$, with $0 \le i \le |\varepsilon|-1$, is the $i$-th component of the probability vector finally output by the model. For this loss function, the algorithm must construct the true label vector $T$ of the tail entity and fit it with the predicted probability vector. Each component $t_i$ of $T$ is defined as follows, where $I$ is the valid set of tail entities given the head entity $h$ and the relation $r$, and $G$ is the set of true triples:

$$t_i = \begin{cases} 1, & e_i \in I = \{\, e \mid (h, r, e) \in G \,\} \\ 0, & \text{otherwise} \end{cases}$$

Used directly, the label vector $T$ is a one-hot vector: a component value of 0 represents zero probability and 1 represents full probability, which leaves the model inflexible and over-confident. Therefore, partial noise is added to the true distribution through a label-smoothing mechanism, avoiding overfitting and enhancing the generalization ability of the model. The smoothed label vector is given by

$$t_i' = (1-\alpha)\,t_i + \frac{\alpha}{|\varepsilon|}$$

where $\alpha$ is the label-smoothing parameter, typically 0.1.
During testing, the entity corresponding to the position with the maximum probability value in the probability vector is the prediction of the missing entity.
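The training objective and test-time prediction can be sketched as follows; the smoothing rule matches the formula reconstructed above, with α = 0.1:

```python
import numpy as np

def smooth_labels(t, alpha=0.1):
    """Label smoothing: mix the 0/1 label vector with uniform noise."""
    return (1.0 - alpha) * t + alpha / t.shape[0]   # t.shape[0] = |epsilon|

def bce_loss(p, t_smooth, eps=1e-12):
    """Binary cross entropy between predicted probabilities and labels."""
    p = np.clip(p, eps, 1.0 - eps)                  # numerical stability
    return -np.mean(t_smooth * np.log(p) + (1.0 - t_smooth) * np.log(1.0 - p))

# Toy usage: the entity at index 2 is the true tail entity.
t = np.zeros(5)
t[2] = 1.0
p = np.array([0.10, 0.20, 0.80, 0.10, 0.05])
loss = bce_loss(p, smooth_labels(t))
pred = int(np.argmax(p))    # at test time: position of the maximum probability
```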
The foregoing describes the specific implementation of the present invention.

Claims (3)

1. A knowledge graph completion method based on entity joint embedding and a probability model, characterized by comprising the following steps:
step 1, inputting a triple set and initializing the entity and relation vectors;
step 2, using a GCN to learn the feature information of the knowledge graph as a graph structure;
step 3, using dynamic embedding of entities to learn the dynamically changing semantic features of each entity along the time dimension;
step 4, fusing the results of step 2 and step 3 with a mean operation, averaging the two learned features to obtain the joint embedded representation of the entity;
step 5, obtaining the vector representations of the entities from the joint embedding, using them as the input of the probability model, and outputting a probability vector through the convolutional layer and a fully connected mapping to predict and complete the knowledge graph.
2. The knowledge graph completion method based on entity joint embedding and a probability model according to claim 1, characterized in that: the method comprises the following three parts: a structural-feature embedding module of the graph, a dynamic embedding module of the entity, and a probability model for the final prediction and completion; the structural-feature embedding module of the graph uses a graph convolutional network (GCN) to learn the knowledge graph as the topological structure and node features of a graph; specifically, using a weighted graph convolutional network (WGCN), the knowledge graph is abstracted into an undirected graph, entities into nodes, and relations into edges; during aggregation the WGCN accumulates the structural information of the graph and the features of other nodes, and learns different weight parameters for different edge types, thereby generating new node representations; the aggregation of information is embodied mainly in an aggregation function, which acts on each node of the graph and its neighbor nodes, so that through the aggregation function each node is represented as the superposition of neighbor features and its own features; the dynamic embedding module of the entity takes into account time as an important component of the knowledge graph and integrates the occurrence time of each triple into the modeling of the entity; specifically, the feature dimensions of the entity are divided into static features and temporal features, where the static part retains the unchanging features and the temporal part changes over time; the probability model uses a deep network architecture such as a neural network to model the relations and output predictions; specifically, the entity vector and relation vector are spliced into a matrix and fed into the probability model; convolution on this matrix yields a series of feature maps, which are spliced into a long vector, mapped by a full connection to the same dimension as the entity features, and then multiplied with the entity embedding matrix to output a probability vector whose dimension is the number of entities.
3. The knowledge graph completion method based on entity joint embedding and a probability model according to claim 1, characterized in that: the optimization goal of the whole model is to minimize the binary cross-entropy loss function

$$\mathcal{L} = -\frac{1}{|\varepsilon|}\sum_{i=0}^{|\varepsilon|-1}\Big(t_i\log p_i + (1-t_i)\log(1-p_i)\Big)$$

where $|\varepsilon|$ is the number of entities and $p_i$, with $0 \le i \le |\varepsilon|-1$, is the $i$-th component of the probability vector finally output by the model; for this loss function, the algorithm constructs the true label vector $T$ of the tail entity and fits it with the predicted probability vector; each component $t_i$ of $T$ equals 1 if the corresponding entity belongs to $I$, the valid set of tail entities given the head entity $h$ and the relation $r$ under the set of true triples $G$, and 0 otherwise; used directly as a one-hot vector, a component value of 0 in $T$ represents zero probability and 1 represents full probability, leaving the model inflexible and over-confident; partial noise is therefore added to the true distribution through a label-smoothing mechanism, avoiding overfitting and enhancing the generalization ability of the model; the smoothed label vector is $t_i' = (1-\alpha)\,t_i + \alpha/|\varepsilon|$, where $\alpha$ is the label-smoothing parameter.
CN202110502522.1A 2021-05-08 2021-05-08 Knowledge graph completion method based on entity joint embedding and a probability model Pending CN113190654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110502522.1A CN113190654A (en) Knowledge graph completion method based on entity joint embedding and a probability model

Publications (1)

Publication Number Publication Date
CN113190654A 2021-07-30

Family

ID=76988477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110502522.1A Pending CN113190654A (en) Knowledge graph completion method based on entity joint embedding and a probability model

Country Status (1)

Country Link
CN (1) CN113190654A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113836318A (en) * 2021-09-26 2021-12-24 合肥智能语音创新发展有限公司 Dynamic knowledge graph completion method and device and electronic equipment
CN113869516A (en) * 2021-12-06 2021-12-31 深圳大学 Knowledge graph embedded model training method and device, electronic equipment and medium
CN114117064A (en) * 2021-11-09 2022-03-01 西南交通大学 Knowledge dynamic evolution method based on multi-time granularity and application
CN115062159A (en) * 2022-06-13 2022-09-16 西南交通大学 Multi-granularity dynamic knowledge graph embedded model construction method based on federal learning
CN115878861A (en) * 2023-02-07 2023-03-31 东南大学 Selection method of integrated key node group for graph data completion
WO2023115761A1 (en) * 2021-12-20 2023-06-29 北京邮电大学 Event detection method and apparatus based on temporal knowledge graph
CN115062159B (en) * 2022-06-13 2024-05-24 西南交通大学 Multi-granularity event early warning dynamic knowledge graph embedding model construction method based on federal learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881219A (en) * 2020-05-19 2020-11-03 杭州中奥科技有限公司 Dynamic knowledge graph completion method and device, electronic equipment and storage medium
CN112364108A (en) * 2020-11-13 2021-02-12 四川省人工智能研究院(宜宾) Time sequence knowledge graph completion method based on space-time architecture
CN112632290A (en) * 2020-12-21 2021-04-09 浙江大学 Self-adaptive knowledge graph representation learning method integrating graph structure and text information
CN112699247A (en) * 2020-12-23 2021-04-23 清华大学 Knowledge representation learning framework based on multi-class cross entropy contrast completion coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tian Manxin et al., "A Knowledge Representation Method Based on Entity Time Sensitivity" (一种基于实体时间敏感度的知识表示方法), Software Engineering (软件工程), no. 01, 5 January 2020 (2020-01-05), pages 5-10 *

Similar Documents

Publication Publication Date Title
CN113190654A (en) Knowledge graph completion method based on entity joint embedding and a probability model
CN111612066B (en) Remote sensing image classification method based on depth fusion convolutional neural network
CN108829763B (en) Deep neural network-based attribute prediction method for film evaluation website users
CN110046252B (en) Medical text grading method based on attention mechanism neural network and knowledge graph
CN113190688B (en) Complex network link prediction method and system based on logical reasoning and graph convolution
CN109272332B (en) Client loss prediction method based on recurrent neural network
CN113609398A (en) Social recommendation method based on heterogeneous graph neural network
CN114817663B (en) Service modeling and recommendation method based on class perception graph neural network
CN109523021A (en) A kind of dynamic network Structure Prediction Methods based on long memory network in short-term
CN112487193B (en) Zero sample picture classification method based on self-encoder
CN113688253B (en) Hierarchical perception temporal knowledge graph representation learning method
CN116822382B (en) Sea surface temperature prediction method and network based on space-time multiple characteristic diagram convolution
CN113836312A (en) Knowledge representation reasoning method based on encoder and decoder framework
CN113962358A (en) Information diffusion prediction method based on time sequence hypergraph attention neural network
CN112308115A (en) Multi-label image deep learning classification method and equipment
CN112115967A (en) Image increment learning method based on data protection
CN113192647A (en) New crown confirmed diagnosis people number prediction method and system based on multi-feature layered space-time characterization
CN114679372A (en) Node similarity-based attention network link prediction method
CN112712165B (en) Intelligent cigarette formula maintenance method based on graph convolution neural network
CN116821519A (en) Intelligent recommendation method for system filtering and noise reduction based on graph structure
CN111353525A (en) Modeling and missing value filling method for unbalanced incomplete data set
CN109697511B (en) Data reasoning method and device and computer equipment
CN112836511B (en) Knowledge graph context embedding method based on cooperative relationship
CN116110232A (en) Traffic flow prediction method based on hierarchical dynamic residual map convolution network
CN115661450A (en) Category increment semantic segmentation method based on contrast knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination