CN113780564A - Knowledge graph reasoning method, device, equipment and storage medium fusing entity type information - Google Patents


Info

Publication number
CN113780564A
CN113780564A
Authority
CN
China
Prior art keywords: entity, vector, inference, entity type, matrix
Prior art date
Legal status: Granted
Application number
CN202111084761.6A
Other languages
Chinese (zh)
Other versions
CN113780564B (en)
Inventor
朱怡安
段俊花
高昆
钟冬
姚烨
李联
陆伟
史先琛
张黎翔
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202111084761.6A
Publication of CN113780564A
Application granted
Publication of CN113780564B
Legal status: Active

Classifications

    • G06N5/04: Inference or reasoning models
    • G06F16/35: Clustering; Classification
    • G06F16/367: Ontology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/045: Combinations of networks
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a knowledge graph reasoning method, device, equipment and storage medium that fuse entity type information. The method comprises the following steps: inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model, extracting a head entity vector, a relation vector and a head entity type vector respectively, and generating the convolution kernel of the reasoning model; convolving the head entity vector with this convolution kernel to generate the hidden layer of the reasoning model; passing the hidden layer through the fully connected layer of the reasoning model to generate a mixed feature vector; and multiplying the mixed feature vector by the entity embedding matrix, normalizing with a sigmoid activation function, and outputting the inference result. By fusing entity type embeddings with relation embeddings and convolving the head entity with the fused feature vector, the invention captures the internal relations among entities, entity types and relations, and effectively improves the entity-type accuracy of the inference results.

Description

Knowledge graph reasoning method, device, equipment and storage medium fusing entity type information
Technical Field
The invention relates to the technical field of knowledge graphs, and in particular to a knowledge graph reasoning method, device, equipment and storage medium that fuse entity type information.
Background
A knowledge graph is a knowledge representation that stores and organizes knowledge as a graph of entities and relations; it can express the relations among real-world things in graph form and can be applied in many professional fields. Because real-world relations are complex and hard to recognize exhaustively, and constructed knowledge graphs are generally very large, knowledge graph information is inevitably incomplete, which limits the practical application of knowledge graphs to some extent. Knowledge graph reasoning techniques can use the existing knowledge in a knowledge graph to infer the knowledge that is missing or hidden in it, thereby completing the knowledge graph.
At present, knowledge inference algorithms based on representation learning are the mainstream approach to knowledge graph reasoning: entities and relations in the knowledge graph are represented numerically as vectors, and the corresponding numerical calculations are performed to predict the plausibility of triples in the knowledge graph. Although such inference algorithms achieve the best performance on many inference tasks, their accuracy is still far from sufficient for the practical application of knowledge inference. Moreover, knowledge inference algorithms based on representation learning only compute scores between entities and relations, without constraining the type of the inference results, so the types of a large number of entities in the inference results fail to meet the requirements.
It is noted that this section is intended to provide a background or context to the embodiments of the disclosure that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
The embodiments of the invention provide a knowledge graph reasoning method, device, equipment and storage medium that fuse entity type information, aiming to solve problems of the prior art such as the low accuracy of knowledge inference algorithms based on representation learning and the large number of entity types in inference results that fail to meet the requirements.
In a first aspect, an embodiment of the present invention provides a knowledge graph inference method fusing entity type information, including:
converting a triple set in the knowledge graph into vector matrices corresponding to the triple set, wherein the converted triple set comprises an entity embedding matrix, a relation embedding matrix and an entity type embedding matrix;
inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model to respectively extract a head entity vector, a relation vector and a head entity type vector, and passing the relation vector and the head entity type vector sequentially through an LSTM network to generate the convolution kernel of the reasoning model;
convolving the head entity vector by the convolution kernel to generate a hidden layer of the inference model;
after the hidden layer passes through the full connection layer of the inference model, generating a mixed feature vector, wherein the dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and multiplying the mixed feature vector by the entity embedding matrix, and performing normalization processing by adopting a sigmoid activation function to enable the inference model to output an inference result.
As a preferred mode of the first aspect of the present invention, before the converting the triple set in the knowledge-graph into the vector matrix corresponding to the triple set, the method further includes:
after a knowledge base to be processed is obtained, an entity set, a relation set and an entity type set in the knowledge graph are extracted, and a triple set of the knowledge graph is generated according to the entity set, the relation set and the entity type set.
As a preferred mode of the first aspect of the present invention, the triple set includes a forward triple and a reverse triple corresponding to the forward triple.
As a preferred mode of the first aspect of the present invention, after the converting the triple set in the knowledge-graph into the vector matrix corresponding to the triple set, the method further includes:
initializing the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix through xavier normal distribution.
As a preferred mode of the first aspect of the present invention, the multiplying the mixed feature vector by the entity embedding matrix, and performing normalization processing by using a sigmoid activation function to make the inference model output an inference result includes:
multiplying the mixed feature vector by the entity embedding matrix to obtain an output vector;
normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triplet in the triplet set, so that the inference model outputs an inference result;
obtaining the prediction probability of each triple in the triple set according to the following formula:
Figure BDA0003262937640000031
wherein sigmoid () represents a sigmoid activation function, Vec (LSTM (r, t)1) Represent operations that change the shape of the convolution kernel,Vec′(h*Vec(LSTM(r,t1) ) represents the operation of changing the shape of the hidden layer, represents the convolution operation, W and b represent the weight matrix and the offset of the fully connected layer, and h, r, t represent the head entity, the relation and the tail entity in the triplet, respectively.
In a second aspect, an embodiment of the present invention provides a knowledge graph inference apparatus fusing entity type information, including:
the conversion unit is used for converting the triple set in the knowledge graph into a vector matrix corresponding to the triple set, and the converted triple set comprises an entity embedded matrix, a relationship embedded matrix and an entity type embedded matrix;
the extracting unit is used for inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model to respectively extract a head entity vector, a relation vector and a head entity type vector, and generating a convolution kernel of the reasoning model by the relation vector and the head entity type vector through an LSTM network in sequence;
a convolution unit, configured to convolve the head entity vector with the convolution kernel, and generate a hidden layer of the inference model;
the processing unit is used for generating a mixed feature vector after the hidden layer passes through the full connection layer of the inference model, and the dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and the output unit is used for multiplying the mixed feature vector by the entity embedded matrix and carrying out normalization processing by adopting a sigmoid activation function so that the inference model outputs an inference result.
As a preferred mode of the second aspect of the present invention, the triple set includes a forward triple and a reverse triple corresponding to the forward triple.
As a preferable mode of the second aspect of the present invention, the output unit is specifically configured to:
multiplying the mixed feature vector by the entity embedding matrix to obtain an output vector;
normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triplet in the triplet set, so that the inference model outputs an inference result;
obtaining the prediction probability of each triple in the triple set according to the following formula:

p = sigmoid((W · Vec′(h * Vec(LSTM(r, t1))) + b) · t)

wherein sigmoid() denotes the sigmoid activation function, Vec(LSTM(r, t1)) denotes the operation that reshapes the LSTM output into the convolution kernel, Vec′(h * Vec(LSTM(r, t1))) denotes the operation that reshapes the hidden layer, * denotes the convolution operation, W and b denote the weight matrix and the bias of the fully connected layer, h, r and t denote the head entity, relation and tail entity of the triple respectively, and t1 denotes the head entity type vector.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores therein execution instructions, and the processor reads the execution instructions in the memory for executing the steps in the method for knowledge-graph inference of fused entity type information according to any one of the first aspect and its preferred embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions for performing the steps in the method for knowledge-graph inference of fused entity type information according to any one of the first aspect and its preferred embodiments.
According to the knowledge graph inference method, device, equipment and storage medium provided by the embodiments of the invention, entity type information is fused into the knowledge inference algorithm: entity type embeddings are fused with relation embeddings, and the head entity is convolved with the fused feature vector, so that the internal relations among entities, entity types and relations can be captured and entity type information is fully exploited during inference.
The method and device can acquire the entity type information of a triple while acquiring the triple itself, which greatly improves the entity-type accuracy of the inference results so that the entity types meet the requirements, effectively improves the accuracy of the inference results, and facilitates the subsequent practical application of knowledge graph completion.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart illustrating an implementation of a knowledge-graph inference method for fusing entity type information according to an embodiment of the present invention;
fig. 2 is an execution flowchart of a knowledge-graph inference method for fusing entity type information according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a knowledge-graph inference apparatus fusing entity type information according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 exemplarily shows a flow diagram of a knowledge graph inference method for fusing entity type information according to an embodiment of the present invention, and fig. 2 exemplarily shows an execution flow diagram of a knowledge graph inference method for fusing entity type information according to an embodiment of the present invention.
Referring to fig. 1 and 2, the method mainly includes the following steps:
step 101, converting a triple set in a knowledge graph into a vector matrix corresponding to the triple set, wherein the converted triple set comprises an entity embedded matrix, a relationship embedded matrix and an entity type embedded matrix;
102, inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model to respectively extract a head entity vector, a relation vector and a head entity type vector, and sequentially generating a convolution kernel of the reasoning model by the relation vector and the head entity type vector through an LSTM network;
103, convolving the head entity vector with the convolution kernel to generate a hidden layer of the inference model;
104, after the hidden layer passes through a full connection layer of the inference model, generating a mixed feature vector, wherein the dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and 105, multiplying the mixed feature vector by the entity embedding matrix, and performing normalization processing by adopting a sigmoid activation function to enable the inference model to output an inference result.
The knowledge graph reasoning method fusing entity type information provided by this embodiment can be applied in various professional and general fields, such as data anomaly analysis, e-commerce product recommendation and medical care, and has great potential for wide adoption.
Although researchers have put great effort into the construction and maintenance of knowledge graphs, they still suffer from incompleteness and even erroneous knowledge, which limits their practical application to some extent. Knowledge graph reasoning can use the existing knowledge in a knowledge graph to infer the knowledge that is missing or hidden in it, thereby completing the knowledge graph.
Before specifically describing the method for knowledge graph inference fusing entity type information provided by this embodiment, some symbolic definitions of the domain of knowledge inference are briefly described, as shown in table 1 below.
TABLE 1
[Table 1 is rendered as images in the original and is not reproduced here; it defines the notation used in the following description.]
The embodiments of the invention mainly address the link prediction task on knowledge graphs, i.e. predicting the head entity or the tail entity of an incomplete triple. For a triple (h, r, t), the task takes one of two forms: (1) given the head entity and the relation, predict the tail entity (h, r, ?); (2) given the relation and the tail entity, predict the head entity (?, r, t).
The method of the present embodiment will be described in detail below.
Before step 101, the following steps are also included:
step 100, after acquiring a knowledge base to be processed, extracting an entity set, a relationship set and an entity type set in the knowledge base, and generating a triple set of the knowledge graph according to the entity set, the relationship set and the entity type set.
In the step, a knowledge base to be processed is obtained, and then an entity set, a relation set and an entity type set are extracted from the knowledge base. The entity set refers to a set formed by all entities in the knowledge base, the relationship set refers to a set formed by all relationships in the knowledge base, and the entity type set refers to a set formed by type information of all the entities in the knowledge base.
Further, a triple set of the knowledge graph is generated from the obtained entity set, relation set and entity type set. A triple represents knowledge in the form (head entity, relation, tail entity), where the head entity is the entity acting as the subject in the corpus and the tail entity is the entity acting as the object. Typically, the entity type information is appended directly after the entity in a fixed format, so that it is processed together with the entity data.
The method of this embodiment introduces entity type information. Entity type information describes the classification of an entity, reflects some of its characteristics, and restricts the range of inference results; it is an important attribute of an entity. For example, for a triple (disease, commonly-used drug, drug), the tail entity can only be a drug and not an entity of any other type.
Preferably, the triple set includes a forward triple and a reverse triple corresponding to the forward triple.
Specifically, in the process of generating the triple set of the knowledge graph, inverse relations are also introduced to expand the triple set. For each triple (h, r, t), i.e. each forward triple, a reverse triple (t, r_rev, h) is generated at the same time; the two triples share entity embeddings but not relation embeddings. The head and tail entities of the original triple exchange positions and a brand-new relation is added, so the number of entities is unchanged but the number of relations doubles.
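The inverse-triple augmentation described above can be sketched as follows; this is a minimal illustration, and the function name, the `_rev` relation suffix and the sample data are assumptions rather than details taken from the patent:

```python
def add_reverse_triples(triples):
    """Augment (head, relation, tail) triples with reverse triples
    (tail, relation + '_rev', head). The two directions share entities
    but each reverse relation is a brand-new relation, so the entity
    count is unchanged while the relation count doubles."""
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + "_rev", h))
    return augmented


triples = [("aspirin", "treats", "headache"),
           ("ibuprofen", "treats", "fever")]
augmented = add_reverse_triples(triples)
```

With the two sample triples above, the augmented set contains four triples over the same four entities but two relations (`treats` and `treats_rev`).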
In step 101, the triple set in the knowledge graph is embedded into a low-dimensional vector space, and the triple set is converted into the corresponding vector matrices by combining the entity set, the relation set and the entity type set. The converted triple set comprises an entity embedding matrix, a relation embedding matrix and an entity type embedding matrix (the matrix expressions are rendered as images in the original and are not reproduced here), and the entity embedding matrix comprises a head entity embedding matrix and a tail entity embedding matrix. This conversion facilitates the subsequent processing.
After step 101, the following steps are also included:
101-1, initializing the entity embedded matrix, the relation embedded matrix and the entity type embedded matrix through a xavier normal distribution.
In this process, the converted entity embedding matrix, relation embedding matrix and entity type embedding matrix are each initialized (the matrix expressions are rendered as images in the original and are not reproduced here). The preferred initialization method is xavier normal-distribution initialization, which yields the initialized entity embedding matrix, relation embedding matrix and entity type embedding matrix.
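Xavier normal initialization draws each weight from a zero-mean Gaussian whose standard deviation depends on the matrix's fan-in and fan-out. A minimal NumPy sketch of the step above (all matrix sizes are illustrative assumptions, not values taken from the patent):

```python
import numpy as np

def xavier_normal(fan_out, fan_in, rng):
    """Xavier (Glorot) normal initialization: zero-mean Gaussian with
    std = sqrt(2 / (fan_in + fan_out)), which keeps activation variance
    roughly stable across layers."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(0)
E = xavier_normal(1000, 200, rng)   # entity embedding matrix (1000 entities, dim 200)
R = xavier_normal(50, 200, rng)     # relation embedding matrix
T = xavier_normal(20, 200, rng)     # entity type embedding matrix
```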
In step 102, before the inference model is used for knowledge graph inference, each triple in the knowledge graph is preprocessed and converted into data that the inference model can process. The inference model is based on a convolutional neural network and uses entity type information for inference. It can greatly improve the entity-type accuracy of knowledge graph inference results while also modestly improving inference performance, and it facilitates the subsequent practical application of knowledge graph completion.
All triples in the triple set are read, the numbers of entities and relations are counted, and entities and relations are then represented by IDs. When an entity is converted into an ID, a mapping table from entity IDs to the corresponding entity type IDs is generated at the same time, so that when an entity ID is input, the ID of its entity type can be obtained by looking up the table. The triples represented by IDs are then input into the inference model, which extracts the corresponding entity, relation and entity type vectors according to the IDs for learning.
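The ID assignment and the entity-ID-to-type-ID mapping table described above can be sketched in pure Python as follows (the function name and sample data are hypothetical illustrations):

```python
def build_id_maps(triples, entity_types):
    """Assign integer IDs to entities and relations, and build a lookup
    table from entity ID to entity-type ID, so that feeding an entity
    ID into the model also yields its type ID via a table lookup.
    `entity_types` maps each entity name to its type name."""
    entity2id, relation2id, type2id = {}, {}, {}
    for h, r, t in triples:
        for e in (h, t):
            entity2id.setdefault(e, len(entity2id))
        relation2id.setdefault(r, len(relation2id))
    for type_name in entity_types.values():
        type2id.setdefault(type_name, len(type2id))
    # entity ID -> entity type ID lookup table
    eid2tid = {entity2id[e]: type2id[entity_types[e]] for e in entity2id}
    return entity2id, relation2id, type2id, eid2tid


triples = [("aspirin", "treats", "headache")]
types = {"aspirin": "drug", "headache": "disease"}
entity2id, relation2id, type2id, eid2tid = build_id_maps(triples, types)
```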
After each triple in the knowledge graph has been preprocessed as above, for a given triple (h, r, t) with entity types t1 and t2, the head entity ID and the relation ID of the triple to be trained are first input into the inference model, and the corresponding head entity vector h, relation vector r and head entity type vector t1 are extracted from the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix according to these IDs. The relation vector r and the head entity type vector t1 are then input sequentially into an LSTM network (a recurrent neural network), the dimensionality of the output is changed, and the result is assembled into the convolution kernel ω required by the convolution operation, namely:
ω = Vec(LSTM(r, t1)),
in the embodiment, the LSTM network is used as a super network for generating a convolution kernel of the convolution neural network, and the LSTM network can learn the implicit logics of the relationship vector and the head entity type vector, so that the expression capability of the relationship is enriched. A super network is a method of generating network weights for one network to another network, which can implement multi-layer weight sharing and can dynamically generate weights given input. The weight sharing enriches the expression capability of the reasoning model, so that the reasoning model can learn more entity and relationship interaction characteristics. The convolutional neural network has the greatest advantage in the inference model that the learning dimensionality is limited, the inference model is subjected to explicit regularization, and the phenomenon of overfitting is reduced, rather than learning a complex structure with high dimensionality possibly existing in an embedded vector.
In the step, LSTM network mixed relation embedding and head entity type embedding are used as input of the super network, so that the inference model can capture the relation between the entity type and the relation, and the accuracy of entity type prediction is improved. Meanwhile, although the inference model mixes only information between the head entity and the relationship, the information of the head entity type and the relationship is sufficient to express the relation between the entity types considering that the combination of the head entity type, the relationship, and the tail entity type is fixed.
Since the mixture of the relation vector and the head entity type vector serves as the convolution kernel in this embodiment, no separate convolution kernel initialization is needed. The input dimension of the LSTM network is the dimension l_r of the relation embedding matrix, and the output dimension is 32 × 1 × 9, the dimension required by the convolution kernel. In this embodiment, the reshaped convolution kernel has shape [32, 1, 9].
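The kernel-generating step ω = Vec(LSTM(r, t1)) can be sketched with a hand-rolled single-layer LSTM in NumPy. This is a simplified stand-in, not the patent's implementation: the weights are untrained random values, the relation dimension of 20 is illustrative, and the hidden size of 288 is chosen only so that the final hidden state reshapes exactly into the [32, 1, 9] kernel:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell. W, U and b pack the input,
    forget, cell and output gates (4*hid rows)."""
    hid = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:hid])          # input gate
    f = sigmoid(z[hid:2*hid])      # forget gate
    g = np.tanh(z[2*hid:3*hid])    # candidate cell state
    o = sigmoid(z[3*hid:4*hid])    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
l_r, hid = 20, 288                 # hid = 32*1*9 so the output reshapes into the kernel
W = rng.normal(0, 0.1, (4 * hid, l_r))
U = rng.normal(0, 0.1, (4 * hid, hid))
b = np.zeros(4 * hid)

r_vec = rng.normal(0, 1, l_r)      # relation vector
t1 = rng.normal(0, 1, l_r)         # head entity type vector
h_state, c_state = np.zeros(hid), np.zeros(hid)
for step_input in (r_vec, t1):     # feed r, then t1, sequentially
    h_state, c_state = lstm_step(step_input, h_state, c_state, W, U, b)
omega = h_state.reshape(32, 1, 9)  # Vec(LSTM(r, t1)): reshape into the kernel
```

In the patent's model the LSTM weights are learned jointly with the embeddings; here they are random only to show the shapes flowing through the computation.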
In step 103, the head entity vector h is convolved with the convolution kernel ω of the convolutional neural network in the inference model obtained above, yielding a temporary tensor x1 that constitutes the hidden layer of the inference model, namely:
x1 = h * ω.
The convolution window size is 1 × 9, and the output temporary tensor is x1 ∈ [32, 1, (l_e − 2)].
In this embodiment, the inference model uses a relation-based convolution kernel to convolve the head entity, thereby implementing multi-task knowledge sharing across relations. Compared with the ConvE model in the prior art, the method avoids changing the dimensionality of the embedded vector, and enables the interaction of the head entity and the relationship to be more comprehensive. In the ConvE model, only the positions where the two embedded matrixes are connected exist the interaction of the head entity and the relationship, while the inference model in the embodiment is that the interaction exists in each corresponding dimension of the two vectors, so that more comprehensive characteristics can be learned.
In step 104, the convolved temporary tensor x₁ obtained above is flattened into a vector, which is then passed through the fully connected layer of the inference model to obtain the mixed feature vector x₂ ∈ [n_e], whose dimension equals the number of entities included in the triple set, i.e.:
x₂ = W · Vec′(x₁) + b,
where W ∈ [32 × (l_e − 2), n_e] and b ∈ [n_e] denote the weight matrix and bias of the fully connected layer, and Vec′() denotes the operation that changes the shape of the temporary tensor x₁. The purpose of this step is to map the mixed head-entity, head-entity-type, and relation features onto all entities.
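The fully connected mixing step might look like the following sketch. The toy entity count, random weights, and the weight matrix being stored transposed relative to the patent's notation are all assumptions based on the shapes stated above:

```python
import numpy as np

rng = np.random.default_rng(0)

l_e, n_e = 200, 500                       # embedding dimension and a toy entity count
x1 = rng.normal(size=(32, 1, l_e - 2))    # hidden layer from the convolution step

# Hypothetical fully connected layer: flatten x1, then project so that
# every entity receives one mixed-feature component.
W = rng.normal(0.0, 0.01, size=(n_e, 32 * (l_e - 2)))
b = np.zeros(n_e)
x2 = W @ x1.reshape(-1) + b
print(x2.shape)  # (500,)
```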
In step 105, after the tail entity vector is obtained from the entity embedding matrix, the mixed feature vector obtained in the above steps is multiplied by the entity embedding matrix to obtain an output vector, and the output vector is normalized by using a sigmoid activation function to obtain a final inference result, that is, the prediction probability of each triplet in the triplet set.
During prediction, the prediction probabilities of all triples are sorted from high to low to obtain the ranking of the predicted triples.
According to these prediction probabilities, the missing tail entity can be predicted accurately, so that triples in the knowledge graph that lack a tail entity can be completed.
In an alternative embodiment provided by the present application, step 105 may be implemented as follows:
and 1051, multiplying the mixed feature vector by the entity embedded matrix to obtain an output vector.
In this step, the mixed feature vector is multiplied by the entity embedding matrix to obtain an output vector, from which the prediction probability of each triple in the triple set can then be obtained through a sigmoid activation function.
Step 1052, normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triple in the triple set, so that the inference model outputs an inference result.
In this step, the sigmoid activation function normalizes the output vector, yielding the prediction probability of each triple in the triple set, so that the inference model can output a corresponding inference result according to these probabilities and complete the knowledge graph.
Specifically, the prediction probability of each triple in the triple set is obtained according to the following formula:
p = sigmoid(W · Vec′(h ∗ Vec(LSTM(r, t₁))) + b)
wherein sigmoid() represents the sigmoid activation function, Vec(LSTM(r, t₁)) denotes the operation of changing the shape of the convolution kernel, Vec′(h ∗ Vec(LSTM(r, t₁))) denotes the operation of changing the shape of the hidden layer, ∗ denotes the convolution operation, W and b denote the weight matrix and the bias of the fully connected layer, and h, r, t denote the head entity, the relation, and the tail entity in the triplet, respectively.
The inference model treats the judgment of triple correctness as a binary classification problem, and the sigmoid activation function maps the output result to (0, 1).
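A sketch of steps 1051–1052. The patent's exact shapes are ambiguous in translation, so this follows the HypER-style convention in which the mixed feature vector matches the embedding dimension and is scored against every entity embedding; all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_e, l_e = 500, 200                  # toy entity count and embedding dimension
E = rng.normal(size=(n_e, l_e))      # entity embedding matrix
x2 = rng.normal(size=l_e)            # mixed feature vector (embedding-sized here)

# Step 1051: score every candidate tail entity against the mixed features.
scores = E @ x2

# Step 1052: squash the scores into (0, 1) with sigmoid, treating triple
# correctness as a binary classification problem.
p = 1.0 / (1.0 + np.exp(-scores))
print(p.shape)  # (500,)
```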
In order to verify the technical effect brought by the knowledge-graph inference method for fusing entity type information described in this embodiment, the application of the method in the medical field will be verified below.
First, a crawler is used to obtain relevant Chinese medical data from the medical encyclopedia of a medical website. The constructed data set contains 38111 entities, 16 relations, and 8 entity types. The entity type information is attached directly to the entity in a fixed format, so that it is processed at the same time as the entity data. The specific data sets are shown in Tables 2 and 3 below:
TABLE 2
The training set is used to train the inference model, the validation set is used to determine under which parameters the inference model performs best, and the test set is used to evaluate the final result of the inference model.
TABLE 3
The experimental environment configuration is shown in table 4 below:
TABLE 4
When the inference model is trained, the hyperparameters are set as follows: the entity embedding dimension, the relation embedding dimension, and the entity type embedding dimension are all 200.
In the link prediction task on a knowledge graph, the inference task is mainly to predict the missing head or tail entity of a triple. For a triple (h, r, t), the task takes one of two forms: (1) given the head entity and the relation, predict the tail entity, i.e. (h, r, ?); (2) given the relation and the tail entity, predict the head entity, i.e. (?, r, t).
For the entity type verification task, the testing method is the same as for the inference task, except that after the inference result is obtained, the entity types of the tail entities are checked in sequence and the type accuracy is calculated.
The common evaluation metrics for this task are MR, MRR, Hits@1, Hits@3, and Hits@10, tested in Filter mode. On this basis, entity-type accuracy metrics are added, namely Top10, Top30, and Top100, which evaluate the type accuracy of the top 10, 30, and 100 entities in the inference result: the type of each entity in the inference result is checked in turn and scored. When testing entity-type accuracy, both the Raw mode and the Filter mode are used; all other evaluation metrics in this embodiment use the Filter mode. A knowledge graph contains many one-to-many, many-to-one, and many-to-many relations, which during prediction can surface entities that are correct but are not the target. For example, suppose the triple (h, r, t) is the triple to be tested and another triple (h, r, t′) is also correct; when predicting (h, r, ?), t′ may be ranked above t even though both are valid. To handle this situation, the Filter mode filters out, before computing the rank of the inference result, all triples appearing in the training, validation, and test sets other than the triple under test. After eliminating these correct but non-target triples, the rank of the triple to be tested is calculated. When this strategy is not applied, i.e. existing triples are not ignored, the mode is called Raw mode.
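The Filter-mode ranking described above can be sketched as follows; the scores and entity indices are toy values:

```python
import numpy as np

def filtered_rank(scores, target, known_correct):
    """Rank of the target entity after masking the other known-correct
    answers (Filter mode); passing only the target gives the Raw-style rank."""
    s = np.asarray(scores, dtype=float).copy()
    for e in known_correct:
        if e != target:
            s[e] = -np.inf   # eliminate correct but non-target triples
    return int(1 + np.sum(s > s[target]))

scores = [0.6, 0.9, 0.8, 0.7]   # toy prediction probabilities per candidate entity
# Entity 3 is the target; entities 1 and 2 also form correct triples that
# appear in the training/validation/test sets.
print(filtered_rank(scores, target=3, known_correct={1, 2, 3}))  # 1
print(filtered_rank(scores, target=3, known_correct={3}))        # 3
```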
In addition, in the experiments, the method is compared with several prior-art knowledge graph inference models, such as the DistMult, ConvE, and HypER models.
The results of the above models on the medical dataset are shown in Tables 5 and 6 below; all models were trained and tested by ourselves:
TABLE 5
TABLE 6
From tables 5 and 6, it can be seen that:
(1) For the entity prediction task, the inference model of the present invention improves on the HypER model, exceeding it on all metrics except MR; this demonstrates that entity type information improves the inference results. The ConvE model achieves the lowest MR; since MR is sensitive to outliers, this suggests that ConvE learns relatively universal triple patterns. Table 6 also shows that in Raw mode the entity-type accuracy of ConvE is higher than that of all models except the inference model of the present invention.
(2) For entity-type accuracy, the inference model of the present invention achieves the best results: whether in Raw mode (no filtering) or Filter mode (filtering out other entities that appear), its entity-type accuracy exceeds all other models, demonstrating that fusing entity type information markedly improves type accuracy. The average improvement is 100.0% in Raw mode and 133.2% in Filter mode.
The following compares the inference results for a specific head entity and relation: Table 7 below shows the Top10 inference results, in Raw mode, of the HypER model and the inference model of the present invention. The head entity is the lupus band test [examination item], the relation is "available for diagnosis", and the type of the tail entity should be a disease. As Table 7 shows, the added entity type information changes the types in the inference result from all wrong to all correct, so the entity type information plays its intended role.
TABLE 7
It should be noted that the above method embodiments are described as a series of actions for simplicity of description, but those skilled in the art will understand that the present invention is not limited by the described order of actions. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments and that not every described action is required to implement the invention.
In summary, the method for inference of a knowledge graph fusing entity type information provided in the embodiments of the present invention fuses entity type information into a knowledge inference algorithm, fuses entity type embedding and relationship embedding, and convolves a head entity with a feature vector after fusion, so as to capture an internal relationship between an entity, an entity type, and a relationship, so that the entity type information is fully applied in an inference process.
The method and the device acquire the entity type information of the triples while acquiring the triple information, greatly improving the entity-type accuracy of the inference result so that the entity types meet the requirements, effectively improving the accuracy of the inference result, and facilitating the practical application of subsequent knowledge graph completion.
Based on the same inventive concept, fig. 3 exemplarily illustrates a knowledge graph inference apparatus fusing entity type information provided in an embodiment of the present invention, and since the principle of the apparatus for solving the technical problem is similar to a knowledge graph inference method fusing entity type information, specific embodiments of the apparatus may refer to specific embodiments of the method, and repeated details are omitted.
Referring to fig. 3, the apparatus mainly includes the following units:
a converting unit 301, configured to convert a triple set in a knowledge graph into a vector matrix corresponding to the triple set, where the converted triple set includes an entity embedding matrix, a relationship embedding matrix, and an entity type embedding matrix;
an extracting unit 302, configured to input the entity embedding matrix, the relationship embedding matrix, and the entity type embedding matrix into a reasoning model, respectively extract a head entity vector, a relationship vector, and a head entity type vector, and sequentially generate a convolution kernel of the reasoning model through an LSTM network for the relationship vector and the head entity type vector;
a convolution unit 303, configured to convolve the head entity vector with the convolution kernel, and generate a hidden layer of the inference model;
a passing unit 304, configured to pass the hidden layer through a full connection layer of the inference model, and generate a mixed feature vector, where a dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and an output unit 305, configured to multiply the mixed feature vector by the entity embedding matrix, and perform normalization processing by using a sigmoid activation function, so that the inference model outputs an inference result.
It should be noted here that the conversion unit 301, extraction unit 302, convolution unit 303, passing unit 304, and output unit 305 correspond to steps 101 to 105 in the above method embodiment; the five units share the same implementation examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the above method embodiment.
Preferably, an obtaining unit 100 is further included for:
after a knowledge base to be processed is obtained, an entity set, a relation set and an entity type set in the knowledge base are extracted, and a triple set of the knowledge graph is generated according to the entity set, the relation set and the entity type set.
Preferably, the triplet set includes a forward triplet and a reverse triplet corresponding to the forward triplet.
Preferably, the conversion unit 301 is further configured to:
initializing the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix through xavier normal distribution.
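Xavier normal initialization of the three embedding matrices can be sketched as follows. The entity count here is a toy value (the paper's dataset has 38111 entities); the relation and type counts and the embedding dimension 200 follow the figures given earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_normal(fan_in, fan_out, gain=1.0):
    """Xavier (Glorot) normal initialization: zero-mean Gaussian with
    std = gain * sqrt(2 / (fan_in + fan_out))."""
    std = gain * np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

n_e, n_r, n_t, d = 500, 16, 8, 200   # toy entity count; relation/type counts and
                                     # embedding dimension follow the paper
E = xavier_normal(n_e, d)   # entity embedding matrix
R = xavier_normal(n_r, d)   # relation embedding matrix
T = xavier_normal(n_t, d)   # entity type embedding matrix
print(E.shape, R.shape, T.shape)
```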
Preferably, the output unit 305 is specifically configured to:
multiplying the mixed feature vector by the entity embedding matrix to obtain an output vector;
normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triplet in the triplet set, so that the inference model outputs an inference result;
obtaining the prediction probability of each triple in the triple set according to the following formula:
p = sigmoid(W · Vec′(h ∗ Vec(LSTM(r, t₁))) + b)
wherein sigmoid() represents the sigmoid activation function, Vec(LSTM(r, t₁)) denotes the operation of changing the shape of the convolution kernel, Vec′(h ∗ Vec(LSTM(r, t₁))) denotes the operation of changing the shape of the hidden layer, ∗ denotes the convolution operation, W and b denote the weight matrix and the bias of the fully connected layer, and h, r, t denote the head entity, the relation, and the tail entity in the triplet, respectively.
It should be noted that the apparatus for reasoning a knowledge graph of merging entity type information provided in the embodiment of the present invention and the method for reasoning a knowledge graph of merging entity type information described in the foregoing embodiment belong to the same technical concept, and the specific implementation process thereof may refer to the description of the method steps in the foregoing embodiment, which is not described herein again.
It should be understood that the above knowledge graph inference device for fusing entity type information includes only logical division according to the functions implemented by the device, and in practical applications, the above units may be superimposed or split. The functions of the apparatus for inference of a knowledge graph based on entity type information provided in this embodiment correspond to the method for inference of a knowledge graph based on entity type information provided in the above embodiment one by one, and for the more detailed processing flow implemented by the apparatus, the above method embodiment has been described in detail, and will not be described in detail here.
In summary, the knowledge graph inference apparatus fusing entity type information provided in the embodiments of the present invention fuses entity type information into a knowledge inference algorithm, fuses entity type embedding and relationship embedding, and convolves a head entity with a fused feature vector, so as to capture an internal relationship between an entity, an entity type, and a relationship, so that the entity type information is fully applied in an inference process.
The method and the device acquire the entity type information of the triples while acquiring the triple information, greatly improving the entity-type accuracy of the inference result so that the entity types meet the requirements, effectively improving the accuracy of the inference result, and facilitating the practical application of subsequent knowledge graph completion.
Based on the same inventive concept, fig. 4 exemplarily shows an electronic device provided in an embodiment of the present invention, because a principle of solving a technical problem of the electronic device is similar to a knowledge graph inference method that fuses entity type information, a specific implementation of the electronic device may refer to a specific implementation of the method, and repeated details are not repeated.
Referring to fig. 4, an embodiment of the present invention provides an electronic device, which mainly includes a processor 401 and a memory 402, where the memory 402 stores execution instructions. The processor 401 reads the execution instructions in the memory 402 to execute the steps described in any embodiment of the knowledge graph inference method fusing entity type information. Alternatively, the processor 401 reads the execution instructions in the memory 402 to implement the functions of the units in any embodiment of the knowledge graph inference apparatus fusing entity type information.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 4, the electronic device includes a processor 401, a memory 402, and a transceiver 403; wherein, the processor 401, the memory 402 and the transceiver 403 are mutually communicated through a bus 404.
The aforementioned bus 404 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one arrowed line is shown, but this does not indicate that there is only one bus or one type of bus.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In summary, the electronic device provided in the embodiment of the present invention fuses the entity type information into the knowledge inference algorithm, fuses the entity type embedding and the relationship embedding, and convolves the head entity with the fused feature vector, so as to capture the internal relationship between the entity, the entity type, and the relationship, and fully apply the entity type information in the inference process.
The method and the device acquire the entity type information of the triples while acquiring the triple information, greatly improving the entity-type accuracy of the inference result so that the entity types meet the requirements, effectively improving the accuracy of the inference result, and facilitating the practical application of subsequent knowledge graph completion.
Embodiments of the present invention further provide a computer-readable storage medium, which contains computer-executable instructions, where the computer-executable instructions are used to perform the steps described in the above-mentioned method for knowledge-graph inference based on fused entity type information. Alternatively, the computer-executable instructions are used to perform the functions of the elements of the above-described knowledge-graph inference apparatus embodiment of fused entity type information.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
In addition, computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A knowledge graph reasoning method fusing entity type information is characterized by comprising the following steps:
converting a triple set in the knowledge graph into a vector matrix corresponding to the triple set, wherein the converted triple set comprises an entity embedded matrix, a relationship embedded matrix and an entity type embedded matrix;
inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model to respectively extract a head entity vector, a relation vector and a head entity type vector, and sequentially generating a convolution kernel of the reasoning model by the relation vector and the head entity type vector through an LSTM network;
convolving the head entity vector by the convolution kernel to generate a hidden layer of the inference model;
after the hidden layer passes through the full connection layer of the inference model, generating a mixed feature vector, wherein the dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and multiplying the mixed feature vector by the entity embedding matrix, and performing normalization processing by adopting a sigmoid activation function to enable the inference model to output an inference result.
2. The method of claim 1, further comprising, prior to said converting the set of triples in the knowledge-graph to a vector matrix corresponding to the set of triples:
after a knowledge base to be processed is obtained, an entity set, a relation set and an entity type set in the knowledge base are extracted, and a triple set of the knowledge graph is generated according to the entity set, the relation set and the entity type set.
3. The method of claim 1, wherein the set of triples includes a forward triplet and a reverse triplet corresponding to the forward triplet.
4. The method of claim 1, wherein after the converting the set of triples in the knowledge-graph to a vector matrix corresponding to the set of triples, further comprising:
initializing the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix through xavier normal distribution.
5. The method of claim 1, wherein multiplying the mixed feature vector by the entity embedding matrix and performing normalization processing by using a sigmoid activation function to enable the inference model to output inference results comprises:
multiplying the mixed feature vector by the entity embedding matrix to obtain an output vector;
normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triplet in the triplet set, so that the inference model outputs an inference result;
obtaining the prediction probability of each triple in the triple set according to the following formula:
p = sigmoid(W · Vec′(h ∗ Vec(LSTM(r, t₁))) + b)
wherein sigmoid() represents the sigmoid activation function, Vec(LSTM(r, t₁)) denotes the operation of changing the shape of the convolution kernel, Vec′(h ∗ Vec(LSTM(r, t₁))) denotes the operation of changing the shape of the hidden layer, ∗ denotes the convolution operation, W and b denote the weight matrix and the bias of the fully connected layer, and h, r, t denote the head entity, the relation, and the tail entity in the triplet, respectively.
6. A knowledge-graph inference apparatus fusing entity type information, comprising:
the conversion unit is used for converting the triple set in the knowledge graph into a vector matrix corresponding to the triple set, and the converted triple set comprises an entity embedded matrix, a relationship embedded matrix and an entity type embedded matrix;
the extracting unit is used for inputting the entity embedding matrix, the relation embedding matrix and the entity type embedding matrix into a reasoning model to respectively extract a head entity vector, a relation vector and a head entity type vector, and generating a convolution kernel of the reasoning model by the relation vector and the head entity type vector through an LSTM network in sequence;
a convolution unit, configured to convolve the head entity vector with the convolution kernel, and generate a hidden layer of the inference model;
the processing unit is used for generating a mixed feature vector after the hidden layer passes through the full connection layer of the inference model, and the dimension of the mixed feature vector is the same as the number of entities included in the triple set;
and the output unit is used for multiplying the mixed feature vector by the entity embedded matrix and carrying out normalization processing by adopting a sigmoid activation function so that the inference model outputs an inference result.
7. The apparatus of claim 6, wherein the set of triples comprises a forward triplet and a reverse triplet corresponding to the forward triplet.
8. The apparatus of claim 6, wherein the output unit is specifically configured to:
multiplying the mixed feature vector by the entity embedding matrix to obtain an output vector;
normalizing the output vector by adopting a sigmoid activation function to obtain the prediction probability of each triplet in the triplet set, so that the inference model outputs an inference result;
obtaining the prediction probability of each triple in the triple set according to the following formula:
p = sigmoid(W · Vec′(h ∗ Vec(LSTM(r, t₁))) + b)
wherein sigmoid() represents the sigmoid activation function, Vec(LSTM(r, t₁)) denotes the operation of changing the shape of the convolution kernel, Vec′(h ∗ Vec(LSTM(r, t₁))) denotes the operation of changing the shape of the hidden layer, ∗ denotes the convolution operation, W and b denote the weight matrix and the bias of the fully connected layer, and h, r, t denote the head entity, the relation, and the tail entity in the triplet, respectively.
9. An electronic device comprising a processor and a memory, wherein the memory stores execution instructions, and the processor reads the execution instructions in the memory for executing the steps of the method for knowledge-graph inference fusing entity type information according to any one of claims 1-5.
10. A computer-readable storage medium storing computer-executable instructions for performing the steps in the method of knowledge-graph inference fusing entity type information according to any of claims 1-5.
CN202111084761.6A 2021-09-15 2021-09-15 Knowledge graph reasoning method, device, equipment and storage medium integrating entity type information Active CN113780564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111084761.6A CN113780564B (en) 2021-09-15 2021-09-15 Knowledge graph reasoning method, device, equipment and storage medium integrating entity type information


Publications (2)

Publication Number Publication Date
CN113780564A true CN113780564A (en) 2021-12-10
CN113780564B CN113780564B (en) 2024-01-12

Family

ID=78844496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111084761.6A Active CN113780564B (en) 2021-09-15 2021-09-15 Knowledge graph reasoning method, device, equipment and storage medium integrating entity type information

Country Status (1)

Country Link
CN (1) CN113780564B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763376A (en) * 2018-05-18 2018-11-06 浙江大学 Syncretic relation path, type, the representation of knowledge learning method of entity description information
CN111260064A (en) * 2020-04-15 2020-06-09 中国人民解放军国防科技大学 Knowledge inference method, system and medium based on knowledge graph of meta knowledge
CN111339320A (en) * 2020-03-02 2020-06-26 北京航空航天大学 Knowledge graph embedding and reasoning method introducing entity type automatic representation
WO2020140386A1 (en) * 2019-01-02 2020-07-09 平安科技(深圳)有限公司 Textcnn-based knowledge extraction method and apparatus, and computer device and storage medium
CN112949835A (en) * 2021-03-30 2021-06-11 太原理工大学 Inference method and device for knowledge graph based on convolution cyclic neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763376A (en) * 2018-05-18 2018-11-06 浙江大学 Syncretic relation path, type, the representation of knowledge learning method of entity description information
WO2020140386A1 (en) * 2019-01-02 2020-07-09 平安科技(深圳)有限公司 Textcnn-based knowledge extraction method and apparatus, and computer device and storage medium
US20210216880A1 (en) * 2019-01-02 2021-07-15 Ping An Technology (Shenzhen) Co., Ltd. Method, equipment, computing device and computer-readable storage medium for knowledge extraction based on textcnn
CN111339320A (en) * 2020-03-02 2020-06-26 北京航空航天大学 Knowledge graph embedding and reasoning method introducing entity type automatic representation
CN111260064A (en) * 2020-04-15 2020-06-09 中国人民解放军国防科技大学 Knowledge inference method, system and medium based on knowledge graph of meta knowledge
CN112949835A (en) * 2021-03-30 2021-06-11 太原理工大学 Inference method and device for knowledge graph based on convolution cyclic neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Du Wenqian; Li Bicheng; Wang Rui: "Knowledge Graph Representation Learning Method Integrating Entity Descriptions and Types", Journal of Chinese Information Processing (中文信息学报), no. 07 *

Also Published As

Publication number Publication date
CN113780564B (en) 2024-01-12

Similar Documents

Publication Publication Date Title
CN112784092B (en) Cross-modal image text retrieval method of hybrid fusion model
Xia et al. Complete random forest based class noise filtering learning for improving the generalizability of classifiers
CN113535984B (en) Knowledge graph relation prediction method and device based on attention mechanism
CN110659723B (en) Data processing method and device based on artificial intelligence, medium and electronic equipment
EP4006909B1 (en) Method, apparatus and device for quality control and storage medium
Bombara et al. Offline and online learning of signal temporal logic formulae using decision trees
CN113449204B (en) Social event classification method and device based on local aggregation graph attention network
CN112183881A (en) Public opinion event prediction method and device based on social network and storage medium
CN112086144A (en) Molecule generation method, molecule generation device, electronic device, and storage medium
CN112420125A (en) Molecular attribute prediction method and device, intelligent equipment and terminal
KR20230141683A (en) Method, apparatus and computer program for buildding knowledge graph using qa model
CN115238909A (en) Data value evaluation method based on federal learning and related equipment thereof
CN113221762A (en) Cost balance decision method, insurance claim settlement decision method, device and equipment
Yuan et al. Meta-learning causal feature selection for stable prediction
Caldeira et al. Image classification benchmark (ICB)
Zhang et al. VESC: a new variational autoencoder based model for anomaly detection
Bao et al. Confidence-based interactable neural-symbolic visual question answering
Ming A survey on visualization for explainable classifiers
CN113780564A (en) Knowledge graph reasoning method, device, equipment and storage medium fusing entity type information
CN115618065A (en) Data processing method and related equipment
CN112328879B (en) News recommendation method, device, terminal equipment and storage medium
US20220027722A1 (en) Deep Relational Factorization Machine Techniques for Content Usage Prediction via Multiple Interaction Types
Gupta Practical Data Science with Jupyter: Explore Data Cleaning, Pre-processing, Data Wrangling, Feature Engineering and Machine Learning using Python and Jupyter (English Edition)
CN114707070A (en) User behavior prediction method and related equipment thereof
CN114429822A (en) Medical record quality inspection method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant