CN111160557A - Knowledge representation learning method based on double-agent reinforcement learning path search - Google Patents

Knowledge representation learning method based on double-agent reinforcement learning path search

Info

Publication number
CN111160557A
CN111160557A (application CN201911376444.4A)
Authority
CN
China
Prior art keywords
agent
entity
hop
relations
relationship
Prior art date
Legal status
Granted
Application number
CN201911376444.4A
Other languages
Chinese (zh)
Other versions
CN111160557B (en)
Inventor
陈岭
崔军
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201911376444.4A
Publication of CN111160557A
Application granted
Publication of CN111160557B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G06F16/288 Entity relationship models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Devices For Executing Special Programs (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a knowledge representation learning method based on double-agent reinforcement learning path search, which comprises the following steps: (1) deleting redundant relations in the knowledge base and pre-training the vectors of entities and relations; (2) searching, with a path searcher, several multi-hop relations between the entity pair of each triple in the knowledge base according to the vectors of entities and relations, where a relation agent and an entity agent that take state and historical information into account make the decisions during the search; (3) learning the vectors of entities and relations according to the single-hop relations between entities and the multi-hop relations obtained by searching, using an attention mechanism to measure the weight of each multi-hop relation. The method can introduce high-quality multi-hop relations into knowledge representation learning.

Description

Knowledge representation learning method based on double-agent reinforcement learning path search
Technical Field
The invention relates to the field of knowledge representation learning, in particular to a knowledge representation learning method based on double-agent reinforcement learning path search.
Background
Currently, knowledge bases containing large amounts of structured knowledge are important components of many applications, such as knowledge reasoning and question answering. In recent years, many enterprises and organizations have therefore constructed large knowledge bases, such as Freebase, DBpedia, and YAGO. Knowledge in a knowledge base is represented in the form of triples (head entity, relation, tail entity), abbreviated as (h, r, t). Although existing knowledge bases already contain a great deal of knowledge, many relations between entities are still missing, so knowledge base completion has become a research hotspot.
To complete a knowledge base, the knowledge base must first be modeled. Symbolic representation is a knowledge base modeling method that treats the entities and relations in a knowledge base as symbols. It suffers from low computational efficiency and data sparsity and cannot scale to today's ever-growing knowledge bases. Knowledge representation is another modeling method: the entities and relations in the knowledge base are embedded into a low-dimensional vector space, and their semantics are mapped into the corresponding vectors. This alleviates the problems of low computational efficiency and data sparsity, so the method can be applied to large knowledge bases.
Translation-based models are a typical class of knowledge representation learning methods that treat the relation in a triple as a translation operation between the head and tail entities. When a relation between two entities is missing, the corresponding relation vector can be computed as the difference between the tail entity vector and the head entity vector, thereby completing the relation. Most existing translation-based models only consider single-hop relations and ignore multi-hop relations, i.e., relation paths formed by multiple relations between entities.
Some translation-based models consider multi-hop relationships, but have the following problems:
(1) the multi-hop relations are obtained by traversal, which is time-consuming and yields low-quality multi-hop relations;
(2) the weight assigned to each multi-hop relation is based on its static features, and the model cannot learn these weights during training.
In recent years, some work introducing reinforcement learning into knowledge base completion has emerged, obtaining high-quality multi-hop relations by constructing reinforcement learning models. However, these models have the following problems:
(1) the information considered while searching for multi-hop relations is not comprehensive enough: only the selection of relations is considered, while the selection of entities is ignored;
(2) the reward is set too simply and does not take multiple factors into account.
Disclosure of Invention
The technical problem to be solved by the invention is how to search for and introduce high-quality multi-hop relations during knowledge representation learning.
In order to solve the above problems, the present invention provides a knowledge representation learning method based on dual-agent reinforcement learning path search, comprising the following steps:
(1) deleting redundant relations in the knowledge base and pre-training the vectors of entities and relations;
(2) searching, with a path searcher, several multi-hop relations between the entity pair of each triple in the knowledge base according to the vectors of entities and relations, where a relation agent and an entity agent that take state and historical information into account make the decisions during the search;
(3) learning the vectors of entities and relations according to the single-hop relations between entities and the multi-hop relations obtained by searching, using an attention mechanism to measure the weight of each multi-hop relation.
Compared with the prior art, the invention has the beneficial effects that:
compared with the traditional approach of obtaining multi-hop relations by traversal, the multi-hop relations found by the path searcher are of higher quality, and the weights given to them are more reasonable; compared with existing reinforcement-learning-based methods, two agents are used to make decisions, so state and historical information can be used more comprehensively, and the reward in the model is set more reasonably. The method is mainly applied to knowledge base completion.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an overall flowchart of a knowledge representation learning method based on a dual-agent reinforcement learning path search according to an embodiment of the present invention;
FIG. 2 is a flow chart of data preprocessing provided by an embodiment of the present invention;
FIG. 3 is a flowchart of a path search according to an embodiment of the present invention;
FIG. 4 is a flow chart of knowledge representation learning provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is an overall flowchart of a knowledge representation learning method based on a dual-agent reinforcement learning path search according to an embodiment of the present invention. Referring to fig. 1, the embodiment provides a knowledge representation learning method based on a dual-agent reinforcement learning path search, which includes three stages of data preprocessing, path search and knowledge representation learning.
Data preprocessing stage
The data preprocessing stage mainly deletes the redundant relations in the knowledge base and pre-trains the entity and relation vectors; as shown in fig. 2, the specific process is as follows:
step 1-1: and inputting a knowledge base KB and deleting the redundancy relation.
Knowledge in the knowledge base KB is represented in the form of triples (h, r, t), where h is the head entity, r is the relation, and t is the tail entity. h and t belong to the entity set E, and r belongs to the relation set R. A triple (h, r, t) indicates that the relation r holds between entity h and entity t. Redundant relations in KB are deleted to obtain the processed knowledge base.
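For illustration only (not part of the original disclosure), the redundancy deletion of step 1-1 can be sketched in Python. The patent does not spell out the redundancy criterion, so this sketch assumes that a relation is redundant when its (head, tail) pairs are exactly the inverses of another relation's pairs:

```python
from collections import defaultdict

def drop_redundant_relations(triples):
    """Drop a relation whose (head, tail) pairs exactly mirror another relation's
    pairs. This redundancy criterion is an assumption; the patent leaves it open."""
    pairs = defaultdict(set)
    for h, r, t in triples:
        pairs[r].add((h, t))
    redundant = set()
    rels = sorted(pairs)
    for i, r1 in enumerate(rels):
        for r2 in rels[i + 1:]:
            if pairs[r2] == {(t, h) for h, t in pairs[r1]}:
                redundant.add(r2)  # keep one relation of the inverse pair
    return [(h, r, t) for h, r, t in triples if r not in redundant]

kb = [("a", "parent_of", "b"), ("b", "child_of", "a")]
print(drop_redundant_relations(kb))  # [('b', 'child_of', 'a')]
```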
Step 1-2: pre-train the vectors of entities and relations in the knowledge base KB using an existing translation-based model (e.g., TransE).
The path searcher needs the vectors of entities and relations, so these vectors are pre-trained with a translation-based model.
Taking TransE as an example: TransE learns a vector for each entity and relation in the knowledge base. For a triple (h, r, t), the vectors h, r and t corresponding to the head entity h, the relation r and the tail entity t should satisfy:
h + r = t (1)
The vectors of entities and relations are learned with this equation as the training objective.
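As a concrete illustration of this pre-training objective, one SGD step of TransE on a single triple can be sketched in Python as follows; the entity/relation counts, dimension, margin, and learning rate are illustrative values, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim, margin, lr = 1000, 50, 100, 1.0, 0.01
E = rng.normal(scale=0.1, size=(n_ent, dim))  # entity vectors
R = rng.normal(scale=0.1, size=(n_rel, dim))  # relation vectors

def transe_step(h, r, t):
    """One SGD step on the margin loss [margin + ||h+r-t|| - ||h+r-t'||]_+ ,
    with the tail corrupted at random to form the negative triple."""
    t_neg = int(rng.integers(n_ent))
    d_pos = E[h] + R[r] - E[t]
    d_neg = E[h] + R[r] - E[t_neg]
    if margin + np.linalg.norm(d_pos) - np.linalg.norm(d_neg) > 0:
        g_pos = d_pos / (np.linalg.norm(d_pos) + 1e-9)  # gradient of ||d_pos||
        g_neg = d_neg / (np.linalg.norm(d_neg) + 1e-9)
        E[h] -= lr * (g_pos - g_neg)
        R[r] -= lr * (g_pos - g_neg)
        E[t] += lr * g_pos
        E[t_neg] -= lr * g_neg

transe_step(0, 3, 42)  # one update for the triple (e_0, r_3, e_42)
```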
Path search phase
The path search stage searches several multi-hop relations between the entity pair of each triple in the knowledge base according to the vectors of entities and relations, and passes the multi-hop relations that finally reach the tail entity on to the knowledge representation learning stage; as shown in fig. 3, the specific process is as follows:
step 2-1: the triples in the knowledge base KB are divided into batches.
The invention trains the path searcher in batches. The triples in KB are randomly divided into batches according to a predefined batch size.
Step 2-2: taking one batch, searching the multi-hop relationship between the entity pairs of each triple in the batch through a path searcher.
The path searcher comprises a relation agent and an entity agent. Starting from the head entity of the given triple, the relation agent computes a probability distribution over all relations of the current entity and selects one relation; the entity agent then computes a probability distribution over all tail entities corresponding to the current entity and the selected relation and selects one entity. This process continues until the tail entity of the given triple is reached or the maximum number of steps is reached.
The path searcher is based on a reinforcement learning model and consists of two agents, called the relation agent and the entity agent. The process of searching a multi-hop relation between the pair (h, t) of a triple (h, r, t) is as follows: starting from the head entity h, at step t the relation agent selects one relation r_t from all relations of the current entity e_t, and the entity agent selects one entity from all tail entities corresponding to e_t and r_t. This process continues until the tail entity t is reached or the number of steps reaches a preset maximum.
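A minimal sketch of this dual-agent search loop in Python; the KB is assumed to be indexed as entity to {relation: [tails]}, and the two pick_* callbacks stand in for the agents' decision networks described below:

```python
import random

def search_path(kb, h, t, max_steps, pick_relation, pick_entity):
    """Dual-agent path search sketch. kb maps an entity to {relation: [tails]};
    pick_relation / pick_entity stand in for the two agents' decision networks."""
    path, e_cur = [], h
    for _ in range(max_steps):
        relations = list(kb.get(e_cur, {}))
        if not relations:
            break                                        # dead end: no outgoing relation
        r_t = pick_relation(e_cur, relations)            # relation agent acts first
        e_cur = pick_entity(e_cur, r_t, kb[e_cur][r_t])  # entity agent picks a tail
        path.append((r_t, e_cur))
        if e_cur == t:
            return path                                  # reached the tail entity
    return None                                          # failed within the step budget

kb = {"h": {"r1": ["x"]}, "x": {"r2": ["t"]}}
print(search_path(kb, "h", "t", 3,
                  lambda e, rs: random.choice(rs),
                  lambda e, r, es: random.choice(es)))   # [('r1', 'x'), ('r2', 't')]
```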
The environment of the path searcher can be viewed as a Markov decision process, represented by a four-tuple (S, A, T, R), where S is the set of states, A is the set of actions, T is the state transition function, and R is the reward.
At step t, the state of the relation agent is denoted S_rel,t = (e_t, r, t), where e_t is the vector of the current entity e_t, r is the vector of the relation r in the triple, and t is the vector of the tail entity t in the triple; the state of the entity agent is denoted S_ent,t = (e_t, r, t, r_t), where r_t is the vector of the relation r_t selected by the relation agent.
At step t, the action set of the relation agent consists of all relations of the current entity e_t, denoted A_rel,t = {r | (e_t, r, e) ∈ KB}; the action set of the entity agent consists of all tail entities corresponding to the current entity e_t and the relation r_t selected by the relation agent, denoted A_ent,t = {e | (e_t, r_t, e) ∈ KB}.
At step t, the state of the relation agent changes from (e_t, r, t) to (e_{t+1}, r, t), so its transition is denoted T_rel((e_t, r, t), r_t) = (e_{t+1}, r, t); the state of the entity agent changes from (e_t, r, t, r_t) to (e_{t+1}, r, t, r_{t+1}), so its transition is denoted T_ent((e_t, r, t, r_t), e_{t+1}) = (e_{t+1}, r, t, r_{t+1}).
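For illustration, the action sets A_rel,t and A_ent,t can be read off directly from an adjacency index of the KB; the entity and relation names below are hypothetical:

```python
from collections import defaultdict

def build_index(triples):
    """Index the KB as entity -> {relation: [tail entities]} so that the
    action sets A_rel,t and A_ent,t can be read off directly."""
    idx = defaultdict(lambda: defaultdict(list))
    for h, r, t in triples:
        idx[h][r].append(t)
    return idx

# Hypothetical toy KB; the entity and relation names are illustrative only.
idx = build_index([("zju", "located_in", "hangzhou"),
                   ("hangzhou", "city_of", "zhejiang")])
A_rel = list(idx["zju"])              # A_rel,t = {r | (e_t, r, e) in KB}
A_ent = idx["zju"]["located_in"]      # A_ent,t = {e | (e_t, r_t, e) in KB}
print(A_rel, A_ent)                   # ['located_in'] ['hangzhou']
```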
The reward of a multi-hop relation p = (r_1, r_2, …, r_n) consists of two parts: the overall accuracy and the path weight. The overall accuracy R_g(p) is expressed as:
[Equation (2) is rendered as an image in the original.]
The path weight R_w(p) is expressed as:
[Equation (3) is rendered as an image in the original.]
where W is the weight matrix and p is the vector representation of the multi-hop relation p:
[Equation (4) is rendered as an image in the original.]
the total reward for the multi-hop relationship p is then expressed as:
R(p) = R_g(p) + R_w(p) (5)
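A sketch of the reward computation in Python, under two stated assumptions (the exact expressions of equations (2) through (4) are images in the original): the overall-accuracy term is approximated by a reach-the-tail indicator, and the path vector is taken to be the sum of its relation vectors; the path weight uses tanh(Wp), in keeping with the shared weight matrix W described in the knowledge representation learning stage:

```python
import numpy as np

def path_vector(rel_vecs):
    """Assumed additive composition p = r_1 + ... + r_n, consistent with h + r = t."""
    return np.sum(rel_vecs, axis=0)

def path_reward(rel_vecs, reached_tail, W):
    """R(p) = R_g(p) + R_w(p); R_g is assumed to be a success indicator, and
    R_w(p) = tanh(W p) shares the weight matrix W with the attention weights."""
    p = path_vector(rel_vecs)
    R_g = 1.0 if reached_tail else -1.0
    R_w = float(np.tanh(W @ p))
    return R_g + R_w

W = np.ones(4) * 0.1                                    # 1 x d weight matrix (row vector)
print(path_reward([np.ones(4), np.ones(4)], True, W))   # 1 + tanh(0.8)
```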
both the relational agent and the entity agent compute a probability distribution over the decision network for performing each action. The input to the decision network contains both historical information and status. Vector d for history information at t-th steptThat the present invention obtains d by training an RNNt
dt=RNN(dt-1,[et-1,rt-1]) (6)
Wherein [,]representing the concatenation of two vectors. The inputs of the decision networks corresponding to the relational agent and the entity agent are respectively represented as Xrel,t=[dt,Srel,t]And Xent,t=[dt,Sent,t]。
The decision network is a fully-connected neural network with two hidden layers, each followed by a ReLU nonlinear layer.
The outputs of the decision networks corresponding to the relation agent and the entity agent are the probability distributions over the actions in A_rel,t and A_ent,t, respectively:
P_rel(X_rel,t) = softmax(A_rel,t O_rel,t) (7)
P_ent(X_ent,t) = softmax(A_ent,t O_ent,t) (8)
where A_rel,t and A_ent,t denote the matrices formed by the vectors of all relations in A_rel,t and of all entities in A_ent,t, respectively, and O_rel,t and O_ent,t denote the outputs of the second ReLU layer of the decision networks corresponding to the relation agent and the entity agent. When selecting a relation or an entity, the relation agent and the entity agent make a random selection according to the computed probability distribution.
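A sketch of the decision network and of the RNN that encodes the history d_t, in Python with PyTorch; the layer sizes, the use of nn.RNNCell, and the dot-product scoring of candidate actions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DecisionNetwork(nn.Module):
    """Two hidden layers, each followed by ReLU; the scores of the candidate
    actions are dot products with their embeddings, normalized by softmax."""
    def __init__(self, in_dim, hidden, act_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.ReLU(),
        )

    def forward(self, x, action_embeddings):
        o = self.mlp(x)                              # O_t, output of the 2nd ReLU
        return torch.softmax(action_embeddings @ o, dim=0)

# History d_t = RNN(d_{t-1}, [e_{t-1}, r_{t-1}]) with illustrative sizes.
rnn = nn.RNNCell(input_size=200, hidden_size=100)
e_prev, r_prev = torch.randn(100), torch.randn(100)
d_t = rnn(torch.cat([e_prev, r_prev]).unsqueeze(0),
          torch.zeros(1, 100)).squeeze(0)

net = DecisionNetwork(in_dim=100 + 300, hidden=128, act_dim=100)
x_t = torch.cat([d_t, torch.randn(300)])             # X_t = [d_t, S_t]
probs = net(x_t, torch.randn(7, 100))                # distribution over 7 actions
action = torch.multinomial(probs, 1).item()          # random selection per the distribution
```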
For each triplet in a batch, several multi-hop relationships are searched using the path searcher described above.
Step 2-3: update the parameters and the weight matrix of the relation agent and the entity agent using the multi-hop relations searched in this batch.
The relevant parameters of the path search stage are updated by maximizing the expected cumulative reward. These parameters include the parameters of the two decision networks, the parameters of the RNN that computes the historical information, and the weight matrix W. The expected cumulative reward is defined as:
[Equation (9) is rendered as an image in the original.]
where R(S_t, a_t) denotes the reward of state S_t and action a_t, and P(a | X_t; θ) denotes the probability of action a given the input X_t. The invention updates the parameters with a Monte Carlo gradient; the gradient of J(θ) is expressed as:
[Equation (10) is rendered as an image in the original.]
For a searched multi-hop relation p, when the parameters are updated, every R(S_t, a_t) in the process of searching that multi-hop relation is set equal to R(p).
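A minimal sketch of the Monte Carlo (REINFORCE-style) update in Python with PyTorch, using the convention above that every step reward of a searched path p is set to R(p); the exact forms of equations (9) and (10) are images in the original, so this is an assumed standard policy-gradient surrogate:

```python
import torch

def reinforce_update(log_probs, path_reward, optimizer):
    """Maximize the expected cumulative reward: since each step reward equals
    R(p), the surrogate loss is -R(p) times the sum of action log-probabilities."""
    loss = -path_reward * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage with a 3-action policy parameterized by theta.
theta = torch.randn(3, requires_grad=True)
opt = torch.optim.SGD([theta], lr=0.01)
log_p = [torch.log_softmax(theta, dim=0)[1]]   # log-prob of the sampled action
reinforce_update(log_p, path_reward=1.5, optimizer=opt)
```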
Step 2-4: step 2-2 and step 2-3 are repeated until all batches in KB are processed.
By repeating step 2-2 and step 2-3, the multi-hop relations between the entity pairs of all triples in KB are searched batch by batch, and the relevant parameters of the path search stage are updated.
Knowledge representation learning phase
In the knowledge representation learning stage, single-hop and multi-hop relations are used together to learn the entity and relation vectors; as shown in fig. 4, the specific process is as follows:
step 3-1: the knowledge base divides the triples in KB into batches.
The invention trains the knowledge representation learning model in batches, randomly dividing the triples in KB into several batches according to a predefined batch size.
Step 3-2: take one batch and compute the weights of all multi-hop relations of each triple.
Given a triple (h, r, t) with the set of all its multi-hop relations {p_1, …, p_K}, the weight of a multi-hop relation p_i is defined as:
[Equation (11) is rendered as an image in the original.]
where:
η_i = tanh(W p_i) (12)
and W is a weight matrix, the same matrix as the one used in the reward of the path search stage.
The multi-hop relations used here are those searched in the path search stage that finally reach the tail entity.
The weights of all multi-hop relations of each triple in the batch are computed according to the above formula.
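A sketch of the attention weighting in Python, assuming equation (11) is the usual softmax normalization of the scores η_i = tanh(W p_i):

```python
import numpy as np

def path_weights(path_vecs, W):
    """Attention over the K multi-hop relations of one triple:
    eta_i = tanh(W p_i), normalized with a softmax."""
    eta = np.array([np.tanh(W @ p) for p in path_vecs])
    e = np.exp(eta - eta.max())          # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=100)                 # 1 x d weight matrix (row vector)
paths = [rng.normal(size=100) for _ in range(3)]
print(path_weights(paths, W))            # three weights summing to 1
```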
Step 3-3: compute the energy function and loss of all triples in the batch using both the single-hop and multi-hop relations, and update the vectors of entities and relations and the weight matrix.
Given a triple (h, r, t) with the set of all its multi-hop relations {p_1, …, p_K}, the energy function of the knowledge representation learning stage is defined as:
[Equations (13) and (14) are rendered as images in the original.]
From the energy function, a loss function can be defined for the knowledge representation learning stage:
[The loss function is rendered as an image in the original.]
where γ is a predefined margin, [·]_+ denotes the larger of 0 and the value inside [·], and T is the positive sample set, i.e., the set of all triples in the knowledge base; T^- is the negative sample set, expressed as:
T^- = {(h, r′, t) | r′ ∈ R}, (h, r, t) ∈ T (15)
the negative examples are obtained by replacing the relationship r in the triples with another relationship r' in the knowledge base.
The loss of all triples in the batch is calculated and the vector of entities and relationships and the weight matrix W are updated by minimizing the loss.
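A hedged sketch of this loss computation in Python; the exact energy function is an image in the original, so the single-hop term and the form of the multi-hop term below are assumptions in the spirit of translation-based models:

```python
import numpy as np

def energy(h, r, t, path_vecs, weights):
    """Assumed energy: a TransE-style single-hop term ||h + r - t|| plus the
    attention-weighted distances between each multi-hop relation vector and r."""
    single = np.linalg.norm(h + r - t)
    multi = sum(w * np.linalg.norm(p - r) for w, p in zip(weights, path_vecs))
    return single + multi

def margin_loss(e_pos, e_neg, gamma=1.0):
    """[gamma + E(h, r, t) - E(h, r', t)]_+ , with negatives replacing r by r'."""
    return max(0.0, gamma + e_pos - e_neg)

h, r, t = np.zeros(4), np.ones(4), np.ones(4)
r_neg = -np.ones(4)                       # negative sample: r replaced by r'
paths, w = [np.ones(4)], [1.0]
print(margin_loss(energy(h, r, t, paths, w), energy(h, r_neg, t, paths, w)))
```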
Step 3-4: step 3-2 and step 3-3 are repeated until all batches in KB are processed.
By repeating step 3-2 and step 3-3, the weights, energy functions and losses of the multi-hop relations corresponding to all triples in KB are computed batch by batch, and the relevant parameters of the knowledge representation learning stage are updated.
Step 3-5: if the preset maximum number of iterations has been reached, output the vectors of entities and relations; otherwise, go to step 2-2.
The path search stage and the knowledge representation learning stage are performed alternately until the preset maximum number of iterations is reached, and the vectors of entities and relations are output.
In the knowledge representation learning method based on double-agent reinforcement learning path search, the path searcher uses the entity and relation vectors trained by the knowledge representation learning model to search high-quality multi-hop relations between entities, and uses two agents to make decisions during the search, so state and historical information can be considered more comprehensively. The knowledge representation learning model learns the entity and relation vectors using both single-hop relations and the searched multi-hop relations, and uses an attention mechanism to measure the weight of each multi-hop relation. The reward in the path searcher and the weights in the knowledge representation model share part of their parameters, so these parameters not only measure the weights of the multi-hop relations but also guide the path searcher toward multi-hop relations that are more useful for the knowledge representation learning process.
The above-described embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, additions, or equivalents made within the scope of the principles of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A knowledge representation learning method based on double-agent reinforcement learning path search comprises the following steps:
(1) deleting redundant relations in the knowledge base and pre-training the vectors of entities and relations;
(2) searching, with a path searcher, several multi-hop relations between the entity pair of each triple in the knowledge base according to the vectors of entities and relations, where a relation agent and an entity agent that take state and historical information into account make the decisions during the search;
(3) learning the vectors of entities and relations according to the single-hop relations between entities and the multi-hop relations obtained by searching, using an attention mechanism to measure the weight of each multi-hop relation.
2. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 1, wherein in step (1), the vectors of entities and relations in the knowledge base are pre-trained using a translation-based model, and for a triple (h, r, t), the vectors h, r and t corresponding to the head entity h, the relation r and the tail entity t should satisfy:
h + r = t
and the vectors of entities and relations are learned with this equation as the training objective.
3. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 1, wherein in step (2), the path searcher is based on a reinforcement learning model and consists of two agents, called the relation agent and the entity agent; the process of searching a multi-hop relation between the pair (h, t) of a triple (h, r, t) is as follows: starting from the head entity h, at step t the relation agent selects one relation r_t from all relations of the current entity e_t, and the entity agent selects one entity from all tail entities corresponding to e_t and r_t; this process continues until the tail entity t is reached or the number of steps reaches a preset maximum.
4. The method of claim 3, wherein the environment of the path searcher is viewed as a Markov decision process and represented by a four-tuple (S, A, T, R), where S is the set of states, A is the set of actions, T is the state transition function, and R is the reward;
at step t, the state of the relation agent is denoted S_rel,t = (e_t, r, t), where e_t is the vector of the current entity e_t, r is the vector of the relation r in the triple, and t is the vector of the tail entity t in the triple; the state of the entity agent is denoted S_ent,t = (e_t, r, t, r_t), where r_t is the vector of the relation r_t selected by the relation agent;
at step t, the action set of the relation agent consists of all relations of the current entity e_t, denoted A_rel,t = {r | (e_t, r, e) ∈ KB}; the action set of the entity agent consists of all tail entities corresponding to the current entity e_t and the relation r_t selected by the relation agent, denoted A_ent,t = {e | (e_t, r_t, e) ∈ KB};
at step t, the state of the relation agent changes from (e_t, r, t) to (e_{t+1}, r, t), so its transition is denoted T_rel((e_t, r, t), r_t) = (e_{t+1}, r, t); the state of the entity agent changes from (e_t, r, t, r_t) to (e_{t+1}, r, t, r_{t+1}), so its transition is denoted T_ent((e_t, r, t, r_t), e_{t+1}) = (e_{t+1}, r, t, r_{t+1});
the reward of a multi-hop relation p = (r_1, r_2, …, r_n) is composed of the overall accuracy and the path weight, where the overall accuracy R_g(p) is expressed as:
[Equation (2) is rendered as an image in the original.]
the path weight R_w(p) is expressed as:
[Equation (3) is rendered as an image in the original.]
where W is the weight matrix and p is the vector representation of the multi-hop relation p:
[Equation (4) is rendered as an image in the original.]
the total reward of the multi-hop relation p is then expressed as:
R(p) = R_g(p) + R_w(p) (5)
5. the method of claim 4, wherein the relational agent and the entity agent calculate the probability distribution for each action through a decision network, the input of the decision network comprises two parts of history information and state, and the history information at the t step is represented by a vector dtTo express, d is obtained by training an RNNt
dt=RNN(dt-1,[et-1,rt-1]) (6)
Wherein [,]the inputs of decision networks corresponding to a relational agent and an entity agent representing the connection of two vectors are respectively represented as Xrel,t=[dt,Srel,t]And Xent,t=[dt,Sent,t];
The decision network is structurally a fully-connected neural network comprising two hidden layers, and a ReLU nonlinear layer is connected behind each hidden layer;
the output of the decision network corresponding to the relation agent and the entity agent is Arel,tAnd Aent,tProbability distribution of each action in (1):
Prel(Xrel,t)=softmax(Arel,tOrel,t) (7)
Pent(Xent,t)=softmax(Aent,tOent,t) (8)
wherein A isrel,tAnd Aent,tRespectively represent by Arel,t、Aent,tA matrix formed by vectors of all the relations and entities; o isrel,tAnd Oent,tThe outputs of the second ReLU layers of the decision networks corresponding to the relational agent and the entity agent are respectively represented; the relationship agent and the entity agent, when selecting an entity or a relationship, will make a random selection based on the calculated probability distribution.
6. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 5, wherein the parameters of the relation agent and the entity agent and the weight matrix are updated using the searched multi-hop relations, the specific process being as follows:
the relevant parameters of the path search stage, including the parameters of the two decision networks, the parameters of the RNN that computes the historical information and the weight matrix W, are updated by maximizing the expected cumulative reward, which is defined as:
[Equation (9) is rendered as an image in the original.]
where R(S_t, a_t) denotes the reward of state S_t and action a_t, and P(a | X_t; θ) denotes the probability of action a given the input X_t; the parameters are updated with a Monte Carlo gradient, and the gradient of J(θ) is expressed as:
[Equation (10) is rendered as an image in the original.]
for a searched multi-hop relation p, when the parameters are updated, every R(S_t, a_t) in the process of searching that multi-hop relation is set equal to R(p).
7. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 5, wherein the specific process of step (3) is:
(3-1) computing the weights of all multi-hop relations of each triple;
(3-2) computing the energy function and loss of all triples in the batch using both the single-hop and multi-hop relations, and updating the vectors of entities and relations and the weight matrix.
8. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 7, wherein in step (3-1), given a triple (h, r, t) with the set of all its multi-hop relations {p_1, …, p_K}, the weight of a multi-hop relation p_i is defined as:
[Equation (11) is rendered as an image in the original.]
where:
η_i = tanh(W p_i) (12)
and W is the weight matrix.
9. The knowledge representation learning method based on double-agent reinforcement learning path search as claimed in claim 7, wherein in step (3-2), given a triple (h, r, t) with the set of all its multi-hop relations {p_1, …, p_K}, the energy function of the knowledge representation learning stage is defined as:
[Equation (13) is rendered as an image in the original.]
from the energy function, a loss function is defined:
[Equation (14) is rendered as an image in the original.]
where γ is a predefined margin, [·]_+ denotes the larger of 0 and the value inside [·], and T is the positive sample set, i.e., the set of all triples in the knowledge base; T^- is the negative sample set, expressed as:
T^- = {(h, r′, t) | r′ ∈ R}, (h, r, t) ∈ T (15)
the negative samples are obtained by replacing the relation r in a triple with another relation r′ from the knowledge base;
the losses of all triples in the batch are computed, and the vectors of entities and relations and the weight matrix W are updated by minimizing the loss.
CN201911376444.4A 2019-12-27 2019-12-27 Knowledge representation learning method based on double-agent reinforcement learning path search Active CN111160557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911376444.4A CN111160557B (en) 2019-12-27 2019-12-27 Knowledge representation learning method based on double-agent reinforcement learning path search

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911376444.4A CN111160557B (en) 2019-12-27 2019-12-27 Knowledge representation learning method based on double-agent reinforcement learning path search

Publications (2)

Publication Number Publication Date
CN111160557A true CN111160557A (en) 2020-05-15
CN111160557B CN111160557B (en) 2023-04-18

Family

ID=70558468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911376444.4A Active CN111160557B (en) 2019-12-27 2019-12-27 Knowledge representation learning method based on double-agent reinforcement learning path search

Country Status (1)

Country Link
CN (1) CN111160557B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2028258A (en) * 2020-09-03 2021-08-17 Shandong Artificial Intelligence Inst Attention-lstm-based method for knowledge reasoning of reinforcement learning agent

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530457A (en) * 2013-10-10 2014-01-22 南京邮电大学 Modeling and construction method of complex relation chain of internet of things based on multiple tuples
US20170024476A1 (en) * 2012-01-05 2017-01-26 Yewno, Inc. Information network with linked information nodes
CN107885760A * 2016-12-21 2018-04-06 桂林电子科技大学 A representation learning method based on multiple semantic knowledge graphs
CN109885627A * 2019-02-13 2019-06-14 北京航空航天大学 Method and device for training relations between entities with a neural network
CN110046262A * 2019-06-10 2019-07-23 南京擎盾信息科技有限公司 A context reasoning method based on a legal expert knowledge base

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170024476A1 (en) * 2012-01-05 2017-01-26 Yewno, Inc. Information network with linked information nodes
CN103530457A (en) * 2013-10-10 2014-01-22 南京邮电大学 Modeling and construction method of complex relation chain of internet of things based on multiple tuples
CN107885760A * 2016-12-21 2018-04-06 桂林电子科技大学 A representation learning method based on multiple semantic knowledge graphs
CN109885627A * 2019-02-13 2019-06-14 北京航空航天大学 Method and device for training relations between entities with a neural network
CN110046262A * 2019-06-10 2019-07-23 南京擎盾信息科技有限公司 A context reasoning method based on a legal expert knowledge base

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xing Tang et al.: "Knowledge representation learning with entity descriptions, hierarchical types, and textual relations" *
Wang Zihan (王子涵) et al.: "Knowledge graph completion algorithm based on entity similarity information" (基于实体相似度信息的知识图谱补全算法) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2028258A (en) * 2020-09-03 2021-08-17 Shandong Artificial Intelligence Inst Attention-lstm-based method for knowledge reasoning of reinforcement learning agent

Also Published As

Publication number Publication date
CN111160557B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
Wang et al. Evolutionary extreme learning machine ensembles with size control
CN113190688B (en) Complex network link prediction method and system based on logical reasoning and graph convolution
CN112699247A (en) Knowledge representation learning framework based on multi-class cross entropy contrast completion coding
Dong et al. MOEA/D with a self-adaptive weight vector adjustment strategy based on chain segmentation
JP7381814B2 (en) Automatic compression method and platform for pre-trained language models for multitasking
CN112232511B (en) Automatic compression method and platform for pre-training language model for multiple tasks
CN113132232B (en) Energy route optimization method
CN109886389B (en) Novel bidirectional LSTM neural network construction method based on Highway and DC
CN114329232A (en) User portrait construction method and system based on scientific research network
Chen et al. Rlpath: a knowledge graph link prediction method using reinforcement learning based attentive relation path searching and representation learning
CN110851566A (en) Improved differentiable network structure searching method
CN110909172B (en) Knowledge representation learning method based on entity distance
CN114564596A (en) Cross-language knowledge graph link prediction method based on graph attention machine mechanism
CN109510610A (en) A kind of kernel adaptive filtering method based on soft projection Weighted Kernel recurrence least square
CN113962358A (en) Information diffusion prediction method based on time sequence hypergraph attention neural network
Lei et al. Integrated scheduling algorithm based on an operation relationship matrix table for tree-structured products
CN114969367B (en) Cross-language entity alignment method based on multi-aspect subtask interaction
CN111160557B (en) Knowledge representation learning method based on double-agent reinforcement learning path search
CN114817571A (en) Method, medium, and apparatus for predicting achievement quoted amount based on dynamic knowledge graph
CN106960101A (en) A kind of build-up tolerance optimization method based on mass loss and cost minimization
CN115599918B (en) Graph enhancement-based mutual learning text classification method and system
CN116226547A (en) Incremental graph recommendation method based on stream data
Haupt Introduction to genetic algorithms
CN108665056A (en) A method of the Intelligent transfer robot based on NRL predicts task status
CN112464104B (en) Implicit recommendation method and system based on network self-cooperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant