CN112463987A - Chinese classical garden knowledge graph completion and cognitive reasoning method - Google Patents

Chinese classical garden knowledge graph completion and cognitive reasoning method

Info

Publication number
CN112463987A
Application CN202011447930.3A; publication CN112463987A
Authority
CN
China
Prior art keywords
entity
relation
state
stack
completion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011447930.3A
Other languages
Chinese (zh)
Inventor
陈进勇
王亚弟
张宝鑫
费晓飞
刘冰
谢帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Bayi Space Information Engineering Co ltd
Beijing Preparatory Office Of Museum Of Chinese Gardens And Landscape Architecture
Original Assignee
Beijing Bayi Space Information Engineering Co ltd
Beijing Preparatory Office Of Museum Of Chinese Gardens And Landscape Architecture
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bayi Space Information Engineering Co ltd and Beijing Preparatory Office Of Museum Of Chinese Gardens And Landscape Architecture
Priority to CN202011447930.3A
Publication of CN112463987A
Legal status: Pending
Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a Chinese classical garden knowledge graph completion and cognitive reasoning method, which comprises the following steps: 1. execute the function of System 1 in the dual-channel theory of cognitive science, i.e. the perception system, to extract entities and relations; 2. execute System 2 in the dual-channel theory, i.e. the analysis system, to perform reasoning: first judge whether knowledge graph completion should continue, and if completion is finished and will not continue, end the algorithm flow. The technical scheme of the invention has the following main advantages: 1. it possesses the cognitive reasoning capability of third-generation artificial intelligence; 2. it improves the utilization rate of information and the execution efficiency; 3. it can be widely applied to classical gardens nationwide.

Description

Chinese classical garden knowledge graph completion and cognitive reasoning method
Technical Field
The invention relates to the technical fields of natural language data processing, information retrieval and the corresponding database structures, and in particular to a Chinese classical garden knowledge graph completion and cognitive reasoning method.
Background
Chinese classical gardens are world-renowned for their exquisite gardening craft and profound cultural connotations, and are an important component of traditional Chinese culture. Applying modern information technology to construct a Chinese classical garden knowledge graph has important practical significance for the protection and inheritance of this heritage.
A knowledge graph represents a collection of factual information in the form of relational triples. Each relational triple may be represented as (e_1, r, e_2), where e_1 and e_2 are entities and r is the relationship between them. The most common knowledge graph representation is a multi-relational graph, in which each triple (r, e_1, e_2) denotes a directed edge from e_1 to e_2 labelled r. Knowledge graphs are used for a variety of downstream tasks.
However, knowledge graphs are often incomplete: they are usually mined automatically from text, the vast quantity of facts cannot all be compiled manually, and inaccuracies frequently arise during extraction, which degrades the performance of downstream tasks. It is therefore necessary to study knowledge graph completion, whose goal is to add new facts automatically, answering queries such as (e_1, r, ?) without additional knowledge. Knowledge graph completion takes any two elements of a triple as given and attempts to infer the missing third element; its specific tasks include link prediction, entity prediction, relation prediction and attribute prediction.
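For concreteness, the following Python sketch (the entities and relations are invented examples, not data from this patent) stores a knowledge graph as a set of relational triples and answers a completion query (head, relation, ?) from the stored facts; completion proper begins where the lookup returns nothing and a model must infer the answer instead:

    # A knowledge graph stored as a set of (head, relation, tail) triples.
    # The facts below are illustrative placeholders only.
    triples = {
        ("Summer Palace", "located_in", "Beijing"),
        ("Summer Palace", "contains", "Kunming Lake"),
        ("Humble Administrator's Garden", "located_in", "Suzhou"),
    }

    entities = {e for h, _, t in triples for e in (h, t)}

    def query(head, relation):
        """Answer a completion query (head, relation, ?) from stored facts."""
        return {t for h, r, t in triples if h == head and r == relation}

    print(query("Summer Palace", "located_in"))               # {'Beijing'}
    print(query("Humble Administrator's Garden", "contains"))  # set(): must be inferred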
An effective approach to knowledge graph completion is to obtain new knowledge through cognitive reasoning over the knowledge graph.
Traditional knowledge reasoning methods, including ontology reasoning methods, can be used for knowledge reasoning over knowledge graphs. Deductive reasoning occupies an important position in traditional knowledge reasoning, especially in ontology reasoning, whereas inductive reasoning has become the main method in knowledge-graph-oriented reasoning. Deductive reasoning is widely applied in traditional knowledge reasoning because, as long as the premises hold, the deduced conclusion is guaranteed to be reliable. In a knowledge graph, however, the number of instances is large and the content covered is broad, so a large number of logic rules is needed: deductive reasoning at the instance level has high time complexity, while deductive reasoning at the abstract concept level faces extensive instantiation (replacing abstract concepts with concrete entities) at high cost, and concept-level inference rules with wide coverage are difficult to obtain. In recent years, with the rise of distributed representations, neural networks and related technologies, knowledge-graph-oriented reasoning has developed its own methods; by reasoning type they divide into single-step and multi-step reasoning, and each class further divides by method into rule-based reasoning, reasoning based on distributed representations, neural-network-based reasoning, and hybrid reasoning.
Early knowledge graph reasoning methods were based on symbolic description logic and rules. An obvious advantage of description logic is its inference mechanism, which enables automatic reasoning over knowledge. Inference rules are interpretable and can provide insight into the result of an inference. Symbolic inference rules can also be combined with machine learning to handle uncertainty, an approach known as statistical relational learning. Many methods for learning first-order logic rules with neural networks have also been proposed. Although logic rules are easy to understand, they are sensitive to noise and therefore generalize poorly, and they were later superseded by methods based on distributed vector representations. Knowledge graph reasoning based on distributed vector representations is also called knowledge graph embedding: entities and relations are represented as continuous vectors in a latent space, and various scoring functions are defined over these vectors to compute the plausibility of a triple (e_s, r, e_o). While knowledge graph embedding methods have achieved excellent results on several benchmark datasets, some studies show that they are prone to large errors when modelling multi-hop relations, and multi-hop relations are unavoidable in more complex reasoning tasks. In addition, because these methods operate in a latent space, their predictions are not interpretable. Recent work combines multi-hop reasoning with distributed representations, using deep learning to model multi-step paths explicitly; this approach enjoys both the generalization ability of distributed representations and the interpretability of logic rules.
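As an illustration of such a scoring function, the sketch below implements a TransE-style translational score, one common choice in the embedding literature (not the method of this invention); the embeddings are random stand-ins for trained vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8
    # Random embeddings stand in for trained entity and relation vectors.
    ent = {"e_s": rng.normal(size=dim), "e_o": rng.normal(size=dim)}
    rel = {"r": rng.normal(size=dim)}

    def transe_score(h, r, t):
        """TransE plausibility: higher (less negative) is more plausible."""
        return -np.linalg.norm(ent[h] + rel[r] - ent[t])

    print(transe_score("e_s", "r", "e_o"))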
Neural networks, an important class of machine learning algorithms, essentially mimic human perception and cognition. They are widely used in natural language processing with clear effect. Neural networks have a strong feature-capture ability: through nonlinear transformations they map the true distribution of the input data from the original space into another feature space and automatically learn feature representations. They are therefore well suited to complex tasks such as knowledge reasoning.
In single-step reasoning, neural-network-based reasoning directly models the fact tuples of a knowledge graph with a neural network, obtaining vector representations of the tuple elements for further reasoning. It remains a score-function-based approach; the difference from other methods is that the whole network constitutes the score function, and the network output is the score value.
The Neural Tensor Network (NTN) replaces a traditional neural network layer with a bilinear tensor layer that links the head and tail entities across different dimensions, thereby describing complex semantic connections between entities. Entity representations are obtained by averaging word vectors, making full use of word vectors to construct the entity representation. Specifically, each triple is scored by a relation-specific neural network: the head and tail entities serve as input and form a bilinear tensor product with the relation tensor (a third-order interaction), while second-order interactions between the head/tail entities and the relation are modelled at the same time. Finally, the model returns a confidence for the triple: a high score if the specific relation holds between the head and tail entities, and a low score otherwise. In particular, each slice of the relation-specific third-order tensor corresponds to a different semantic type, so a relation with multiple slices can better model the different semantic relationships between entity pairs under that relation. A similar neural tensor network model has been introduced to predict new relations in a knowledge graph; by initializing entity representations with word vectors learned from unsupervised text, the model can even predict relations for entities that do not appear in the knowledge graph.
The shared-variable neural network model ProjE combines the known part of a triple (head entity and relation, or tail entity and relation) into a target vector space through a simple combination operation, maps the candidate set for the unknown part (tail or head entity) into the same space, and learns the model with a candidate-based ranking loss. Compared with the commonly used transfer matrices, the combination operation greatly reduces the number of parameters, and candidate sampling allows the method to handle large-scale knowledge graphs.
The Description-Embodied Knowledge Representation Learning model (DKRL) performs entity and relation prediction in a knowledge graph. The model uses two encoders, a continuous bag-of-words encoder and a deep convolutional neural network; by learning entity descriptions it captures not only the structural information of triples but also the keywords in the entity descriptions and the textual information hidden in word sequences.
On the basis of recurrent neural networks (RNNs), some researchers have proposed methods suited to knowledge graph reasoning. Path-RNN uses a recurrent neural network to reason over multi-hop relations non-atomically: a path ranking algorithm finds different paths for each relation type, and the embedded representations of binary relations are then used as input vectors; the output is a vector in the semantic neighbourhood of the relation between the first and last entities of the path. Triples are not natural language, and the fixed form (h, r, t) is used to model complex structures; such short sequences may be too limited to provide enough information for reasoning, while constructing useful long sequences from a large number of paths is expensive and difficult. To address these issues, the deep sequential model for knowledge graph completion, DSKG, uses multi-layer RNNs to process entities and relations. In particular, DSKG uses separate RNN units for the entity layer and the relation layer; this architecture, designed specifically for knowledge graphs, performs better when relations are diverse and complex. The model can not only predict entities but also infer whole triples. In addition, RNN models have been used to combine the rich multi-step reasoning of symbolic logic with the generalization ability of neural networks, solving complex reasoning problems over entities and relations in text and large-scale knowledge bases: relations, entities and entity types can be inferred jointly, multiple paths are modelled and fused with a neural attention mechanism, and a single RNN model represents the logical composition of all relations. The approach models each path with a neural network, fully learning the vector representation of the multi-step path, and ties the score function to the similarity between the path representation and the direct relation representation, expecting this similarity (an inner product) to be large for positive examples and small for negative ones.
Another line of work uses the strong learning capacity of neural networks to simulate how a computer or the human brain stores and processes knowledge: a storage structure simulates the brain's memory and a controller simulates its control and processing centre, in the hope that the neural network acquires brain-like reasoning ability and, by learning and memorizing the known triples in the knowledge graph, infers new triples.
A Differentiable Neural Computer (DNC) comprises an LSTM neural network controller and an external memory matrix that can be read and written. During training, the DNC takes knowledge graph triple vectors as input, reads and writes the external memory matrix through the neural network, imitates the way the human brain uses existing empirical knowledge to learn and infer new knowledge, and updates the existing knowledge. During testing, the field of the triple to be inferred is left empty (for example, to predict the head entity, the head-entity field is left empty) and fed to the trained DNC; the controller interacts repeatedly with the external memory matrix, performs multi-step reasoning, and finally outputs the completed triple.
An implicit reasoning network (IRN) performs multi-step reasoning implicitly in neural space through a controller and a shared memory. The IRN concatenates the known part of a triple into a vector and feeds it to an RNN controller, which judges whether the current state vector encodes enough information. If not, the next state is generated from the current state vector and an attention vector obtained from the shared memory through an attention mechanism, realizing multi-step reasoning; otherwise reasoning stops, an output vector is generated and compared with the target vector, and the parameters and shared memory are updated by gradient descent for model learning. During testing, the generated output vector is used to find the entity vector with the greatest similarity as the prediction result.
The cognition-inspired deep reasoning framework CogKR simulates the dual-process theory of cognitive science and can traverse a knowledge graph to perform multi-hop relational reasoning. Specifically, CogKR consists of an extension module and a reasoning module; by coordinating the two it builds a cognitive graph, so that reasoning is carried out over subgraphs of the cognitive graph rather than over paths, adapting to more complex reasoning scenarios. Through dynamic interaction between the modules and end-to-end training, CogKR combines them into a unified structure that is jointly optimized for knowledge graph reasoning.
Neural-network-based reasoning methods use the strong learning ability of neural networks to represent the triples in a knowledge graph, thereby achieving better reasoning ability. However, in knowledge graph reasoning tasks the neural network model remains hard to interpret, and how to explain its reasoning ability is worth studying. So far, research on neural-network-based reasoning methods grows by the day, yet few of the algorithms have been put into practical use; given their strong expressive power and outstanding performance in other fields, they have broad development prospects, and extending existing neural network methods to knowledge graph reasoning is worth exploring.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a Chinese classical garden knowledge graph completion and cognitive reasoning method.
The dual-channel theory of cognitive science holds that the human brain's cognitive system contains two systems: System 1 and System 2. System 1 is an intuitive system that finds answers through a person's intuitive matching of related information, and is therefore fast and simple; System 2 is an analysis system that finds answers by making decisions through reasoning and logic.
Artificial intelligence has three development stages: from the machine intelligence of the past, through the perceptual intelligence of the present, towards cognitive intelligence.
Current mainstream research remains at the perceptual intelligence stage and mainly implements the function of System 1.
On the basis of the constructed classical garden knowledge graph, this invention studies cognitive reasoning, taking knowledge graph completion as the entry point to move from the perceptual intelligence stage to the cognitive intelligence stage.
The information extraction algorithm designed in the earlier stage of this project replaces the output layer, changing it from a CRF (conditional random field) to a state transition layer, and turns the sequence labelling problem into the problem of generating a directed graph through state transitions. Because the associated information between entities and relations is fully used during processing, entities and relations can be extracted simultaneously, improving information utilization and execution efficiency. It is therefore natural to maintain this information utilization while performing completion of entities and relations. By projecting relations from the edge space into the node space of entities, aggregation operations and inference prediction are carried out uniformly over entity nodes and relation edges, improving the knowledge utilization and execution efficiency of knowledge graph completion, and actively implementing the reasoning function of System 2.
In order to achieve the purpose, the invention adopts the following technical scheme: a Chinese classical garden knowledge graph completion and cognitive reasoning method comprises the following steps:
1. Execute the function of System 1 in the dual-channel theory of cognitive science, i.e. the perception system, to extract entities and relations.
Step 1 comprises four substeps. Step 1.1: compute the word-vector embedding sequence from the input. Step 1.2: apply Bi-LSTM (bidirectional long short-term memory) encoding to the sequence. Step 1.3: execute a state transition; if the final state is reached, the entities and relations have been extracted and this stage ends; otherwise, according to the computed probabilities, proceed to the next substep. Step 1.4 or 1.5: select an entity-extraction or relation-extraction state transition action and return to step 1.3 after execution. When entity and relation extraction is finally complete, the algorithm enters the next stage.
2. Execute System 2 in the dual-channel theory, i.e. the analysis system, to perform reasoning. First judge whether knowledge graph completion should continue; if completion is finished and will not continue, the algorithm flow ends.
Otherwise, execute four small steps. Step 2.1: perform entity aggregation. Step 2.2: perform relation aggregation. Step 2.3: execute inference prediction. Step 2.4: complete the knowledge graph, then return to the judgment at the start of step 2 (whether to continue completion) until completion no longer continues, at which point the algorithm ends.
The detailed steps by which System 1, i.e. the perception system, extracts entity and relation information in step 1 are as follows:
step 1.1: word vector embedding;
For each input token, the vector embedding is computed as:
x_i = V [w_i ; w̃_i]
where w_i is the learned word vector, w̃_i is a fixed pretrained word vector, and V is the matrix applied to the concatenation of the two vectors.
The computation yields the vector-embedded sequence:
x = (x_1, x_2, ..., x_i, ..., x_n)
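A minimal sketch of this embedding step, assuming the learned and fixed vectors are concatenated and then projected by V (the dimensions are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    d_learned, d_fixed, d_out, n = 16, 16, 32, 5

    W_learned = rng.normal(size=(n, d_learned))  # trainable word vectors w_i
    W_fixed = rng.normal(size=(n, d_fixed))      # frozen pretrained vectors
    V = rng.normal(size=(d_out, d_learned + d_fixed))  # projection matrix V

    # x_i = V [w_i ; w~_i] for every token i, giving the sequence x.
    x = np.stack([V @ np.concatenate([W_learned[i], W_fixed[i]])
                  for i in range(n)])
    print(x.shape)   # (5, 32): one embedding per input token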
step 1.2: Bi-LSTM encoding:
Bi-LSTM (bidirectional long short-term memory) encoding is applied to the sequence x obtained in step 1.1: a forward LSTM first encodes the sequence in order from x_1 to x_n, and a backward LSTM then encodes it in order from x_n to x_1.
At each step, the current input x_t and the hidden state h_{t-1} passed from the previous step are concatenated and transformed to yield four states. Three of them are gating states z^f, z^i, z^o: the concatenated vector is multiplied by a weight matrix and mapped by a sigmoid activation to a value between 0 and 1, which serves as a gate. The fourth state z is mapped by a tanh activation to a value between -1 and 1; it is candidate content, not a gating signal.
The state z^f serves as the forget gate: element-wise multiplication controls which parts of the previous long-term memory c_{t-1} are kept as important and which are discarded as unimportant, computed as:
z^f ⊙ c_{t-1}
The state z^i serves as the input gate: element-wise multiplication controls which parts of the candidate z, and hence of the input x_t, are selectively memorized, computed as:
z^i ⊙ z
Adding the two results element-wise gives the long-term memory passed to the next state:
c_t = z^f ⊙ c_{t-1} + z^i ⊙ z
The state z^o controls the output: scaling tanh(c_t) gives the short-term memory h_t:
h_t = z^o ⊙ tanh(c_t)
Finally, h_t is transformed to give the output y_t:
y_t = σ(W′ h_t)
The result of the forward LSTM encoding is denoted h_t^→, the result of the backward LSTM encoding h_t^←, and their concatenation
h_t = [h_t^→ ; h_t^←]
represents the Bi-LSTM encoding result;
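The gate computations above can be written out directly. The sketch below implements one LSTM step with the gates z^f, z^i, z^o and candidate z, plus a bidirectional pass whose forward and backward hidden states are concatenated; the weights are random stand-ins for trained parameters:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        """One LSTM step; all four states come from [x_t; h_{t-1}]."""
        z_all = W @ np.concatenate([x_t, h_prev]) + b
        zf, zi, zo, z = np.split(z_all, 4)
        zf, zi, zo = sigmoid(zf), sigmoid(zi), sigmoid(zo)  # gates in (0, 1)
        z = np.tanh(z)                                      # candidate in (-1, 1)
        c_t = zf * c_prev + zi * z     # c_t = z^f . c_{t-1} + z^i . z
        h_t = zo * np.tanh(c_t)        # h_t = z^o . tanh(c_t)
        return h_t, c_t

    def bi_lstm(xs, W, b, d_h):
        """Encode xs forward and backward; concatenate the hidden states."""
        h, c, fwd = np.zeros(d_h), np.zeros(d_h), []
        for x_t in xs:                 # forward pass: x_1 .. x_n
            h, c = lstm_step(x_t, h, c, W, b)
            fwd.append(h)
        h, c, bwd = np.zeros(d_h), np.zeros(d_h), []
        for x_t in reversed(xs):       # backward pass: x_n .. x_1
            h, c = lstm_step(x_t, h, c, W, b)
            bwd.append(h)
        bwd.reverse()
        return [np.concatenate(pair) for pair in zip(fwd, bwd)]

    rng = np.random.default_rng(2)
    d_in, d_h, n = 32, 16, 5
    W = 0.1 * rng.normal(size=(4 * d_h, d_in + d_h))
    b = np.zeros(4 * d_h)
    xs = [rng.normal(size=d_in) for _ in range(n)]
    print(bi_lstm(xs, W, b, d_h)[0].shape)   # (32,): forward + backward halves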
step 1.3: state transition:
State transitions are performed over the Bi-LSTM encoding result.
Define a six-tuple (σ, δ, e, β, E, R) representing the state at each moment, where σ is a stack storing generated entities, δ is a stack storing entities that are temporarily popped from σ and later pushed back, e stores the partial entity block currently being processed, β is a buffer containing the unprocessed words, E stores the set of generated entities, and R stores the set of generated relations.
The information extraction task can then be expressed as a state transition process from the initial state ([], [], [], β, ∅, ∅) to the final state (σ, δ, [], [], E, R), where [] denotes an empty stack and ∅ an empty set.
For the state at time t:
m_t = max{0, W [s_t; b_t; p_t; e_t; a_t] + d}
and the probability of each candidate action k is computed by a softmax of the form:
p(z_t = k | m_t) = exp(g_k^T m_t + q_k) / Σ_{k′} exp(g_{k′}^T m_t + q_{k′})
This predicts the state transition action to select at time t; according to the prediction, control passes to step 1.4 or step 1.5, and after one state transition is executed, control returns to step 1.3 until the final state is reached.
Given an input w, the probability of any reasonable sequence of state transition actions z can be expressed as:
p(z | w) = Π_t p(z_t | m_t)
and therefore:
log p(z | w) = Σ_t log p(z_t | m_t)
When e and β in the state six-tuple are empty, the final state has been reached and state transition ends; the sets E and R at that moment contain the extracted entities and relations respectively, and can be output to System 2 for reasoning;
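A skeleton of this transition system, holding the six-tuple state in a small class and computing the action distribution from m_t; the composition of the feature vector and the softmax parameters G, q are illustrative assumptions:

    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class State:
        sigma: list = field(default_factory=list)  # stack of generated entities
        delta: list = field(default_factory=list)  # temporarily popped entities
        e: list = field(default_factory=list)      # partial entity block
        beta: list = field(default_factory=list)   # buffer of unprocessed words
        E: set = field(default_factory=set)        # generated entity set
        R: set = field(default_factory=set)        # generated relation set

        def is_final(self):
            # Final state: partial block e and buffer beta are both empty.
            return not self.e and not self.beta

    def action_distribution(features, W, d, G, q):
        """p(z_t | m_t) with m_t = max{0, W [s;b;p;e;a] + d} (a ReLU)."""
        m_t = np.maximum(0.0, W @ features + d)
        logits = G @ m_t + q          # one score per candidate action
        logits -= logits.max()        # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    rng = np.random.default_rng(5)
    n_feat, n_hidden, n_actions = 20, 12, 10
    W, d = rng.normal(size=(n_hidden, n_feat)), np.zeros(n_hidden)
    G, q = rng.normal(size=(n_actions, n_hidden)), np.zeros(n_actions)
    print(action_distribution(rng.normal(size=n_feat), W, d, G, q).round(3))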
step 1.4: entity extraction:
Three state transition actions relate to entity recognition; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if the currently processed word j is not in the entity set E and the partial entity block e is an empty stack, then j is not target information to be extracted and is deleted from the buffer β.
If the currently processed word j is not in the entity set E but is selected for further processing, it is moved from the buffer β into the partial entity block e.
If the currently processed word j is not in the entity set E and the partial entity block e is not an empty stack, j is labelled and moved back to the buffer β, and the new entity j is merged into the entity set E;
step 1.5: relation extraction:
Seven state transition actions relate to relation extraction; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if a left-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity i is popped from the generated-entity stack σ.
If a right-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity j is pushed onto the generated-entity stack σ.
If no relation is extracted, the entity j is moved onto the generated-entity stack σ.
Alternatively, if no relation is extracted, the entity i is popped from the generated-entity stack σ.
If a left-directed relation is found, the relation is merged into the relation set R, and the relation endpoint entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
If a right-directed relation is found, the relation is merged into the relation set R, and the relation start-point entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
The last action, once selected and executed, directly pops the entity i from the generated-entity stack σ and pushes it onto the temporary stack δ.
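A few representative transition actions can be sketched as plain state-mutating functions; the state here is a lightweight stand-in with the same six fields, and the action names, entity labelling and arc semantics are assumptions for illustration, not the patent's exact action inventory:

    from types import SimpleNamespace

    def new_state(words):
        """Fresh six-tuple state over a buffer of unprocessed words."""
        return SimpleNamespace(sigma=[], delta=[], e=[], beta=list(words),
                               E=set(), R=set())

    def act_delete(state):
        """Word j is not target information: drop it from the buffer."""
        state.beta.pop(0)

    def act_shift_to_block(state):
        """Move the current word from the buffer into the partial block e."""
        state.e.append(state.beta.pop(0))

    def act_emit_entity(state, label):
        """Close the partial block as a new labelled entity and add it to E."""
        j = (" ".join(state.e), label)
        state.e.clear()
        state.E.add(j)
        state.beta.insert(0, j)      # the marked entity returns to the buffer

    def act_shift_entity(state):
        """No relation found here: move entity j from the buffer onto sigma."""
        state.sigma.append(state.beta.pop(0))

    def act_right_arc(state, rel):
        """Right-directed relation: record (i, rel, j) and push endpoint j."""
        i = state.sigma[-1]
        j = state.beta.pop(0)
        state.R.add((i, rel, j))
        state.sigma.append(j)

    s = new_state(["Summer", "Palace", "is", "in", "Beijing"])
    act_shift_to_block(s); act_shift_to_block(s)   # "Summer Palace" -> e
    act_emit_entity(s, "GARDEN")
    act_shift_entity(s)                            # entity onto sigma
    act_delete(s); act_delete(s)                   # drop "is", "in"
    act_shift_to_block(s)
    act_emit_entity(s, "CITY")
    act_right_arc(s, "located_in")
    print(s.E)
    print(s.R)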
The detailed steps of the four small steps of System 2, i.e. the analysis system, described in step 2 are as follows:
step 2.1: entity aggregation:
On the basis of the entity set E and the relation set R obtained in the previous step, for each relation r ∈ R, the set of adjacent entity points under that relation is generated through iterative computation of a neighbourhood aggregation of the form:
e_{N(v)}^k = AGG({ W_r e_u^k : (u, r) ∈ N(v) })
e_v^{k+1} = σ( W^k [ e_v^k ; e_{N(v)}^k ] )
where (u, r) ∈ N(v) is the set of adjacent points of entity node v under relation edge r, and W_r is the projection matrix for relation r;
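A sketch of one entity-aggregation iteration under the stated assumptions (mean aggregation and a ReLU update; the exact operators are not spelled out in the published text):

    import numpy as np

    def aggregate_entity(v, node_vecs, neighbors, W_r, W_self):
        """One entity-aggregation iteration for node v.
        neighbors: (u, r) pairs adjacent to v; W_r: per-relation projections."""
        projected = [W_r[r] @ node_vecs[u] for u, r in neighbors]
        agg = np.mean(projected, axis=0)              # aggregate the neighbourhood
        combined = np.concatenate([node_vecs[v], agg])
        return np.maximum(0.0, W_self @ combined)     # updated e_v (ReLU assumed)

    rng = np.random.default_rng(6)
    d = 8
    node_vecs = {name: rng.normal(size=d) for name in ["v", "u1", "u2"]}
    W_r = {"r1": rng.normal(size=(d, d)), "r2": rng.normal(size=(d, d))}
    W_self = rng.normal(size=(d, 2 * d))
    print(aggregate_entity("v", node_vecs, [("u1", "r1"), ("u2", "r2")],
                           W_r, W_self).shape)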
step 2.2: relation aggregation:
To perform the aggregation operation uniformly over entity nodes and relation edges, the edges must be projected from the edge space into the node space. All edges are first expressed as weighted combinations of a set of basis vectors {v_1, v_2, ..., v_B},
e_r^0 = Σ_{b=1}^{B} α_{rb} v_b
and relation aggregation is then realized by the iterative computation:
e_r^{k+1} = W_rel e_r^k
where W_rel is the projection matrix from edge space to node space;
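A sketch of this edge-to-node projection, assuming each relation embedding is a weighted sum over the basis {v_1, ..., v_B} with per-relation weights α before projection by W_rel:

    import numpy as np

    rng = np.random.default_rng(3)
    B, d_edge, d_node = 4, 8, 8

    basis = rng.normal(size=(B, d_edge))       # basis vectors v_1 .. v_B
    alpha = rng.random(B)                      # per-relation weights (assumed learned)
    W_rel = rng.normal(size=(d_node, d_edge))  # projection: edge space -> node space

    e_r = alpha @ basis                        # weighted basis expression of r
    e_r_next = W_rel @ e_r                     # e_r^{k+1} = W_rel e_r^k
    print(e_r_next.shape)                      # (8,): relation now lives in node space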
step 2.3: inference prediction:
The scoring function is first computed:
f(e_h, r, e_t) = v_r^T tanh( v_h^T W_r v_t + W_{r,1} v_h + W_{r,2} v_t + b_r )
Then a logistic regression distribution of the form
P(y | e_h, r, e_t) = 1 / (1 + exp(-f(e_h, r, e_t)))
is computed, and the probability value is taken as the prediction result;
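The scoring function and the assumed logistic link can be implemented directly; W_r is a third-order tensor whose k slices give the bilinear term:

    import numpy as np

    def score(v_h, v_t, v_r, W_r, W_r1, W_r2, b_r):
        """f(e_h, r, e_t) = v_r^T tanh(v_h^T W_r v_t + W_r1 v_h + W_r2 v_t + b_r)."""
        bilinear = np.einsum("i,ijk,k->j", v_h, W_r, v_t)  # v_h^T W_r v_t per slice
        return v_r @ np.tanh(bilinear + W_r1 @ v_h + W_r2 @ v_t + b_r)

    def probability(f):
        return 1.0 / (1.0 + np.exp(-f))   # assumed logistic link on the score

    rng = np.random.default_rng(4)
    d, k = 6, 3                            # entity dimension, tensor slices
    f = score(rng.normal(size=d), rng.normal(size=d), rng.normal(size=k),
              rng.normal(size=(d, k, d)), rng.normal(size=(k, d)),
              rng.normal(size=(k, d)), rng.normal(size=k))
    print(probability(f))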
step 2.4: graph completion:
Let e_v denote a predicted new entity node and r_vw a predicted relation connecting entities e_v and e_w. The message function is defined as a multi-layer perceptron (MLP):
M(e_v, e_w, r_vw) = MLP(e_v, e_w, r_vw)
The aggregation of neighbour messages sent to a node is then computed:
m_v^{t+1} = AGG_{N(v)}( M(e_v^t, e_w^t, r_vw^t) )
Finally, entities and relations are updated by the following formula to complete the completion:
e_v^{t+1} = UPD(m_v^{t+1}, e_v^t)
The algorithmic process of steps 2.1 to 2.4 is iterated repeatedly until completion is finished.
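The completion loop of steps 2.1 to 2.4 follows a standard message-passing pattern. A sketch with a one-hidden-layer MLP message function, sum aggregation, and a tanh update (these operator choices are assumptions):

    import numpy as np

    def mlp_message(e_v, e_w, r_vw, W1, W2):
        """M(e_v, e_w, r_vw): one-hidden-layer MLP over the concatenation."""
        hidden = np.maximum(0.0, W1 @ np.concatenate([e_v, e_w, r_vw]))
        return W2 @ hidden

    def update_node(m_v, e_v, W_upd):
        """UPD(m_v^{t+1}, e_v^t): combine aggregated message and old state."""
        return np.tanh(W_upd @ np.concatenate([m_v, e_v]))

    def completion_step(node_vecs, edges, rel_vecs, W1, W2, W_upd):
        """One iteration: aggregate neighbour messages, then update each node."""
        out = {}
        for v, e_v in node_vecs.items():
            msgs = [mlp_message(e_v, node_vecs[w], rel_vecs[(v, w)], W1, W2)
                    for (src, w) in edges if src == v]
            m_v = np.sum(msgs, axis=0) if msgs else np.zeros(len(e_v))
            out[v] = update_node(m_v, e_v, W_upd)
        return out

    rng = np.random.default_rng(7)
    d, h = 6, 10
    W1, W2 = rng.normal(size=(h, 3 * d)), rng.normal(size=(d, h))
    W_upd = rng.normal(size=(d, 2 * d))
    node_vecs = {n: rng.normal(size=d) for n in ["a", "b"]}
    rel_vecs = {("a", "b"): rng.normal(size=d), ("b", "a"): rng.normal(size=d)}
    edges = [("a", "b"), ("b", "a")]
    print(completion_step(node_vecs, edges, rel_vecs, W1, W2, W_upd)["a"].shape)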
The technical scheme of the invention mainly has the following technical advantages.
1. Capability of third-generation artificial intelligence cognitive reasoning
The dual-channel theory of cognitive science holds that the human brain's cognitive system contains two systems: System 1 and System 2. System 1 is an intuitive system that finds answers through a person's intuitive matching of related information, and is therefore fast and simple; System 2 is an analysis system that finds answers by making decisions through reasoning and logic.
Artificial intelligence has three development stages: from the machine intelligence of the past, through the perceptual intelligence of the present, towards cognitive intelligence.
Current mainstream research remains at the perceptual intelligence stage and mainly implements the function of System 1.
On the basis of the constructed classical garden knowledge graph, cognitive reasoning is studied with knowledge graph completion as the entry point; the algorithm executes the information extraction function of System 1 and the reasoning function of System 2 in the dual-channel theory of cognitive science, realizing the cognitive reasoning capability of third-generation artificial intelligence and moving from the perceptual intelligence stage to the cognitive intelligence stage.
2. Improving the utilization rate and the execution efficiency of information
System 1 of the algorithm implements information extraction: it converts the output layer from a CRF into a state transition layer and turns the sequence labelling problem into the problem of generating a directed graph through state transitions. Because the associated information between entities and relations is fully used during processing, entities and relations are extracted simultaneously, improving information utilization and execution efficiency. System 2 of the algorithm implements cognitive reasoning: by projecting relations from the edge space into the node space of entities, aggregation operations and inference prediction are performed uniformly over entity nodes and relation edges, and entities and relations are completed at the same time, improving the knowledge utilization and execution efficiency of knowledge graph completion.
3. Wide applicability to classical gardens nationwide
The Chinese classical garden information extraction algorithm is scientifically designed, rigorously structured and consistently formatted. As a research result of the Beijing municipal key research and development project "Construction of a classical garden knowledge graph based on multi-source data fusion and research and application of service technology", it rests on a solid working foundation.
The invention has been continuously developed, applied and verified throughout this series of work. It can therefore be widely applied to classical gardens nationwide.
Detailed Description
A Chinese classical garden knowledge graph completion and cognitive reasoning method comprises the following steps:
1. Execute the function of System 1 in the dual-channel theory of cognitive science, i.e. the perception system, to extract entities and relations.
Step 1 comprises four substeps. Step 1.1: compute the word-vector embedding sequence from the input. Step 1.2: apply Bi-LSTM (bidirectional long short-term memory) encoding to the sequence. Step 1.3: execute a state transition; if the final state is reached, the entities and relations have been extracted and this stage ends; otherwise, according to the computed probabilities, proceed to the next substep. Step 1.4 or 1.5: select an entity-extraction or relation-extraction state transition action and return to step 1.3 after execution. When entity and relation extraction is finally complete, the algorithm enters the next stage.
2. Execute System 2 in the dual-channel theory, i.e. the analysis system, to perform reasoning. First judge whether knowledge graph completion should continue; if completion is finished and will not continue, the algorithm flow ends.
Otherwise, execute four small steps. Step 2.1: perform entity aggregation. Step 2.2: perform relation aggregation. Step 2.3: execute inference prediction. Step 2.4: complete the knowledge graph, then return to the judgment at the start of step 2 (whether to continue completion) until completion no longer continues, at which point the algorithm ends.
The detailed steps by which System 1, i.e. the perception system, extracts entity and relation information in step 1 are as follows:
step 1.1: word vector embedding;
For each input token, the vector embedding is computed as:
x_i = V [w_i ; w̃_i]
where w_i is the learned word vector, w̃_i is a fixed pretrained word vector, and V is the matrix applied to the concatenation of the two vectors.
The computation yields the vector-embedded sequence:
x = (x_1, x_2, ..., x_i, ..., x_n)
step 1.2: Bi-LSTM encoding:
Bi-LSTM (bidirectional long short-term memory) encoding is applied to the sequence x obtained in step 1.1: a forward LSTM first encodes the sequence in order from x_1 to x_n, and a backward LSTM then encodes it in order from x_n to x_1.
At each step, the current input x_t and the hidden state h_{t-1} passed from the previous step are concatenated and transformed to yield four states. Three of them are gating states z^f, z^i, z^o: the concatenated vector is multiplied by a weight matrix and mapped by a sigmoid activation to a value between 0 and 1, which serves as a gate. The fourth state z is mapped by a tanh activation to a value between -1 and 1; it is candidate content, not a gating signal.
The state z^f serves as the forget gate: element-wise multiplication controls which parts of the previous long-term memory c_{t-1} are kept as important and which are discarded as unimportant, computed as:
z^f ⊙ c_{t-1}
The state z^i serves as the input gate: element-wise multiplication controls which parts of the candidate z, and hence of the input x_t, are selectively memorized, computed as:
z^i ⊙ z
Adding the two results element-wise gives the long-term memory passed to the next state:
c_t = z^f ⊙ c_{t-1} + z^i ⊙ z
The state z^o controls the output: scaling tanh(c_t) gives the short-term memory h_t:
h_t = z^o ⊙ tanh(c_t)
Finally, h_t is transformed to give the output y_t:
y_t = σ(W′ h_t)
The result of the forward LSTM encoding is denoted h_t^→, the result of the backward LSTM encoding h_t^←, and their concatenation
h_t = [h_t^→ ; h_t^←]
represents the Bi-LSTM encoding result;
step 1.3: state transition:
State transitions are performed over the Bi-LSTM encoding result.
Define a six-tuple (σ, δ, e, β, E, R) representing the state at each moment, where σ is a stack storing generated entities, δ is a stack storing entities that are temporarily popped from σ and later pushed back, e stores the partial entity block currently being processed, β is a buffer containing the unprocessed words, E stores the set of generated entities, and R stores the set of generated relations.
The information extraction task can then be expressed as a state transition process from the initial state ([], [], [], β, ∅, ∅) to the final state (σ, δ, [], [], E, R), where [] denotes an empty stack and ∅ an empty set.
For the state at time t:
m_t = max{0, W [s_t; b_t; p_t; e_t; a_t] + d}
and the probability of each candidate action k is computed by a softmax of the form:
p(z_t = k | m_t) = exp(g_k^T m_t + q_k) / Σ_{k′} exp(g_{k′}^T m_t + q_{k′})
This predicts the state transition action to select at time t; according to the prediction, control passes to step 1.4 or step 1.5, and after one state transition is executed, control returns to step 1.3 until the final state is reached.
Given an input w, the probability of any reasonable sequence of state transition actions z can be expressed as:
p(z | w) = Π_t p(z_t | m_t)
and therefore:
log p(z | w) = Σ_t log p(z_t | m_t)
When e and β in the state six-tuple are empty, the final state has been reached and state transition ends; the sets E and R at that moment contain the extracted entities and relations respectively, and can be output to System 2 for reasoning;
step 1.4: entity extraction:
Three state transition actions relate to entity recognition; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if the currently processed word j is not in the entity set E and the partial entity block e is an empty stack, then j is not target information to be extracted and is deleted from the buffer β.
If the currently processed word j is not in the entity set E but is selected for further processing, it is moved from the buffer β into the partial entity block e.
If the currently processed word j is not in the entity set E and the partial entity block e is not an empty stack, j is labelled and moved back to the buffer β, and the new entity j is merged into the entity set E;
step 1.5: relation extraction:
Seven state transition actions relate to relation extraction; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if a left-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity i is popped from the generated-entity stack σ.
If a right-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity j is pushed onto the generated-entity stack σ.
If no relation is extracted, the entity j is moved onto the generated-entity stack σ.
Alternatively, if no relation is extracted, the entity i is popped from the generated-entity stack σ.
If a left-directed relation is found, the relation is merged into the relation set R, and the relation endpoint entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
If a right-directed relation is found, the relation is merged into the relation set R, and the relation start-point entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
The last action, once selected and executed, directly pops the entity i from the generated-entity stack σ and pushes it onto the temporary stack δ.
The detailed steps of the four small steps of System 2, i.e. the analysis system, described in step 2 are as follows:
step 2.1: entity aggregation:
On the basis of the entity set E and the relation set R obtained in the previous step, for each relation r ∈ R, the set of adjacent entity points under that relation is generated through iterative computation of a neighbourhood aggregation of the form:
e_{N(v)}^k = AGG({ W_r e_u^k : (u, r) ∈ N(v) })
e_v^{k+1} = σ( W^k [ e_v^k ; e_{N(v)}^k ] )
where (u, r) ∈ N(v) is the set of adjacent points of entity node v under relation edge r, and W_r is the projection matrix for relation r;
step 2.2: relation aggregation:
To perform the aggregation operation uniformly over entity nodes and relation edges, the edges must be projected from the edge space into the node space. All edges are first expressed as weighted combinations of a set of basis vectors {v_1, v_2, ..., v_B},
e_r^0 = Σ_{b=1}^{B} α_{rb} v_b
and relation aggregation is then realized by the iterative computation:
e_r^{k+1} = W_rel e_r^k
where W_rel is the projection matrix from edge space to node space;
step 2.3: inference prediction:
The scoring function is first computed:
f(e_h, r, e_t) = v_r^T tanh( v_h^T W_r v_t + W_{r,1} v_h + W_{r,2} v_t + b_r )
Then a logistic regression distribution of the form
P(y | e_h, r, e_t) = 1 / (1 + exp(-f(e_h, r, e_t)))
is computed, and the probability value is taken as the prediction result;
step 2.4: graph completion:
Let e_v denote a predicted new entity node and r_vw a predicted relation connecting entities e_v and e_w. The message function is defined as a multi-layer perceptron (MLP):
M(e_v, e_w, r_vw) = MLP(e_v, e_w, r_vw)
The aggregation of neighbour messages sent to a node is then computed:
m_v^{t+1} = AGG_{N(v)}( M(e_v^t, e_w^t, r_vw^t) )
Finally, entities and relations are updated by the following formula to complete the completion:
e_v^{t+1} = UPD(m_v^{t+1}, e_v^t)
The algorithmic process of steps 2.1 to 2.4 is iterated repeatedly until completion is finished.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention to other forms. Any person skilled in the art may use the disclosed technical content to make modifications or equivalent changes; any simple modification, equivalent change or alteration made to the above embodiments according to the technical spirit of the present invention still falls within the protection scope of the present invention.

Claims (3)

1. A Chinese classical garden knowledge graph completion and cognitive reasoning method, characterized by comprising the following steps:
1) Execute the function of System 1 in the dual-channel theory of cognitive science, i.e. the perception system, to extract entities and relations.
Step 1) comprises four substeps. Step 1.1: compute the word-vector embedding sequence from the input. Step 1.2: apply Bi-LSTM (bidirectional long short-term memory) encoding to the sequence. Step 1.3: execute a state transition; if the final state is reached, the entities and relations have been extracted and this stage ends; otherwise, according to the computed probabilities, proceed to the next substep. Step 1.4 or 1.5: select an entity-extraction or relation-extraction state transition action and return to step 1.3 after execution; when the final state is reached, entity and relation extraction is complete and the algorithm enters the next stage.
2) Execute System 2 in the dual-channel theory, i.e. the analysis system, to perform reasoning. First judge whether knowledge graph completion should continue; if completion is finished and will not continue, the algorithm flow ends.
Otherwise, execute four small steps. Step 2.1: perform entity aggregation. Step 2.2: perform relation aggregation. Step 2.3: execute inference prediction. Step 2.4: complete the knowledge graph, then return to the judgment at the start of step 2) (whether to continue completion) until completion no longer continues, at which point the algorithm ends.
2. The Chinese classical garden knowledge graph completion and cognitive reasoning method according to claim 1, wherein System 1, i.e. the perception system, extracts entity and relation information in step 1) through the following detailed steps:
step 1.1: word vector embedding;
For each input token, the vector embedding is computed as:
x_i = V [w_i ; w̃_i]
where w_i is the learned word vector, w̃_i is a fixed pretrained word vector, and V is the matrix applied to the concatenation of the two vectors.
The computation yields the vector-embedded sequence:
x = (x_1, x_2, ..., x_i, ..., x_n)
step 1.2: Bi-LSTM encoding:
Bi-LSTM (bidirectional long short-term memory) encoding is applied to the sequence x obtained in step 1.1: a forward LSTM first encodes the sequence in order from x_1 to x_n, and a backward LSTM then encodes it in order from x_n to x_1.
At each step, the current input x_t and the hidden state h_{t-1} passed from the previous step are concatenated and transformed to yield four states. Three of them are gating states z^f, z^i, z^o: the concatenated vector is multiplied by a weight matrix and mapped by a sigmoid activation to a value between 0 and 1, which serves as a gate. The fourth state z is mapped by a tanh activation to a value between -1 and 1; it is candidate content, not a gating signal.
The state z^f serves as the forget gate: element-wise multiplication controls which parts of the previous long-term memory c_{t-1} are kept as important and which are discarded as unimportant, computed as:
z^f ⊙ c_{t-1}
The state z^i serves as the input gate: element-wise multiplication controls which parts of the candidate z, and hence of the input x_t, are selectively memorized, computed as:
z^i ⊙ z
Adding the two results element-wise gives the long-term memory passed to the next state:
c_t = z^f ⊙ c_{t-1} + z^i ⊙ z
The state z^o controls the output: scaling tanh(c_t) gives the short-term memory h_t:
h_t = z^o ⊙ tanh(c_t)
Finally, h_t is transformed to give the output y_t:
y_t = σ(W′ h_t)
The result of the forward LSTM encoding is denoted h_t^→, the result of the backward LSTM encoding h_t^←, and their concatenation
h_t = [h_t^→ ; h_t^←]
represents the Bi-LSTM encoding result;
step 1.3: state transition:
State transitions are performed over the Bi-LSTM encoding result.
Define a six-tuple (σ, δ, e, β, E, R) representing the state at each moment, where σ is a stack storing generated entities, δ is a stack storing entities that are temporarily popped from σ and later pushed back, e stores the partial entity block currently being processed, β is a buffer containing the unprocessed words, E stores the set of generated entities, and R stores the set of generated relations.
The information extraction task can then be expressed as a state transition process from the initial state ([], [], [], β, ∅, ∅) to the final state (σ, δ, [], [], E, R), where [] denotes an empty stack and ∅ an empty set.
For the state at time t:
m_t = max{0, W [s_t; b_t; p_t; e_t; a_t] + d}
and the probability of each candidate action k is computed by a softmax of the form:
p(z_t = k | m_t) = exp(g_k^T m_t + q_k) / Σ_{k′} exp(g_{k′}^T m_t + q_{k′})
This predicts the state transition action to select at time t; according to the prediction, control passes to step 1.4 or step 1.5, and after one state transition is executed, control returns to step 1.3 until the final state is reached.
Given an input w, the probability of any reasonable sequence of state transition actions z can be expressed as:
p(z | w) = Π_t p(z_t | m_t)
and therefore:
log p(z | w) = Σ_t log p(z_t | m_t)
When e and β in the state six-tuple are empty, the final state has been reached and state transition ends; the sets E and R at that moment contain the extracted entities and relations respectively, and can be output to System 2 for reasoning;
step 1.4: entity extraction:
Three state transition actions relate to entity recognition; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if the currently processed word j is not in the entity set E and the partial entity block e is an empty stack, then j is not target information to be extracted and is deleted from the buffer β.
If the currently processed word j is not in the entity set E but is selected for further processing, it is moved from the buffer β into the partial entity block e.
If the currently processed word j is not in the entity set E and the partial entity block e is not an empty stack, j is labelled and moved back to the buffer β, and the new entity j is merged into the entity set E;
step 1.5: relation extraction:
Seven state transition actions relate to relation extraction; after one of them is selected and executed according to step 1.3, control returns to step 1.3.
First, if a left-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity i is popped from the generated-entity stack σ.
If a right-directed relation is found, the relation is merged into the relation set R and the relation endpoint entity j is pushed onto the generated-entity stack σ.
If no relation is extracted, the entity j is moved onto the generated-entity stack σ.
Alternatively, if no relation is extracted, the entity i is popped from the generated-entity stack σ.
If a left-directed relation is found, the relation is merged into the relation set R, and the relation endpoint entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
If a right-directed relation is found, the relation is merged into the relation set R, and the relation start-point entity i is popped from the generated-entity stack σ and then pushed onto the temporary stack δ.
The last action, once selected and executed, directly pops the entity i from the generated-entity stack σ and pushes it onto the temporary stack δ.
3. The method for complementing Chinese classical garden knowledge profiles and cognizing inference according to claim 1, wherein the system 2, i.e. the function of the analysis system, in the step 2 of the patent is implemented by the following detailed steps of 4 small steps:
step 2.1: entity aggregation:
on the basis of the entity set E and the relation set R obtained in the previous step, for each relation R, R belongs to R, respectively generating an adjacent entity point set under the relation through iterative computation of the following formula;
$$m_v^{k+1} = \mathrm{AGG}_{(u,r) \in N(v)}\left(W_r\, e_u^{k}\right)$$

$$e_v^{k+1} = \mathrm{UPD}\left(m_v^{k+1},\, e_v^{k}\right)$$

where $(u, r) \in N(v)$ ranges over the set of adjacent points of the entity node v under the relation edge r, and $W_r$ is the projection matrix for the relation r;
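As a non-limiting illustration of step 2.1, the NumPy sketch below instantiates AGG as a sum over relation-projected neighbors and UPD as a tanh of message plus old embedding, since the claim leaves both operators open; all names and the random parameters are assumptions.

```python
import numpy as np

def entity_aggregation(ents, rels_W, neighbors, k_steps=2):
    """ents: {node: d-dim vector}; rels_W: {relation: (d, d) matrix};
    neighbors: {node: [(u, r), ...]} -- adjacency under relation edges."""
    e = {v: vec.copy() for v, vec in ents.items()}
    for _ in range(k_steps):
        new_e = {}
        for v, vec in e.items():
            # Message: project each neighbor u through its relation matrix W_r.
            msgs = [rels_W[r] @ e[u] for (u, r) in neighbors.get(v, [])]
            m = np.sum(msgs, axis=0) if msgs else np.zeros_like(vec)
            # Update: combine the aggregated message with the old embedding.
            new_e[v] = np.tanh(m + vec)
        e = new_e
    return e

# Tiny example with random parameters and hypothetical garden entities.
rng = np.random.default_rng(0)
d = 4
ents = {"Canglang Pavilion": rng.normal(size=d), "Suzhou": rng.normal(size=d)}
rels_W = {"located_in": rng.normal(size=(d, d)) * 0.1}
neighbors = {"Canglang Pavilion": [("Suzhou", "located_in")]}
print(entity_aggregation(ents, rels_W, neighbors)["Canglang Pavilion"])
```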
step 2.2: relation aggregation:
in order to perform the aggregation operation uniformly over entity nodes and relation edges, the edges need to be projected from the edge space into the node space; every edge is first expressed as a weighted combination of a set of basis vectors $\{v_1, v_2, \ldots, v_B\}$, and relation aggregation is then realized by iteratively computing the following formulas;
$$e_r^{k} = \sum_{b=1}^{B} \alpha_{br}\, v_b$$
$$e_r^{k+1} = W_{rel}\, e_r^{k}$$

where $W_{rel}$ is the projection matrix from the edge space to the node space;
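A non-limiting sketch of step 2.2: each relation embedding is formed as a weighted sum of the basis vectors and then projected by $W_{rel}$; the weight matrix `alpha` is assumed to be a learned parameter, and the sizes are arbitrary.

```python
import numpy as np

def relation_embeddings(alpha, basis):
    """Express every relation as a weighted sum of B basis vectors:
    alpha: (num_rels, B) weights; basis: (B, d) basis vectors."""
    return alpha @ basis            # e_r^k, one row per relation

def relation_aggregation(e_r, W_rel):
    # Project relation embeddings from the edge space into the node space.
    return e_r @ W_rel.T            # e_r^{k+1} = W_rel e_r^k

rng = np.random.default_rng(1)
B, d, num_rels = 3, 4, 2
basis = rng.normal(size=(B, d))
alpha = rng.normal(size=(num_rels, B))
W_rel = rng.normal(size=(d, d))
e_r = relation_embeddings(alpha, basis)
print(relation_aggregation(e_r, W_rel).shape)  # (2, 4)
```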
step 2.3: reasoning and prediction:
the scoring function is first calculated:
$$f(e_h, r, e_t) = v_r^{\top} \tanh\!\left(v_h^{\top} W_r\, v_t + W_{r,1}\, v_h + W_{r,2}\, v_t + b_r\right)$$
then the logistic regression distribution is calculated:

$$p\big(y = 1 \mid e_h, r, e_t\big) = \sigma\big(f(e_h, r, e_t)\big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}$$

and the resulting probability value is taken as the prediction result;
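The scoring function has the shape of a neural-tensor-network score; the non-limiting sketch below evaluates it with random parameters, assuming $W_r$ is a single $d \times d$ slice and $W_{r,1}$, $W_{r,2}$ map into a k-dimensional hidden layer (the dimensions are assumptions).

```python
import numpy as np

def score(v_h, v_t, v_r, W_r, W_r1, W_r2, b_r):
    """f(e_h, r, e_t) = v_r^T tanh(v_h^T W_r v_t + W_{r,1} v_h + W_{r,2} v_t + b_r).
    W_r is (d, d); W_r1, W_r2 are (k, d); v_r, b_r are (k,)."""
    bilinear = v_h @ W_r @ v_t                 # scalar interaction term
    hidden = np.tanh(bilinear + W_r1 @ v_h + W_r2 @ v_t + b_r)
    return float(v_r @ hidden)

def predict_prob(v_h, v_t, rel_params):
    # Step 2.3's prediction: squash the score through a sigmoid and
    # take the probability value as the prediction result.
    return 1.0 / (1.0 + np.exp(-score(v_h, v_t, *rel_params)))

rng = np.random.default_rng(2)
d, k = 4, 3
params = (rng.normal(size=k), rng.normal(size=(d, d)),
          rng.normal(size=(k, d)), rng.normal(size=(k, d)), rng.normal(size=k))
print(predict_prob(rng.normal(size=d), rng.normal(size=d), params))
```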
step 2.4: graph completion:
let $e_v$ denote a predicted new entity node and $r_{vw}$ denote the predicted relation connecting the entities $e_v$ and $e_w$; the message function is defined as a multi-layer perceptron (MLP):

$$M(e_v, e_w, r_{vw}) = \mathrm{MLP}(e_v, e_w, r_{vw})$$
then the neighbor messages sent to each node are aggregated:

$$m_v^{t+1} = \mathrm{AGG}_{N(v)}\big(M(e_v^{t}, e_w^{t}, r_{vw}^{t})\big)$$
finally, the entities and relations are updated by the following formula to carry out the completion:

$$e_v^{t+1} = \mathrm{UPD}\big(m_v^{t+1},\, e_v^{t}\big)$$
the algorithmic process of steps 2.1 to 2.4 is iterated repeatedly until the graph completion is finished.
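A non-limiting sketch of the step 2.4 iteration, instantiating MLP as a single tanh layer, AGG as a sum, and UPD as an additive tanh update; all three instantiations, and the fixed three iterations, are assumptions since the claim leaves the operators and the stopping criterion open.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
W_msg = rng.normal(size=(d, 3 * d)) * 0.1   # one-layer MLP as message function

def message(e_v, e_w, r_vw):
    # M(e_v, e_w, r_vw) = MLP([e_v; e_w; r_vw]) -- assumed single tanh layer.
    return np.tanh(W_msg @ np.concatenate([e_v, e_w, r_vw]))

def step(ents, rels, edges):
    """One completion round: aggregate neighbor messages (AGG = sum) and
    update each entity (UPD = tanh(e + m)). edges: [(v, w, r), ...]."""
    msgs = {v: np.zeros(d) for v in ents}
    for v, w, r in edges:
        msgs[v] += message(ents[v], ents[w], rels[r])
    return {v: np.tanh(ents[v] + msgs[v]) for v in ents}

ents = {"v": rng.normal(size=d), "w": rng.normal(size=d)}
rels = {"r_vw": rng.normal(size=d)}
edges = [("v", "w", "r_vw")]
for _ in range(3):   # iterate the update until completion (fixed here)
    ents = step(ents, rels, edges)
print(ents["v"])
```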
CN202011447930.3A 2020-12-09 2020-12-09 Chinese classical garden knowledge graph completion and cognitive reasoning method Pending CN112463987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011447930.3A CN112463987A (en) 2020-12-09 2020-12-09 Chinese classical garden knowledge graph completion and cognitive reasoning method

Publications (1)

Publication Number Publication Date
CN112463987A true CN112463987A (en) 2021-03-09

Family

ID=74800622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011447930.3A Pending CN112463987A (en) 2020-12-09 2020-12-09 Chinese classical garden knowledge graph completion and cognitive reasoning method

Country Status (1)

Country Link
CN (1) CN112463987A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360664A (en) * 2021-05-31 2021-09-07 电子科技大学 Knowledge graph complementing method
CN113360664B (en) * 2021-05-31 2022-03-25 电子科技大学 Knowledge graph complementing method
CN113837454A (en) * 2021-09-09 2021-12-24 武汉大学 Hybrid neural network model prediction method and system for three degrees of freedom of ship
CN113837454B (en) * 2021-09-09 2024-04-23 武汉大学 Ship three-degree-of-freedom hybrid neural network model prediction method and system
CN115964459A (en) * 2021-12-28 2023-04-14 北方工业大学 Multi-hop inference question-answering method and system based on food safety cognitive map
CN115964459B (en) * 2021-12-28 2023-09-12 北方工业大学 Multi-hop reasoning question-answering method and system based on food safety cognition spectrum
CN114496234A (en) * 2022-04-18 2022-05-13 浙江大学 Cognitive-atlas-based personalized diagnosis and treatment scheme recommendation system for general patients
CN114496234B (en) * 2022-04-18 2022-07-19 浙江大学 Cognitive-atlas-based personalized diagnosis and treatment scheme recommendation system for general patients

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination