CN113688217B - Intelligent question and answer method oriented to search engine knowledge base - Google Patents

Intelligent question and answer method oriented to search engine knowledge base

Info

Publication number
CN113688217B
CN113688217B (application CN202110972592.3A)
Authority
CN
China
Prior art keywords
question
answer
search engine
knowledge base
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110972592.3A
Other languages
Chinese (zh)
Other versions
CN113688217A (en)
Inventor
舒明雷
刘浩
周书旺
高天雷
许继勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Original Assignee
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology and Shandong Institute of Artificial Intelligence
Priority to CN202110972592.3A
Publication of CN113688217A
Application granted
Publication of CN113688217B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models

Abstract

An intelligent question-answering method oriented to a search engine knowledge base. Based on the search engine knowledge base, an inference path with the symbolic Markov property is constructed through dynamic search-path inference and deep reinforcement learning; path information encoding and computation of the action-space probability distribution are realized with an LSTM and a feedforward neural network; and a judgment condition is set according to the vector representation of the search inference path. Based on the judgment condition and the inference path used when searching for question answers, knowledge-base-based intelligent question answering for the search engine is realized. The method requires neither predefined rules nor a limit on the inference path length, can be applied to complex question-answer inference over a search engine knowledge base, and achieves efficient and accurate intelligent question answering for search engines.

Description

Intelligent question and answer method oriented to search engine knowledge base
Technical Field
The invention relates to the field of intelligent question answering for search engines, in particular to an intelligent question-answering method oriented to a search engine knowledge base.
Background
Intelligent question answering for a search engine means that, given a natural language question entered into the search engine, the corresponding entity is retrieved from an existing knowledge base and returned as the answer to the question. Specifically, entity recognition and relation extraction are performed on the question, the question is linked to the corresponding entity and relation in the knowledge base, and candidate answers are queried, matched and inferred to obtain the target answer. At present, the inference process of intelligent question answering in the search engine field mainly faces the following problems:
1) Traditional semantic parsing and information retrieval methods require manually writing a large number of templates or defining a large number of rules.
2) Some deep learning methods can only handle simple questions, cannot be applied to complex inference processes, and consume large amounts of computing resources.
Disclosure of Invention
In order to overcome the defects of the above technologies, the invention provides an efficient and accurate intelligent question-answering method oriented to a search engine knowledge base.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
an intelligent question-answering method oriented to a search engine knowledge base comprises the following steps:
a) setting the question searched in the search engine and its corresponding answer, obtaining the question entity e_s and the query relation r_q of the question with a natural language processing tool, denoting the answer corresponding to the question as e_o, and mapping the question entity e_s, the query relation r_q and the answer e_o into a dense low-dimensional vector space using the Embedding technique of natural language processing, obtaining the vector representation ε_s of the question entity, the vector representation γ_q of the query relation and the vector representation ε_o of the answer entity;
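The following PyTorch sketch illustrates one way the Embedding step in a) could be realized; the class name KGEmbedder, the dimension of 100 and the use of nn.Embedding lookup tables are illustrative assumptions rather than details fixed by the method.

```python
import torch
import torch.nn as nn

class KGEmbedder(nn.Module):
    """Maps knowledge-base entities and relations to dense low-dimensional vectors."""
    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)      # lookup for ε
        self.relation_emb = nn.Embedding(num_relations, dim)   # lookup for γ

    def forward(self, e_s, r_q, e_o):
        # e_s, r_q, e_o are integer ids produced by the NLP tool / entity linker
        eps_s = self.entity_emb(e_s)      # ε_s: question entity vector
        gamma_q = self.relation_emb(r_q)  # γ_q: query relation vector
        eps_o = self.entity_emb(e_o)      # ε_o: answer entity vector
        return eps_s, gamma_q, eps_o

# usage (toy sizes):
embedder = KGEmbedder(num_entities=10000, num_relations=200)
eps_s, gamma_q, eps_o = embedder(torch.tensor(42), torch.tensor(7), torch.tensor(99))
```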
b) using the entity e_s and the relation r_q extracted from the search question, defining the path for inferring the question answer over the search engine knowledge base as ((r_0, e_s), (r_1, e_1), …, (r_n, e_n)), where e_i, i = 1, …, n denotes the i-th entity in the path, n is the maximum inference path length, r_i, i = 1, …, n denotes the i-th relation in the inference path, and r_0 is an introduced redundancy relation that together with the question entity e_s forms an action; defining a tuple (e, r) formed by an entity e and a relation r traversed during the search of the search engine knowledge base as an action a, and defining the set of all actions as the action space A;
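As an illustration of step b), the action space A is simply the set of (relation, entity) edges leaving the current entity in the knowledge base; in the sketch below the dictionary kb, the marker NO_OP for the redundancy relation r_0 and the toy entity names are assumptions for demonstration only.

```python
# Knowledge base as an adjacency map: head entity -> list of (relation, tail entity) edges.
kb = {
    "Q_Einstein": [("born_in", "Q_Ulm"), ("field", "Q_Physics")],
    "Q_Ulm": [("country", "Q_Germany")],
}

R0 = "NO_OP"  # introduced redundancy relation r_0; paired with e_s it forms the first action

def action_space(entity, kb):
    """Action space A: every action a = (r, e) reachable from the current entity."""
    return list(kb.get(entity, []))

path = [(R0, "Q_Einstein")]            # inference path starts as ((r_0, e_s), ...)
A = action_space(path[-1][1], kb)      # candidate actions at the current step
```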
c) mapping the action a on the search path in the search engine knowledge base into a dense low-dimensional vector space through the Embedding technique of natural language processing, obtaining the vector representation α = (γ; ε) of the action a, where ";" is the vector concatenation operation, γ is the vector representation of the relation r and ε is the vector representation of the entity e; inputting the vector α_t of the action at time step t into a long short-term memory network LSTM and obtaining the vector representation h_t of the historical memory information at time step t by the formula h_t = LSTM(h_{t-1}, α_t), where h_{t-1} is the vector representation of the historical memory information at time step t-1;
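A minimal sketch of the history encoding in step c), assuming an LSTMCell whose hidden size equals the size of the concatenated action vector and a zero initial state (both choices are assumptions, not prescribed by the method):

```python
import torch
import torch.nn as nn

dim = 100                                   # embedding size of γ and ε
lstm = nn.LSTMCell(input_size=2 * dim, hidden_size=2 * dim)

h_t = torch.zeros(1, 2 * dim)               # h_0: empty history
c_t = torch.zeros(1, 2 * dim)               # LSTM cell state (implementation detail)

def encode_step(gamma, eps, h_t, c_t):
    """α_t = (γ; ε);  h_t = LSTM(h_{t-1}, α_t)."""
    alpha_t = torch.cat([gamma, eps], dim=-1)   # ";" is vector concatenation
    h_t, c_t = lstm(alpha_t, (h_t, c_t))
    return h_t, c_t

h_t, c_t = encode_step(torch.randn(1, dim), torch.randn(1, dim), h_t, c_t)
```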
d) computing the action score π_θ(a_t) over the action space A_t at time step t by the formula
π_θ(a_t) = softmax( Â_t ⊗ (W_2 ReLU(W_1 [h_t; ε_t])) )
where Â_t is the vector representation of the action space A_t obtained through the Embedding mapping of natural language processing, W_1 and W_2 are the weights of the network model, ReLU(·) is the ReLU function, softmax(·) is the softmax function, ⊗ is the matrix product, and ε_t is the vector representation corresponding to the t-th step of the path for inferring the question answer over the search engine knowledge base; selecting the action corresponding to the maximum action score and recording it as (r̂_t, ê_t), where r̂_t is the relation corresponding to the action a with the maximum action score and ê_t is the entity corresponding to the action a with the maximum action score;
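A sketch of the scoring in step d): the embedded action space Â_t (one row per candidate action) is combined by matrix product with a two-layer feedforward transform of [h_t; ε_t] and normalized with softmax; the greedy argmax then yields (r̂_t, ê_t). The layer sizes below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, hdim = 100, 200                          # entity/relation size, LSTM hidden size
W1 = nn.Linear(hdim + dim, 2 * dim)           # first feedforward layer
W2 = nn.Linear(2 * dim, 2 * dim)              # second feedforward layer

def action_scores(A_emb, h_t, eps_t):
    """π_θ(a_t) = softmax( Â_t ⊗ W_2 ReLU(W_1 [h_t; ε_t]) )."""
    query = W2(F.relu(W1(torch.cat([h_t, eps_t], dim=-1))))  # shape (1, 2*dim)
    logits = A_emb @ query.squeeze(0)                         # one score per candidate action
    return F.softmax(logits, dim=-1)

A_emb = torch.randn(5, 2 * dim)               # rows are action embeddings α = (γ; ε)
pi = action_scores(A_emb, torch.randn(1, hdim), torch.randn(1, dim))
best = int(torch.argmax(pi))                  # index of the greedy action (r̂_t, ê_t)
```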
e) based on the search engine knowledge base, repeating steps c) to d) and formally defining the resulting inference search path as ((r̂_1, ê_1), (r̂_2, ê_2), …, (r̂_n, ê_n)), where (r̂_i, ê_i) is the action tuple corresponding to the maximum action score at time step i, i = 1, …, n, r̂_i is the relation corresponding to the maximum action score at the i-th time step, and ê_i is the entity corresponding to the maximum action score at the i-th time step;
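Steps c) to e) can be combined into a greedy rollout that, at every time step, scores the current action space and appends the best action until the maximum path length n is reached. The sketch below keeps the policy abstract behind a score_actions callback; the toy knowledge base and the uniform random scorer are placeholders for the trained network.

```python
import random

def rollout(e_s, kb, n, score_actions):
    """Greedy construction of the inference path ((r_0, e_s), (r̂_1, ê_1), ..., (r̂_n, ê_n))."""
    path = [("NO_OP", e_s)]                       # (r_0, e_s)
    for _ in range(n):
        actions = kb.get(path[-1][1], [])         # action space A_t at the current entity
        if not actions:
            break                                 # dead end: stop the search early
        scores = score_actions(path, actions)     # π_θ(a_t) from the policy network
        best = max(range(len(actions)), key=lambda i: scores[i])
        path.append(actions[best])                # append (r̂_t, ê_t)
    return path

kb = {"Q_Einstein": [("born_in", "Q_Ulm")], "Q_Ulm": [("country", "Q_Germany")]}
print(rollout("Q_Einstein", kb, n=3,
              score_actions=lambda p, a: [random.random() for _ in a]))
```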
f) based on the search path over the search engine knowledge base, setting the reward value R_t of the inferred answer corresponding to the current inference search path: if the inferred answer ê_t is equal to the answer e_o corresponding to the question, the reward value R_t = 1; if the inferred answer ê_t is not equal to the answer e_o corresponding to the question, calculating the reward value R_t by the formula
R_t = d(ε̂_t, ε_o)
where d is the cosine similarity function, ε̂_t is the vector representation of ê_t obtained by Embedding, and ε_o is the vector representation of e_o obtained by Embedding;
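The reward of step f) follows directly from the two cases; a sketch with cosine similarity standing in for d (the tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def reward(e_hat, e_o, eps_hat, eps_o):
    """R_t = 1 if ê_t equals e_o, otherwise d(ε̂_t, ε_o) with d = cosine similarity."""
    if e_hat == e_o:
        return torch.tensor(1.0)
    return F.cosine_similarity(eps_hat, eps_o, dim=-1)

# usage: reward("Q_Germany", "Q_France", torch.randn(100), torch.randn(100))
```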
g) calculating the maximum reward value R_u on the path for inferring the question answer over the whole search engine knowledge base by the formula
R_u = max_{1 ≤ t ≤ n} R_t;
h) defining the parameter gradient as
∇_θ J(θ) = R · ∇_θ Σ_t log π_θ(a_t)
where ∇_θ denotes taking the gradient with respect to the network model parameters θ and R = R_u; parameter optimization of the LSTM network and the feedforward neural network is realized through back-propagation of this gradient;
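Steps g) and h) together amount to a REINFORCE-style policy-gradient update: the maximum reward along the path scales the gradient of the accumulated log action probabilities. A minimal sketch, assuming the rollout also recorded the log-probabilities of the chosen actions and that optimizer holds the LSTM and feedforward parameters θ:

```python
import torch

def policy_gradient_step(log_probs, rewards, optimizer):
    """R = R_u = max_t R_t;  loss = -R · Σ_t log π_θ(a_t);  backpropagate to update θ."""
    R = torch.stack(rewards).max().detach()      # R_u from step g), treated as a constant
    loss = -R * torch.stack(log_probs).sum()     # REINFORCE surrogate objective, step h)
    optimizer.zero_grad()
    loss.backward()                              # back-propagation of the gradient
    optimizer.step()
    return loss.item()
```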
i) repeating steps a) to h) over the whole data set of questions and corresponding answers based on the search engine knowledge base to complete model training, obtaining a multi-hop inference model capable of predicting and inferring answers to questions;
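Step i) is then a plain outer loop over the question-answer data set; the epoch count, the (e_s, r_q, e_o) triple format and the callback names below are assumptions used only to show the control flow.

```python
def train(dataset, kb, n, epochs, rollout_fn, reward_fn, update_fn):
    """Repeat steps a)-h) over the whole data set to train the multi-hop inference model."""
    for _ in range(epochs):
        for e_s, r_q, e_o in dataset:                       # one question-answer pair
            path, log_probs = rollout_fn(e_s, r_q, kb, n)   # steps b)-e)
            rewards = [reward_fn(e_hat, e_o) for _, e_hat in path[1:]]  # step f)
            if log_probs and rewards:
                update_fn(log_probs, rewards)               # steps g)-h)
```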
j) inputting a question into the multi-hop inference model; if the question has a definite answer, obtaining the inferred question answer ê_t through steps a) to f) and judging whether the inferred question answer ê_t is equal to the true answer; if the question has no true answer to compare against, setting a judgment condition on the question answer of the search inference path as a threshold λ on the score computed from the vector representation of the search inference path, where λ is a hyperparameter; if the judgment condition is satisfied, taking the predicted answer entity e_t corresponding to the predicted answer entity vector ε_t as the answer to the search question, and exiting the inference process.
Further, the natural language processing tool in step a) is a HanLP natural language processing tool or a deep natural language processing tool.
Preferably, λ in step j) is 0.5.
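At inference time (step j)), the trained model is rolled out once more; when a gold answer is available the predicted entity is compared to it directly, otherwise a confidence threshold λ (0.5 in the preferred embodiment) on a score derived from the path representation decides whether to accept ê_t and stop. Because the exact judgment condition is given only as a formula image in the source, the score_fn below is an illustrative stand-in.

```python
def answer_question(e_s, r_q, kb, n, rollout_fn, score_fn, e_gold=None, lam=0.5):
    """Step j): run the multi-hop inference model and decide whether to return ê_t."""
    path, _ = rollout_fn(e_s, r_q, kb, n)
    e_hat = path[-1][1]                    # predicted answer entity ê_t
    if e_gold is not None:
        return e_hat, e_hat == e_gold      # definite answer: compare with the true answer
    if score_fn(path) >= lam:              # judgment condition on the path representation
        return e_hat, True                 # accept the prediction and exit inference
    return None, False                     # condition not met: no answer is returned
```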
The invention has the beneficial effects that: based on a search engine knowledge base, an inference path with the symbolic Markov property is constructed through dynamic search-path inference and deep reinforcement learning; path information encoding and computation of the action-space probability distribution are realized with an LSTM and a feedforward neural network; and a judgment condition is set according to the vector representation of the search inference path. Based on the judgment condition and the inference path used when searching for question answers, knowledge-base-based intelligent question answering for the search engine is realized. The method requires neither predefined rules nor a limit on the inference path length, can be applied to complex question-answer inference over a search engine knowledge base, and achieves efficient and accurate intelligent question answering for search engines.
Drawings
FIG. 1 is a flow diagram of a multi-hop inference model of the present invention.
Detailed Description
The invention is further described below with reference to fig. 1.
An intelligent question-answering method oriented to a search engine knowledge base comprises the following steps:
a) Set the question searched in the search engine and its corresponding answer. Obtain the question entity e_s and the query relation r_q of the question with a natural language processing tool, and denote the answer corresponding to the question as e_o. Map the question entity e_s, the query relation r_q and the answer e_o into a dense low-dimensional vector space using the Embedding technique of natural language processing, obtaining the vector representation ε_s of the question entity, the vector representation γ_q of the query relation and the vector representation ε_o of the answer entity;
b) Using the entity e_s and the relation r_q extracted from the search question, define the path for inferring the question answer over the search engine knowledge base as ((r_0, e_s), (r_1, e_1), …, (r_n, e_n)), where e_i, i = 1, …, n denotes the i-th entity in the path, n is the maximum inference path length, r_i, i = 1, …, n denotes the i-th relation in the inference path, and r_0 is an introduced redundancy relation that together with the question entity e_s forms an action. A tuple (e, r) formed by an entity e and a relation r traversed during the search of the search engine knowledge base is defined as an action a, and the set of all actions is defined as the action space A;
c) Map the action a on the search path in the search engine knowledge base into a dense low-dimensional vector space through the Embedding technique of natural language processing, obtaining the vector representation α = (γ; ε) of the action a, where ";" is the vector concatenation operation, γ is the vector representation of the relation r and ε is the vector representation of the entity e. Input the vector α_t of the action at time step t into a long short-term memory network LSTM and obtain the vector representation h_t of the historical memory information at time step t by the formula h_t = LSTM(h_{t-1}, α_t), where h_{t-1} is the vector representation of the historical memory information at time step t-1;
d) Compute the action score π_θ(a_t) over the action space A_t at time step t by the formula
π_θ(a_t) = softmax( Â_t ⊗ (W_2 ReLU(W_1 [h_t; ε_t])) )
where Â_t is the vector representation of the action space A_t obtained through the Embedding mapping of natural language processing, W_1 and W_2 are the weights of the network model, ReLU(·) is the ReLU function, softmax(·) is the softmax function, ⊗ is the matrix product, and ε_t is the vector representation corresponding to the t-th step of the path for inferring the question answer over the search engine knowledge base. Select the action corresponding to the maximum action score and record it as (r̂_t, ê_t), where r̂_t is the relation corresponding to the action a with the maximum action score and ê_t is the entity corresponding to the action a with the maximum action score;
e) Based on the search engine knowledge base, repeat steps c) to d) and formally define the resulting inference search path as ((r̂_1, ê_1), (r̂_2, ê_2), …, (r̂_n, ê_n)), where (r̂_i, ê_i) is the action tuple corresponding to the maximum action score at time step i, i = 1, …, n, r̂_i is the relation corresponding to the maximum action score at the i-th time step, and ê_i is the entity corresponding to the maximum action score at the i-th time step;
f) Based on the search path over the search engine knowledge base, set the reward value R_t of the inferred answer corresponding to the current inference search path: if the inferred answer ê_t is equal to the answer e_o corresponding to the question, the reward value R_t = 1; if the inferred answer ê_t is not equal to the answer e_o corresponding to the question, calculate the reward value R_t by the formula R_t = d(ε̂_t, ε_o), where d is the cosine similarity function, ε̂_t is the vector representation of ê_t obtained by Embedding, and ε_o is the vector representation of e_o obtained by Embedding;
g) Calculate the maximum reward value R_u on the path for inferring the question answer over the whole search engine knowledge base by the formula R_u = max_{1 ≤ t ≤ n} R_t, where R = R_u and u is the time step at which the maximum reward value is attained;
h) Define the parameter gradient as ∇_θ J(θ) = R · ∇_θ Σ_t log π_θ(a_t), where ∇_θ denotes taking the gradient with respect to the network model parameters θ and R = R_u; parameter optimization of the LSTM network and the feedforward neural network is realized through back-propagation of this gradient;
i) Repeat steps a) to h) over the whole data set of questions and corresponding answers based on the search engine knowledge base to complete model training, obtaining a multi-hop inference model capable of predicting and inferring answers to questions;
j) Input a question into the multi-hop inference model. If the question has a definite answer, obtain the inferred question answer ê_t through steps a) to f) and judge whether the inferred question answer ê_t is equal to the true answer. If the question has no true answer to compare against, set a judgment condition on the question answer of the search inference path as a threshold λ on the score computed from the vector representation of the search inference path, where λ is a hyperparameter; if the judgment condition is satisfied, the predicted answer entity e_t corresponding to the predicted answer entity vector ε_t is taken as the answer to the search question and the inference process is exited.
Based on a search engine knowledge base, an inference path with the symbolic Markov property is constructed through dynamic search-path inference and deep reinforcement learning; path information encoding and computation of the action-space probability distribution are realized with an LSTM and a feedforward neural network; and a judgment condition is set according to the vector representation of the search inference path. Based on the judgment condition and the inference path used when searching for question answers, knowledge-base-based intelligent question answering for the search engine is realized. The method requires neither predefined rules nor a limit on the inference path length, can be applied to complex question-answer inference over a search engine knowledge base, and achieves efficient and accurate intelligent question answering for search engines.
Further, the natural language processing tool in step a) is a HanLP natural language processing tool or a deep natural language processing tool.
Preferably, λ in step j) is 0.5.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. An intelligent question-answering method oriented to a search engine knowledge base is characterized by comprising the following steps:
a) setting the question searched in the search engine and its corresponding answer, obtaining the question entity e_s and the query relation r_q of the question with a natural language processing tool, denoting the answer corresponding to the question as e_o, and mapping the question entity e_s, the query relation r_q and the answer e_o into a dense low-dimensional vector space using the Embedding technique of natural language processing to obtain the vector representation ε_s of the question entity, the vector representation γ_q of the query relation and the vector representation ε_o of the answer entity;
b) using the entity e_s and the relation r_q extracted from the search question, defining the path for inferring the question answer over the search engine knowledge base as ((r_0, e_s), (r_1, e_1), …, (r_n, e_n)), where e_i, i = 1, …, n denotes the i-th entity in the path, n is the maximum inference path length, r_i, i = 1, …, n denotes the i-th relation in the inference path, and r_0 is an introduced redundancy relation that together with the question entity e_s forms an action; defining a tuple (e, r) formed by an entity e and a relation r traversed during the search of the search engine knowledge base as an action a, and defining the set of all actions as the action space A;
c) mapping the action a on the search path in the search engine knowledge base into a dense low-dimensional vector space through the Embedding technique of natural language processing to obtain the vector representation α = (γ; ε) of the action a, where ";" is the vector concatenation operation, γ is the vector representation of the relation r and ε is the vector representation of the entity e; inputting the vector α_t of the action at time step t into a long short-term memory network LSTM and obtaining the vector representation h_t of the historical memory information at time step t by the formula h_t = LSTM(h_{t-1}, α_t), where h_{t-1} is the vector representation of the historical memory information at time step t-1;
d) computing the action score π_θ(a_t) over the action space A_t at time step t by the formula π_θ(a_t) = softmax( Â_t ⊗ (W_2 ReLU(W_1 [h_t; ε_t])) ), where Â_t is the vector representation of the action space A_t obtained through the Embedding mapping of natural language processing, W_1 and W_2 are the weights of the network model, ReLU(·) is the ReLU function, softmax(·) is the softmax function, ⊗ is the matrix product, and ε_t is the vector representation corresponding to the t-th step of the path for inferring the question answer over the search engine knowledge base; selecting the action corresponding to the maximum action score and recording it as (r̂_t, ê_t), where r̂_t is the relation corresponding to the action a with the maximum action score and ê_t is the entity corresponding to the action a with the maximum action score;
e) based on the search engine knowledge base, repeating steps c) to d) and formally defining the resulting inference search path as ((r̂_1, ê_1), (r̂_2, ê_2), …, (r̂_n, ê_n)), where (r̂_i, ê_i) is the action tuple corresponding to the maximum action score at time step i, i = 1, …, n, r̂_i is the relation corresponding to the maximum action score at the i-th time step, and ê_i is the entity corresponding to the maximum action score at the i-th time step;
f) based on the search path over the search engine knowledge base, setting the reward value R_t of the inferred answer corresponding to the current inference search path: if the inferred answer ê_t is equal to the answer e_o corresponding to the question, the reward value R_t = 1; if the inferred answer ê_t is not equal to the answer e_o corresponding to the question, calculating the reward value R_t by the formula R_t = d(ε̂_t, ε_o), where d is the cosine similarity function, ε̂_t is the vector representation of ê_t obtained by Embedding, and ε_o is the vector representation of e_o obtained by Embedding;
g) calculating the maximum reward value R_u on the path for inferring the question answer over the whole search engine knowledge base by the formula R_u = max_{1 ≤ t ≤ n} R_t;
h) defining the parameter gradient as ∇_θ J(θ) = R · ∇_θ Σ_t log π_θ(a_t), where ∇_θ denotes taking the gradient with respect to the network model parameters θ and R = R_u, and realizing parameter optimization of the LSTM network and the feedforward neural network through back-propagation of this gradient;
i) repeating steps a) to h) over the whole data set of questions and corresponding answers based on the search engine knowledge base to complete model training, obtaining a multi-hop inference model capable of predicting and inferring answers to questions;
j) inputting a question into the multi-hop inference model; if the question has a definite answer, obtaining the inferred question answer ê_t through steps a) to f) and judging whether the inferred question answer ê_t is equal to the true answer; if the question has no true answer to compare against, setting a judgment condition on the question answer of the search inference path as a threshold λ on the score computed from the vector representation of the search inference path, where λ is a hyperparameter; if the judgment condition is satisfied, taking the predicted answer entity e_t corresponding to the predicted answer entity vector ε_t as the answer to the search question, and exiting the inference process.
2. The intelligent question-answering method oriented to the search engine knowledge base according to claim 1, characterized in that: the natural language processing tool in the step a) is a HanLP natural language processing tool or a deep natural language processing tool.
3. The intelligent question-answering method oriented to the search engine knowledge base according to claim 1, characterized in that: λ = 0.5 in step j).
CN202110972592.3A 2021-08-24 2021-08-24 Intelligent question and answer method oriented to search engine knowledge base Active CN113688217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110972592.3A CN113688217B (en) 2021-08-24 2021-08-24 Intelligent question and answer method oriented to search engine knowledge base

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110972592.3A CN113688217B (en) 2021-08-24 2021-08-24 Intelligent question and answer method oriented to search engine knowledge base

Publications (2)

Publication Number Publication Date
CN113688217A CN113688217A (en) 2021-11-23
CN113688217B (en) 2022-04-22

Family

ID=78581700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110972592.3A Active CN113688217B (en) 2021-08-24 2021-08-24 Intelligent question and answer method oriented to search engine knowledge base

Country Status (1)

Country Link
CN (1) CN113688217B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102039397B1 (en) * 2018-01-30 2019-11-01 연세대학교 산학협력단 Visual Question Answering Apparatus for Explaining Reasoning Process and Method Thereof
US11631009B2 (en) * 2018-05-23 2023-04-18 Salesforce.Com, Inc Multi-hop knowledge graph reasoning with reward shaping
US20200117742A1 (en) * 2018-10-15 2020-04-16 Microsoft Technology Licensing, Llc Dynamically suppressing query answers in search
US11068942B2 (en) * 2018-10-19 2021-07-20 Cerebri AI Inc. Customer journey management engine
CN110232113B (en) * 2019-04-12 2021-03-26 中国科学院计算技术研究所 Method and system for improving question and answer accuracy of knowledge base
CN111581343B (en) * 2020-04-24 2022-08-30 北京航空航天大学 Reinforced learning knowledge graph reasoning method and device based on graph convolution neural network
CN111506722B (en) * 2020-06-16 2024-03-08 平安科技(深圳)有限公司 Knowledge graph question-answering method, device and equipment based on deep learning technology
CN112116069A (en) * 2020-09-03 2020-12-22 山东省人工智能研究院 Attention-LSTM-based reinforcement learning Agent knowledge inference method
CN112818137B (en) * 2021-04-19 2022-04-08 中国科学院自动化研究所 Entity alignment-based multi-source heterogeneous knowledge graph collaborative reasoning method and device

Also Published As

Publication number Publication date
CN113688217A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110647619B (en) General knowledge question-answering method based on question generation and convolutional neural network
CN109800437A (en) A kind of name entity recognition method based on Fusion Features
CN111368058B (en) Question-answer matching method based on transfer learning
CN112818676A (en) Medical entity relationship joint extraction method
CN112306494A (en) Code classification and clustering method based on convolution and cyclic neural network
CN112463944B (en) Search type intelligent question-answering method and device based on multi-model fusion
CN111274790A (en) Chapter-level event embedding method and device based on syntactic dependency graph
CN116303971A (en) Few-sample form question-answering method oriented to bridge management and maintenance field
CN115858750A (en) Power grid technical standard intelligent question-answering method and system based on natural language processing
CN111145914A (en) Method and device for determining lung cancer clinical disease library text entity
CN111581365B (en) Predicate extraction method
Hakimov et al. Evaluating architectural choices for deep learning approaches for question answering over knowledge bases
CN112035629B (en) Method for implementing question-answer model based on symbolized knowledge and neural network
CN113590779A (en) Intelligent question-answering system construction method for knowledge graph in air traffic control field
Chen et al. Question answering over knowledgebase with attention-based LSTM networks and knowledge embeddings
CN116167353A (en) Text semantic similarity measurement method based on twin long-term memory network
CN113688217B (en) Intelligent question and answer method oriented to search engine knowledge base
CN116226404A (en) Knowledge graph construction method and knowledge graph system for intestinal-brain axis
CN114841148A (en) Text recognition model training method, model training device and electronic equipment
CN114781375A (en) Military equipment relation extraction method based on BERT and attention mechanism
Xiao et al. Deep knowledge tracking based on exercise semantic information
CN110909547A (en) Judicial entity identification method based on improved deep learning
CN111767388B (en) Candidate pool generation method
CN115982338B (en) Domain knowledge graph question-answering method and system based on query path sorting
Riyanto et al. Plant-Disease Relation Model through BERT-BiLSTM-CRF Approach

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant