CN113837386B - Retrieval method and device based on multi-hop inference - Google Patents

Retrieval method and device based on multi-hop inference

Info

Publication number
CN113837386B
Authority
CN
China
Prior art keywords
inference
inference path
path
current
paragraph
Prior art date
Legal status
Active
Application number
CN202111150192.0A
Other languages
Chinese (zh)
Other versions
CN113837386A (en)
Inventor
赵天成
Current Assignee
Hangzhou Linker Technology Co ltd
Honglong Technology Hangzhou Co ltd
Original Assignee
Hangzhou Linker Technology Co ltd
Honglong Technology Hangzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Linker Technology Co ltd, Honglong Technology Hangzhou Co ltd filed Critical Hangzhou Linker Technology Co ltd
Publication of CN113837386A publication Critical patent/CN113837386A/en
Application granted granted Critical
Publication of CN113837386B publication Critical patent/CN113837386B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a retrieval method and device based on multi-hop inference, wherein the method comprises the following steps: S1, a question is given as the initial current inference path; S2, a retriever generates a search query from the current inference path; S3, the reader successively appends each paragraph not in the inference path to the current inference path to obtain a temporary inference path, and then searches the temporary inference path for an answer to the question; S4, if at least one answer is found, jump to S6, otherwise jump to S5; S5, the re-ranker scores the paragraphs, attaches the highest-ranked paragraph to the current inference path, and jumps to S2 if the total number of paragraphs in the current inference path has not reached a threshold; S6, the answer with the highest answerability score is predicted. With this scheme, deep reinforcement learning can automatically search for the optimal reasoning route over any text knowledge base without relying on manually labeled reasoning routes.

Description

Retrieval method and device based on multi-hop inference
Technical Field
The invention relates to the field of intelligent retrieval and query, in particular to a retrieval method and a retrieval device based on multi-hop reasoning.
Background
Open-domain question answering (QA) is an important means of exploiting the knowledge in large text corpora: it allows arbitrary queries without building a knowledge schema in advance. Enabling such systems to perform multi-step reasoning further expands our ability to explore the knowledge in these corpora.
Driven by recently proposed large-scale QA datasets, open-domain question answering has made great progress. One prior-art approach is to retrieve content relevant to the question and then read the paragraphs returned by the Information Retrieval (IR) component to arrive at the final answer. This "retrieve and read" approach has since been adopted and extended in various open-domain QA systems, but a premise of such systems is that they are limited to questions that do not require multi-hop/multi-step reasoning, because for many multi-hop questions not all of the relevant context can be obtained in a single retrieval step.
Disclosure of Invention
The invention mainly addresses the problem that answers cannot be obtained in a single retrieval step, and provides a retrieval method and a retrieval device based on multi-hop reasoning.
The invention mainly solves the above technical problem through the following technical scheme: a retrieval method based on multi-hop inference, comprising the following steps:
S1, a question is given, wherein the initial current inference path comprises only the question itself;
S2, a retriever generates a search query from the current inference path, the search query comprising new query terms;
S3, the reader appends a paragraph not in the inference path to the current inference path to obtain a temporary inference path, then searches the temporary inference path for an answer to the question, and repeats this process until all paragraphs not in the current inference path have been traversed;
S4, if at least one answer is found in step S3, jump to step S6; if no answer can be found, jump to step S5;
S5, the re-ranker scores each paragraph not in the current inference path according to the current inference path and attaches the highest-ranked paragraph to the current inference path to form a new current inference path; if the total number of paragraphs in the inference path reaches a threshold, the retrieval process stops; if not, the updated current inference path is provided to the retriever and the process jumps to step S2;
S6, the answer with the highest answerability score is predicted.
The retrieval method based on multi-hop inference further comprises a training method:
Assuming that the external environment of the system is a fixed full-text index, a reward r_t is obtained after each query retrieval, where t denotes the current round; the expected return value is expressed as:

$$\bar{R}(\theta) = \mathbb{E}\left[\sum_{t=1}^{T} \gamma^{t-1} r_t\right]$$

where γ ∈ [0,1] is the discount factor and at most T rounds are carried out per query; θ denotes the model parameters and E denotes the expectation operator.
A baseline is added to reduce variance:

$$R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - b$$

where b is a preset constant and R_t is the return after the baseline is applied.
The final loss function for reinforcement learning (RL) is:

$$\nabla_\theta \mathcal{L}_{RL} = -\sum_{t=1}^{K} R_t \, \nabla_\theta \sum_{j} \log p_\theta(x_j \mid k_t, p_t)$$

where p_θ is the probability, x_j is the j-th output character, k_t is the new paragraph added to the inference path at round t, p_t is the inference path, K is the longest inference path length allowed by the system, θ is the model parameter, and ∇ denotes differentiation.
Preferably, the retriever generating a binary output p(x_i = 1 | p) for each character at each step of generating the search query is the first action, and the re-ranker ranking the candidate paragraphs is the second action; the retrieval index and the reader judge, according to the question and the correct answer, whether the current inference path contains all the paragraphs to be found; each time a correct paragraph is successfully found, R = 5, and each additional round of querying incurs R = -1.
The scheme is configured with a supervised learning (SL) to reinforcement learning (RL) ratio, which controls the proportion of reinforcement learning and supervised learning to achieve the best results.
Preferably, the answerability score is the log-likelihood ratio between the most likely positive answer and the no-answer prediction; the paragraph score is obtained by multiplying the hidden representation of the model's first output character by a linear transformation.
A retrieval apparatus based on multi-hop inference, comprising:
a retriever, which generates a natural language search query by selecting words from the inference path;
a reader, which extracts answers from the inference path and abstains if its confidence is not high enough; and
a re-ranker, which assigns a scalar score to each retrieved paragraph as a potential continuation of the current inference path.
The substantial effect of the method is that, without relying on manually labeled reasoning routes, deep reinforcement learning can be used to automatically search for the optimal reasoning route over any text knowledge base.
Drawings
Fig. 1 is a flow chart of a multi-hop inference based retrieval method of the present invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Embodiment: The advent of multi-hop QA datasets has drawn academic interest to multi-hop QA. By design they are more challenging than SQuAD-like datasets: they require multiple context documents to answer a question, testing the QA system's ability to infer the answer in the presence of multiple pieces of evidence and to efficiently find that evidence among a large number of candidate documents. However, since these datasets are still relatively new, most existing research has focused on the few-document setting, where a relatively small number of context documents is given and these documents are guaranteed to contain the "correct" evidence documents. In real use, this assumption hardly ever holds.
This scheme proposes the MultiQ algorithm, which uses deep reinforcement learning to automatically search for the optimal reasoning route over any text knowledge base without relying on manually labeled reasoning routes. Our system is based on the basic architecture of iterative retrieval, but unlike past systems that rely on hand-written rules to obtain the question query for each hop, MultiQ uses such expert rules only as pre-training data. After a question generation model is initially obtained through supervised learning, MultiQ explores potential reasoning routes through a simulation model and updates the question generation model online with a Policy Gradient based deep reinforcement learning algorithm, thereby obtaining a better query generation model. In addition, a more stable training algorithm is provided: by iterating supervised learning and reinforcement learning, problems such as training collapse can be effectively avoided.
The MultiQ algorithm was tested on the HotpotQA dataset, and the experimental results show that MultiQ yields an iterative search engine whose performance far exceeds one trained on hand-written rules alone: it reaches 90% recall on the HotpotQA dataset, a 58% improvement in recall for the first-hop retrieved paragraphs.
Inspired by the series of TREC QA competitions, machine reading comprehension models have been combined with retrieval systems to achieve open-domain question answering. For example, one can build a simple inverted-index lookup using TF-IDF and retrieve the top 5 results with the question as the query, from which a reader model generates answers. Recent work on open-domain question answering has largely followed this retrieve-and-read approach, emphasizing information retrieval components tailored to question answering. However, these single-step retrieve-and-read methods are fundamentally inadequate for questions that require multi-hop reasoning, especially when the necessary evidence cannot be easily retrieved from the question alone.
QAngaroo and HotpotQA are the largest multi-hop QA datasets to date. The former is built around knowledge bases and their knowledge schemas, while the latter employs a free-form question generation process with crowd-sourcing and span-based evaluation. Both datasets provide manually annotated supporting facts and a small set of distractor documents to reduce the computational burden. However, researchers have shown that this setting can sometimes be gamed, so the model's multi-hop reasoning capability is not always tested. Therefore, in this work we focus on the full-wiki setting of HotpotQA, a truly open-domain setting in which evidence must be found among far more documents than the dozens of distractors.
On a broader level, the IR community has long recognized the need for multi-step retrieval, query task decomposition and sub-task extraction, but intensive research on multi-hop QA began only recently with the release of large-scale datasets. Many studies focus on developing multi-hop reasoning models in the few-document setting, for example by modeling entity graphs or scoring candidate answers against the context. However, these approaches suffer from scalability problems when the number of supporting documents and/or candidate answers grows beyond a few tens. Prior work has applied entity graph modeling to HotpotQA, expanding a small entity graph from the question to reach the context needed by the QA model; however, because it is centered on entity names, this model may miss purely descriptive clues in the question. One existing neural retriever, trained under distant supervision, is biased toward paragraphs that contain the answer to a given question, which are then used in a multi-step reader-reasoner framework. However, this does not fundamentally solve the discoverability problem of open-domain multi-hop retrieval, since not all evidence is usually directly retrievable from the question. Furthermore, the neural retrieval model lacks interpretability, which matters in practical applications. Alternatively, multi-hop questions can be answered at scale by decomposing the question into sub-questions and performing iterative retrieval and question answering, with motivation similar to ours. However, the questions studied in that work follow fixed logical patterns, which provides additional supervision for question decomposition but limits the variety of questions. A similar idea has been applied to HotpotQA, but that approach likewise requires manually annotated decompositions, and its authors did not apply it to iterative retrieval.
First, we formally define the problem to be solved and the relevant core variables. Given a query statement q and an initial inference path p, the goal of the system is to repeatedly generate the next query statement q' from q and p, obtain a set of related paragraphs D through the search engine, and then add one article from D into p, iterating back and forth until all relevant paragraphs are contained in p, at which point the retrieval succeeds.
In particular, given a question q, the initial inference path contains only the question itself, i.e. p_0 = [q]. We use a retriever to generate a search query from it and retrieve a set of related documents D_1, which may help answer the question or reveal clues about the next piece of evidence needed to answer q. The reader model then attempts to answer the question on the inference path with each document in D_1 attached in turn. If one or more answers can be found from these inference paths, we predict the answer with the highest answerability score. If no answer is found, the reranker scores each retrieved paragraph according to the current inference path and attaches the highest-ranked paragraph to the current inference path, i.e. $p_{i+1} = p_i + [\arg\max_{d \in D_{i+1}} \mathrm{reranker}(p_i, d)]$. The updated inference path is then provided to the retriever to generate a new search query. This iterative process repeats until an answer is predicted from one of the inference paths, or until the inference path totals K documents, where K is the maximum number of iterations allowed by the system. The whole process is shown in Fig. 1.
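For concreteness, the loop above can be sketched as follows. This is a minimal sketch only; the retriever, reader, reranker and search_engine interfaces are illustrative assumptions, not components defined in this disclosure.

def multi_hop_retrieve(question, retriever, reader, reranker, search_engine, max_hops=3):
    """Sketch of the iterative retrieval loop; all interfaces are assumed."""
    path = [question]                                  # p0 = [q]
    for _ in range(max_hops):                          # at most K documents in the path
        query = retriever.generate_query(path)         # generate query from current path
        docs = search_engine.search(query)             # retrieve candidate paragraphs D
        candidates = [d for d in docs if d not in path]
        if not candidates:
            break
        answered = []
        for d in candidates:                           # try each temporary path p + [d]
            answer, score = reader.read(path + [d])    # reader returns (None, score) to abstain
            if answer is not None:
                answered.append((score, answer))
        if answered:                                   # predict the most answerable candidate
            return max(answered)[1]
        # No answer found: extend the path with the highest-scoring paragraph
        path = path + [max(candidates, key=lambda d: reranker.score(path, d))]
    return None                                        # no answer within max_hops rounds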
To reduce computational cost, we build a multi-task model on top of a pre-trained Transformer model that performs all three subtasks. At a high level, it consists of a Transformer encoder that takes as input the question and all paragraphs retrieved so far (the inference path p) and feeds a set of task-specific parameters for each of the three subtasks: retrieval, reranking and reading. Specifically, the retriever generates a natural language search query by selecting words from the inference path; the reader extracts an answer from the inference path, abstaining if its confidence is not high enough; and the re-ranker (reranker) assigns a scalar score to each retrieved paragraph as a potential continuation of the current inference path.
Retriever: The purpose of the retriever is to generate a natural language query to retrieve relevant documents from an off-the-shelf text-based retrieval engine. This enables the system to perform open-domain QA in an interpretable and controllable manner: the user can easily understand the behavior of the model and intervene if necessary. We propose to extract the search query from the current inference path, i.e., the original question and all paragraphs retrieved so far. This approach stems from the observation that there is usually strong semantic overlap between the inference path and the next paragraph to be retrieved. Rather than strictly limiting the query string to be a substring of the inference path, we relax the constraint and allow the search query to be any subsequence of the inference path, permitting more flexible combinations of search phrases.
To predict these search queries from the inference path, we apply a token-level binary classifier on top of the shared Transformer encoder to determine whether each token is included in the final query. During training, we derive a supervisory signal and train these classifiers with a binary cross-entropy loss; at test time, we select a cutoff threshold above which a query term from the inference path is included. In practice, we find that pushing the model to predict more query terms helps increase the recall of target paragraphs in retrieval.
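A minimal sketch of such a token-level selector, assuming a Hugging Face-style encoder interface; the class, its wiring and the threshold value are illustrative assumptions rather than the model actually used here.

import torch
import torch.nn as nn

class QueryTermSelector(nn.Module):
    """Token-level binary classifier over a shared Transformer encoder (sketch)."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                  # shared Transformer encoder (assumed HF-style)
        self.head = nn.Linear(hidden_size, 1)   # per-token inclusion logit

    def forward(self, input_ids, attention_mask):
        # Trained with nn.BCEWithLogitsLoss() against gold query-term labels.
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.head(hidden).squeeze(-1)    # [batch, seq_len] logits

def select_query_tokens(tokens, logits, threshold=0.3):
    # A lower threshold keeps more query terms, which the text reports
    # improves recall of the target paragraphs.
    probs = torch.sigmoid(logits)
    return [t for t, p in zip(tokens, probs.tolist()) if p > threshold]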
In the subsequent experiments, we use the full-text search engine in AIbase to implement retrieval over Wikipedia data.
Reader: The reader attempts to find an answer given an inference path consisting of the question and the retrieved paragraphs, and simultaneously assigns an answerability score to the inference path to assess the likelihood of finding the answer to the original question. Since not all inference paths can yield a final answer, we train a classifier on the Transformer encoder representation of the first output character (the [CLS] token) to determine whether an answer should be predicted given an inference path. A span answer is predicted from the context using a span-start classifier and a span-end classifier. To support the special non-extractive answers in HotpotQA (e.g., "Yes"/"No"), we further include these as options in a 4-way classifier.
The answerability score is used to select the best answer among all candidate inference paths and also acts as the stopping criterion of the system. We define answerability as the log-likelihood ratio between the most likely positive ANSWER and the NO ANSWER prediction. For NO ANSWER examples, we additionally train the span classifiers to predict the [CLS] token as the "span", so if the positive answer is a span, we also include the likelihood ratio between the best span and the [CLS] span. Such a likelihood-ratio formulation is less affected by sequence length than a predicted probability, so it is easier to set a single global threshold across inference paths of different lengths to stop further retrieval.
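Under these definitions, the answerability score might be computed as in the following sketch; the layout of the classifier outputs is an assumption for illustration.

def answerability_score(class_log_probs, best_span_log_prob=None, cls_span_log_prob=None):
    """Log-likelihood ratio between the best positive answer and NO ANSWER (sketch).

    class_log_probs: log-probabilities of the 4-way head, assumed as a dict,
        e.g. {"span": ..., "yes": ..., "no": ..., "no_answer": ...}
    best_span_log_prob / cls_span_log_prob: log-probabilities of the best span
        and of the [CLS] "span", used only when the positive answer is a span.
    """
    positive = max(class_log_probs["span"], class_log_probs["yes"], class_log_probs["no"])
    score = positive - class_log_probs["no_answer"]      # log-likelihood ratio
    if best_span_log_prob is not None and cls_span_log_prob is not None:
        # If the positive answer is a span, also include the ratio between
        # the best span and the [CLS] span.
        score += best_span_log_prob - cls_span_log_prob
    return score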
Re-ranker (reranker): When the reader cannot find an answer from the inference path, the re-ranker selects one of the retrieved paragraphs to extend it, so that the retriever can generate a new search query to fetch new context for answering the question. To achieve this, we assign a score to each potentially extended inference path by multiplying the hidden representation of the [CLS] token with a linear transformation, and then select the extension with the highest score. During training, we normalize the reranker scores of all retrieved paragraphs and maximize the log-likelihood of selecting the gold supporting paragraph among them. To prevent the computational cost of the Transformer encoder from scaling linearly with the number of retrieved paragraphs, we use Negative Sampling to estimate this probability distribution: specifically, we obtain negative samples by randomly sampling from the whole corpus and from similar articles, and then train the model with the in-batch negative samples and a cross-entropy loss.
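A sketch of the reranker head and its negative-sampling objective; the encoder interface and the candidate layout are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Reranker(nn.Module):
    """Scores a paragraph as a continuation of the inference path (sketch)."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                    # shared Transformer encoder (assumed HF-style)
        self.scorer = nn.Linear(hidden_size, 1)   # linear transform of the [CLS] state

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.scorer(cls).squeeze(-1)       # one scalar per (path, paragraph) pair

def reranker_loss(scores, gold_index):
    # scores: [num_candidates] logits for the gold paragraph plus sampled
    # negatives (random corpus paragraphs and similar articles); cross entropy
    # maximizes the normalized likelihood of selecting the gold paragraph.
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([gold_index]))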
Given the above setup, the biggest problem is how to train the question generation module in the retriever and the re-ranker. Because the true inference path is unknown, we must somehow guess or derive the best inference path.
Generating expert demonstrations through rules: labeled data for supervised training is generated by means of expert rules.
Specifically, the "best" inference path can be obtained as follows (a code sketch follows the list):
1. Find all words in p that also appear in the final correct paragraph.
2. Where such words appear contiguously in q, merge them to form N candidate spans s_i.
3. Compute the importance of each span with the following formula:
$$I(s_i) = \mathrm{Rank}(t, \{s_{j \neq i}\}) - \mathrm{Rank}(t, \{s_j\})$$
That is, a span that can single-handedly pull up the rank of the correct paragraph is more important.
4. Obtain the top M high-probability query statements q' according to the importance of each span.
5. Retrieve with each candidate q' one by one, keeping the highest-ranked data point.
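A sketch of the span-importance computation in step 3; the rank_of helper, which queries the search engine with a set of spans and returns the rank of the target paragraph, is an assumed interface.

def span_importance(spans, target, rank_of):
    """I(s_i) = Rank(t, {s_j, j != i}) - Rank(t, {s_j})  (sketch of step 3).

    rank_of(target, spans) is assumed to issue a query built from the spans
    and return the rank of the target paragraph (smaller is better).
    """
    full_rank = rank_of(target, spans)           # rank with every span included
    importance = {}
    for i, s in enumerate(spans):
        without_i = spans[:i] + spans[i + 1:]
        # A span whose removal pushes the correct paragraph down the ranking
        # is one that single-handedly pulls the paragraph up, hence important.
        importance[s] = rank_of(target, without_i) - full_rank
    return importance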
Consequently, such supervision depends only on the final retrieval outcome: the system can only see the return on a query after a long-term investment of retrieval rounds, which motivates the reinforcement learning approach below.
REINFORCE: the Reinforcement Learning (RL) model is based on a Markov Decision Process (MDP) or a hidden markov decision process (POMDP). MDP is a tuple (S, a, P, γ, R) whose state is a set of states; a is a set of actions; p defines the transition probability P (s 0| s, a); r defines the desired instant prize R (s, a); γ ∈ [0, 1) is the discount factor. The goal of reinforcement learning is to find the best strategy pi to maximize the expected cumulative return. We assume that the external environment of the system is a fixed full-text index, and after each query retrieval, our agent can obtain a reward rewarder of this t round t . I can then express the expected return value as
$$\bar{R}(\theta) = \mathbb{E}\left[\sum_{t=1}^{T} \gamma^{t-1} r_t\right]$$

where γ ∈ [0,1] is the discount factor and each query runs for at most T rounds. Likewise, to reduce the variance, a baseline is typically added:

$$R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - b$$
based on the above settings, the core elements of the reinforcement learning system are summarized as follows:
action A: ranker generates a query in each step with a binary output p (x) for each token i =1 dark p) is the first action. Second, ranking the N candidate articles is the second action.
E, environment: and judging whether the current p contains all paragraphs to be found according to the p or the correct answer by the retrieval index and the reader.
R reward: each time a correct paragraph R =5 is successfully found, one round of query R = -1 is added each time. In this way, i encourage the system to find a shorter, i.e. more efficient, inference path.
The final loss function for RL is:

$$\nabla_\theta \mathcal{L}_{RL} = -\sum_{t=1}^{K} R_t \, \nabla_\theta \sum_{j} \log p_\theta(x_j \mid k_t, p_t)$$
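A minimal sketch of this policy-gradient update, following the reconstructed loss above; the data layout and the default values of gamma and the baseline b are assumptions.

def reinforce_loss(token_log_probs, rewards, gamma=0.9, baseline=1.0):
    """REINFORCE loss sketch for the query-generation/reranking policy.

    token_log_probs: one tensor per round t, holding log p_theta(x_j) for the
        action tokens emitted in that round.
    rewards: scalar rewards r_t (+5 per correct paragraph found, -1 per round).
    """
    T = len(rewards)
    loss = 0.0
    for t in range(T):
        # Discounted return-to-go with a constant baseline b subtracted.
        R_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, T)) - baseline
        loss = loss - R_t * token_log_probs[t].sum()
    return loss / max(T, 1)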
during training, we configure SL: the RL ratio controls the proportion of reinforcement learning and supervised learning, and aims to prevent the model from eventually becoming uncontrollable during the training process.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope defined in the appended claims.
Although terms such as retriever, reader and inference path are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the essence of the invention more conveniently; construing them as imposing any additional limitation would be contrary to the spirit of the invention.

Claims (5)

1. A retrieval method based on multi-hop inference, characterized by comprising the following steps:
S1, a question is given, wherein the initial current inference path comprises only the question itself;
S2, a retriever generates a search query from the current inference path, the search query comprising new query terms;
S3, a reader appends a paragraph not in the inference path to the current inference path to obtain a temporary inference path, then searches the temporary inference path for an answer to the question, and repeats this process until all paragraphs not in the current inference path have been traversed;
S4, if at least one answer is found in step S3, jump to step S6; if no answer can be found, jump to step S5;
S5, a re-ranker scores each paragraph not in the current inference path according to the current inference path and attaches the highest-ranked paragraph to the current inference path to form a new current inference path; if the total number of paragraphs in the current inference path reaches a threshold, the retrieval process stops; if not, the updated current inference path is provided to the retriever and the process jumps to step S2;
S6, the answer with the highest answerability score is predicted;
the retrieval method based on multi-hop inference further comprises a training method:
assuming that the external environment of the system is a fixed full-text index, a reward r_t is obtained after each query retrieval, where t denotes the current round; the expected return value is expressed as:

$$\bar{R}(\theta) = \mathbb{E}\left[\sum_{t=1}^{T} \gamma^{t-1} r_t\right]$$

wherein γ ∈ [0,1] is the discount factor and T is the upper limit on the number of rounds per query;
a baseline is added:

$$R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - b$$

wherein b is a preset constant and R_t is the return after the baseline is applied;
the final loss function for reinforcement learning (RL) is:

$$\nabla_\theta \mathcal{L}_{RL} = -\sum_{t=1}^{K} R_t \, \nabla_\theta \sum_{j} \log p_\theta(x_j \mid k_t, p_t)$$

wherein p_θ is the probability, x_j is the j-th output character, k_t is the new paragraph added to the inference path, p_t is the inference path, K is the longest inference path length allowed by the system, θ is the model parameter, and ∇ denotes differentiation.
2. The multi-hop inference based retrieval method according to claim 1, characterized in that the retriever generating a binary output p(x_i = 1 | p) for each character at each step of generating the search query is the first action, and the re-ranker ranking the candidate paragraphs is the second action; the retrieval index and the reader judge, according to the question and the correct answer, whether the current inference path contains all the paragraphs to be found; each time a correct paragraph is successfully found, R = 5, and each additional round of querying incurs R = -1.
3. The multi-hop inference based retrieval method according to claim 2, characterized in that a supervised learning (SL) to reinforcement learning (RL) ratio is configured to control the proportion of reinforcement learning and supervised learning.
4. The multi-hop inference based retrieval method according to claim 1, characterized in that the answerability score is the log-likelihood ratio between the most likely positive answer and the no-answer prediction; the paragraph score is obtained by multiplying the hidden representation of the model's first output character by a linear transformation.
5. A retrieval apparatus based on multi-hop inference, characterized in that it performs the retrieval method based on multi-hop inference according to claim 1, and comprises:
a retriever, which generates a natural language search query by selecting words from the current inference path;
a reader, which extracts answers from the temporary inference paths and abstains if its confidence is not high enough; and
a re-ranker, which assigns a scalar score to each retrieved paragraph as a potential continuation of the current inference path.
CN202111150192.0A 2021-02-09 2021-09-29 Retrieval method and device based on multi-hop inference Active CN113837386B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110181938 2021-02-09
CN2021101819388 2021-02-09

Publications (2)

Publication Number Publication Date
CN113837386A CN113837386A (en) 2021-12-24
CN113837386B true CN113837386B (en) 2022-12-13

Family

ID=78967640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111150192.0A Active CN113837386B (en) 2021-02-09 2021-09-29 Retrieval method and device based on multi-hop inference

Country Status (1)

Country Link
CN (1) CN113837386B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611361A (en) * 2020-04-01 2020-09-01 西南电子技术研究所(中国电子科技集团公司第十研究所) Extractive machine reading comprehension intelligent question-answering system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140108321A1 (en) * 2012-10-12 2014-04-17 International Business Machines Corporation Text-based inference chaining
US11030997B2 (en) * 2017-11-22 2021-06-08 Baidu Usa Llc Slim embedding layers for recurrent neural language models
CN108415977B (en) * 2018-02-09 2022-02-15 华南理工大学 Deep neural network and reinforcement learning-based generative machine reading understanding method
CN111538819B (en) * 2020-03-27 2024-02-20 深圳乐读派科技有限公司 Method for constructing question-answering system based on document set multi-hop reasoning
CN112131370B (en) * 2020-11-23 2021-03-12 四川大学 Question-answer model construction method and system, question-answer method and device and trial system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611361A (en) * 2020-04-01 2020-09-01 西南电子技术研究所(中国电子科技集团公司第十研究所) Extractive machine reading comprehension intelligent question-answering system

Also Published As

Publication number Publication date
CN113837386A (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221025

Address after: 310000 Room 303, building 3, No. 399, Qiuyi Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Honglong Technology (Hangzhou) Co.,Ltd.

Applicant after: HANGZHOU LINKER TECHNOLOGY CO.,LTD.

Address before: 310000 room 31191, 3 / F, building 1, No. 88, Puyan Road, Puyan street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Honglong Technology (Hangzhou) Co.,Ltd.

GR01 Patent grant