CN111813907A - Question and sentence intention identification method in natural language question-answering technology - Google Patents

Question and sentence intention identification method in natural language question-answering technology

Info

Publication number
CN111813907A
CN111813907A (application CN202010557964.1A)
Authority
CN
China
Prior art keywords
question
input
paragraph
layer
model
Prior art date
Legal status
Pending
Application number
CN202010557964.1A
Other languages
Chinese (zh)
Inventor
李伟 (Li Wei)
卢心陶 (Lu Xintao)
郭佳月 (Guo Jiayue)
张奎明 (Zhang Kuiming)
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date: 2020-06-18
Filing date: 2020-06-18
Publication date: 2020-10-23
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010557964.1A
Publication of CN111813907A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation or dialogue systems
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/205 - Parsing
    • G06F40/211 - Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F40/253 - Grammatical analysis; Style critique
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Abstract

The invention provides a question and sentence intention identification method in natural language question-answering technology, comprising: an end-to-end executable revised-question generation model, consisting of an input layer, a coding layer, a matching layer and a decoding layer, which generates revised questions by combining sentence generation, machine reading comprehension and a copy mechanism; and a method of automatically generating training data by sentence compression of articles. The method improves the accuracy of machine-reading answers when the input question of an existing question-answering system is too short for an answer to be identified; and even when the input question is long enough, providing a candidate set of revised questions lets the questioner select from the candidates the question matching the intended meaning, so the question-answering system obtains more accurate answers.

Description

Question and sentence intention identification method in natural language question-answering technology
Technical Field
The invention discloses a question and sentence intention identification method in natural language question-answering technology, relating to the fields of natural language processing, question-answering systems and artificial intelligence.
Background
With the development of the mobile internet, voice devices such as smartphones, voice robots and smart speakers are becoming increasingly popular, and demand for question-answering technology that answers user questions with artificial intelligence keeps growing. In recent years, particular attention has been paid to question answering based on machine reading comprehension: a technique that enables a system to read and understand a natural language document or passage and to search for and extract answer information from that passage.
Although question answering based on machine reading comprehension can already achieve high answer precision, problems remain in practical use. In a real question-answering system, when a questioner inputs a question whose intent is ambiguous, or a brief question lacking information the system needs, it is difficult to achieve high answer accuracy. For example, in real interactions between machine services and questioners, a questioner unfamiliar with the relevant business knowledge may ask questions with ambiguous intent. Introducing into current machine-reading-based question-answering systems a mechanism that guesses, identifies and confirms the questioner's intent is therefore necessary to answer ambiguous questions accurately.
Disclosure of Invention
To overcome the shortcomings of the prior art in machine reading comprehension question answering, the invention aims to provide a question and sentence intention identification method in natural language question-answering technology.
To achieve this aim, the invention adopts the following technical scheme:
A question and sentence intention identification method in natural language question-answering technology, the method using an end-to-end executable revised-question generation model that combines a sentence generation model, a machine reading model and a copy mechanism, the method comprising the following steps:
Step 1: when an input question and a corresponding question paragraph are fed into the generation model, the model reads and extracts relevant information from the input question and paragraph contents using a machine reading model;
Step 2: the model generates a revised question set from the extracted information with a sentence generation model based on an attention mechanism;
Step 3: the copy mechanism then copies important words and expressions from the input question and the paragraph into the revised question set, so that the final revised question is generated according to the paragraph contents;
The revised-question generation model consists of the following four parts:
Input layer: represents the input question and question paragraph as one-hot encoded sequence vectors;
Coding layer: converts the paragraph token sequence and question of the input layer into continuous-valued vectors using a word embedding model, and creates vector sequences that take into account the context of the paragraph and the question;
Matching layer: captures the correlation between words in the paragraph and the input question, and models the vector sequence of the paragraph;
Decoding layer: generates the word token sequence that constitutes the revised question, using a gated recurrent unit recurrent neural network (GRU-RNN) with an attention mechanism and two copy mechanisms.
Further, in the decoding layer, the GRU-RNN is structured as follows:
Activation function: PReLU, a parametric rectified linear unit, is chosen as the activation function in the GRU-RNN; the slopes applied to negative values of the input vector are learned from the training data while the remaining values pass through unchanged, which increases discriminative contrast and improves the model's learning efficiency;
Neuron selection: the gated recurrent unit (GRU) is the neuron of the RNN; two gate structures in the hidden layer control the retention of information, screening important information from unimportant information;
Other parameters: the model is formed by a GRU layer with an attention mechanism and a softmax layer, and is trained with an adaptive gradient-descent method at a learning rate of 0.0007.
Further, a method of automatically generating training data by paragraph sentence compression is adopted: brief questions with ambiguous intent are generated by sentence compression from the question bank of a machine reading corpus, so that training data for the revised-question generation model are produced automatically.
The invention has the following beneficial effects: by introducing a mechanism that infers the questioner's intent into current machine-reading-based question-answering systems, the accuracy of machine-reading answers can be improved when the input question of an existing system is too short for an answer to be identified; at the same time, even when the input question is long enough, providing a candidate set of revised questions lets the questioner select from the candidates the question matching the intended meaning, so the question-answering system obtains more accurate answers.
Drawings
FIG. 1 is a schematic diagram of a machine reading question-answering system incorporating the revised-question generation model;
FIG. 2 is a schematic diagram of the revised-question generation model of the present invention;
FIG. 3 is a diagram of the highway network core architecture;
FIG. 4 is an unrolled view of a single-layer GRU-RNN structure;
FIG. 5 is a diagram of the basic structure of the GRU unit used in the present invention.
Detailed Description
It should be noted that the following description is exemplary and is intended to provide further explanation of the invention claimed. Unless defined otherwise, all scientific and technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It should also be noted that the terminology used herein is for describing particular embodiments only and is not intended to be limiting. The exemplary embodiments of the present application will be further described below with reference to the accompanying drawings and examples.
Referring to FIGS. 1 to 5, a question and sentence intention identification method in natural language question-answering technology employs an end-to-end executable revised-question generation model that combines a sentence generation model, a machine reading model and a copy mechanism. The task of the model is to take an input question and a corresponding paragraph as input and output revised questions with related intents; the questioner can then select, from several revised-question candidates, the revised question matching their own correct intent. The method comprises the following steps:
Step 1: as shown in FIG. 1, when an input question and a corresponding question paragraph are fed into the generation model, the model reads and extracts relevant answer information from the input question and paragraph contents using a machine reading model.
The model takes the word token sequence of the input question and the word token sequence of the passage as input, estimates from the passage the start and end positions where an answer may lie, and outputs the word token sequence of the estimated answer range extracted from the passage. The input question is first word-segmented and part-of-speech tagged through morphological and grammatical analysis and expressed as a one-hot vector sequence q = {q_1, q_2, …, q_J}, where each one-hot vector is a V-dimensional vector whose dimension at the index of the word in the dictionary is 1 and whose other dimensions are 0. The paragraph is processed similarly to the input question, except that the number of words in a paragraph is larger; it is expressed as the one-hot vector sequence x = {x_1, x_2, …, x_T}, and the paragraph should contain information relevant to the question to be answered.
Step 2: the model generates a revised question set from the extracted information with a sentence generation model based on an attention mechanism.
Step 3: the copy mechanism then copies important words and expressions from the input question and the paragraph into the revised question set, so that the final revised question is generated according to the paragraph contents. The final revised question is a sentence that makes the content of the input question concrete, expressed as y = {y_1, y_2, …, y_K}.
Further, as shown in FIG. 2, the structure of the revised-question generation model is specifically as follows:
Input layer: represents the input question and question paragraph as one-hot encoded sequence vectors;
Coding layer: converts the paragraph token sequence and question of the input layer into continuous-valued vectors using a word embedding model, and creates vector sequences that take into account the context of the paragraph and the question.
The x and q of the input layer, expressed as V-dimensional one-hot vectors, become the inputs of the coding layer. First, each one-hot vector is converted into a low-dimensional continuous-valued vector using a trained word-embedding (vector transformation) matrix. The converted vectors are then passed through two highway network layers to obtain the paragraph vector sequence and the question vector sequence, denoted e^x_1, …, e^x_T and e^q_1, …, e^q_J respectively.
As shown in FIG. 3, when a vector x is input, the highway network applies a weighted transformation and a ReLU activation as in a conventional neural network, then superimposes the input information on the transformed output through a learned gate, so the layer can carry part of its input through unchanged.
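The gated superposition just described matches a standard highway layer. Below is a minimal PyTorch sketch under that reading; the class name, dimensions and sigmoid carry gate are illustrative assumptions rather than details taken from the patent.

    import torch
    import torch.nn as nn

    class Highway(nn.Module):
        """One highway layer: a sigmoid gate mixes the ReLU-transformed
        input with the raw input, so information can pass through unchanged."""
        def __init__(self, dim: int):
            super().__init__()
            self.transform = nn.Linear(dim, dim)  # conventional weighted transformation
            self.gate = nn.Linear(dim, dim)       # carry gate (assumed sigmoid)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = torch.relu(self.transform(x))     # transform followed by ReLU
            t = torch.sigmoid(self.gate(x))       # how much of h to keep, in [0, 1]
            return t * h + (1.0 - t) * x          # superimpose input and output

Two such layers stacked back to back correspond to the two-layer networks used in the coding layer and to Highway2(·) in the decoding layer below.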
A GRU layer is then applied; the parametric rectified linear units of the GRU change dynamically with the paragraph and the input question during training. To keep the input vectors from having too little influence during training, a bidirectional GRU is used, yielding the paragraph context matrix H ∈ R^(2d×T) and the input-question context matrix U ∈ R^(2d×J), where d is the hidden layer dimension.
The RNN state of the coding layer also determines the initial state of the decoding-layer RNN, computed as:

s_0 = tanh(W_0 u_J + b_0)

This calculation focuses the decoder on the context matrix of the input question, where W_0 and b_0 are learned parameters and u_J denotes the final state of the input-question context matrix U.
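Putting the coding layer together, a minimal sketch reusing the Highway class above might look as follows; the vocabulary size, embedding width, shared encoder GRU and the tanh projection of the final question state into s_0 are assumptions where the patent's formula images are unrecoverable.

    class CodingLayer(nn.Module):
        """One-hot indices -> word embeddings -> two highway layers -> biGRU."""
        def __init__(self, vocab_size: int = 30000, emb_dim: int = 128, d: int = 100):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)   # trained word vectors
            self.highway = nn.Sequential(Highway(emb_dim), Highway(emb_dim))
            self.bigru = nn.GRU(emb_dim, d, bidirectional=True, batch_first=True)
            self.init_proj = nn.Linear(2 * d, d)             # projects u_J into s_0

        def forward(self, passage_ids, question_ids):
            ex = self.highway(self.embed(passage_ids))       # (B, T, emb_dim)
            eq = self.highway(self.embed(question_ids))      # (B, J, emb_dim)
            H, _ = self.bigru(ex)                            # paragraph context (B, T, 2d)
            U, _ = self.bigru(eq)                            # question context (B, J, 2d)
            s0 = torch.tanh(self.init_proj(U[:, -1]))        # from final question state
            return H, U, s0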
Matching layer: capturing the correlation between words in the paragraph and input question sentences, and modeling the vector sequence of the paragraph;
In the matching layer, the paragraph from the coding layer is matched against the input question to find the regions of the paragraph related to the question. A bidirectional attention flow (BiDAF) model is used to capture the relationship between the paragraph and the input question: the bidirectional attention flow computes attention values in the two directions from paragraph to question and from question to paragraph, and finally creates a question-aware context matrix for the paragraph.
In BiDAF, a similarity matrix S ∈ R^(T×J) is first computed from the paragraph context matrix H and the input-question context matrix U:

S_{tj} = ω_s^T [h_t ; u_j ; h_t ∘ u_j]

where ω_s is a learned parameter vector, [;] denotes row-vector concatenation, and ∘ denotes elementwise multiplication. Next, bidirectional attention values between paragraph and input question are computed in both directions from the similarity matrix. Paragraph-to-question attention computes, for each word in the paragraph, a weighted vector over the words of the input question; the attention vector ũ_t of the t-th paragraph word is calculated as:

a_t = softmax_j(S_t)
ũ_t = Σ_j a_{tj} u_j
In the question-to-paragraph direction, the paragraph words most closely related to the input question are weighted to form the vector h̃, which is tiled across the sequence length T of the paragraph to give the matrix H̃ ∈ R^(2d×T):

b = softmax_t(max_j S_{tj}),  h̃ = Σ_t b_t h_t

A bidirectional attention vector between each paragraph word and the input question is then computed:

G_t = [h_t ; ũ_t ; h_t ∘ ũ_t ; h_t ∘ h̃]

Finally, the bidirectional attention matrix G is input into the first bidirectional GRU layer to obtain the context matrix M.
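Taken together, the similarity computation, the two attention directions and the modeling GRU can be sketched as follows, under the standard BiDAF formulation the patent cites; layer sizes and the single modeling GRU are assumptions.

    import torch
    import torch.nn as nn

    class MatchingLayer(nn.Module):
        """BiDAF matching: similarity, paragraph-to-question and
        question-to-paragraph attention, then a bidirectional GRU -> M."""
        def __init__(self, two_d: int):
            super().__init__()
            self.omega_s = nn.Linear(3 * two_d, 1, bias=False)  # similarity weights
            self.model_gru = nn.GRU(4 * two_d, two_d // 2,
                                    bidirectional=True, batch_first=True)

        def forward(self, H, U):
            B, T, D = H.shape
            J = U.size(1)
            h = H.unsqueeze(2).expand(B, T, J, D)
            u = U.unsqueeze(1).expand(B, T, J, D)
            S = self.omega_s(torch.cat([h, u, h * u], dim=-1)).squeeze(-1)  # (B, T, J)
            a = torch.softmax(S, dim=2)                     # paragraph -> question
            U_tilde = torch.bmm(a, U)                       # (B, T, D)
            b = torch.softmax(S.max(dim=2).values, dim=1)   # question -> paragraph
            h_tilde = torch.bmm(b.unsqueeze(1), H).expand(B, T, D)
            G = torch.cat([H, U_tilde, H * U_tilde, H * h_tilde], dim=-1)
            M, _ = self.model_gru(G)                        # context matrix M
            return M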
A decoding layer: using a GRU-RNN gated cyclic unit recurrent neural network with an attention mechanism and two replication mechanisms, word marker sequences are generated that constitute the correction problem.
In this layer, the corrected problem is generated from the information in the coding layer and the matching layer. Eventually integrated into a network that combines RNN-based language generation models with replication mechanisms for paragraphs and input questions.
Further, the RNN language generation model is as follows:
The generation model consists of a GRU layer with an attention mechanism and a softmax layer. Given the revised word sequence generated so far, y = {y_1, …, y_s}, the probability distribution P_g over the next word of the revised question output by the generation model can be expressed as:

P_g(y_{s+1} | y_{≤s}, x, q) = softmax(W_g h_{s+1} + b_g)

where W_g and b_g denote learned parameters and V_g is the number of words the generation model can produce, with V > V_g. The generation model produces only high-frequency words, while low-frequency words are extracted by the copy mechanism; this reduces the size of the generation model and speeds up learning. h_{s+1} is the GRU hidden state, updated as h_{s+1} = GRU(h_s, ĝ_s) from the previous state h_s and the GRU input ĝ_s defined below.
The previous word y_s output by the generation model determines the input vector of the GRU. As in the coding layer, after one-hot encoding, the word embedding layer and the two highway network layers convert y_s into a continuous vector e_s. Then e_s and the previous GRU hidden state h_s are used to compute a vector d_s that serves in the attention calculations.
The attention value α_{st} over the paragraph and the attention value β_{sj} over the input question are then calculated:

α_{st} = softmax_t(d_s^T W_α m_t)
β_{sj} = softmax_j(d_s^T W_β u_j)

Finally, the input value ĝ_s of the GRU can be expressed as:

ĝ_s = f(W_ĝ [e_s ; Σ_t α_{st} m_t ; Σ_j β_{sj} u_j] + b_ĝ)

where W_α, W_β, W_ĝ and b_ĝ are learned parameters and f is the PReLU nonlinear activation function.
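As one concrete reading of this attention step, the following sketch uses a bilinear scoring form; that form and all layer names stand in for the unrecoverable formula images and are assumptions, not the patent's exact equations.

    import torch
    import torch.nn as nn

    class DecoderAttention(nn.Module):
        """Computes alpha over the paragraph matrix M and beta over the
        question matrix U at decode step s, then forms the GRU input g_hat."""
        def __init__(self, d_s_dim: int, ctx_dim: int, emb_dim: int):
            super().__init__()
            self.W_alpha = nn.Linear(d_s_dim, ctx_dim, bias=False)
            self.W_beta = nn.Linear(d_s_dim, ctx_dim, bias=False)
            self.W_ghat = nn.Linear(emb_dim + 2 * ctx_dim, d_s_dim)
            self.f = nn.PReLU()   # the PReLU activation named in the text

        def forward(self, d_s, e_s, M, U):
            alpha = torch.softmax((M @ self.W_alpha(d_s).unsqueeze(-1)).squeeze(-1), dim=1)
            beta = torch.softmax((U @ self.W_beta(d_s).unsqueeze(-1)).squeeze(-1), dim=1)
            ctx_p = (alpha.unsqueeze(1) @ M).squeeze(1)    # attended paragraph context
            ctx_q = (beta.unsqueeze(1) @ U).squeeze(1)     # attended question context
            g_hat = self.f(self.W_ghat(torch.cat([e_s, ctx_p, ctx_q], dim=-1)))
            return g_hat, alpha, beta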
Further, the copy mechanism is as follows:
The copy mechanism generates consistent sentences by copying parts of the input during sentence generation, in the context of sentences and dialogue. Here the goal is to generate the revised question from the contents of the input question or the paragraph, both of which serve as copy sources; the mechanism computes word generation probability distributions:

P_cp(y_{s+1} | y_{≤s}, x, q) = Σ_t f(y_{s+1} = x_t) α_{(s+1)t}
P_cq(y_{s+1} | y_{≤s}, x, q) = Σ_j f(y_{s+1} = q_j) β_{(s+1)j}

where f(y_{s+1} = x_t) is an indicator function taking the value 1 when y_{s+1} = x_t and 0 otherwise, and f(y_{s+1} = q_j) likewise.
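Operationally, each copy distribution scatters the current step's attention weights onto the vocabulary, summing over repeated source tokens. A minimal sketch, with the function name and shapes assumed:

    import torch

    def copy_distribution(attn: torch.Tensor, src_ids: torch.Tensor,
                          vocab_size: int) -> torch.Tensor:
        """attn: (B, S) attention over source positions at the current step;
        src_ids: (B, S) vocabulary ids of the source tokens.
        Returns (B, V): probability mass each word receives by being copied."""
        dist = torch.zeros(attn.size(0), vocab_size, device=attn.device)
        dist.scatter_add_(1, src_ids, attn)  # sum attn over positions holding each word
        return dist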
Further, the decoding layer combines the RNN-based language generation model with the copy-mechanism networks over the paragraph and the input question as follows:
The word probability distribution P of the final output word is determined as a weighted sum of the distributions computed by the generation model and the two copy mechanisms:

P(y_{s+1} | y_{≤s}, x, q) = λ_s P_g(y_{s+1} | y_{≤s}, x, q)
                          + μ_s P_cp(y_{s+1} | y_{≤s}, x, q)
                          + υ_s P_cq(y_{s+1} | y_{≤s}, x, q)

where λ_s, μ_s, υ_s ∈ [0, 1] are weight parameters with λ_s + μ_s + υ_s = 1. Their values are determined by the output γ_s of a softmax layer applied to a two-layer highway transform of the decoder state:

γ_s = softmax(W_c · Highway2(h_{s+1}) + b_c)
λ_s = γ_{s0}, μ_s = γ_{s1}, υ_s = γ_{s2}

where Highway2(·) denotes a two-layer highway network, and W_c and b_c are the weights and biases among the learned parameters. Then, following the probability distribution produced by the decoding layer, beam search is introduced into the generation process: the generation range is searched with beam width b, producing several revised-question candidates.
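A hedged sketch of the mixture step; padding P_g from the V_g high-frequency words up to the full vocabulary V, and the assumption that those words occupy ids 0..V_g-1, are illustrative choices:

    import torch

    def mix_distributions(P_g, P_cp, P_cq, gamma, vocab_size):
        """gamma: (B, 3) softmax output giving (lambda, mu, upsilon) per example.
        P_g covers only the V_g high-frequency words (assumed to occupy ids
        0..V_g-1), so it is zero-padded up to the full vocabulary before mixing."""
        pad = vocab_size - P_g.size(1)
        P_g_full = torch.nn.functional.pad(P_g, (0, pad))   # zeros for rare words
        lam, mu, ups = gamma[:, 0:1], gamma[:, 1:2], gamma[:, 2:3]
        return lam * P_g_full + mu * P_cp + ups * P_cq      # sums to 1 per row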
During training of the revised-question generation model, the error L is computed with a negative log-likelihood loss, and the generation model is continually optimized by updating parameters to minimize this error function, defined as:

L = -(1/N) Σ_i log P(y^(i) | x^(i), q^(i))

where N is the batch training size and i is the index of the i-th sample in each batch.
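In code, this loss can be sketched as follows; the padding id and the clamp for numerical safety are assumed details:

    import torch

    def nll_loss(step_probs, gold_ids, pad_id=0):
        """step_probs: (N, K, V) next-word distributions from the decoder;
        gold_ids: (N, K) reference revised-question tokens."""
        logp = torch.log(step_probs.clamp_min(1e-12))            # numerical safety
        picked = logp.gather(-1, gold_ids.unsqueeze(-1)).squeeze(-1)
        mask = (gold_ids != pad_id).float()                      # ignore padding
        return -(picked * mask).sum() / mask.sum().clamp_min(1.0)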
Further, in the decoding layer, the structure of the GRU-RNN gated recurrent unit recurrent neural network is as follows:
Activation function: PReLU, a parametric rectified linear unit, is chosen as the activation function in the GRU-RNN; the slopes applied to negative values of the input vector are learned from the training data while the remaining values pass through unchanged, which increases discriminative contrast and improves the model's learning efficiency.
Neuron selection: FIG. 4 shows an unrolled view of a single-layer GRU-RNN; as a model over the time-series length it can effectively capture the relevant information between contexts. The gated recurrent unit (GRU) is the neuron of the RNN, and two gate structures in the hidden layer control the retention of information, screening important information from unimportant information. As shown in FIG. 5, the GRU unit is structured as follows (a code sketch follows the equations below):
At step t the input is x_t and the input from the previous hidden layer is h_{t-1}, which carries the less important information in the preceding data. These two inputs pass through a reset gate r_t and an update gate z_t to form the candidate hidden state h̃_t and the unit output h_t, with W_z, W_r and W as weight matrices:

Update gate: z_t = σ(W_z · [h_{t-1}, x_t])
Reset gate: r_t = σ(W_r · [h_{t-1}, x_t])
Candidate hidden state: h̃_t = tanh(W · [r_t ∘ h_{t-1}, x_t])
Output: h_t = (1 - z_t) ∘ h_{t-1} + z_t ∘ h̃_t
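The sketch below implements exactly these four equations; only the layer dimensions and the absence of biases are assumed.

    import torch
    import torch.nn as nn

    class GRUUnit(nn.Module):
        """The GRU unit of FIG. 5, written in the concatenation form above."""
        def __init__(self, in_dim: int, hid: int):
            super().__init__()
            self.Wz = nn.Linear(in_dim + hid, hid, bias=False)  # update gate weights
            self.Wr = nn.Linear(in_dim + hid, hid, bias=False)  # reset gate weights
            self.W = nn.Linear(in_dim + hid, hid, bias=False)   # candidate weights

        def forward(self, x_t, h_prev):
            hx = torch.cat([h_prev, x_t], dim=-1)
            z = torch.sigmoid(self.Wz(hx))                      # update gate z_t
            r = torch.sigmoid(self.Wr(hx))                      # reset gate r_t
            h_cand = torch.tanh(self.W(torch.cat([r * h_prev, x_t], dim=-1)))
            return (1 - z) * h_prev + z * h_cand                # output h_t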
Other parameters: the model is formed by a GRU layer with an attention mechanism and a softmax layer, and is trained with an adaptive gradient-descent method at a learning rate of 0.0007.
Further, the method of paragraph sentence compression and automatic generation of training data is as follows:
The compression method is an unsupervised sentence compression method based on syntactic dependency structure and integer programming. First the syntactic dependency structure of the input sentence is obtained; index numbers are then assigned to the words in order from the beginning of the sentence, and an integer programming problem is formulated over those indices:

maximize Σ_{i=1..l} w_i a_i
subject to Σ_i a_i ≤ L, a_i ≤ a_{parent(i)}, a_i ∈ {0, 1}

When a_i = 1 the i-th word of the sentence is selected; when a_i = 0 the word is not selected and is deleted by the sentence compression. l is the length of the sentence before compression, L is the maximum number of words of the sentence after compression, and parent(i) is the word at the parent node of the i-th word in the dependency structure. w_i is the weight of a word, computed from F, the total frequency of all words in the training corpus, and f(a_i), the frequency of occurrence of the individual word, so that rarer words receive larger weights. The compressed length is bounded by the constraint Σ_i a_i ≤ L. Because the method aims at finding the best intent of the question, words the system considers more important are suppressed to some extent by an additional constraint that prevents the word with the largest weight from always being selected, ensuring that the generated intent is relatively accurate.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they are only preferred embodiments, and it should be understood that any modification, equivalent substitution or change made on the basis of the technical solutions of the invention without inventive effort falls within the scope of the claims of the present invention.

Claims (3)

1. A question and sentence intention identification method in natural language question-answering technology, characterized in that the method employs an end-to-end executable revised-question generation model combining a sentence generation model, a machine reading model and a copy mechanism, the method comprising the following steps:
Step 1: when an input question and a corresponding question paragraph are fed into the generation model, the model reads and extracts relevant information from the input question and paragraph contents using a machine reading model;
Step 2: the model generates a revised question set from the extracted information with a sentence generation model based on an attention mechanism;
Step 3: the copy mechanism then copies important words and expressions from the input question and the paragraph into the revised question set, so that the final revised question is generated according to the paragraph contents;
the revised-question generation model consists of the following four parts:
an input layer, which represents the input question and question paragraph as one-hot encoded sequence vectors;
a coding layer, which converts the paragraph token sequence and question of the input layer into continuous-valued vectors using a word embedding model, and creates vector sequences that take into account the context of the paragraph and the question;
a matching layer, which captures the correlation between words in the paragraph and the input question, and models the vector sequence of the paragraph;
a decoding layer, which generates the word token sequence constituting the revised question using a gated recurrent unit recurrent neural network (GRU-RNN) with an attention mechanism and two copy mechanisms.
2. The method of claim 1, characterized in that, in the decoding layer, the GRU-RNN is structured as follows:
activation function: PReLU, a parametric rectified linear unit, is chosen as the activation function in the GRU-RNN; the slopes applied to negative values of the input vector are learned from the training data while the remaining values pass through unchanged, which increases discriminative contrast and improves the model's learning efficiency;
neuron selection: the gated recurrent unit (GRU) is the neuron of the RNN; two gate structures in the hidden layer control the retention of information, screening important information from unimportant information;
other parameters: the model is formed by a GRU layer with an attention mechanism and a softmax layer, and is trained with an adaptive gradient-descent method at a learning rate of 0.0007.
3. The question and sentence intention identification method in natural language question-answering technology according to claim 1 or 2, characterized in that a method of automatically generating training data by paragraph sentence compression is adopted: brief questions with ambiguous intent are generated by sentence compression from the question bank of a machine reading corpus, so that training data for the revised-question generation model are produced automatically.
CN202010557964.1A 2020-06-18 2020-06-18 Question and sentence intention identification method in natural language question-answering technology Pending CN111813907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010557964.1A CN111813907A (en) 2020-06-18 2020-06-18 Question and sentence intention identification method in natural language question-answering technology


Publications (1)

Publication Number Publication Date
CN111813907A true CN111813907A (en) 2020-10-23

Family

ID=72846263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010557964.1A Pending CN111813907A (en) 2020-06-18 2020-06-18 Question and sentence intention identification method in natural language question-answering technology

Country Status (1)

Country Link
CN (1) CN111813907A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180728A1 (en) * 2014-12-23 2016-06-23 International Business Machines Corporation Managing answer feasibility
CN110134771A (en) * 2019-04-09 2019-08-16 广东工业大学 A kind of implementation method based on more attention mechanism converged network question answering systems
CN111061851A (en) * 2019-12-12 2020-04-24 中国科学院自动化研究所 Given fact-based question generation method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541356A (en) * 2020-12-21 2021-03-23 山东师范大学 Method and system for recognizing biomedical named entities
CN112541356B (en) * 2020-12-21 2022-12-06 山东师范大学 Method and system for recognizing biomedical named entities
CN113255344A (en) * 2021-05-13 2021-08-13 淮阴工学院 Keyword generation method fusing topic information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201023)