CN112131364B - Question answering method and device, electronic equipment and storage medium - Google Patents

Question answering method and device, electronic equipment and storage medium

Info

Publication number
CN112131364B
CN112131364B (application CN202011005282.6A)
Authority
CN
China
Prior art keywords
text
question
word
information description
words
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011005282.6A
Other languages
Chinese (zh)
Other versions
CN112131364A (en)
Inventor
贾弼然
顾文剑
蔡巍
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd filed Critical Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202011005282.6A
Publication of CN112131364A
Application granted
Publication of CN112131364B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The present disclosure relates to a question answering method and apparatus, an electronic device and a storage medium, wherein the method includes: acquiring an information description text and a question text corresponding to the information description text; determining, according to the information description text and the question text, a question intention corresponding to the question text; acquiring a target text extraction model corresponding to the question intention; extracting a plurality of key texts from the information description text through the target text extraction model; and determining, according to the plurality of key texts and the question text, an answer text matched with the question text from the plurality of key texts through a pre-trained text matching model. By understanding the question text in combination with the information description text to determine the question intention, and obtaining an accurate answer text based on the question intention, the user can accurately obtain the information he or she requires, and the intelligence of question answering is improved.

Description

Question answering method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of intelligent question answering, in particular to a question answering method, a question answering device, electronic equipment and a storage medium.
Background
In recent years, with the rise of artificial intelligence technology, the intelligent question-answering system, one of the important research directions of artificial intelligence technology, has been widely used in various fields such as search, advertising, and medical treatment. An intelligent question-answering system can search a specified document according to the question posed by the user and return its answer to the user. For example, in the medical field, a doctor can input a patient's medical record into an intelligent question-answering system along with corresponding questions, and the system answers them, so that the doctor can quickly acquire the required information and labor cost is reduced. In the related art, an intelligent question-answering system mainly searches the specified document based on keywords corresponding to the question in order to acquire the answer to that question. However, such a system cannot accurately understand the question posed by the user from keywords alone, which may cause it to give inaccurate answers, so that the user cannot acquire the required information, and the intelligence of question answering is low.
Disclosure of Invention
In order to solve the problems in the related art, the present disclosure provides a question answering method, apparatus, electronic device, and storage medium.
To achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a question-answering method, including:
acquiring an information description text and a question text corresponding to the information description text;
according to the information description text and the question text, determining a question intention corresponding to the question text based on the information description text;
acquiring a target text extraction model corresponding to the question intention, wherein different question intentions correspond to different text extraction models;
extracting a plurality of key texts from the information description text through the target text extraction model;
and determining, according to the plurality of key texts and the question text, an answer text matched with the question text from the plurality of key texts through a pre-trained text matching model.
Optionally, the determining, according to the information description text and the question text, a question intention corresponding to the question text based on the information description text includes:
acquiring a plurality of first words corresponding to the information description text and a plurality of second words corresponding to the question text;
determining the question intention through a pre-trained intention analysis model according to the plurality of first words and the plurality of second words.
Optionally, the determining the question intention according to the first words and the second words through a pre-trained intention analysis model includes:
the first words and the second words are used as input of the intention analysis model, and the questioning intention is obtained; the first words are input to the intent analysis model according to the word order of each first word in the information description text, and the second words are input to the intent analysis model according to the word order of each second word in the question text.
Optionally, the intention analysis model includes a Bidirectional Encoder Representations from Transformers (Bert) layer, a first attention layer, a second attention layer, and an output layer; and the obtaining the question intention by taking the plurality of first words and the plurality of second words as the input of the intention analysis model includes:
performing, through the Bert layer, feature extraction on the plurality of first words and the plurality of second words that are input, respectively, to obtain a word feature vector corresponding to each first word and a word feature vector corresponding to each second word;
for each first word, determining, through the first attention layer, a first target feature vector corresponding to the first word by using a preset first attention mechanism according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and a first history feature vector, wherein the first history feature vector is the first target feature vector corresponding to the first word preceding the current first word;
for each first word, determining, through the second attention layer, a second target feature vector corresponding to the first word by using a preset second attention mechanism according to the first target feature vector corresponding to the first word and the word feature vector corresponding to each first word;
and determining, through the output layer, the question intention according to the second target feature vector corresponding to each first word by using a preset pointer network.
Optionally, the extracting, by the target text extraction model, a plurality of key texts from the information description text includes:
and sequentially taking each information description sentence in a plurality of information description sentences included in the information description text as the input of the target text extraction model to obtain the key text corresponding to each information description sentence, wherein each information description sentence corresponds to at least one key text.
Optionally, determining, according to the plurality of key texts and the question text, answer text matched with the question text from the plurality of key texts through a pre-trained text matching model includes:
determining target texts matched with the question texts from the plurality of key texts through the text matching model according to the plurality of key texts and the question texts;
and determining answer text matched with the question text according to the target text.
Optionally, determining, according to the plurality of key texts and the question text, a target text matched with the question text from the plurality of key texts through the text matching model includes:
splicing the question text and each key text to obtain a text to be processed corresponding to each key text;
sequentially taking each text to be processed as the input of the text matching model to obtain an output label corresponding to each text to be processed; the output labels comprise a first output label and a second output label, the first output label is used for representing that the key text corresponding to the text to be processed is matched with the question text, and the second output label is used for representing that the key text corresponding to the text to be processed is not matched with the question text;
And taking the key text corresponding to the first output label as the target text.
According to a second aspect of embodiments of the present disclosure, there is provided a question-answering apparatus, the apparatus including:
the acquisition module is used for acquiring the information description text and the problem text corresponding to the information description text;
the determining module is used for determining, according to the information description text and the question text, a question intention corresponding to the question text based on the information description text;
the acquisition module is further used for acquiring a target text extraction model corresponding to the question intention, wherein different question intentions correspond to different text extraction models;
the extraction module is used for extracting a plurality of key texts from the information description text through the target text extraction model;
the determining module is further configured to determine, according to the plurality of key texts and the question text, answer text that matches the question text through a text matching model that is trained in advance.
Optionally, the determining module includes:
the acquisition sub-module is used for acquiring a plurality of first words corresponding to the information description text and a plurality of second words corresponding to the question text;
the first determining submodule is used for determining the question intention through a pre-trained intention analysis model according to the plurality of first words and the plurality of second words.
Optionally, the first determining submodule is configured to:
the first words and the second words are used as input of the intention analysis model, and the questioning intention is obtained; the first words are input to the intent analysis model according to the word order of each first word in the information description text, and the second words are input to the intent analysis model according to the word order of each second word in the question text.
Optionally, the intention analysis model includes a Bidirectional Encoder Representations from Transformers (Bert) layer, a first attention layer, a second attention layer, and an output layer; and the first determining submodule is used for:
performing, through the Bert layer, feature extraction on the plurality of first words and the plurality of second words that are input, respectively, to obtain a word feature vector corresponding to each first word and a word feature vector corresponding to each second word;
for each first word, determining, through the first attention layer, a first target feature vector corresponding to the first word by using a preset first attention mechanism according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and a first history feature vector, wherein the first history feature vector is the first target feature vector corresponding to the first word preceding the current first word;
for each first word, determining, through the second attention layer, a second target feature vector corresponding to the first word by using a preset second attention mechanism according to the first target feature vector corresponding to the first word and the word feature vector corresponding to each first word;
and determining, through the output layer, the question intention according to the second target feature vector corresponding to each first word by using a preset pointer network.
Optionally, the extracting module is configured to:
and sequentially taking each information description sentence in a plurality of information description sentences included in the information description text as the input of the target text extraction model to obtain the key text corresponding to each information description sentence, wherein each information description sentence corresponds to at least one key text.
Optionally, the determining module includes:
the second determining submodule is used for determining target texts matched with the question texts from the plurality of key texts through the text matching model according to the plurality of key texts and the question texts;
and the third determining submodule is used for determining answer texts matched with the question texts according to the target texts.
Optionally, the second determining submodule is configured to:
splicing the question text and each key text to obtain a text to be processed corresponding to each key text;
sequentially taking each text to be processed as the input of the text matching model to obtain an output label corresponding to each text to be processed; the output labels comprise a first output label and a second output label, the first output label is used for representing that the key text corresponding to the text to be processed is matched with the question text, and the second output label is used for representing that the key text corresponding to the text to be processed is not matched with the question text;
and taking the key text corresponding to the first output label as the target text.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the question-answering method provided by the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the question-answering method provided in the first aspect.
According to the technical scheme, firstly, the information description text and the question text corresponding to the information description text are obtained; according to the information description text and the question text, a question intention corresponding to the question text is determined based on the information description text; then the target text extraction model corresponding to the question intention is obtained, wherein different question intentions correspond to different text extraction models; a plurality of key texts are extracted from the information description text through the target text extraction model; and finally, an answer text matched with the question text is determined from the plurality of key texts through a pre-trained text matching model according to the plurality of key texts and the question text. By understanding the question text in combination with the information description text to determine the question intention, and obtaining an accurate answer text based on that intention, the user can accurately obtain the required information, and the intelligence of question answering is improved.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a question-answering method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an intent analysis model, shown in accordance with an exemplary embodiment;
FIG. 3 is a flow chart of step 105 in the embodiment shown in FIG. 1;
FIG. 4 is a block diagram of a question and answer apparatus according to an example embodiment;
FIG. 5 is a block diagram of one determination module shown in the embodiment of FIG. 4;
FIG. 6 is a block diagram of another determination module shown in the embodiment of FIG. 4;
fig. 7 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Before introducing the question-answering method and apparatus, electronic device, and storage medium provided by the present disclosure, the application scenario involved in the embodiments of the present disclosure is first introduced. The application scenario may be one in which a user interacts with an intelligent question-answering system in order to acquire the information the user requires. In this scenario, the user may input an information description text and a question text to the intelligent question-answering system, so that the system responds according to the information description text and the question text. The intelligent question-answering system may be provided on a terminal (for example, on an automatic question-answering robot) or on a server (for example, on an intelligent question-answering service platform). When the intelligent question-answering system is provided on the terminal, the user can directly input the information description text and the question text to the system through the terminal; when it is provided on the server, the user can communicate with the server through the terminal to input the information description text and the question text. The terminal may be a mobile terminal such as a smart phone, a tablet computer, a smart watch, a smart bracelet, or a PDA (Personal Digital Assistant), or a stationary terminal such as a desktop computer. The server may include, but is not limited to, an entity server, a server cluster, a cloud server, and the like.
Fig. 1 is a flow chart illustrating a question-answering method according to an exemplary embodiment. As shown in fig. 1, the method comprises the steps of:
step 101, acquiring an information description text and a question text corresponding to the information description text.
For example, when the user needs to obtain certain information from the information description text, the user can determine the question text corresponding to the information description text according to the information required. The user can then input the information description text and the question text into the intelligent question-answering system; the system searches the information description text to acquire text containing the required information, and returns that text to the user. The question text can be understood as one or more questions posed by the user with respect to the information description text, where each question corresponds to one question text, and the information required by the user can be understood as the answer to the questions posed. The question text can be input directly by the user, or selected from a plurality of questions stored in the intelligent question-answering system.
Taking as an example a doctor obtaining a patient's condition from the patient's medical record through the intelligent question-answering system (the patient's condition being the information required by the doctor), the information description text can be: {The family members complain that the child began to cough 13 days before admission, after contact with an older sister who had a cold, with single-sound cough and phlegm that was difficult to expectorate; wheezing appeared 7 days before admission, so the child was brought to the outpatient clinic of our hospital for further diagnosis and treatment; erythromycin and a cough medicine were taken orally for 5 days and nebulization treatment was given for 4 days, but the cough and wheezing did not improve. The outpatient clinic admitted the child to our department with "pneumonia".} The question text may include: question text 1 "What is the causative factor of the child's cough?" and question text 2 "What are the initial symptoms of the child?".
Step 102, according to the information description text and the question text, determining a question intention corresponding to the question text based on the information description text.
For example, after the information description text and the question text are obtained, the intelligent question-answering system may first split the information description text and the question text respectively, to obtain a plurality of first words corresponding to the information description text and a plurality of second words corresponding to the question text. The way the texts are split differs according to their language type. For example, when the language type of the information description text and the question text is Chinese, they may be split into individual Chinese characters, i.e., the first words and second words are individual Chinese characters. When the language type is English, they may be split into individual English words, i.e., the first words and second words are English words. The intelligent question-answering system can then determine the question intention from the plurality of first words and the plurality of second words through a pre-trained intention analysis model. The question intention is used to characterize the direction of the answer sought by the question text with respect to the information description text; for example, when the question text is "what is the causative factor of the child's cough", the question intention may be "cause".
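The language-dependent splitting described above can be sketched as follows. `split_text` is a hypothetical helper, not part of the disclosed system, and the CJK-range check is one simple way to detect Chinese input:

```python
def split_text(text: str) -> list:
    """Split text into 'words': individual characters for Chinese input,
    whitespace-delimited words for English input."""
    # Treat the text as Chinese if it contains any CJK unified ideograph
    if any('\u4e00' <= ch <= '\u9fff' for ch in text):
        return [ch for ch in text if not ch.isspace()]
    return text.split()
```

Applied to the information description text this yields the first words, and applied to the question text it yields the second words.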
Step 103, acquiring a target text extraction model corresponding to the question intention, wherein different question intentions correspond to different text extraction models.
Specifically, in order to enable the user to accurately acquire information required by the user (that is, accurately acquire an answer to a question posed by the user), a text extraction model corresponding to each question intention may be set in advance in the intelligent question-answering system for each question intention. After the intelligent question-answering system determines the question intention, a target text extraction model can be determined according to the determined question intention by utilizing a preset corresponding relation, wherein the preset corresponding relation is the corresponding relation between the question intention and the text extraction model. For example, the text extraction models corresponding to the question intentions "cause", "symptom" and "disease" are the first text extraction model, the second text extraction model and the third text extraction model, respectively, and if the intelligent question answering system determines that the question intentions corresponding to the question text are "symptom", the second text extraction model is taken as the target text extraction model.
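The preset correspondence between question intentions and text extraction models can be sketched as a simple registry lookup. The intent names follow the "cause"/"symptom"/"disease" example above, and the placeholder strings stand in for real trained models:

```python
# Hypothetical registry: each question intention maps to its own
# text extraction model (placeholder strings stand in for real models).
INTENT_TO_MODEL = {
    "cause": "first text extraction model",
    "symptom": "second text extraction model",
    "disease": "third text extraction model",
}

def get_target_extraction_model(intent: str) -> str:
    """Return the target text extraction model for a question intention."""
    return INTENT_TO_MODEL[intent]
```

With the question intention determined as "symptom", the lookup selects the second text extraction model, matching the example in the paragraph above.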
And 104, extracting a plurality of key texts from the information description texts through a target text extraction model.
In this step, the text extraction model is used to extract key texts having certain attribute features from the information description text. After determining the target text extraction model, the intelligent question-answering system can sequentially input the plurality of sentences included in the information description text into the target text extraction model, to obtain at least one key text corresponding to each sentence together with a text label, output by the target text extraction model, corresponding to each key text. Each text label corresponds to one attribute feature. For example, when the sentence is "cough after contact with his sister's cold 13 days before admission, single-sound cough, phlegm difficult to expectorate", a plurality of key texts are obtained through the target text extraction model: "13 days before admission", "contact with the cold sister", and "cough, single-sound cough, phlegm difficult to expectorate", where the text label corresponding to "13 days before admission" is the occurrence time, and the text label corresponding to "cough, single-sound cough, phlegm difficult to expectorate" is the symptom.
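Feeding each information description sentence to the extraction model one at a time, and collecting at least one labeled key text per sentence, can be sketched as below. `toy_model` is an illustrative stand-in for a trained extraction model, not the disclosed model itself:

```python
def extract_key_texts(sentences, model):
    """Run each sentence through the extraction model in order; the model
    returns a list of (key_text, text_label) pairs, at least one per sentence."""
    results = []
    for sentence in sentences:
        results.extend(model(sentence))
    return results

def toy_model(sentence):
    # Stand-in: tag any comma-delimited clause containing "cough" as a
    # symptom key text; fall back to the whole sentence if nothing matches.
    return [(clause, "symptom") for clause in sentence.split(",")
            if "cough" in clause] or [(sentence, "other")]
```

A real extraction model would of course produce richer labels (occurrence time, symptom, and so on) learned from training data.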
Step 105, determining answer text matched with the question text through a pre-trained text matching model according to a plurality of key texts and the question text.
For example, after extracting the plurality of key texts, the intelligent question-answering system can determine, through the pre-trained text matching model, whether each key text matches the question text according to the plurality of key texts and the question text. It can then determine the answer text according to the key texts that match the question text. For example, the key texts matched with the question text may be spliced in the order in which they appear in the information description text, and the spliced text may be used as the answer text.
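The matching-and-splicing step in this paragraph can be sketched as follows. `toy_matcher`, which labels a spliced pair 1 when the key text shares a word with the question, is an illustrative stand-in for the pre-trained text matching model, and the `[SEP]` separator is an assumption borrowed from common Bert-style practice:

```python
def answer_text(question, key_texts, matcher):
    """Splice the question with each key text, keep those the matching
    model labels 1 (the first output label), and join matches in the
    order they appear in the information description text."""
    matched = [kt for kt in key_texts  # key_texts are in document order
               if matcher(question + "[SEP]" + kt) == 1]
    return " ".join(matched)

def toy_matcher(pair):
    # Stand-in binary matcher: 1 if question and key text share a word.
    question, key_text = pair.split("[SEP]")
    shared = set(question.lower().split()) & set(key_text.lower().split())
    return 1 if shared else 0
```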
In summary, the disclosure first obtains an information description text and a question text corresponding to the information description text, and determines, according to the information description text and the question text, a question intention corresponding to the question text; it then obtains a target text extraction model corresponding to the question intention, wherein different question intentions correspond to different text extraction models, and extracts a plurality of key texts from the information description text through the target text extraction model; finally, an answer text matched with the question text is determined from the plurality of key texts through a pre-trained text matching model according to the plurality of key texts and the question text. By understanding the question text in combination with the information description text to determine the question intention, and obtaining an accurate answer text based on that intention, the user can accurately obtain the required information, and the intelligence of question answering is improved.
Alternatively, step 102 may be implemented by:
Taking the plurality of first words and the plurality of second words as the input of the intention analysis model to obtain the question intention.
The first words are input to the intention analysis model according to the word order of each first word in the information description text, and the second words are input to the intention analysis model according to the word order of each second word in the question text.
For example, after the intelligent question-answering system acquires the plurality of first words and the plurality of second words, the plurality of first words and the plurality of second words may be respectively input into a preset word embedding layer, so as to convert each first word into a first word vector corresponding to the first word, and convert each second word into a second word vector corresponding to the second word. And then, according to the word sequence of each first word in the information description text, sequentially taking the first word vector corresponding to each first word as the input of the intention analysis model, and simultaneously, according to the word sequence of each second word in the question text, sequentially taking the second word vector corresponding to each second word as the input of the intention analysis model, so as to obtain the question intention output by the intention analysis model.
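The word-embedding step described above — converting each word to a word vector via a lookup table while preserving word order — can be sketched as follows (the table and vectors are toy stand-ins; a real system would use a trained embedding layer):

```python
# Toy word-embedding table; real embeddings would be learned, high-dimensional vectors.
embedding = {"cough": [0.1, 0.3], "fever": [0.5, 0.2], "<unk>": [0.0, 0.0]}

def embed(words, table):
    """Convert words to word vectors, preserving the original word order."""
    return [table.get(w, table["<unk>"]) for w in words]

# First words come from the information description text, second words from the
# question text; each sequence is embedded separately before entering the model.
first_vectors = embed(["cough", "fever"], embedding)
second_vectors = embed(["cough", "unknownword"], embedding)
```

Out-of-vocabulary words fall back to the `<unk>` vector, a common convention.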
In one scenario, as shown in fig. 2, the intent analysis model may include a BERT (Bidirectional Encoder Representations from Transformers) layer, a first attention layer, a second attention layer, and an output layer, and taking the plurality of first words and the plurality of second words as the input of the intent analysis model to obtain the question intention may be implemented through the following steps:
step 1), feature extraction is carried out on a plurality of first words and a plurality of second words which are input through a Bert layer respectively, so that word feature vectors corresponding to each first word and word feature vectors corresponding to each second word are obtained.
For example, after the plurality of first word vectors and the plurality of second word vectors are sequentially input into the intent analysis model, the Bert layer may perform feature extraction on the plurality of first word vectors by using a preset Bert model to obtain the word feature vector corresponding to each first word. When the number of first words is n, this may be expressed as:

u_1^P, u_2^P, ..., u_n^P = BERT(e_1^P, e_2^P, ..., e_n^P)

where u_t^P is the word feature vector corresponding to the t-th first word (t is an integer greater than 0 and less than or equal to n), and e_1^P, ..., e_n^P are the n first word vectors arranged in sequence. Meanwhile, the Bert layer may further perform feature extraction on the plurality of second word vectors by using the Bert model to obtain the word feature vector corresponding to each second word. When the number of second words is m, this may be expressed as:

u_1^Q, u_2^Q, ..., u_m^Q = BERT(e_1^Q, e_2^Q, ..., e_m^Q)

where u_q^Q is the word feature vector corresponding to the q-th second word (q is an integer greater than 0 and less than or equal to m), and e_1^Q, ..., e_m^Q are the m second word vectors arranged in sequence.
Step 2), for each first word, determining, through the first attention layer, a first target feature vector corresponding to the first word by using a preset first attention mechanism according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and a first history feature vector, where the first history feature vector is the first target feature vector corresponding to the previous first word.
In this step, the word feature vector corresponding to each first word and the word feature vector corresponding to each second word may be input to the first attention layer. For each first word, the first attention layer (which may be a gated attention layer) determines, by using the first attention mechanism, a first feature vector corresponding to the first word according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and the first history feature vector, where the first feature vector is a vector fused with features of the question text. The first attention mechanism may be expressed as:

s_j1 = tanh(W_u^Q · u_j1^Q + W_u^P · u_t^P + W_v^P · v_(t-1)^P)
a_i1 = exp(v^T · s_i1) / Σ_(j1=1)^m exp(v^T · s_j1)
c_t = Σ_(i1=1)^m a_i1 · u_i1^Q

where v_(t-1)^P is the first history feature vector, tanh is the activation function, v, W_u^Q, W_u^P and W_v^P are weight parameters corresponding to the first attention layer, s_j1 is the first context vector corresponding to the j1-th second word, obtained by feature extraction of the word feature vector corresponding to the first word, the word feature vectors corresponding to the second words and the first history feature vector through the activation function (j1 is an integer greater than 0 and less than or equal to m, i.e., there are m first context vectors), a_i1 is the weight corresponding to the i1-th first context vector (i1 is an integer greater than 0 and less than or equal to m), and c_t is the first feature vector corresponding to the t-th first word.
Then, the first feature vector corresponding to each first word is input into an LSTM (Long Short-Term Memory) network to obtain the first target feature vector corresponding to each first word. The first target feature vector may be expressed as:

v_t^P = LSTM(v_(t-1)^P, c_t)

where v_t^P is the first target feature vector corresponding to the t-th first word.
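One step of the first attention mechanism above can be sketched numerically with NumPy. This is a minimal illustration of the score–softmax–weighted-sum pattern, not the trained model: the notation follows the reconstruction above, and all weights here are random stand-ins.

```python
import numpy as np

def first_attention_step(u_q, u_p_t, v_prev, W_q, W_p, W_v, v):
    """One attention step: fuse question-word features into a context vector c_t."""
    # First context vectors: one per second word, shape (m, d).
    s = np.tanh(u_q @ W_q.T + u_p_t @ W_p.T + v_prev @ W_v.T)
    scores = s @ v                      # one scalar score per second word, shape (m,)
    a = np.exp(scores - scores.max())
    a = a / a.sum()                     # softmax weights over the m second words
    c_t = a @ u_q                       # first feature vector: weighted sum of u^Q
    return c_t, a

rng = np.random.default_rng(0)
m, d = 4, 8                             # 4 second words, feature dimension 8
u_q = rng.normal(size=(m, d))           # word feature vectors of the second words
u_p_t = rng.normal(size=d)              # word feature vector of the t-th first word
v_prev = np.zeros(d)                    # first history feature vector v_(t-1)^P
W = lambda: rng.normal(size=(d, d))     # random stand-in weight matrices
c_t, a = first_attention_step(u_q, u_p_t, v_prev, W(), W(), W(), rng.normal(size=d))
```

The weights `a` form a probability distribution over the second words, and `c_t` has the same dimension as one word feature vector.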
Step 3), for each first word, determining, through the second attention layer, a second target feature vector corresponding to the first word by using a preset second attention mechanism according to the first target feature vector corresponding to the first word and the word feature vector corresponding to each first word.
For example, after the first target feature vector corresponding to each first word is determined, the second attention layer (which may be a self-matching attention layer) determines, for each first word, a second feature vector corresponding to the first word by using the second attention mechanism according to the first target feature vector corresponding to the first word and the word feature vector corresponding to each first word, where the second feature vector is a vector fused with features of the information description text. The second attention mechanism may be expressed as:

s_j2 = tanh(W_v^P · v_t^P + W_u^P' · u_j2^P)
a_i2 = exp(v'^T · s_i2) / Σ_(j2=1)^n exp(v'^T · s_j2)
c_t' = Σ_(i2=1)^n a_i2 · u_i2^P

where v', W_v^P and W_u^P' are weight parameters corresponding to the second attention layer, s_j2 is the second context vector corresponding to the j2-th first word, obtained by feature extraction of the first target feature vector and the word feature vector corresponding to the first word through the activation function (j2 is an integer greater than 0 and less than or equal to n, i.e., there are n second context vectors), a_i2 is the weight corresponding to the i2-th second context vector (i2 is an integer greater than 0 and less than or equal to n), and c_t' is the second feature vector corresponding to the t-th first word.
Then, the second feature vector corresponding to each first word may be input into an LSTM network to obtain the second target feature vector corresponding to each first word. The second target feature vector may be expressed as:

h_t^P = LSTM(h_(t-1)^P, c_t')

where h_t^P is the second target feature vector corresponding to the t-th first word.
Step 4), determining, through the output layer, the question intention by using a preset pointer network according to the second target feature vector corresponding to each first word.
Specifically, after the second target feature vector corresponding to each first word is determined, the output layer may determine the question intention from the second target feature vectors by using the pointer network. The pointer network may be expressed as:

s_j3 = tanh(W_h · h_j3^P)
a_i3 = exp(v''^T · s_i3) / Σ_(j3=1)^n exp(v''^T · s_j3)
p = argmax(a_1, a_2, ..., a_n)

where W_h and v'' are weight parameters corresponding to the pointer network, s_j3 is the third context vector corresponding to the j3-th first word, obtained by feature extraction of the second target feature vectors through the activation function (j3 is an integer greater than 0 and less than or equal to n, i.e., there are n third context vectors), a_i3 is the weight corresponding to the i3-th third context vector, and p is the question intention.
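The softmax-and-select behavior of the pointer network's output step can be sketched in plain Python. The scores below are hypothetical; in the model they would come from the trained weights applied to the second target feature vectors.

```python
import math

def pointer_select(scores):
    """Softmax over per-first-word scores, then return the index the pointer
    network points at (largest weight) together with all the weights."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]   # subtract max for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return weights.index(max(weights)), weights

# Toy scores for n = 4 first words; the intention is read off the argmax.
idx, weights = pointer_select([0.2, 1.5, -0.3, 0.9])
```

Because softmax is monotonic, the selected index is simply the position of the highest raw score.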
Alternatively, the intent analysis model may be trained as follows: training data is first constructed, which may include a sample input set and a sample output set. Each sample input in the sample input set includes a training description text and a training question text corresponding to the training description text; the sample output set includes a sample output corresponding to each sample input, and each sample output includes a pre-labeled question intention corresponding to the training question text based on the training description text. The training data is then used as model training samples to obtain the trained intent analysis model.
Alternatively, step 104 may be implemented by:
and sequentially taking each information description sentence in the plurality of information description sentences included in the information description text as the input of the target text extraction model to obtain a key text corresponding to each information description sentence, wherein each information description sentence corresponds to at least one key text.
For example, the intelligent question-answering system may identify sentences in the information description text by using a preset sentence recognition algorithm and split the information description text into a plurality of information description sentences according to the recognition result. Then, each information description sentence is sequentially taken as the input of the target text extraction model to obtain the key text corresponding to each information description sentence and the text label corresponding to each key text, where the text extraction model may be a named entity recognition model. Further, in the process of extracting the key texts, multi-layer named entity recognition may be adopted to obtain more accurate key texts. Taking two-layer named entity recognition as an example: when the information description sentence is "wheezing appeared 7 days ago, with an audible hissing sound", the first layer may determine the key text "wheezing appeared 7 days ago, with an audible hissing sound", whose text label is "symptom"; the second layer may determine the key texts "7 days ago", whose text label is "occurrence time", and "wheezing, with an audible hissing sound", whose text label is "wheezing".
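The two-layer structure of the extracted key texts can be illustrated with a simple data layout. The records below are hypothetical representations of the example in the text, not the model's actual output format; the check verifies the nesting property that every second-layer key text lies inside a first-layer key text.

```python
# Hypothetical layout for two-layer key-text extraction: layer 1 tags whole
# symptom clauses, layer 2 tags finer spans inside those clauses.
sentence = "wheezing appeared 7 days ago, with an audible hissing sound"
layer1 = [{"text": sentence, "label": "symptom"}]
layer2 = [
    {"text": "7 days ago", "label": "occurrence time"},
    {"text": "wheezing", "label": "wheezing"},
]

def spans_nested(outer, inner):
    """True if every second-layer key text is contained in some first-layer key text."""
    return all(any(e["text"] in o["text"] for o in outer) for e in inner)

ok = spans_nested(layer1, layer2)
```

This nesting invariant is what makes the second layer a refinement of the first rather than an independent extraction.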
Alternatively, the manner of training the text extraction model may be: training data is first constructed, which may include: the system comprises a sample input set and a sample output set, wherein each sample input in the sample input set comprises a training description sentence, the sample output set comprises a sample output corresponding to each sample input, each sample output comprises at least one key text corresponding to a pre-labeled training description sentence, and a text label corresponding to each key text. And then taking the training data as a sample trained by a preset named entity recognition model to obtain a trained text extraction model.
Fig. 3 is a flow chart illustrating one step 105 of the embodiment shown in fig. 1. As shown in fig. 3, step 105 may include the steps of:
In step 1051, a target text matching the question text is determined from the plurality of key texts through the text matching model according to the plurality of key texts and the question text.
For example, after extracting the plurality of key texts, the intelligent question-answering system may first splice the question text with each key text to obtain the text to be processed corresponding to that key text, and then sequentially take each text to be processed as the input of the text matching model to obtain the output label corresponding to each text to be processed. The output labels include a first output label and a second output label, where the first output label indicates that the key text in the text to be processed matches the question text, and the second output label indicates that it does not. Finally, the key text corresponding to the first output label is taken as a target text. Taking a binary classification model as the text matching model, with the first output label being 1 and the second output label being 0, as an example: when the question text is "What are the child's cough symptoms?" and the key texts are "persistent cough, phlegm cannot be coughed out" and "body temperature 38.9 degrees Celsius", splicing yields the texts to be processed "What are the child's cough symptoms? persistent cough, phlegm cannot be coughed out" and "What are the child's cough symptoms? body temperature 38.9 degrees Celsius". After these two texts to be processed are input into the binary classification model, the output label corresponding to the former may be 1 and the output label corresponding to the latter 0, so the key text "persistent cough, phlegm cannot be coughed out" may be taken as the target text.
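The splice-then-classify pipeline just described can be sketched as follows. The stand-in classifier below is a hypothetical keyword heuristic used only to make the example runnable; the disclosure's matcher is a trained binary classification model.

```python
def build_pairs(question, key_texts):
    """Splice the question text with each key text to form the texts to be processed."""
    return [f"{question} {key}" for key in key_texts]

question = "What are the child's cough symptoms?"
keys = ["persistent cough, phlegm cannot be coughed out",
        "body temperature 38.9 degrees Celsius"]
pairs = build_pairs(question, keys)

# Hypothetical stand-in for the trained binary matcher: label 1 (match) when
# the key text shares the keyword "cough" with the question, else 0.
labels = [1 if "cough" in key else 0 for key in keys]
targets = [key for key, lab in zip(keys, labels) if lab == 1]
```

Only key texts labeled 1 survive as target texts; the body-temperature key text is filtered out.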
Step 1052, determining answer text matching the question text according to the target text.
For example, after obtaining the target texts, the intelligent question-answering system may determine the answer text matching the question text according to the target texts by using a preset rule. The preset rule may be to splice all obtained target texts according to their order in the information description text and use the spliced text as the answer text, or to select a preset number of target texts from all obtained target texts as the answer text, or any other realizable manner, which is not specifically limited in this disclosure.
Alternatively, the manner of training the text matching model may be: training data is first constructed, which may include: the system comprises a sample input set and a sample output set, wherein each sample input in the sample input set comprises a training processing text which is formed by splicing a training question text and a training answer text corresponding to the pre-labeled training question text, the sample output set comprises sample output corresponding to each sample input, and each sample output comprises an output label corresponding to the pre-labeled training processing text. And then taking the training data as a sample trained by a preset two-class model to obtain a trained text matching model.
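The training-data construction for the text matching model can be sketched as below. Function and variable names are illustrative: each sample input is a spliced (question, candidate answer) text, and each sample output is the pre-labeled output label.

```python
def make_matching_samples(question, positives, negatives):
    """Build (spliced text, label) training pairs for the binary matching model:
    positives are pre-labeled matching answer texts (label 1), negatives label 0."""
    inputs, outputs = [], []
    for text, label in [(p, 1) for p in positives] + [(n, 0) for n in negatives]:
        inputs.append(f"{question} {text}")   # sample input: question + candidate
        outputs.append(label)                 # sample output: pre-labeled tag
    return inputs, outputs

X, y = make_matching_samples(
    "What are the cough symptoms?",
    positives=["persistent dry cough"],
    negatives=["body temperature 38.9 degrees Celsius"],
)
```

Pairing each question with both matching and non-matching candidates gives the binary classifier examples of both output labels.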
Fig. 4 is a block diagram of a question and answer apparatus according to an example embodiment. As shown in fig. 4, the apparatus 200 includes:
the obtaining module 201 is configured to obtain an information description text and a question text corresponding to the information description text.
The determining module 202 is configured to determine, according to the information description text and the question text, the question intention corresponding to the question text based on the information description text.
The obtaining module 201 is further configured to obtain a target text extraction model corresponding to the question intention, where different question intents correspond to different text extraction models.
The extracting module 203 is configured to extract a plurality of key texts from the information description text through the target text extracting model.
The determining module 202 is further configured to determine answer text matching the question text according to the plurality of key texts and the question text through a pre-trained text matching model.
Fig. 5 is a block diagram of a determination module shown in the embodiment of fig. 4. As shown in fig. 5, the determining module 202 includes:
the obtaining submodule 2021 is configured to obtain a plurality of first words corresponding to the information description text, and a plurality of second words corresponding to the question text.
A first determination submodule 2022 is configured to determine, from the plurality of first terms and the plurality of second terms, a question intent through a pre-trained intent analysis model.
Optionally, the first determination submodule 2022 is configured to:
and taking the first words and the second words as input of an intention analysis model to obtain the question intention.
The first words are input to the intention analysis model according to the word order of each first word in the information description text, and the second words are input to the intention analysis model according to the word order of each second word in the question text.
Optionally, the intent analysis model includes a bidirectional encoder representation (BERT) layer, a first attention layer, a second attention layer, and an output layer. The first determination submodule 2022 is configured to:
and respectively extracting features of the plurality of first words and the plurality of second words which are input through the Bert layer to obtain word feature vectors corresponding to each first word and word feature vectors corresponding to each second word.
For each first word, determining, through the first attention layer, a first target feature vector corresponding to the first word by using a preset first attention mechanism according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and a first history feature vector, where the first history feature vector is the first target feature vector corresponding to the previous first word.
For each first word, determining a second target feature vector corresponding to the first word by a second attention layer according to a first target feature vector corresponding to the first word and a word feature vector corresponding to each first word by using a preset second attention mechanism.
And determining the questioning intention by using a preset pointer network through the output layer according to the second target feature vector corresponding to each first word.
Optionally, the extracting module 203 is configured to:
and sequentially taking each information description sentence in the plurality of information description sentences included in the information description text as the input of the target text extraction model to obtain a key text corresponding to each information description sentence, wherein each information description sentence corresponds to at least one key text.
Fig. 6 is a block diagram of another determination module shown in the embodiment of fig. 4. As shown in fig. 6, the determining module 202 includes:
a second determining sub-module 2023 is configured to determine, according to the plurality of key texts and the question text, a target text matching the question text from the plurality of key texts through a text matching model.
A third determination sub-module 2024 is configured to determine answer text that matches the question text according to the target text.
Optionally, the second determination submodule 2023 is configured to:
and splicing the question text and each key text to obtain a text to be processed corresponding to each key text.
And sequentially taking each text to be processed as the input of a text matching model to obtain an output label corresponding to each text to be processed.
The output labels comprise a first output label and a second output label, wherein the first output label is used for representing that the key text corresponding to the text to be processed is matched with the question text, and the second output label is used for representing that the key text corresponding to the text to be processed is not matched with the question text.
And taking the key text corresponding to the first output label as a target text.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 7 is a block diagram of an electronic device 700, according to an example embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to perform all or part of the steps in the question answering method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700, which may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data, such as contact data, messages sent and received, pictures, audio, video, and so forth. The Memory 702 may be implemented by any type or combination of volatile or non-volatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM for short), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable Read-Only Memory (Erasable Programmable Read-Only Memory, EPROM for short), programmable Read-Only Memory (Programmable Read-Only Memory, PROM for short), read-Only Memory (ROM for short), magnetic Memory, flash Memory, magnetic disk, or optical disk. The multimedia component 703 can include a screen and an audio component. Wherein the screen may be, for example, a touch screen, the audio component being for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio assembly further comprises at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, which may be a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 705 is for wired or wireless communication between the electronic device 700 and other devices. 
The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, etc., or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, etc.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processor (Digital Signal Processor, abbreviated DSP), digital signal processing device (Digital Signal Processing Device, abbreviated DSPD), programmable logic device (Programmable Logic Device, abbreviated PLD), field programmable gate array (Field Programmable Gate Array, abbreviated FPGA), controller, microcontroller, microprocessor, or other electronic components for performing the above-described question-answering method.
In another exemplary embodiment, a computer readable storage medium is also provided that includes program instructions that, when executed by a processor, implement the steps of the question-answering method described above. For example, the computer readable storage medium may be the memory 702 including program instructions described above that are executable by the processor 701 of the electronic device 700 to perform the question-answering method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solutions of the present disclosure within the scope of the technical concept of the present disclosure, and all the simple modifications belong to the protection scope of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, the present disclosure does not further describe various possible combinations.
Moreover, any combination between the various embodiments of the present disclosure is possible as long as it does not depart from the spirit of the present disclosure, which should also be construed as the disclosure of the present disclosure.

Claims (9)

1. A question-answering method, the method comprising:
acquiring an information description text and a problem text corresponding to the information description text;
according to the information description text and the question text, determining a question intention corresponding to the question text based on the information description text;
acquiring target text extraction models corresponding to the questioning intents, wherein different questioning intents correspond to different text extraction models;
Extracting a plurality of key texts from the information description text through the target text extraction model;
determining answer texts matched with the question texts through a pre-trained text matching model according to the plurality of key texts and the question texts;
the step of determining the question text based on the question intention corresponding to the information description text according to the information description text and the question text comprises the following steps:
acquiring a plurality of first words corresponding to the information description text and a plurality of second words corresponding to the problem text;
determining the questioning intention through a pre-trained intention analysis model according to a plurality of the first words and a plurality of the second words.
2. The method of claim 1, wherein said determining said questioning intent from a plurality of said first words and a plurality of said second words by a pre-trained intent analysis model comprises:
the first words and the second words are used as input of the intention analysis model, and the questioning intention is obtained; the first words are input to the intent analysis model according to the word order of each first word in the information description text, and the second words are input to the intent analysis model according to the word order of each second word in the question text.
3. The method of claim 2, wherein the intention analysis model comprises a bidirectional encoder representation (BERT) layer, a first attention layer, a second attention layer, and an output layer; and the taking the plurality of first words and the plurality of second words as input of the intention analysis model to obtain the questioning intention comprises:
extracting features of the input plurality of first words and plurality of second words through the BERT layer, respectively, to obtain a word feature vector corresponding to each first word and a word feature vector corresponding to each second word;
for each first word, determining, through the first attention layer by using a preset first attention mechanism, a first target feature vector corresponding to the first word according to the word feature vector corresponding to the first word, the word feature vector corresponding to each second word, and a first history feature vector, wherein the first history feature vector is the first target feature vector corresponding to the first word preceding the current first word;
for each first word, determining, through the second attention layer by using a preset second attention mechanism, a second target feature vector corresponding to the first word according to the first target feature vector corresponding to the first word and the word feature vector corresponding to each first word; and
determining, through the output layer by using a preset pointer network, the questioning intention according to the second target feature vector corresponding to each first word.
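The two attention stages of claim 3 can be sketched with plain dot-product attention over toy vectors. This is a hedged illustration only: the claim does not disclose how the history vector is fused with the word vector (addition is assumed here), the real attention mechanisms are "preset" and unspecified, and the final step stands in for a learned pointer network by picking the first word whose second target vector has the largest norm.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: weight each value by the query's
    similarity to the corresponding key, then sum."""
    scores = softmax([dot(query, k) / math.sqrt(len(query)) for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(scores, values)) for i in range(dim)]

def intent_analysis(first_vecs, second_vecs):
    """Toy stand-in for the intention analysis model of claim 3.
    first_vecs / second_vecs play the role of the BERT word feature
    vectors for the first words and second words."""
    # First attention layer: for each first word, attend over the
    # question (second-word) vectors, conditioned on the previous
    # first target feature vector (the history vector).
    first_targets = []
    history = [0.0] * len(first_vecs[0])  # zero history before the first word
    for v in first_vecs:
        query = [a + b for a, b in zip(v, history)]  # assumed fusion of word vector and history
        target = attend(query, second_vecs, second_vecs)
        first_targets.append(target)
        history = target
    # Second attention layer: each first target vector attends over
    # all first-word vectors (self-attention over the description).
    second_targets = [attend(t, first_vecs, first_vecs) for t in first_targets]
    # Pointer-network stand-in: point at the first word whose second
    # target vector has the largest squared norm.
    norms = [dot(t, t) for t in second_targets]
    return norms.index(max(norms))
```

In a trained model the pointer network would learn which position to select; here the argmax merely shows the shape of the computation, mapping per-word second target vectors to a single pointed-at position.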
4. The method of claim 1, wherein the extracting a plurality of key texts from the information description text through the target text extraction model comprises:
sequentially taking each of a plurality of information description sentences included in the information description text as input of the target text extraction model to obtain the key text corresponding to each information description sentence, wherein each information description sentence corresponds to at least one key text.
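The sentence-by-sentence extraction of claim 4 can be sketched as a loop over split sentences. The sentence splitter and the stand-in model below are assumptions for illustration; the patent's target text extraction model is a trained model selected per questioning intention, not the toy lambda used here.

```python
import re

def extract_key_texts(info_text, extract_model):
    """Feed each information description sentence to the extraction
    model in turn; each sentence yields at least one key text."""
    # split on common Chinese and Western sentence-ending punctuation
    sentences = [s for s in re.split(r"[。.!?！？]", info_text) if s.strip()]
    key_texts = []
    for sentence in sentences:
        key_texts.extend(extract_model(sentence))  # >= 1 key text per sentence
    return key_texts

# toy stand-in model: the "key text" is just the sentence's first word
toy_model = lambda s: [s.strip().split()[0]]
print(extract_key_texts("fever for two days. cough at night.", toy_model))
# ['fever', 'cough']
```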
5. The method of claim 1, wherein the determining, according to the plurality of key texts and the question text, an answer text matching the question text through a pre-trained text matching model comprises:
determining, according to the plurality of key texts and the question text, a target text matching the question text from the plurality of key texts through the text matching model; and
determining the answer text matching the question text according to the target text.
6. The method of claim 5, wherein the determining, according to the plurality of key texts and the question text, the target text matching the question text from the plurality of key texts through the text matching model comprises:
splicing the question text with each key text to obtain a text to be processed corresponding to each key text;
sequentially taking each text to be processed as input of the text matching model to obtain an output label corresponding to each text to be processed, wherein the output labels comprise a first output label indicating that the key text corresponding to the text to be processed matches the question text and a second output label indicating that the key text corresponding to the text to be processed does not match the question text; and
taking the key text corresponding to the first output label as the target text.
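The splice-and-classify procedure of claim 6 can be sketched as below. The `[SEP]` separator and the word-overlap classifier are assumptions for illustration; the patent's text matching model is a pre-trained binary classifier, and only the labels (1 = match, 0 = no match) mirror the claim's first and second output labels.

```python
def match_answers(question, key_texts, match_model, sep="[SEP]"):
    """Splice the question with each key text, classify each pair,
    and keep the key texts whose pair receives the matching label."""
    targets = []
    for key in key_texts:
        pair = f"{question} {sep} {key}"  # the "text to be processed"
        label = match_model(pair)         # 1 = first output label (match), 0 = second (no match)
        if label == 1:
            targets.append(key)
    return targets

# toy stand-in classifier: "matches" when the two sides share a word
def toy_classifier(pair):
    q, k = pair.split(" [SEP] ")
    return 1 if set(q.lower().split()) & set(k.lower().split()) else 0

print(match_answers("how long fever", ["fever two days", "night cough"], toy_classifier))
# ['fever two days']
```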
7. A question answering apparatus, comprising:
an acquisition module, configured to acquire an information description text and a question text corresponding to the information description text;
a determining module, configured to determine, according to the information description text and the question text, a questioning intention of the question text with respect to the information description text;
the acquisition module being further configured to acquire a target text extraction model corresponding to the questioning intention, wherein different questioning intentions correspond to different text extraction models;
an extraction module, configured to extract a plurality of key texts from the information description text through the target text extraction model; and
the determining module being further configured to determine, according to the plurality of key texts and the question text, an answer text matching the question text through a pre-trained text matching model;
wherein the determining module comprises:
an acquisition submodule, configured to acquire a plurality of first words corresponding to the information description text and a plurality of second words corresponding to the question text; and
a first determining submodule, configured to determine the questioning intention through a pre-trained intention analysis model according to the plurality of first words and the plurality of second words.
8. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
9. An electronic device, comprising:
a memory having a computer program stored thereon; and
a processor configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1 to 6.
CN202011005282.6A 2020-09-22 2020-09-22 Question answering method and device, electronic equipment and storage medium Active CN112131364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011005282.6A CN112131364B (en) 2020-09-22 2020-09-22 Question answering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112131364A CN112131364A (en) 2020-12-25
CN112131364B (en) 2024-03-26

Family

ID=73841657


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120166B (en) * 2021-10-14 2023-09-22 北京百度网讯科技有限公司 Video question-answering method and device, electronic equipment and storage medium
TWI802459B (en) * 2022-07-01 2023-05-11 中華電信股份有限公司 A system and method for recommendation q&a based on data-enhanced

Citations (13)

Publication number Priority date Publication date Assignee Title
JP2006244262A (en) * 2005-03-04 2006-09-14 Nec Corp Retrieval system, method and program for answer to question
CN104899242A (en) * 2015-03-10 2015-09-09 四川大学 Mechanical product design two-dimensional knowledge pushing method based on design intent
CN108052577A (en) * 2017-12-08 2018-05-18 北京百度网讯科技有限公司 A kind of generic text content mining method, apparatus, server and storage medium
CN109670029A (en) * 2018-12-28 2019-04-23 百度在线网络技术(北京)有限公司 For determining the method, apparatus, computer equipment and storage medium of problem answers
CN110096577A (en) * 2018-01-31 2019-08-06 国际商业机器公司 From the intention of abnormal profile data prediction user
CN110287296A (en) * 2019-05-21 2019-09-27 平安科技(深圳)有限公司 A kind of problem answers choosing method, device, computer equipment and storage medium
WO2019201098A1 (en) * 2018-04-16 2019-10-24 上海智臻智能网络科技股份有限公司 Question and answer interactive method and apparatus, computer device and computer readable storage medium
CN110442697A (en) * 2019-08-06 2019-11-12 上海灵羚科技有限公司 A kind of man-machine interaction method, system, computer equipment and storage medium
CN110597951A (en) * 2019-08-13 2019-12-20 平安科技(深圳)有限公司 Text parsing method and device, computer equipment and storage medium
CN110909144A (en) * 2019-11-28 2020-03-24 中信银行股份有限公司 Question-answer dialogue method and device, electronic equipment and computer readable storage medium
CN110929014A (en) * 2019-12-09 2020-03-27 联想(北京)有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN111078837A (en) * 2019-12-11 2020-04-28 腾讯科技(深圳)有限公司 Intelligent question and answer information processing method, electronic equipment and computer readable storage medium
CN111382250A (en) * 2018-12-29 2020-07-07 深圳市优必选科技有限公司 Question text matching method and device, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9886509B2 (en) * 2011-09-08 2018-02-06 Nokia Technologies Oy Method and apparatus for processing a query based on associating intent and audience


Non-Patent Citations (4)

Title
A Novel Slot-Gated Model Combined With a Key Verb Context Feature for Task Request Understanding by Service Robots; Zhang, SY et al.; IEEE Access; 2019-07-29; vol. 7; pp. 105937-105947 *
Research on Question-Answering System Based on Deep Learning; Song, Bo et al.; Advances in Swarm Intelligence, ICSI 2018; vol. 10942, no. 2; pp. 522-529 *
Intelligent Question Answering System for Animated Films Based on Bi-LSTM; Huang Dongjin et al.; Modern Film Technology; vol. 5; pp. 30-35, 41 *
Research and Prospects on Battlefield Target Combat Intention Recognition; Yao Qingkai et al.; Journal of Command and Control; 2017-06-15; vol. 3, no. 2; pp. 127-131 *


Similar Documents

Publication Publication Date Title
US20200301954A1 (en) Reply information obtaining method and apparatus
CN108334487B (en) Missing semantic information completion method and device, computer equipment and storage medium
US20180336193A1 (en) Artificial Intelligence Based Method and Apparatus for Generating Article
CN110795552B (en) Training sample generation method and device, electronic equipment and storage medium
CN112528637B (en) Text processing model training method, device, computer equipment and storage medium
CN111967224A (en) Method and device for processing dialog text, electronic equipment and storage medium
CN112214591B (en) Dialog prediction method and device
CN112287069B (en) Information retrieval method and device based on voice semantics and computer equipment
WO2023241410A1 (en) Data processing method and apparatus, and device and computer medium
CN112131364B (en) Question answering method and device, electronic equipment and storage medium
CN113704460B (en) Text classification method and device, electronic equipment and storage medium
CN111382261B (en) Abstract generation method and device, electronic equipment and storage medium
WO2023201975A1 (en) Difference description sentence generation method and apparatus, and device and medium
CN115146068B (en) Method, device, equipment and storage medium for extracting relation triples
CN112463942A (en) Text processing method and device, electronic equipment and computer readable storage medium
CN113392197A (en) Question-answer reasoning method and device, storage medium and electronic equipment
Rana Eaglebot: A chatbot based multi-tier question answering system for retrieving answers from heterogeneous sources using BERT
CN117558270B (en) Voice recognition method and device and keyword detection model training method and device
CN117149998B (en) Intelligent diagnosis recommendation method and system based on multi-objective optimization
CN114297354B (en) Bullet screen generation method and device, storage medium and electronic device
CN115050371A (en) Speech recognition method, speech recognition device, computer equipment and storage medium
CN115862794A (en) Medical record text generation method and device, computer equipment and storage medium
CN115129849A (en) Method and device for acquiring topic representation and computer readable storage medium
CN113657092A (en) Method, apparatus, device and medium for identifying label
CN114792086A (en) Information extraction method, device, equipment and medium supporting text cross coverage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant