CN111782767A - Question answering method, device, equipment and storage medium - Google Patents

Question answering method, device, equipment and storage medium

Info

Publication number
CN111782767A
CN111782767A
Authority
CN
China
Prior art keywords
question
information
vector
answer
answer information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010613891.3A
Other languages
Chinese (zh)
Other versions
CN111782767B
Inventor
李世杰
张子健
陈欢
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010613891.3A
Publication of CN111782767A
Application granted
Publication of CN111782767B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G06F16/3331: Query processing
    • G06F16/334: Query execution
    • G06F16/3344: Query execution using natural language analysis
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning


Abstract

The application discloses a question answering method, device, equipment and storage medium, belonging to the field of natural language processing. The method includes: calling an answer generation model to obtain at least one piece of answer information according to a first question vector of first question information and at least one second question vector; retrieving in a question-answer database according to the first question information or the first question vector; and calling a ranking model to rank the obtained pieces of answer information in descending order of their matching degree with the first question information, the answer information ranked first being determined as the target answer information. Answer information can thus be generated even when none can be retrieved, so no answer is lost; and because both the influence of the dialogue context and the matching degree between answer information and question information are considered, the accuracy of the answer information is improved.

Description

Question answering method, device, equipment and storage medium
Technical Field
The present application relates to the field of natural language processing, and in particular, to a question answering method, device, apparatus, and storage medium.
Background
With the popularization of the Internet and the wide application of natural language processing technology, intelligent question answering functions have gradually come into use: after a user inputs question information, the intelligent question answering function can automatically answer the user's question, thereby interacting with the user.
In the related art, after a user inputs question information, a keyword in the question information is acquired, and answer information matching the keyword is retrieved from a database based on the acquired keyword, thereby completing the answer to the question information.
However, if no matching answer information can be retrieved from the database, the question information cannot be answered, so the above method has a limitation.
Disclosure of Invention
The embodiments of the application provide a question answering method, device, equipment and storage medium, which improve the accuracy of the obtained answer information. The technical solution is as follows:
in one aspect, a question answering method is provided, which includes:
acquiring a first question vector of first question information and a second question vector of at least one piece of second question information, wherein the at least one piece of second question information is question information acquired before the first question information;
calling an answer generation model, and acquiring at least one piece of answer information according to the first question vector and at least one second question vector, wherein the answer generation model is used for generating answer information matched with any question vector according to any question vector;
searching in a question-answer database according to the first question information or the first question vector;
and calling a ranking model, ranking the obtained pieces of answer information in descending order of their matching degree with the first question information, and determining the answer information ranked first as the target answer information.
In one possible implementation, the method further includes:
obtaining sample question information and corresponding sample answer information, and sample matching degree of the sample question information and the sample answer information;
and training the ranking model according to the sample question information, the sample answer information and the sample matching degree to obtain the trained ranking model.
In another possible implementation manner, the training the ranking model according to the sample question information, the sample answer information, and the sample matching degree to obtain a trained ranking model includes:
inputting the sample question information and the sample answer information into the ranking model, and obtaining the prediction matching degree of the sample question information and the sample answer information;
and training the ranking model according to the sample matching degree and the prediction matching degree to obtain the trained ranking model.
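The patent does not disclose a concrete model architecture, so as a minimal illustrative sketch only (all names here are hypothetical), the training step above can be mimicked with a tiny scorer fitted by gradient descent: the predicted matching degree is compared with the sample matching degree, and the parameters are updated to reduce the squared error.

```python
import numpy as np

def train_ranking_model(q_vecs, a_vecs, sample_match, lr=0.1, epochs=200):
    """Fit w so that sigmoid(concat(q, a) @ w) approximates the sample
    matching degree; a stand-in for the ranking-model training above."""
    rng = np.random.default_rng(0)
    X = np.concatenate([q_vecs, a_vecs], axis=1)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-X @ w))   # predicted matching degree
        # gradient of mean squared error through the sigmoid
        grad = X.T @ ((pred - sample_match) * pred * (1 - pred)) / len(X)
        w -= lr * grad
    return w
```

In practice the ranking model would be a neural network; this linear scorer only illustrates the "predict, compare with the sample matching degree, update" loop.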
In another possible implementation manner, the ranking model includes a plurality of matching layers, a fusion layer, and a sorting layer, and the invoking the ranking model, ranking the obtained pieces of answer information in descending order of matching degree with the first question information, and determining the answer information ranked first as the target answer information includes:
calling the plurality of matching layers to respectively obtain the matching degrees of each piece of answer information with the first question information;
calling the fusion layer to obtain a fusion matching degree from the multiple matching degrees of each piece of answer information;
and calling the sorting layer, sorting the plurality of pieces of answer information in descending order of fusion matching degree, and determining the answer information ranked first as the target answer information.
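As a rough sketch of the three layers above (assuming, hypothetically, cosine similarity and negative Euclidean distance as two matching layers, and a weighted average as the fusion layer):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def rank_answers(question_vec, answer_vecs, matchers=None, weights=None):
    """Each 'matching layer' scores every answer; the 'fusion layer' takes a
    weighted average of those scores; the 'sorting layer' orders the answers."""
    matchers = matchers or [cosine, lambda q, a: -np.linalg.norm(q - a)]
    weights = weights or [1.0 / len(matchers)] * len(matchers)
    fused = []
    for a in answer_vecs:
        scores = [m(question_vec, a) for m in matchers]
        fused.append(sum(w * s for w, s in zip(weights, scores)))
    order = sorted(range(len(answer_vecs)), key=lambda i: fused[i], reverse=True)
    return order  # order[0] indexes the target answer information
```

A trained model would learn both the matchers and the fusion weights; the fixed functions here only show the data flow.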
In another possible implementation manner, the obtaining a first question vector of the first question information includes:
performing word segmentation processing on the first question information to obtain a plurality of first words after word segmentation;
acquiring similar words of the plurality of first words;
replacing at least one first word with its corresponding similar word each time to generate a piece of similar question information;
and acquiring the first question vector according to the first question information and the at least one piece of similar question information.
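The word-replacement step above can be sketched as follows. This is a minimal illustration with a hypothetical synonym table; the patent's similar words would come from a knowledge database or similarity computation as described next.

```python
def generate_similar_questions(words, synonyms):
    """Replace one first word at a time with a known similar word to
    produce variants, i.e. pieces of similar question information."""
    variants = []
    for i, w in enumerate(words):
        for s in synonyms.get(w, []):
            variants.append(words[:i] + [s] + words[i + 1:])
    return variants
```

For example, segmenting a question into `["order", "delivery", "time"]` and replacing "order" with the similar word "purchase" yields one similar question.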
In another possible implementation manner, the obtaining similar words of the first words includes:
acquiring similar words associated with each first word in the plurality of first words from a knowledge database, wherein the knowledge database is used for storing the similar words associated with each word; or,
and acquiring the similarity between each first word in the plurality of first words and at least one preset word, and taking any preset word whose similarity with a first word is greater than a first preset similarity as a similar word of that first word.
In another possible implementation manner, the obtaining the first question vector according to the first question information and the at least one piece of similar question information includes:
acquiring an average vector of the feature vector of the at least one piece of similar question information and the feature vector of the first question information as the first question vector.
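The averaging step is straightforward; a sketch (assuming feature vectors are already computed by some encoder, which the patent leaves unspecified):

```python
import numpy as np

def first_question_vector(question_vec, similar_vecs):
    """Average the question's feature vector with the feature vectors of its
    similar questions to obtain the first question vector."""
    return np.mean(np.vstack([question_vec] + list(similar_vecs)), axis=0)
```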
In another possible implementation manner, the invoking an answer generation model, obtaining at least one piece of answer information according to the first question vector and the at least one second question vector, includes:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and calling the answer generation model, and acquiring the at least one piece of answer information according to the third question vector.
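The concatenation step can be sketched as below; the call into the answer generation model itself is not specified by the patent, so only the vector assembly is shown:

```python
import numpy as np

def third_question_vector(first_vec, second_vecs):
    """Concatenate the current question vector with the vectors of earlier
    questions so the answer generation model sees the dialogue context."""
    return np.concatenate([first_vec, *second_vecs])
```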
In another possible implementation manner, the retrieving in a question-and-answer database according to the first question information or the first question vector includes:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and obtaining at least one piece of answer information according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
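The retrieval condition above (the returned answers are at least as similar to the query vector as any answer not returned) is a top-k nearest-neighbor lookup. A minimal sketch using cosine similarity, which the patent does not mandate:

```python
import numpy as np

def retrieve_answers(query_vec, preset_vecs, k=1):
    """Return indices of the k preset answers whose feature vectors are most
    similar to the query vector (here: cosine similarity)."""
    sims = [float(query_vec @ v /
                  (np.linalg.norm(query_vec) * np.linalg.norm(v) + 1e-9))
            for v in preset_vecs]
    return sorted(range(len(preset_vecs)), key=lambda i: sims[i], reverse=True)[:k]
```

At database scale this would use an approximate-nearest-neighbor index rather than a linear scan.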
In another possible implementation manner, in which the at least one second question vector includes a plurality of second question vectors, the invoking an answer generation model, and obtaining at least one piece of answer information according to the first question vector and the at least one second question vector includes:
performing weighted averaging on the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and calling the answer generation model according to the fused question vector to obtain the at least one piece of answer information.
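The weighted-average fusion can be sketched as follows. The weights are unspecified in the patent; uniform weights are assumed here as a default (a real system might weight recent turns more heavily):

```python
import numpy as np

def fused_question_vector(first_vec, second_vecs, weights=None):
    """Weighted-average the historical question vectors into a fourth question
    vector, then concatenate it with the current (first) question vector."""
    second_vecs = np.asarray(second_vecs, dtype=float)
    if weights is None:
        weights = np.ones(len(second_vecs)) / len(second_vecs)
    fourth_vec = np.average(second_vecs, axis=0, weights=weights)
    return np.concatenate([first_vec, fourth_vec])
```

Compared with plain concatenation of all second question vectors, this keeps the fused vector's dimension fixed regardless of dialogue length.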
In another possible implementation manner, the retrieving in a question-and-answer database according to the first question information or the first question vector includes:
performing weighted averaging on the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and acquiring at least one piece of answer information according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
In another possible implementation manner, the retrieving in a question-and-answer database according to the first question information or the first question vector includes:
searching in the question-answer database according to the first question vector, and determining at least one fifth question vector which comprises question vectors similar to the first question vector; or,
searching in the question-answer database according to the first question vector and the at least one second question vector to determine at least one fifth question vector, wherein the at least one fifth question vector comprises at least one of question vectors similar to the first question vector or question vectors similar to the at least one second question vector;
and acquiring answer information of the at least one fifth question vector from the question-answer database as the at least one piece of answer information.
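The "fifth question vector" lookup above retrieves stored questions similar to the query and returns their associated answers. A minimal sketch with a hypothetical in-memory database and a cosine-similarity threshold (the patent does not fix the similarity measure or threshold):

```python
import numpy as np

def answers_for_similar_questions(query_vec, qa_db, threshold=0.8):
    """qa_db is a list of (stored question vector, answer text) pairs.
    Return the answers of stored questions similar enough to the query,
    i.e. the answers of the 'fifth question vectors'."""
    hits = []
    for q_vec, answer in qa_db:
        sim = float(query_vec @ q_vec /
                    (np.linalg.norm(query_vec) * np.linalg.norm(q_vec) + 1e-9))
        if sim >= threshold:
            hits.append(answer)
    return hits
```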
In another possible implementation manner, the retrieving in a question-and-answer database according to the first question information or the first question vector includes:
and acquiring the at least one piece of answer information according to the similarity between the first question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the first question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the first question vector.
In another possible implementation manner, the retrieving in a question-and-answer database according to the first question information or the first question vector includes:
performing word segmentation processing on the first question information to obtain a keyword or an entity of the first question information;
performing word segmentation processing on preset answer information in the question-answer database to obtain keywords or entities of the preset answer information;
and acquiring at least one piece of answer information according to the key words or the entities of the first question information and the key words or the entities of preset answer information in the question-answer database.
In another aspect, a question answering apparatus is provided, the apparatus including:
the acquisition module is used for acquiring a first question vector of first question information and a second question vector of at least one piece of second question information, wherein the at least one piece of second question information is question information acquired before the first question information;
the generating module is used for calling an answer generation model and acquiring at least one piece of answer information according to the first question vector and the at least one second question vector, wherein the answer generation model is used for generating answer information matched with any question vector according to that question vector;
the retrieval module is used for retrieving in a question-answer database according to the first question information or the first question vector;
and the ranking module is used for calling a ranking model, ranking the obtained pieces of answer information in descending order of their matching degree with the first question information, and determining the answer information ranked first as the target answer information.
In one possible implementation, the apparatus further includes:
the acquisition module is used for acquiring sample question information and corresponding sample answer information as well as the sample matching degree of the sample question information and the sample answer information;
and the training module is used for training the ranking model according to the sample question information, the sample answer information and the sample matching degree to obtain the trained ranking model.
In another possible implementation manner, the training module includes:
the input unit is used for inputting the sample question information and the sample answer information into the ranking model and acquiring the prediction matching degree of the sample question information and the sample answer information;
and the training unit is used for training the ranking model according to the sample matching degree and the prediction matching degree to obtain the trained ranking model.
In another possible implementation manner, the ranking model includes a plurality of matching layers, a fusion layer, and a sorting layer, and the ranking module includes:
the matching degree obtaining unit is used for calling the plurality of matching layers and respectively obtaining the matching degree of each piece of answer information and the first question information;
the fusion unit is used for calling the fusion layer and acquiring the fusion matching degree of the multiple matching degrees of each piece of answer information;
and the sorting unit is used for calling the sorting layer, sorting the plurality of pieces of answer information according to the sequence of the fusion matching degree from high to low, and determining the answer information arranged at the first position as the target answer information.
In another possible implementation manner, the obtaining module is configured to:
performing word segmentation processing on the first question information to obtain a plurality of first words after word segmentation;
acquiring similar words of the plurality of first words;
replacing at least one first word with its corresponding similar word each time to generate a piece of similar question information;
and acquiring the first question vector according to the first question information and the at least one piece of similar question information.
In another possible implementation manner, the obtaining module is configured to perform any one of the following:
acquiring similar words associated with each first word in the plurality of first words from a knowledge database, wherein the knowledge database is used for storing the similar words associated with each word; or,
and acquiring the similarity of each first word in the plurality of first words and at least one preset word, and taking the preset word with the similarity larger than the first preset similarity with any first word as the similar word of any first word.
In another possible implementation manner, the obtaining module is configured to obtain an average vector of the feature vector of the at least one piece of similar question information and the feature vector of the first question information as the first question vector.
In another possible implementation manner, the generating module is configured to:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and calling the answer generation model, and acquiring the at least one piece of answer information according to the third question vector.
In another possible implementation manner, the retrieving module is configured to:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and obtaining at least one piece of answer information according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
In another possible implementation manner, the at least one second question vector includes a plurality of second question vectors, and the generating module is configured to:
performing weighted averaging on the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and calling the answer generation model according to the fusion question vector to obtain the at least one piece of answer information.
In another possible implementation manner, the retrieving module is configured to:
performing weighted averaging on the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and acquiring at least one piece of answer information according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
In another possible implementation manner, the retrieving module is configured to perform any one of the following:
searching in the question-answer database according to the first question vector, and determining at least one fifth question vector which comprises question vectors similar to the first question vector; or,
searching in the question-answer database according to the first question vector and the at least one second question vector to determine at least one fifth question vector, wherein the at least one fifth question vector comprises at least one of question vectors similar to the first question vector or question vectors similar to the at least one second question vector;
and acquiring the answer information of the at least one fifth question vector from the question-answer database as the at least one piece of answer information.
In another possible implementation manner, the retrieving module is configured to:
performing word segmentation processing on the first question information to obtain a keyword or an entity of the first question information;
performing word segmentation processing on preset answer information in the question-answer database to obtain keywords or entities of the preset answer information;
and acquiring at least one piece of answer information according to the key words or the entities of the first question information and the key words or the entities of preset answer information in the question-answer database.
In another aspect, an electronic device is provided that includes one or more processors and one or more memories, the one or more memories storing at least one instruction that is loaded and executed by the one or more processors to implement the operations performed by the question answering method.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the question-answering method.
According to the question answering method, device, equipment and storage medium, answer information is obtained both by model-based automatic generation and by retrieval; a ranking model is then called to rank the obtained pieces of answer information by their matching degree with the question information, and the answer information ranked first, that is, the answer information best matching the first question information, is obtained. Answer information can be generated even when none can be retrieved, so no answer is lost; and because both the influence of the dialogue context and the matching degree between answer information and question information are considered, the accuracy of the answer information is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a question answering method provided in an embodiment of the present application;
fig. 3 is a flowchart of a question answering method according to an embodiment of the present application;
FIG. 4 is a block diagram of an answer generation model provided in an embodiment of the present application;
fig. 5 is a block diagram of answer information retrieval according to an embodiment of the present application;
FIG. 6 is a block diagram of a ranking model for ranking according to an embodiment of the present application;
FIG. 7 is a block diagram of an answer determination provided by an embodiment of the present application;
FIG. 8 is a block diagram of operations performed by an input module according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a question answering device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a question answering device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The question answering method provided by the application can automatically answer questions a user asks online and improves the accuracy of the answers. It can be applied in various scenarios:
For example, the question answering method provided by the application can be applied to an intelligent customer service scenario. While a user is using an application program, the application program can provide a customer service question answering function: the user inputs question information in the application program, and after detecting it, the application program adopts the question answering method provided by the embodiment of the application to determine a matching answer according to the question information, so that the user can view the determined answer.
Alternatively, the question answering method provided by the application can be applied to an entertainment interaction scenario. A user who wants entertainment can communicate interactively with an intelligent robot by inputting question information to it; the question answering method provided by the embodiment of the application then determines a matching answer according to the input question information and replies to the user, thereby interacting with the user.
In addition, the method provided by the embodiment of the application is applied to the electronic equipment, and the electronic equipment can comprise a terminal and can also comprise a server.
When the electronic equipment comprises a terminal, the method provided by the embodiment of the application is executed by the terminal.
Alternatively, fig. 1 shows a schematic structural diagram of an implementation environment of the embodiment of the present application. Referring to fig. 1, when the electronic device includes a terminal 101 and a server 102, the terminal 101 is connected to the server 102 through a communication network. After the terminal 101 acquires first question information, it sends the first question information to the server 102 through the communication network; the server 102 determines target answer information according to the method provided in the embodiment of the present application and feeds the target answer information back to the terminal 101, so that the user can view the target answer information through the terminal 101.
The terminal can be any of various terminals such as a mobile phone, a tablet computer or a computer, and the server can be a single server, a server cluster consisting of a plurality of servers, or a cloud computing service center.
Fig. 2 is a flowchart of a question answering method provided in an embodiment of the present application, and referring to fig. 2, the method includes:
201. A first question vector of the first question information and a second question vector of at least one piece of second question information are obtained.
Wherein the at least one piece of second question information is question information acquired before the first question information.
202. And calling an answer generation model, and acquiring at least one piece of answer information according to the first question vector and the at least one second question vector.
The answer generation model is used for generating answer information matched with any question vector according to any question vector.
203. And searching in the question-answer database according to the first question information or the first question vector.
204. And calling a ranking model, ranking the acquired multiple pieces of answer information in descending order of matching degree with the first question information, and determining the answer information ranked first as target answer information.
According to the method provided by the embodiment of the application, answer information is obtained both by automatic generation based on a model and by retrieval. The ranking model is then called to rank the obtained pieces of answer information according to their matching degree with the question information, and the answer information ranked first is the answer information that best matches the first question information. Even if no answer information can be retrieved, answer information can still be generated, so the situation of missing answer information does not occur; the influence of the context and the matching degree between answer information and question information are considered comprehensively, improving the accuracy of the answer information.
In one possible implementation, the method further comprises:
obtaining sample question information and corresponding sample answer information, and sample matching degree of the sample question information and the sample answer information;
and training the ranking model according to the sample question information, the sample answer information and the sample matching degree to obtain the trained ranking model.
In another possible implementation manner, training the ranking model according to the sample question information, the sample answer information, and the sample matching degree to obtain a trained ranking model includes:
inputting the sample question information and the sample answer information into the ranking model, and obtaining the predicted matching degree of the sample question information and the sample answer information;
and training the ranking model according to the sample matching degree and the predicted matching degree to obtain the trained ranking model.
In another possible implementation manner, the step of calling the ranking model, ranking the obtained pieces of answer information in an order from high to low in matching degree with the first question information, and determining the answer information ranked first as the target answer information includes:
calling a plurality of matching layers to respectively obtain the matching degree of each piece of answer information and the first question information;
calling a fusion layer to obtain fusion matching degrees of a plurality of matching degrees of each piece of answer information;
and calling a ranking layer, ranking the plurality of pieces of answer information in descending order of fusion matching degree, and determining the answer information ranked first as the target answer information.
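The matching-layer, fusion-layer, ranking-layer pipeline described above can be sketched minimally as follows. This is an illustrative sketch, not the patented model: the two matching functions are hypothetical stand-ins for trained matching layers, and averaging stands in for the fusion layer.

```python
def rank_answers(question, answers, matchers, fuse=None):
    """Score each answer with several matching functions (the matching layers),
    fuse the scores (the fusion layer), and sort answers from highest to
    lowest fused matching degree (the ranking layer)."""
    fuse = fuse or (lambda scores: sum(scores) / len(scores))
    scored = [(a, fuse([m(question, a) for m in matchers])) for a in answers]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Hypothetical matching layers: word overlap (Jaccard) and length ratio.
def overlap(q, a):
    qw, aw = set(q.split()), set(a.split())
    return len(qw & aw) / max(len(qw | aw), 1)

def length_ratio(q, a):
    return min(len(q), len(a)) / max(len(q), len(a), 1)
```

The answer ranked first, `rank_answers(...)[0][0]`, plays the role of the target answer information.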
In another possible implementation, obtaining a first question vector of the first question information includes:
performing word segmentation processing on the first question information to obtain a plurality of first words after word segmentation;
acquiring similar words of a plurality of first words;
replacing at least one first word with a corresponding similar word each time to generate a piece of similar question information;
and acquiring a first question vector according to the first question information and at least one piece of similar question information.
In another possible implementation, obtaining similar words of the plurality of first words includes:
acquiring similar words related to each first word in the plurality of first words from a knowledge database, wherein the knowledge database is used for storing the similar words related to each word; or,
the similarity of each first word in the plurality of first words and at least one preset word is obtained, and the preset word with the similarity larger than the first preset similarity with any first word is used as the similar word of any first word.
In another possible implementation manner, obtaining a first question vector according to the first question information and at least one piece of similar question information includes:
acquiring an average vector of the feature vector of the at least one piece of similar question information and the feature vector of the first question information as the first question vector.
In another possible implementation manner, invoking an answer generation model, and obtaining at least one piece of answer information according to the first question vector and the at least one second question vector includes:
splicing the first question vector and the at least one second question vector to obtain a third question vector;
and calling an answer generation model, and acquiring at least one piece of answer information according to the third question vector.
In another possible implementation, the searching in the question-answer database according to the first question information or the first question vector includes:
splicing the first question vector and the at least one second question vector to obtain a third question vector;
and obtaining at least one piece of answer information according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
In another possible implementation manner, the at least one second question vector includes a plurality of second question vectors, and calling the answer generation model to acquire at least one piece of answer information according to the first question vector and the at least one second question vector includes:
carrying out weighted average on the plurality of second question vectors to obtain a fourth question vector;
splicing the first question vector and the fourth question vector to obtain a fused question vector;
and calling an answer generation model according to the fusion question vector to obtain at least one piece of answer information.
In another possible implementation, the searching in the question-answer database according to the first question information or the first question vector includes:
carrying out weighted average on the plurality of second question vectors to obtain a fourth question vector;
splicing the first question vector and the fourth question vector to obtain a fused question vector;
and obtaining at least one piece of answer information according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
In another possible implementation, the searching in the question-answer database according to the first question information or the first question vector includes:
searching in a question-answer database according to the first question vector, and determining at least one fifth question vector, wherein the at least one fifth question vector comprises question vectors similar to the first question vector; or,
searching in a question-answer database according to the first question vector and the at least one second question vector to determine at least one fifth question vector, wherein the at least one fifth question vector comprises at least one of the question vectors similar to the first question vector or the question vectors similar to the at least one second question vector;
and acquiring at least one piece of answer information of the fifth question vector from the question-answer database as at least one piece of answer information.
In another possible implementation, the searching in the question-answer database according to the first question information or the first question vector includes:
and obtaining at least one piece of answer information according to the similarity between the first question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the first question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the first question vector.
In another possible implementation, the searching in the question-answer database according to the first question information or the first question vector includes:
performing word segmentation processing on the first question information to obtain a keyword or an entity of the first question information;
performing word segmentation processing on preset answer information in a question-answer database to obtain keywords or entities of the preset answer information;
and acquiring at least one piece of answer information according to the key words or the entities of the first question information and the key words or the entities of preset answer information in the question-answer database.
Fig. 3 is a flowchart of a question answering method provided in an embodiment of the present application, and referring to fig. 3, the method is applied to an electronic device, and the method includes:
301. a first question vector of the first question information and a second question vector of at least one piece of second question information are obtained.
The first question information is question information currently input by a user. For example, the first question information may be question information for a user to ask for an operation, or question information for a user to ask for a name, or question information for a user to ask for a price of an article, and the like.
In the embodiment of the application, when answering the first question information, the answer is determined not only according to the acquired first question vector of the first question information; at least one piece of second question information before the first question information is also acquired, and the answer is subsequently determined according to both the first question information and the at least one piece of second question information.
The first question information is the question currently asked by the user, and before the first question information the user has also asked other questions. Therefore, at least one piece of second question information before the first question information is acquired, and the question information previously asked by the user is subsequently integrated to answer the first question information, which can improve the accuracy of the generated answer information.
In one possible implementation manner, the obtained at least one piece of second question information is input into a vector coding model, and a second question vector of the at least one piece of second question information is obtained. The vector coding model is used for obtaining a problem vector of any problem information.
Optionally, the word segmentation processing is performed on the first question information to obtain a plurality of first words after word segmentation, similar words of the plurality of first words are obtained, at least one first word is replaced with a corresponding similar word each time to generate a piece of similar question information, and a first question vector is obtained according to the first question information and the at least one piece of similar question information.
By acquiring similar words of the first words and replacing the first words with these similar words, similar question information resembling the first question information can be obtained. The expansion of the acquired first words thus expands the first question information, so that more question information similar to the first question information is obtained and different expressions of the same information are fully considered.
After similar words of the plurality of first words are acquired, at least one first word may be replaced with a corresponding similar word each time, for example, one of the first words is replaced with a corresponding similar word, or two of the first words are replaced with corresponding similar words, and so on; each replacement generates a piece of similar question information, and a first question vector is then acquired according to the first question information and the at least one piece of similar question information.
In one possible implementation, the similar term associated with each of the plurality of first terms is obtained from a knowledge database.
Wherein the knowledge database is used for storing the similar words associated with each word. The knowledge database can describe the relations among words and establish the association between words, forming a network structure among the words.
When the first question information is subjected to word segmentation processing, the knowledge database is inquired according to the first words obtained by word segmentation, and then the similar words related to the first words in the knowledge database can be determined.
In another possible implementation manner, the similarity between each first word in the plurality of first words and at least one preset word is obtained, and the preset word with the similarity larger than the first preset similarity with any first word is used as the similar word of any first word.
In the embodiment of the application, after word segmentation processing is performed on the first question information, a plurality of first words are obtained, similarity between each first word and at least one preset word is calculated, whether the similarity between each first word and at least one preset word is greater than a first preset similarity is judged, and when the similarity between any first word and any preset word is determined to be greater than the first preset similarity, the preset word is used as the similar word of the any first word.
The first preset similarity is set by a server, or set by a technician, or set in other manners. The first predetermined similarity may be 0.7, 0.8, or other values.
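The preset-similarity scheme above can be sketched as follows, assuming word embeddings are available for the first words and the preset words. The `cosine` helper and the example vectors are illustrative assumptions, not part of the patented method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similar_words(word_vec, preset_vecs, threshold=0.8):
    """Preset words whose similarity with the given first word exceeds the
    first preset similarity are taken as its similar words."""
    return [w for w, v in preset_vecs.items() if cosine(word_vec, v) > threshold]
```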
For example, when the first question information input by the user is "I want to query this order", the plurality of first words obtained by word segmentation are "I", "want", "query", "this", "order". It can be determined that a similar word of "query" is "find" and a similar word of "order" is "bill". Replacing "query" with "find" generates the similar question information "I want to find this order"; replacing "order" with "bill" generates the similar question information "I want to query this bill"; replacing both "query" with "find" and "order" with "bill" generates the similar question information "I want to find this bill". A first question vector is then acquired according to the first question information and the at least one piece of similar question information.
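The replacement step in the example above can be sketched minimally as follows. This sketch performs single-word replacements only (multi-word replacements would enumerate combinations), and the helper name is an assumption.

```python
def generate_similar_questions(words, similar):
    """Replace one first word at a time with a similar word to produce
    pieces of similar question information (single-substitution variants)."""
    variants = []
    for i, word in enumerate(words):
        for sub in similar.get(word, []):
            variants.append(" ".join(words[:i] + [sub] + words[i + 1:]))
    return variants
```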
Optionally, an average vector of the feature vector of the at least one piece of similar question information and the feature vector of the first question information is acquired as the first question vector.
When generating the first question vector according to the first question information and the at least one piece of similar question information, the feature vector of the at least one piece of similar question information and the feature vector of the first question information are first acquired, and the average vector of these feature vectors is then calculated and used as the first question vector.
In one possible implementation manner, after the first question information and the at least one piece of similar question information are obtained, the first question information and the at least one piece of similar question information are input into a sentence vector coding model, and then a feature vector of the first question information and a feature vector of the similar question information are obtained.
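The averaging step can be sketched as follows, assuming the feature vectors produced by the sentence vector coding model are plain numeric lists of equal length (the helper name is an assumption):

```python
def average_vector(vectors):
    """Element-wise mean of the feature vectors of the first question
    information and its similar question information; the result serves
    as the first question vector."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]
```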
302. And calling an answer generation model, and acquiring at least one piece of answer information according to the first question vector and the at least one second question vector.
The answer generation model is used for generating answer information matched with any question vector according to any question vector.
The answer generation model may be a seq2seq model with attention, an encoder-decoder model, a Transformer model, or the like.
Optionally, the answer generation model includes a vector decoding module, an attention mechanism weighting module, an answer error correction module, and an answer generation module. For example, as shown in fig. 4, after a question vector is input into the answer generation model, the question vector is weighted by the attention mechanism weighting module to obtain the most influential parts of the question vector; a predicted answer is then obtained by the vector decoding module, and the predicted answer is subjected to grammar error correction by the answer error correction module to finally obtain answer information, where this answer information is generated answer information.
After the first question vector and the at least one second question vector are obtained, the answer generation model is called, and at least one piece of answer information can be obtained according to the first question vector and the at least one second question vector.
In a possible implementation manner, the first question vector and the at least one second question vector are spliced to obtain a third question vector, an answer generation model is called, and at least one piece of answer information is obtained according to the third question vector.
And sequentially splicing the first question vector and the at least one second question vector according to the sequence to obtain a spliced third question vector, and then inputting the third question vector into an answer generation model to generate at least one piece of answer information.
Optionally, when the first question vector and the at least one second question vector are spliced in sequence, they are spliced in the order in which the first question vector and the at least one second question vector were acquired, to obtain the spliced third question vector.
Or, the first question vector and the at least one second question vector are spliced in a random order to obtain the spliced third question vector.
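The splicing step can be sketched minimally as follows, assuming vectors are plain lists and splicing in acquisition order (the helper name is an assumption; a real system would concatenate embedding tensors):

```python
def concat_question_vectors(first_vec, second_vecs):
    """Splice the first question vector with the second question vectors,
    in acquisition order, to obtain the third question vector."""
    third = list(first_vec)
    for vec in second_vecs:
        third.extend(vec)
    return third
```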
In another possible implementation manner, the acquired at least one second question vector includes a plurality of second question vectors.
In order to reduce the data volume of the second question vectors, firstly, carrying out weighted average on a plurality of second question vectors to obtain a fourth question vector, then splicing the first question vector and the fourth question vector to obtain a fused question vector, and calling an answer generation model according to the fused question vector to obtain at least one piece of answer information.
Optionally, the invoked answer generation model may directly process the fused question vector to generate an answer message.
Or, the called answer generation model may adopt a plurality of processing layers, and respectively process the fused question vector to obtain a plurality of processed question vectors, and then each processing layer generates one piece of answer information according to one question vector, so that a plurality of pieces of answer information can be finally obtained.
Optionally, in the process of performing weighted average on the plurality of second question vectors, the weight of each second question vector is determined according to the order of the plurality of second question vectors, and the weighted average of the plurality of second question vectors is calculated according to the weight of each second question vector to obtain the fourth question vector.
For example, the reciprocal of the number of rounds between each second question vector and the first question vector is used as the weight of that second question vector: the weight is 1 when the interval between the second question vector and the first question vector is 1 round, the weight is 1/2 when the interval is 2 rounds, and so on; the larger the interval in rounds, the smaller the weight.
The process from acquiring question information, to acquiring answer information based on the question information, to outputting the answer information is one round of the question answering process. Therefore, the number of rounds between the second question vector and the first question vector is the number of rounds of the question answering process performed between them.
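The reciprocal-of-rounds weighting just described can be sketched as follows. This is a minimal sketch under stated assumptions: the helper name is invented, vectors are plain lists of equal length, and `second_vecs` is ordered with the most recent second question vector first.

```python
def fourth_question_vector(second_vecs):
    """Weighted average of the second question vectors; a vector k rounds
    before the first question gets weight 1/k, so older rounds count less.
    second_vecs must be ordered most recent first."""
    weights = [1.0 / (k + 1) for k in range(len(second_vecs))]
    total = sum(weights)
    dim = len(second_vecs[0])
    return [sum(w * v[i] for w, v in zip(weights, second_vecs)) / total
            for i in range(dim)]
```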
In addition, before the answer generation model is called, it needs to be trained, and the trained answer generation model is then called. The step of training the answer generation model includes: first acquiring an initial answer generation model or an answer generation model that has been trained one or more times, then acquiring sample question vectors and corresponding sample answer information, and training the model according to the sample question vectors and the corresponding sample answer information to obtain the trained answer generation model.
In the training process, at least one sample question vector is input into the answer generation model, predicted answer information is obtained based on the answer generation model, the error between the sample answer information and the predicted answer information is acquired, and the answer generation model is adjusted so that the error obtained by the adjusted answer generation model converges, completing the training of the answer generation model.
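The adjust-until-the-error-converges loop can be sketched generically. This is only a sketch: `model_step` stands in for one adjustment pass of the real answer generation model (an assumption), and the convergence test simply checks that the error stops changing.

```python
def train_until_converged(model_step, tol=1e-4, max_iter=1000):
    """Repeatedly adjust the model; stop once the error between sample answer
    information and predicted answer information has converged (change < tol)."""
    prev = float("inf")
    err = prev
    for _ in range(max_iter):
        err = model_step()  # one adjustment pass; returns the current error
        if abs(prev - err) < tol:
            break
        prev = err
    return err
```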
303. And searching in the question-answer database according to the first question information or the first question vector.
The question-answer database stores a plurality of pieces of answer information, and each piece of answer information in the plurality of pieces of answer information corresponds to question information, so that retrieval is performed in the question-answer database according to the obtained first question vector and at least one second question vector, and whether answer information matched with the first question information exists or not can be determined after retrieval.
In the process of searching in the question-answer database, searching can be performed according to the first question information, searching can also be performed according to the first question vector, and answer information can be obtained through searching subsequently.
In a possible implementation manner, the first question vector and the at least one second question vector are spliced to obtain a third question vector, the at least one piece of answer information is obtained according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question and answer database, and the similarity between the feature vector of the at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
The feature vectors of each piece of preset answer information in the question-answer database can be obtained, then the similarity between the third question vector and the feature vector of each piece of preset answer information is calculated, the preset answer information is ranked according to the sequence of the similarity from large to small, and at least one piece of answer information is obtained according to the ranking sequence.
Optionally, according to the arrangement order, a preset number of pieces of preset answer information are selected as the answer information obtained by retrieval.
Wherein the preset number can be set by the server, or set by a technician, or set by other means. The predetermined number may be 2, 3, 4, or other values.
Optionally, according to the similarity between the obtained third question vector and the feature vector of each piece of preset answer information, selecting the preset answer information with the similarity greater than the preset similarity as the retrieved answer information.
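Both selection schemes above, a preset number and a preset similarity, can be sketched together. This is a hedged sketch: the dot product stands in for whatever similarity measure the system actually uses, and all names are illustrative.

```python
def retrieve_answers(query_vec, preset, top_k=None, min_sim=None):
    """Rank preset answer information by similarity to the query vector
    (largest first), then keep either a preset number of answers (top_k)
    or only those above a preset similarity (min_sim)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    scored = sorted(((ans, dot(query_vec, vec)) for ans, vec in preset.items()),
                    key=lambda pair: pair[1], reverse=True)
    if min_sim is not None:
        scored = [(a, s) for a, s in scored if s > min_sim]
    if top_k is not None:
        scored = scored[:top_k]
    return [a for a, _ in scored]
```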
In another possible implementation manner, the plurality of second question vectors are weighted and averaged to obtain a fourth question vector, the first question vector and the fourth question vector are spliced to obtain a fused question vector, at least one piece of answer information is obtained according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, and the similarity between the feature vector of at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
The process of performing weighted average on the plurality of second question vectors is similar to the weighted average process in the above steps and is not described herein again. In addition, the process of calculating the similarity between the fused question vector and the feature vector of the preset answer information is similar to the process of calculating the similarity between the third question vector and the feature vector of the preset answer information described above, and is not repeated herein.
In another possible implementation manner, at least one piece of answer information is obtained according to the similarity between the first question vector and the feature vector of each piece of preset answer information in the question-answer database.
The similarity between the feature vector of the at least one piece of answer information and the first question vector is greater than the similarity between the feature vectors of other preset answer information and the first question vector.
The process of acquiring at least one piece of answer information according to this similarity is similar to the processes of acquiring at least one piece of answer information described above and is not repeated herein.
Optionally, in the process of obtaining at least one piece of answer information according to the first question vector, the number of pieces of answer information obtained may be controlled by a preset number; for example, when the preset number is 1, one piece of answer information is obtained, and when the preset number is greater than 1, a plurality of pieces of answer information are obtained.
Or, when answer information is acquired according to the preset similarity, because the similarity between the acquired third question vector or fused question vector and the feature vector of the preset answer information is not fixed, no answer information is acquired if every similarity is smaller than the preset similarity; only one piece of answer information is acquired if only one similarity is greater than the preset similarity; and a plurality of pieces of answer information are acquired if a plurality of similarities are greater than the preset similarity.
In another possible implementation, retrieval is performed in the question-answer database according to the first question vector to determine at least one fifth question vector, wherein the at least one fifth question vector includes question vectors similar to the first question vector.
Optionally, the similarity between the at least one fifth question vector and the first question vector is greater than the similarities between the other preset question vectors and the first question vector.
In another possible implementation manner, according to the first question vector and the at least one second question vector, searching is performed in the question-answer database, at least one fifth question vector is determined, the at least one fifth question vector includes at least one of the question vectors similar to the first question vector or the question vectors similar to the at least one second question vector, and answer information of the at least one fifth question vector is obtained from the question-answer database as the at least one piece of answer information obtained through searching.
In the process of retrieving the at least one fifth question vector, because question vectors similar to a given question vector are being determined, the acquired fifth question vector may be a vector similar to the first question vector or a vector similar to a second question vector, in order to reduce the cases in which no fifth question vector can be acquired.
Optionally, since the fifth question vector is acquired according to the first question vector and the at least one second question vector, three situations may occur: no fifth question vector is acquired, one fifth question vector is acquired, or a plurality of fifth question vectors are acquired.
Optionally, the first question vector and the at least one second question vector are spliced to obtain a third question vector, and retrieval is performed in the question-answer database according to the third question vector to determine at least one fifth question vector.
At least one fifth question vector is obtained according to the similarity between the third question vector and each preset question vector in the question-answer database, wherein the similarity between the at least one fifth question vector and the third question vector is greater than the similarity between the other preset question vectors and the third question vector.
Optionally, the plurality of second question vectors are weighted and averaged to obtain a sixth question vector, the first question vector and the sixth question vector are spliced to obtain a fused question vector, and retrieval is performed in the question-answer database according to the fused question vector to determine at least one fifth question vector.
At least one fifth question vector is obtained according to the similarity between the fused question vector and each preset question vector in the question-answer database, wherein the similarity between the at least one fifth question vector and the fused question vector is greater than the similarity between the other preset question vectors and the fused question vector.
It should be noted that, the embodiments of the present application are described only by taking as an example that the answer information is obtained by searching in the question and answer database according to the first question vector. In another embodiment, the first question information may be directly searched in the question-answer database to obtain at least one piece of answer information.
In one possible implementation manner, word segmentation is performed on first question information to obtain a keyword or an entity in the first question information, word segmentation is performed on preset answer information in a question-answer database to obtain a keyword or an entity of the preset answer information, and when the keyword or the entity in the first question information matches the keyword or the entity in the preset answer information, the preset answer information is determined as answer information.
In another possible implementation manner, word segmentation is performed on the first question information to obtain a keyword or an entity of the first question information, and word segmentation is performed on preset question information in the question-answer database to obtain a keyword or an entity of the preset question information. Retrieval is performed in the question-answer database according to the keyword or entity of the first question information and the keyword or entity of the preset question information to determine at least one piece of third question information similar to the first question information, and the answer information corresponding to the at least one piece of third question information is then determined as the answer information.
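A minimal sketch of the keyword-overlap retrieval described above, assuming the word segmentation step has already produced keyword lists (the keywords and preset questions are hypothetical):

```python
def keyword_match(question_keywords, preset_questions):
    """Return indices of preset questions that share at least one keyword
    with the input question, ranked by overlap count (descending)."""
    scored = []
    for i, kws in enumerate(preset_questions):
        overlap = len(set(question_keywords) & set(kws))
        if overlap > 0:
            scored.append((overlap, i))
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [i for _, i in scored]

# Hypothetical pre-segmented keywords; a real system would first run a
# word segmenter over the raw question text.
question = ["refund", "order"]
presets = [
    ["refund", "order", "policy"],  # shares both keywords
    ["delivery", "time"],           # shares none
    ["order", "status"],            # shares one
]
print(keyword_match(question, presets))  # [0, 2]
```

The answers stored alongside the matched preset questions would then serve as the retrieved answer information.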
For example, as shown in fig. 5, in the search process, a plurality of pieces of preset answer information are obtained from the question-answer database, a feature vector of each piece of preset answer information is obtained according to the vector coding model, the similarity between each piece of preset answer information and the current question vector is calculated, and the answer information is obtained according to the calculated similarities. Alternatively, a keyword or an entity of the preset answer information and a keyword or an entity of the current question information may be acquired, the similarity between the keyword of the preset answer information and the keyword of the current question information is calculated, or the similarity between the entity of the preset answer information and the entity of the current question information is calculated, and the answer information is obtained according to the calculated similarities. The answer information acquired in either of these two ways may be referred to as retrieved answer information.
304. And calling a plurality of matching layers in the ranking model to respectively obtain the matching degree of each piece of answer information and the first question information.
Wherein the ranking model comprises a plurality of matching layers, a fusion layer and a ranking layer. Each of the matching layers is used for obtaining a matching degree between the question information and the answer information. The fusion layer is used for fusing the matching degrees of the question information and each piece of answer information to obtain the fusion matching degree of the question information and that piece of answer information. The ranking layer is used for ranking the pieces of answer information according to their fusion matching degrees with the question information, and the answer information ranked first is used as the target answer information of the question information.
After obtaining a plurality of pieces of answer information, calling a plurality of matching layers in the ranking model, obtaining a matching degree of each piece of answer information and the first question information through one matching layer, and obtaining a plurality of matching degrees through the plurality of matching layers for the same piece of answer information and the first question information.
Optionally, the matching degrees of the first question information and the answer information are obtained in different manners in the multiple matching layers.
The matching degree may be obtained by using machine learning algorithms such as LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), MLP (Multilayer Perceptron), and GBDT (Gradient Boosting Decision Tree), or by using other methods.
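To illustrate how several matching layers can score the same question-answer pair differently, the following sketch substitutes three simple similarity functions for the LSTM/CNN/MLP/GBDT models named above; the real matching layers would be learned models:

```python
import numpy as np

# Each "matching layer" scores the same (question, answer) vector pair in a
# different way; these closed-form scorers are stand-ins for learned models.
def cosine_match(q, a):
    return float(q @ a / (np.linalg.norm(q) * np.linalg.norm(a)))

def dot_match(q, a):
    return float(q @ a)

def l2_match(q, a):
    # Turn a Euclidean distance into a similarity in (0, 1].
    return float(1.0 / (1.0 + np.linalg.norm(q - a)))

MATCHING_LAYERS = [cosine_match, dot_match, l2_match]

q = np.array([1.0, 0.0])
a = np.array([1.0, 0.0])
scores = [layer(q, a) for layer in MATCHING_LAYERS]
print(scores)  # identical vectors score 1.0 under all three layers
```

One answer thus yields one matching degree per layer, giving the multiple matching degrees that step 305 fuses.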
305. And calling a fusion layer in the ranking model to obtain the fusion matching degree of the multiple matching degrees of each piece of answer information.
And after the multiple matching degrees of each piece of answer information are obtained, calling a fusion layer to obtain the fusion matching degree of the multiple matching degrees of each piece of answer information.
Optionally, an average value of the multiple matching degrees of each piece of answer information is obtained as a fusion matching degree of the answer information.
Optionally, each matching layer has a corresponding weight, and the sum of the weights corresponding to the multiple matching layers is 1, then the multiple matching degrees of each piece of answer information are weighted and averaged according to the weight corresponding to each matching layer, so as to obtain the fusion matching degree of the answer information.
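The two fusion options above, a plain average and a weighted average whose weights sum to 1, can be sketched as:

```python
def fuse(match_scores, weights=None):
    """Fuse per-layer matching degrees into one fusion matching degree.
    With no weights, take the plain average; otherwise the weights
    (one per matching layer) must sum to 1."""
    if weights is None:
        return sum(match_scores) / len(match_scores)
    assert abs(sum(weights) - 1.0) < 1e-9, "layer weights must sum to 1"
    return sum(w * s for w, s in zip(weights, match_scores))

scores = [0.9, 0.6, 0.3]          # hypothetical per-layer matching degrees
print(fuse(scores))               # 0.6  (plain average)
print(fuse(scores, [0.5, 0.3, 0.2]))  # 0.69 (weighted average)
```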
306. And calling a ranking layer in the ranking model, ranking the plurality of pieces of answer information in descending order of fusion matching degree, and determining the answer information ranked first as the target answer information.
For example, as shown in fig. 6, after obtaining a plurality of pieces of answer information, the ranking model is called to determine the target answer information.
In addition, in this embodiment of the application, when the ranking model is invoked to obtain target answer information from multiple pieces of answer information, the priority of the at least one piece of answer information obtained in step 303 is higher than that of the at least one piece of answer information obtained in step 302. Therefore, when the ranking model ranks the multiple pieces of answer information, the weight set for the answer information obtained in step 303 is greater than the weight set for the answer information obtained in step 302. When ranking is performed according to the fusion matching degree, the fusion matching degree is weighted according to the set weight to obtain a weighted fusion matching degree, and ranking is then performed according to the weighted fusion matching degree to obtain the target answer information.
It should be noted that the description of the ranking model as including a plurality of matching layers, a fusion layer, and a ranking layer in the embodiments of the present application is only an example. In another embodiment, steps 304 to 306 are optional: the ranking model may be invoked directly, without setting a plurality of matching layers in the ranking model, to obtain a matching degree between each piece of answer information and the first question information, and the target answer information is determined according to the obtained matching degrees.
In one possible implementation manner, the ranking model is called, the plurality of pieces of answer information are ranked in descending order of matching degree with the first question information, and the answer information ranked first is determined as the target answer information.
In this embodiment of the application, the ranking model is called to obtain the matching degree between each of the plurality of pieces of answer information and the first question information. Since each piece of answer information corresponds to one matching degree, the plurality of pieces of answer information are ranked directly in descending order of matching degree with the first question information, and the answer information ranked first is determined as the target answer information.
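The selection of the target answer, ranking by matching degree and taking the first, reduces to a sort, as in this sketch (the answer strings and degrees are hypothetical):

```python
def select_target_answer(answers, matching_degrees):
    """Sort answers by matching degree (descending) and return the answer
    ranked first as the target answer information."""
    ranked = sorted(zip(answers, matching_degrees), key=lambda p: -p[1])
    return ranked[0][0]

answers = ["generated answer A", "retrieved answer B", "retrieved answer C"]
degrees = [0.72, 0.91, 0.55]
print(select_target_answer(answers, degrees))  # retrieved answer B
```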
In addition, before the ranking model is called, it needs to be trained, and the trained ranking model is then called to obtain the target answer. The ranking model is trained as follows:
and training the ranking model according to the sample question information, the sample answer information and the sample matching degree to obtain the trained ranking model.
The sample question information and the sample answer information are in a corresponding relationship, that is, the answer of the sample question information is the sample answer information. In addition, the sample matching degree of the sample question information and the sample answer information is used for indicating that the sample question information and the sample answer information are matched.
After the sample question information and the corresponding sample answer information are obtained and the sample matching degree of the sample question information and the corresponding sample answer information is obtained, the ranking model can be trained according to the sample question information, the sample answer information and the sample matching degree, and the trained ranking model is obtained.
Optionally, the sample question information and the sample answer information are input into the ranking model, the prediction matching degree of the sample question information and the sample answer information is obtained, and the ranking model is trained according to the sample matching degree and the prediction matching degree to obtain the trained ranking model.
In the training process, at least one piece of sample question information and the corresponding sample answer information are input into the ranking model, a prediction matching degree is obtained based on the ranking model, and an error between the sample matching degree and the prediction matching degree is obtained. The ranking model is adjusted until the error obtained by the adjusted ranking model converges, at which point training of the ranking model is complete.
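A minimal training sketch of this loop, assuming the ranking model is reduced to a single linear scorer trained by gradient descent on the squared error between the prediction matching degree and the sample matching degree (a real ranking model would be far richer; all data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 4))        # sample (question, answer) features
true_w = np.array([0.4, -0.2, 0.1, 0.3])
sample_degrees = features @ true_w         # sample matching degrees (labels)

w = np.zeros(4)                            # model parameters to be adjusted
lr = 0.1
for _ in range(500):
    pred = features @ w                    # prediction matching degrees
    # Gradient of mean squared error between prediction and sample degrees.
    grad = 2 * features.T @ (pred - sample_degrees) / len(features)
    w -= lr * grad                         # adjust until the error converges

print(np.allclose(w, true_w, atol=1e-3))   # True
```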
The flow of the method provided in the embodiment of the present application is shown in fig. 7, wherein step 301 is executed by the input module, step 302 is executed by the answer generation module to obtain at least one generated answer, step 303 is executed by the answer retrieval module to obtain at least one retrieved answer, and steps 304 to 306 are executed by the answer ranking module to rank the at least one generated answer and the at least one retrieved answer to obtain the answer of the current round.
In addition, as shown in fig. 8, the input module processes the first question information input by the user in the current turn as follows. Word segmentation is performed on the first question information, and the segmented words are expanded to obtain similar question information. A fusion feature vector is then generated according to the first question information, the similar question information, and the second question information, where the second question information is the scene context information: a scene context coding model is called to obtain a scene context information vector, a sentence vector coding model is called to obtain the first question vector of the first question information and the similar question information, and the fusion feature vector is generated according to the scene context information vector and the first question vector. The fusion feature vector is input to the subsequent generation module and retrieval module, and becomes the scene context information of the next question information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided in the embodiments of the application, answer information is obtained both by automatic generation based on a model and by retrieval. A ranking model is then called to rank the obtained pieces of answer information according to their matching degree with the question information, and the answer information ranked first, namely the answer information that best matches the first question information, is obtained. Even when no answer information can be retrieved, answer information can still be generated, so no answer information is lost. The influence of the context and the matching degree between the answer information and the question information are considered together, improving the accuracy of the answer information.
Moreover, multiple matching degrees between the question information and the answer information are obtained using multiple matching-degree acquisition methods, and these matching degrees are fused to obtain a fusion matching degree. Because the fusion matching degree integrates multiple matching degrees, it is more accurate, so determining the answer information with the highest fusion matching degree as the target answer information improves the accuracy of the target answer information.
Moreover, by first segmenting words and then expanding them, the first question information is expanded, so that more question information similar to the first question information can be obtained. Different expressions of the same information are thus fully considered, improving the accuracy of the subsequently obtained answer information.
Fig. 9 is a schematic structural diagram of a question answering device according to an embodiment of the present application. Referring to fig. 9, the apparatus includes:
an obtaining module 901, configured to obtain a first question vector of first question information and at least one second question vector of second question information, where the at least one piece of second question information is question information obtained before the first question information;
a generating module 902, configured to invoke an answer generation model, and obtain at least one piece of answer information according to the first question vector and the at least one second question vector, where the answer generation model is configured to generate, according to any question vector, answer information matched with that question vector;
a retrieval module 903, configured to retrieve at least one piece of answer information from a question-answer database according to the first question information or the first question vector;
and a sorting module 904, configured to invoke a sorting model, sort the obtained pieces of answer information in an order from high to low in matching degree with the first question information, and determine the answer information arranged at the first position as the target answer information.
The device provided in the embodiment of the application obtains answer information both by automatic generation based on a model and by retrieval, calls a ranking model to rank the obtained pieces of answer information according to their matching degree with the question information, and obtains the answer information ranked first, namely the answer information that best matches the first question information. Even when no answer information can be retrieved, answer information can still be generated, so no answer information is lost; the influence of the context and the matching degree between the answer information and the question information are considered together, improving the accuracy of the answer information.
In one possible implementation, the apparatus further includes:
the obtaining module 901 is configured to obtain sample question information and corresponding sample answer information, and a sample matching degree between the sample question information and the sample answer information;
the training module 905 is configured to train the ranking model according to the sample question information, the sample answer information, and the sample matching degree, so as to obtain a trained ranking model.
In another possible implementation manner, the training module 905 includes:
an input unit 9051, configured to input the sample question information and the sample answer information into the ranking model, and obtain a prediction matching degree of the sample question information and the sample answer information;
and the training unit 9052 is configured to train the ranking model according to the sample matching degree and the prediction matching degree, so as to obtain a trained ranking model.
In another possible implementation manner, the ranking model includes a plurality of matching layers, a fusion layer, and a ranking layer, and the generating module 902 includes:
a matching degree obtaining unit 9021, configured to invoke the multiple matching layers, and obtain matching degrees of each piece of answer information and the first question information, respectively;
a fusion unit 9022, configured to invoke the fusion layer, and obtain a fusion matching degree of the multiple matching degrees of each piece of answer information;
and the sorting unit 9023 is configured to invoke the sorting layer, sort the plurality of pieces of answer information according to a sequence from high to low of the fusion matching degree, and determine the answer information ranked first as the target answer information.
In another possible implementation manner, the obtaining module 901 is configured to perform the following:
performing word segmentation processing on the first question information to obtain a plurality of first words after word segmentation;
acquiring similar words of the plurality of first words;
replacing at least one first word with a corresponding similar word each time to generate similar question information;
and acquiring the first question vector according to the first question information and the at least one piece of similar question information.
In another possible implementation manner, the obtaining module 901 is configured to perform any one of the following:
acquiring similar words associated with each first word in the plurality of first words from a knowledge database, wherein the knowledge database is used for storing the similar words associated with each word; or,
and acquiring the similarity of each first word in the plurality of first words and at least one preset word, and taking the preset word with the similarity larger than the first preset similarity with any first word as the similar word of any first word.
In another possible implementation manner, the obtaining module 901 is configured to obtain an average vector of the feature vectors of the at least one piece of similar question information and the feature vector of the first question information as the first question vector.
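The expansion-then-averaging procedure in the last few paragraphs can be sketched as follows; the similar-word table and the toy embedding function are hypothetical stand-ins for the knowledge database and the sentence vector coding model:

```python
import numpy as np

# Hypothetical similar-word table standing in for the knowledge database.
SIMILAR = {"cheap": ["inexpensive"], "hotel": ["inn"]}

def expand(words):
    """Generate similar questions by substituting one similar word at a time."""
    variants = []
    for i, w in enumerate(words):
        for s in SIMILAR.get(w, []):
            variants.append(words[:i] + [s] + words[i + 1:])
    return variants

def embed(words):
    """Toy sentence embedding: hash each word into a small dense vector."""
    vecs = [np.array([(hash(w) % 97) / 97.0, (hash(w) % 31) / 31.0])
            for w in words]
    return np.mean(vecs, axis=0)

words = ["cheap", "hotel"]                  # segmented first question information
questions = [words] + expand(words)         # original plus similar questions
first_question_vector = np.mean([embed(q) for q in questions], axis=0)
print(len(questions))  # 3: the original plus two single-word substitutions
```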
In another possible implementation manner, the generating module 902 is configured to perform the following:
splicing the first question vector and the at least one second question vector to obtain a third question vector;
and calling the answer generation model, and acquiring the at least one piece of answer information according to the third question vector.
In another possible implementation manner, the retrieving module 903 is configured to perform the following:
splicing the first question vector and the at least one second question vector to obtain a third question vector;
and obtaining at least one piece of answer information according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
In another possible implementation manner, the at least one second question vector includes a plurality of second question vectors, and the generating module 902 is configured to perform the following:
carrying out weighted average on the plurality of second question vectors to obtain a fourth question vector;
splicing the first question vector and the fourth question vector to obtain a fused question vector;
and calling the answer generation model according to the fused question vector to obtain the at least one piece of answer information.
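The weighted-average-then-splice fusion above can be sketched as follows (the vectors and weights are toy values; in practice more recent turns would typically receive larger weights):

```python
import numpy as np

def fuse_question_vectors(first_vec, second_vecs, weights):
    """Weighted-average the earlier (second) question vectors into a fourth
    question vector, then concatenate it with the first question vector."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so the weights sum to 1
    fourth_vec = (weights[:, None] * np.asarray(second_vecs)).sum(axis=0)
    return np.concatenate([first_vec, fourth_vec])

first = np.array([1.0, 0.0])
# Two earlier turns; the more recent turn gets the larger weight.
seconds = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
fused = fuse_question_vectors(first, seconds, weights=[0.25, 0.75])
print(fused)  # the fused question vector [1.  0.  0.75 1. ]
```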
In another possible implementation manner, the retrieving module 903 is configured to perform the following:
carrying out weighted average on the plurality of second question vectors to obtain a fourth question vector;
splicing the first question vector and the fourth question vector to obtain a fused question vector;
and acquiring at least one piece of answer information according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
In another possible implementation manner, the retrieving module 903 is configured to perform any one of the following:
searching in the question-answer database according to the first question vector, and determining at least one fourth question vector, which comprises question vectors similar to the first question vector; or,
searching in the question-answer database according to the first question vector and the at least one second question vector to determine at least one fourth question vector, wherein the at least one fourth question vector comprises at least one of question vectors similar to the first question vector or question vectors similar to the at least one second question vector;
and acquiring the answer information of the at least one fourth question vector from the question-answer database as the at least one piece of answer information.
In another possible implementation manner, the retrieving module 903 is configured to perform the following:
performing word segmentation processing on the first question information to obtain a keyword or an entity of the first question information;
performing word segmentation processing on preset answer information in the question-answer database to obtain keywords or entities of the preset answer information;
and acquiring at least one piece of answer information according to the key words or the entities of the first question information and the key words or the entities of preset answer information in the question-answer database.
It should be noted that the question answering device provided in the above embodiment is illustrated only by the division of the above functional modules. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the question answering device provided in the above embodiment and the question answering method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
Fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1100 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, a desktop computer, a head-mounted device, or any other intelligent terminal. Terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1100 includes: a processor 1101 and a memory 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1102 is used to store at least one instruction to be executed by the processor 1101 to implement the question answering method provided by the method embodiments of the present application.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral devices include: at least one of radio frequency circuitry 1104, display screen 1105, camera assembly 1106, audio circuitry 1107, positioning assembly 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1101 as a control signal for processing. At this point, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, disposed on the front panel of terminal 1100; in other embodiments, there may be at least two display screens 1105, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display disposed on a curved surface or a folded surface of terminal 1100. The display screen 1105 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly-shaped screen. The display screen 1105 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Camera assembly 1106 is used to capture images or video. Optionally, camera assembly 1106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1101 for processing or inputting the electric signals to the radio frequency circuit 1104 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1100. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
Positioning component 1108 is used to locate the current geographic position of terminal 1100 for purposes of navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 1109 is configured to provide power to various components within terminal 1100. The power supply 1109 may be alternating current, direct current, disposable or rechargeable. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect the magnitude of acceleration along the three axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may detect the components of gravitational acceleration along the three axes. The processor 1101 may control the display screen 1105 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used to collect motion data for games or user activity.
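For illustration only, the landscape/portrait decision described above can be sketched by comparing the gravity components along the screen's two axes; the axis convention and function name below are assumptions for this sketch, not part of the patent:

```python
def choose_orientation(ax: float, ay: float, az: float) -> str:
    """Pick landscape vs. portrait from the gravity components (m/s^2)
    reported by an accelerometer. Assumed axis convention: x points
    right across the screen, y points up along its long edge."""
    # Gravity dominates the axis the device's long edge is aligned with.
    if abs(ax) > abs(ay):
        return "landscape"
    return "portrait"

# Device held upright: gravity mostly along the y (long) axis.
print(choose_orientation(0.5, 9.7, 0.3))   # portrait
# Device on its side: gravity mostly along the x axis.
print(choose_orientation(9.6, 0.8, 0.2))   # landscape
```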
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may cooperate with the acceleration sensor 1111 to acquire a 3D motion of the user with respect to the terminal 1100. From the data collected by gyroscope sensor 1112, processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or at a lower layer of the display screen 1105. When the pressure sensor 1113 is disposed on the side bezel of terminal 1100, a user's grip signal on the terminal 1100 can be detected, and the processor 1101 can perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the display screen 1105, the processor 1101 controls operability controls on the UI according to the user's pressure operation on the display screen 1105. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1114 is configured to collect a user's fingerprint; either the processor 1101 identifies the user according to the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 itself identifies the user from the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1101 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. Fingerprint sensor 1114 may be disposed on the front, back, or side of terminal 1100. When a physical button or vendor logo is provided on terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or vendor logo.
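A minimal sketch of the authorization flow just described, gating sensitive operations on a trusted fingerprint identity; the hash values, operation names, and function name are illustrative assumptions, not from the patent:

```python
# Enrolled fingerprint hashes and the sensitive operations they unlock
# (both sets are illustrative placeholders).
TRUSTED_FINGERPRINTS = {"a1b2c3"}
SENSITIVE_OPS = {"unlock_screen", "view_encrypted_info",
                 "download_software", "pay", "change_settings"}

def authorize(fingerprint_hash: str, operation: str) -> bool:
    """Allow a sensitive operation only when the collected fingerprint
    matches a trusted identity and the operation is a known one."""
    return fingerprint_hash in TRUSTED_FINGERPRINTS and operation in SENSITIVE_OPS

print(authorize("a1b2c3", "pay"))       # True: trusted identity
print(authorize("deadbeef", "pay"))     # False: unknown fingerprint
```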
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the display screen 1105 is reduced. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
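The brightness adjustment described above amounts to a monotone mapping from ambient light intensity to display brightness; the lux range and brightness levels in this sketch are illustrative assumptions, not values from the patent:

```python
def display_brightness(ambient_lux: float,
                       min_level: int = 10,
                       max_level: int = 255,
                       max_lux: float = 1000.0) -> int:
    """Map ambient light intensity to a display brightness level:
    brighter surroundings yield a brighter screen, clamped to
    [min_level, max_level]."""
    frac = max(0.0, min(ambient_lux / max_lux, 1.0))
    return round(min_level + frac * (max_level - min_level))

print(display_brightness(0))      # 10  (dark room: dimmest setting)
print(display_brightness(400))    # 108 (indoor lighting: mid-range)
print(display_brightness(5000))   # 255 (sunlight: clamped to maximum)
```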
Proximity sensor 1116, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1100. Proximity sensor 1116 is used to measure the distance between the user and the front face of terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually decreases, the processor 1101 controls the display screen 1105 to switch from a bright-screen state to an off-screen state; when the proximity sensor 1116 detects that the distance gradually increases, the processor 1101 controls the display screen 1105 to switch from the off-screen state back to the bright-screen state.
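The bright-screen/off-screen switching can be sketched as a small state machine driven by successive proximity readings; the 5 cm threshold and class name are assumptions for illustration only:

```python
class ScreenController:
    """Toggle the display between bright and off-screen states from
    successive proximity-sensor distance readings."""

    def __init__(self, threshold_cm: float = 5.0):
        self.threshold_cm = threshold_cm
        self.state = "bright"

    def on_proximity(self, distance_cm: float) -> str:
        if distance_cm < self.threshold_cm and self.state == "bright":
            self.state = "dark"      # user close (e.g. phone at ear)
        elif distance_cm >= self.threshold_cm and self.state == "dark":
            self.state = "bright"    # user moved away again
        return self.state

sc = ScreenController()
print(sc.on_proximity(2.0))    # dark
print(sc.on_proximity(20.0))   # bright
```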
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 12 is a schematic structural diagram of a server 1200 according to an embodiment of the present application. The server 1200 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the memory 1202 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 1201 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may further include other components for implementing the functions of the device, which are not described herein again.
The server 1200 may be used to perform the steps performed by the server in the question-answering method described above.
The embodiment of the present application further provides an electronic device, where the electronic device includes one or more processors and one or more memories, where at least one instruction is stored in the one or more memories, and the at least one instruction is loaded and executed by the one or more processors to implement the operations performed by the question-answering method.
The embodiment of the present application further provides a computer-readable storage medium, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the question answering method.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A question-answering method, characterized in that it comprises:
acquiring a first question vector of first question information and a second question vector of at least one piece of second question information, wherein the at least one piece of second question information is question information acquired before the first question information;
calling an answer generation model, and acquiring at least one piece of answer information according to the first question vector and the at least one second question vector, wherein the answer generation model is used for generating, for any question vector, answer information matching that question vector;
retrieving in a question-answer database according to the first question information or the first question vector;
and calling a ranking model, ranking the acquired pieces of answer information in descending order of their matching degree with the first question information, and determining the first-ranked answer information as the target answer information.
2. The method of claim 1, further comprising:
acquiring sample question information, corresponding sample answer information, and a sample matching degree between the sample question information and the sample answer information;
and training the ranking model according to the sample question information, the sample answer information and the sample matching degree to obtain the trained ranking model.
3. The method of claim 2, wherein the training the ranking model according to the sample question information, the sample answer information, and the sample matching degree to obtain a trained ranking model comprises:
inputting the sample question information and the sample answer information into the ranking model, and acquiring a predicted matching degree between the sample question information and the sample answer information;
and training the ranking model according to the sample matching degree and the predicted matching degree to obtain the trained ranking model.
4. The method according to claim 1, wherein the ranking model includes a plurality of matching layers, a fusion layer, and a ranking layer, and the calling a ranking model, ranking the acquired pieces of answer information in descending order of their matching degree with the first question information, and determining the first-ranked answer information as the target answer information comprises:
calling the plurality of matching layers to respectively acquire the matching degree between each piece of answer information and the first question information;
calling the fusion layer to acquire a fused matching degree from the plurality of matching degrees of each piece of answer information;
and calling the ranking layer, ranking the pieces of answer information in descending order of the fused matching degree, and determining the first-ranked answer information as the target answer information.
5. The method of claim 1, wherein acquiring the first question vector of the first question information comprises:
performing word segmentation processing on the first question information to obtain a plurality of first words after word segmentation;
acquiring similar words of the plurality of first words;
replacing, each time, at least one first word with a corresponding similar word to generate a piece of similar question information;
and acquiring the first question vector according to the first question information and the at least one piece of similar question information.
6. The method of claim 5, wherein acquiring similar words of the plurality of first words comprises:
acquiring, from a knowledge database, the similar words associated with each first word in the plurality of first words, wherein the knowledge database is used for storing the similar words associated with each word; or,
acquiring the similarity between each first word in the plurality of first words and at least one preset word, and taking any preset word whose similarity with a first word is greater than a first preset similarity as a similar word of that first word.
7. The method of claim 5, wherein acquiring the first question vector according to the first question information and the at least one piece of similar question information comprises:
acquiring, as the first question vector, the average of the feature vector of the at least one piece of similar question information and the feature vector of the first question information.
8. The method of claim 1, wherein the invoking an answer generation model to obtain at least one piece of answer information from the first question vector and at least one second question vector comprises:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and calling the answer generation model, and acquiring the at least one piece of answer information according to the third question vector.
9. The method of claim 1, wherein the retrieving in a question-and-answer database based on the first question information or the first question vector comprises:
concatenating the first question vector and the at least one second question vector to obtain a third question vector;
and obtaining at least one piece of answer information according to the similarity between the third question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the third question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the third question vector.
10. The method of claim 1, wherein the at least one second question vector comprises a plurality of second question vectors, and wherein invoking the answer generation model to obtain at least one piece of answer information based on the first question vector and the at least one second question vector comprises:
computing a weighted average of the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and calling the answer generation model according to the fused question vector to acquire the at least one piece of answer information.
11. The method of claim 1, wherein the retrieving in a question-and-answer database based on the first question information or the first question vector comprises:
computing a weighted average of the plurality of second question vectors to obtain a fourth question vector;
concatenating the first question vector and the fourth question vector to obtain a fused question vector;
and acquiring at least one piece of answer information according to the similarity between the fused question vector and the feature vector of each piece of preset answer information in the question-answer database, wherein the similarity between the feature vector of the at least one piece of answer information and the fused question vector is greater than the similarity between the feature vectors of other pieces of preset answer information and the fused question vector.
12. The method of claim 1, wherein the retrieving in a question-and-answer database based on the first question information or the first question vector comprises:
retrieving in the question-answer database according to the first question vector, and determining at least one fifth question vector, the at least one fifth question vector comprising question vectors similar to the first question vector; or,
retrieving in the question-answer database according to the first question vector and the at least one second question vector, and determining at least one fifth question vector, wherein the at least one fifth question vector comprises at least one of a question vector similar to the first question vector or a question vector similar to the at least one second question vector;
and acquiring, from the question-answer database, the answer information of the at least one fifth question vector as the at least one piece of answer information.
13. A question answering device, characterized in that the device comprises:
the acquisition module is used for acquiring a first question vector of first question information and a second question vector of at least one piece of second question information, wherein the at least one piece of second question information is question information acquired before the first question information;
the generation module is used for calling an answer generation model and acquiring at least one piece of answer information according to the first question vector and the at least one second question vector, wherein the answer generation model is used for generating, for any question vector, answer information matching that question vector;
the retrieval module is used for retrieving in a question-answer database according to the first question information or the first question vector;
and the ranking module is used for calling a ranking model, ranking the acquired pieces of answer information in descending order of their matching degree with the first question information, and determining the first-ranked answer information as the target answer information.
14. An electronic device, comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the question-answering method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by the question-answering method according to any one of claims 1 to 12.
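For readers who prefer code, the pipeline of claim 1 — generate candidate answers from the current and historical question vectors, retrieve further candidates from a question-answer database, then rank all candidates and return the top one — can be sketched as follows. The generation and ranking models and the database are replaced by toy stand-ins; all names and the cosine-similarity retrieval are illustrative assumptions, not the patent's actual models:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def answer_question(q_vec, history_vecs, qa_db, generate, rank_score):
    """qa_db: list of (answer_text, answer_vector) pairs.
    generate: stand-in for the answer generation model.
    rank_score: stand-in for the ranking model's matching degree."""
    # Fuse the current question vector with the averaged history -- a
    # simplification of the weighted average + concatenation in claim 10.
    if history_vecs:
        avg = [sum(col) / len(history_vecs) for col in zip(*history_vecs)]
        fused = [(a + b) / 2 for a, b in zip(q_vec, avg)]
    else:
        fused = list(q_vec)
    # Candidate set 1: answers produced by the generation model.
    candidates = list(generate(fused))
    # Candidate set 2: database answers most similar to the fused vector.
    ranked_db = sorted(qa_db, key=lambda p: cosine(fused, p[1]), reverse=True)
    candidates += [text for text, _ in ranked_db[:2]]
    # Rank all candidates by matching degree and return the top one.
    return max(candidates, key=rank_score)

db = [("open 9am-9pm", [1.0, 0.0]), ("free delivery", [0.0, 1.0])]
best = answer_question([1.0, 0.0], [], db,
                       generate=lambda v: ["we open at 9"],
                       rank_score=len)   # toy scorer: prefer longer answers
print(best)
```

Here `rank_score=len` is a deliberately trivial stand-in for the matching degree; the patent's ranking model (claims 2 to 4) would instead score each question-answer pair with trained matching layers and a fusion layer.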
CN202010613891.3A 2020-06-30 2020-06-30 Question answering method, device, equipment and storage medium Active CN111782767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010613891.3A CN111782767B (en) 2020-06-30 2020-06-30 Question answering method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111782767A true CN111782767A (en) 2020-10-16
CN111782767B CN111782767B (en) 2024-08-27

Family

ID=72761439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010613891.3A Active CN111782767B (en) 2020-06-30 2020-06-30 Question answering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111782767B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160247068A1 (en) * 2013-11-01 2016-08-25 Tencent Technology (Shenzhen) Company Limited System and method for automatic question answering
CN109241258A (en) * 2018-08-23 2019-01-18 江苏索迩软件技术有限公司 A kind of deep learning intelligent Answer System using tax field
CN109684452A (en) * 2018-12-25 2019-04-26 中科国力(镇江)智能技术有限公司 A kind of neural network problem generation method based on answer Yu answer location information
CN110287296A (en) * 2019-05-21 2019-09-27 平安科技(深圳)有限公司 A kind of problem answers choosing method, device, computer equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239169A (en) * 2021-06-01 2021-08-10 平安科技(深圳)有限公司 Artificial intelligence-based answer generation method, device, equipment and storage medium
CN113239169B (en) * 2021-06-01 2023-12-05 平安科技(深圳)有限公司 Answer generation method, device, equipment and storage medium based on artificial intelligence

Also Published As

Publication number Publication date
CN111782767B (en) 2024-08-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant