CN112989001A - Question and answer processing method, device, medium and electronic equipment - Google Patents

Question and answer processing method, device, medium and electronic equipment

Info

Publication number
CN112989001A
CN112989001A (application CN202110349133.XA)
Authority
CN
China
Prior art keywords
question
questions
user
stored
answer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110349133.XA
Other languages
Chinese (zh)
Other versions
CN112989001B (en)
Inventor
付博
王雪
李宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCB Finetech Co Ltd filed Critical CCB Finetech Co Ltd
Priority to CN202110349133.XA priority Critical patent/CN112989001B/en
Publication of CN112989001A publication Critical patent/CN112989001A/en
Application granted granted Critical
Publication of CN112989001B publication Critical patent/CN112989001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/338 Presentation of query results
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the application disclose a question-and-answer processing method, apparatus, medium, and electronic device in the field of artificial intelligence. The method comprises the following steps: performing first feature matching between a user question and the pre-stored questions in pre-stored question-answer pairs, and taking at least two successfully matched pre-stored questions as initial questions; performing second feature matching between the user question and the initial questions, and taking the successfully matched initial questions as candidate questions; and selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking the answer associated with the target question as the answer to the user question, where the first feature differs from the second feature. By executing this scheme, user questions can be identified accurately, the accuracy of answering them is improved, and user experience is thereby improved.

Description

Question and answer processing method, device, medium and electronic equipment
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a question and answer processing method, device, medium and electronic equipment.
Background
With the continued development of society, more and more people acquire knowledge and information through question-answering systems. A question answering (QA) system is an advanced form of information retrieval system that answers questions posed by users in natural language with accurate and concise natural-language responses. Question-answering systems are a widely followed research direction with broad prospects in the fields of artificial intelligence and natural language processing.
At present, the industry generally processes user questions with question-answering systems built on FAQs (Frequently Asked Questions), returning answers based on, for example, the retrieval method TF-IDF (Term Frequency-Inverse Document Frequency), statistical machine learning methods, or the deep learning method DSSM (Deep Structured Semantic Model). However, these methods only consider the case where a user question contains a single question; when a user question contains several associated questions, it is difficult to match all of them completely and give accurate answers. In actual user interaction scenarios, user questions containing more than two questions account for 39% of the data, so it is important to be able to identify the number of questions contained in a user question and return a corresponding answer to each one.
Disclosure of Invention
The embodiments of the application provide a question-answering processing method, apparatus, medium, and electronic device that can identify the number of questions in a user question and return a corresponding answer to each, thereby improving both the response accuracy and the intelligence of the question-answering system.
In a first aspect, an embodiment of the present application provides a question and answer processing method, where the method includes:
performing first feature matching on the user question and pre-stored questions in a pre-stored question-answer pair, and taking at least two pre-stored questions successfully matched as initial questions;
performing second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question;
selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as the answer to the user question; wherein the first feature is different from the second feature.
In a second aspect, an embodiment of the present application provides a question and answer processing apparatus, where the apparatus includes:
an initial question determining module, used for performing first feature matching between the user question and the pre-stored questions in pre-stored question-answer pairs, and taking at least two successfully matched pre-stored questions as initial questions;
a candidate question determining module, used for performing second feature matching between the user question and the initial questions, and taking the successfully matched initial questions as candidate questions;
a target question determining module, configured to select a target question from the candidate questions according to third feature data between the user question and the candidate questions, and to use an answer associated with the target question as the answer to the user question; wherein the first feature is different from the second feature.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a question-answering processing method according to an embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable by the processor, where the processor executes the computer program to implement the question-answering processing method according to the embodiment of the present application.
According to the technical scheme of the application, the user question is matched against the questions in pre-stored question-answer pairs, so that the answer to the user question is determined among known answers. Multiple rounds of feature matching of different levels and different types are performed between the user question and the pre-stored questions, gradually narrowing the search range for the answer. The pre-stored question with the highest degree of matching with the user question is determined as the target question, and the answer associated with the target question serves as the answer to the user question. This multi-level, multi-angle feature matching ensures that each sub-question within the user question receives a corresponding answer, improving the answer accuracy and intelligence of the question-answering system and thereby the user experience.
Drawings
Fig. 1 is a flowchart of a question answering processing method according to an embodiment of the present application;
fig. 2 is a flowchart of another question answering processing method provided in the second embodiment of the present application;
fig. 3 is a flowchart of another question answering processing method provided in the third embodiment of the present application;
fig. 4 is a flowchart of another question answering processing method provided in the fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of a question answering processing device according to a fifth embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing the exemplary embodiments in more detail, it should be noted that some of them are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as sequential, many of the steps can be performed in parallel or concurrently, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a question-answering processing method according to an embodiment of the present application, which is applicable to a case where a human-computer interaction question-answering system receives a user question and feeds back a matching answer to the user in response to the user question. The method can be executed by the question answering processing device provided by the embodiment of the application, and the device can be realized by software and/or hardware and can be integrated in electronic equipment running the system.
As shown in fig. 1, the question answering processing method includes:
s110, carrying out first feature matching on the pre-stored questions in the pre-stored question-answer pairs and the user questions, and taking at least two pre-stored questions successfully matched as initial questions.
A user question is a question to be answered that the human-computer interactive question-answering system has received. A pre-stored question-answer pair consists of a pre-stored question and the answer to that pre-stored question. The pre-stored question-answer pairs are a set of common questions and answers compiled by practitioners and stored in advance in a local database or in the cloud of the question-answering system. Given the user question and the pre-stored question-answer pairs, the user question can be matched against the pre-stored questions, and the answer of a successfully matched pre-stored question can be used as the answer to the user question.
And performing first characteristic matching on the user question and the pre-stored question in the pre-stored question-answer pair, and taking at least two pre-stored questions successfully matched as initial questions. The initial question is successfully matched with the first characteristic of the user question, preliminary screening of the pre-stored questions is achieved by performing first characteristic matching, and the search range of the user question answers is narrowed.
In an alternative embodiment, the first characteristic refers to a frequency of occurrence of a language unit constituting the user question in the pre-stored question, the language unit including words or phrases.
First feature matching between the user question and a pre-stored question specifically means counting the frequency with which the language units (single characters or words) composing the user question occur in the pre-stored question; the more frequently the same characters or words appear in both questions, the more likely the two correspond to the same answer. To reduce the influence of word-splitting errors introduced when segmenting the user question, characters are preferably used as the language units, i.e., the frequency of each single character of the user question in the pre-stored question is counted. This quickly determines the search range of answers to the user question and improves processing efficiency.
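As an illustrative sketch (not the patent's own implementation), first feature matching at the character level can be realized by counting how often each character of the user question occurs in every pre-stored question; the function name `first_feature_scores` and the sample questions are hypothetical:

```python
from collections import Counter

def first_feature_scores(user_question, stored_questions):
    """Score each pre-stored question by how often the user question's
    characters occur in it (a simple character-level first-feature match)."""
    units = list(user_question)              # single characters as language units
    scores = []
    for q in stored_questions:
        counts = Counter(q)
        # total frequency of the user question's characters in q
        scores.append(sum(counts[u] for u in units))
    return scores

stored = ["产品利率是多少", "产品风险高吗", "如何开户"]
print(first_feature_scores("产品利率高吗", stored))  # → [4, 4, 0]
```

Questions whose score exceeds a threshold would then be kept as initial questions.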
And S120, performing second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question.
A candidate question is the result of further screening the initial questions: a pre-stored question matched by both the first feature and the second feature. The first feature matching determines an approximate search range for the answer at the literal level; the second feature differs from the first and constitutes a deeper level of matching. Because different combinations of the same characters yield sentences with different meanings, a deeper analysis of the user question is also needed. Optionally, the second feature is a semantic feature. Matching the semantic features of the user question against those of the initial questions, and taking the successfully matched initial questions as candidate questions, re-screens the pre-stored questions and further narrows the search range. Compared with the initial questions, a candidate question is not only more similar to the user question in its language units but also closer to it in sentence meaning.
S130, selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question.
After the first and second feature matching, the final search range of the answer, i.e., the answers of the candidate questions, is determined. A user question may contain several sub-questions; for example, in a bank wealth-management scenario, the user question "What is the interest rate of this product, and is the risk high?" contains two sub-questions: the product's interest rate and the product's risk. To ensure that the user question is answered completely, further analysis of the sentence characteristics of the user question itself, such as its sentence structure and meaning, is required.
A target question is selected from the candidate questions according to the third feature data between the user question and the candidate questions. Specifically, the target question is finally determined by combining the sentence characteristics of the user question and each candidate question with the degree of matching between their sentence meanings, and the answer associated with the target question is used as the answer to the user question. The target question can be regarded as a different way of expressing the user question: the two correspond to the same answer.
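The three-stage narrowing described so far can be sketched as a minimal pipeline. The three matcher callables below are placeholders for the patent's frequency-based, semantic, and syntactic/contextual components; all names are hypothetical:

```python
def answer_user_question(user_q, qa_pairs, first_match, second_match, third_score):
    """Three-stage narrowing: first-feature match -> candidate questions via
    second-feature match -> target question via highest third-feature score."""
    # Stage 1: initial questions via first (literal-level) feature matching
    initial = [q for q, _ in qa_pairs if first_match(user_q, q)]
    # Stage 2: candidate questions via second (semantic) feature matching
    candidates = [q for q in initial if second_match(user_q, q)]
    if not candidates:
        return None
    # Stage 3: target question = candidate with the highest third-feature score
    target = max(candidates, key=lambda q: third_score(user_q, q))
    return dict(qa_pairs)[target]

qa = [("rate question", "answer A"), ("risk question", "answer B")]
ans = answer_user_question(
    "rate question?", qa,
    first_match=lambda u, q: q.split()[0] in u,   # toy literal match
    second_match=lambda u, q: True,               # toy semantic match
    third_score=lambda u, q: len(set(u) & set(q)))
print(ans)  # → answer A
```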
In an alternative embodiment, the third characteristic data comprises: syntactic, contextual and semantic similarity features.
In an alternative embodiment, the syntactic features may be determined as follows: analyze the user question and the candidate question with dependency parsing to obtain syntactic structure information, i.e., information associated with semantics; extract the sentence-component information and sentence-component combination relations of the user question and of the candidate question respectively; vectorize the syntactic structure information according to the numbers of sentence components and of component combination relations; and splice the vectorized results to serve as the syntactic features.
Dependency parsing analyzes a sentence into a dependency tree that describes the dependency relationships between its words, i.e., their syntactic collocations, which are related to semantics. For example, in the sentence "The meeting announced the first list of senior citizens," dependency parsing shows that "announced" governs "the meeting" and "the list," so these governed words can be used as collocations of "announced." The user question and the candidate question are each analyzed with dependency parsing to obtain their syntactic structure information.
Sentence-component information and sentence-component combination relations are extracted from the user question and from each candidate question. Sentence-component information refers to the constituents of a sentence; the components include at least one of subject, predicate, object, attributive, adverbial, complement, and head word. The component combination relations include at least one of the subject-predicate relation and the verb-object relation. The syntactic structure information is vectorized according to the numbers of sentence components and of component combination relations, and the vectorized results are spliced to serve as the syntactic features. The sentence components and their combination relations can reflect the number of sub-questions contained in the user question.
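As a toy sketch of the vectorization step (assuming a dependency parse is already available as (word, relation, component-tag) triples; the tag inventory and parse below are illustrative, not the patent's):

```python
COMPONENTS = ["SBJ", "PRED", "OBJ"]   # toy sentence-component tags
RELATIONS = ["SBV", "VOB"]            # subject-predicate / verb-object relations

def syntactic_vector(parse):
    """Vectorize a dependency parse as counts of sentence components and of
    component-combination relations, then splice (concatenate) the two."""
    comp_vec = [sum(1 for _, _, tag in parse if tag == c) for c in COMPONENTS]
    rel_vec = [sum(1 for _, rel, _ in parse if rel == r) for r in RELATIONS]
    return comp_vec + rel_vec

# toy parse of a two-clause question: two predicates hint at two sub-questions
parse = [("rate", "SBV", "SBJ"), ("is", "HED", "PRED"),
         ("risk", "SBV", "SBJ"), ("high", "HED", "PRED")]
print(syntactic_vector(parse))  # → [2, 2, 0, 2, 0]
```

In the full method, vectors from the user question and the candidate question would each be built this way and then spliced together.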
In an alternative embodiment, the contextual features may be determined as follows: extract the contextual semantic information of the user question and of the candidate question respectively; vectorize the contextual semantic information; and splice the vectorized results to serve as the contextual features.
Contextual features reflect the semantics of the sentence as a whole. The words composing a sentence do not exist independently; there are relationships between them, and grasping the sentence's semantics as a whole requires considering those relationships. Optionally, the text content of the user question and of the candidate question is encoded with a bidirectional LSTM (Long Short-Term Memory) network: the contextual semantic information of each is extracted, vectorized, and the vectorized results are spliced to serve as the contextual features.
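The shape of this step can be illustrated with a toy stand-in for the bidirectional LSTM (a real system would use a trained recurrent encoder; here each token is simply encoded with sums of its left and right context embeddings, and all names and embeddings are hypothetical):

```python
def toy_context_encode(tokens, embed):
    """Toy stand-in for a bidirectional encoder: each token becomes
    [left-context sum, own embedding, right-context sum]."""
    vecs = [embed[t] for t in tokens]
    out = []
    for i in range(len(tokens)):
        left = sum(vecs[:i])          # information flowing left-to-right
        right = sum(vecs[i + 1:])     # information flowing right-to-left
        out.append([left, vecs[i], right])
    return out

def contextual_feature(q1, q2, embed):
    """Encode both questions and splice (concatenate) the results."""
    flat = lambda enc: [x for vec in enc for x in vec]
    return flat(toy_context_encode(q1, embed)) + flat(toy_context_encode(q2, embed))

embed = {"rate": 1, "high": 2, "risk": 3}
print(contextual_feature(["rate", "high"], ["risk", "high"], embed))
```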
The semantic similarity characteristic is an index for measuring similarity between the user question and the candidate question at a semantic level, and in an optional embodiment, the semantic similarity characteristic is a sentence pair semantic similarity score output by a semantic similarity model.
According to the technical scheme of this embodiment, the user question is matched against the questions in pre-stored question-answer pairs, so that the answer to the user question is determined among known answers. Multiple rounds of feature matching of different levels and different types are performed between the user question and the pre-stored questions, gradually narrowing the search range for the answer. The pre-stored question with the highest degree of matching with the user question is determined as the target question, and the answer associated with the target question serves as the answer to the user question. This multi-level, multi-angle feature matching ensures that each sub-question within the user question receives a corresponding answer, improving the answer accuracy and intelligence of the question-answering system and thereby the user experience.
Example two
Fig. 2 is a flowchart of another question-answering processing method provided in the second embodiment of the present application. This embodiment further optimizes the preceding embodiments. Specifically, the first feature matching between the user question and the pre-stored questions, with at least two successfully matched pre-stored questions taken as initial questions, is refined as: segmenting the user question into independent language units; calculating a first similarity score between the user question and each pre-stored question according to the frequency with which each language unit occurs in the pre-stored questions of the pre-stored question-answer pairs; and selecting at least two pre-stored questions whose first similarity scores exceed a preset similarity threshold as the initial questions.
As shown in fig. 2, the question answering processing method includes:
s210, dividing the user question into independent language units.
A language unit is a basic unit composing the user question; optionally, a language unit is a word or a single character. To avoid word-splitting errors, it is preferable to treat a single character as the language unit, i.e., to divide the user question into individual independent characters.
S220, calculating a first similarity score between the user question and the pre-stored question according to the frequency of the pre-stored question in the pre-stored question-answer pair of each language unit.
Optionally, the frequency with which each character of the user question occurs in each pre-stored question of the pre-stored question-answer pairs is used directly as the first similarity score. Preferably, the first similarity score between the user question and each pre-stored question is computed comprehensively from statistics such as the frequency of each character in each pre-stored question, the number of pre-stored questions, and the average length of the pre-stored questions.
S230, selecting at least two pre-stored problems with the first similarity score larger than a preset similarity threshold value as initial problems.
The preset similarity threshold is an empirical value determined by practitioners according to the actual situation. The higher the first similarity score, the higher the degree of matching between the user question and the pre-stored question. At least two pre-stored questions whose first similarity scores exceed the preset threshold are taken as the initial questions.
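The selection step S230 amounts to a simple filter over the scores. A sketch follows; the top-two fallback when fewer than two questions clear the threshold is an assumption for illustration, not stated in the patent:

```python
def select_initial_questions(scores, threshold):
    """Keep the indices of pre-stored questions whose first similarity score
    exceeds the preset threshold. If fewer than two qualify, fall back to the
    two highest-scoring questions (assumed fallback, not from the patent)."""
    initial = [i for i, s in enumerate(scores) if s > threshold]
    if len(initial) < 2:
        initial = sorted(range(len(scores)),
                         key=lambda i: scores[i], reverse=True)[:2]
    return initial

print(select_initial_questions([0.9, 0.2, 0.75, 0.5], threshold=0.6))  # → [0, 2]
```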
In an alternative embodiment, calculating the first similarity score between the user question and a pre-stored question according to the frequency with which each language unit occurs in the pre-stored questions of the pre-stored question-answer pairs includes:
calculating a first similarity score of the user question and the pre-stored question according to the following formula:
$$\mathrm{Score}(Q,\, faq_j) = \sum_{i=1}^{n} p_i \cdot w_i \cdot \frac{f(t_i, faq_j)\,(k_1 + 1)}{f(t_i, faq_j) + k_1\left(1 - b + b\,\frac{|faq_j|}{avgFAQ}\right)}$$

$$w_i = \log\frac{N - n(t_i) + 0.5}{n(t_i) + 0.5}$$
where t_i denotes a language unit in the user question; faq_j denotes the j-th pre-stored question in the pre-stored question-answer pairs; f(t_i, faq_j) denotes the frequency with which t_i occurs in the pre-stored question faq_j; k_1 and b are the first and second adjustment factors, empirical values determined by practitioners according to the actual situation; w_i denotes the relevance weight; avgFAQ denotes the average length of the pre-stored questions; N denotes the number of pre-stored question-answer pairs; n(t_i) denotes the number of pre-stored questions faq_j that contain t_i; p_i is a weight representing the importance of t_i; and n is the number of language units in the user question.
Specifically, p_i can be determined according to the class of t_i: the weight of a content word is set to 2, and the weight of a stop word or of any other type is set to 0.1, thereby highlighting content words and weakening stop words.
$$p_i = \begin{cases} 2, & t_i \text{ is a content word} \\ 0.1, & \text{otherwise} \end{cases}$$
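The score described above has the shape of BM25 with an added per-unit importance weight p_i. A sketch under that reading (variable names follow the definitions above; this is a reconstruction, not the patent's code):

```python
import math

def first_similarity(user_units, stored_questions, k1=1.2, b=0.75,
                     content_words=frozenset()):
    """BM25-style first similarity score with per-unit importance p_i
    (2 for content words, 0.1 otherwise), per the description above."""
    N = len(stored_questions)
    avg_len = sum(len(q) for q in stored_questions) / N   # avgFAQ
    scores = []
    for q in stored_questions:
        s = 0.0
        for t in user_units:
            f = q.count(t)                                # f(t_i, faq_j)
            n_t = sum(1 for d in stored_questions if t in d)  # n(t_i)
            w = math.log((N - n_t + 0.5) / (n_t + 0.5))   # relevance weight w_i
            p = 2.0 if t in content_words else 0.1        # importance p_i
            s += p * w * f * (k1 + 1) / (f + k1 * (1 - b + b * len(q) / avg_len))
        scores.append(s)
    return scores
```

The default values of k1 and b are illustrative stand-ins for the empirically chosen adjustment factors.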
S240, carrying out second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question.
S250, selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question.
According to the technical scheme of this embodiment, the user question is divided into independent language units; a first similarity score between the user question and each pre-stored question is calculated according to the frequency with which each language unit occurs in the pre-stored questions of the pre-stored question-answer pairs; and at least two pre-stored questions whose first similarity scores exceed the preset similarity threshold are selected as initial questions. Executing this scheme quickly determines the search range of answers to the user question, achieves a preliminary screening of the pre-stored questions, narrows the search range, and improves the processing efficiency of user questions.
EXAMPLE III
Fig. 3 is a flowchart of another question-answering processing method provided in the third embodiment of the present application. This embodiment further optimizes the preceding embodiments. Specifically, the second feature matching between the user question and the initial questions, with the successfully matched initial questions taken as candidate questions, is refined as: judging, with a semantic similarity model, whether each initial question can form a semantically similar sentence pair with the user question, according to the text content information and sentence structure information of the user question and of each initial question; and if so, determining that initial question to be a candidate question.
As shown in fig. 3, the question answering processing method includes:
s310, carrying out first feature matching on the pre-stored questions in the pre-stored question-answer pairs and the user questions, and taking at least two pre-stored questions successfully matched as initial questions.
S320, judging, with a semantic similarity model, whether each initial question can form a semantically similar sentence pair with the user question, according to the text content information and sentence structure information of the user question and of each initial question.
S330, if yes, determining the initial problem as a candidate problem.
A semantically similar sentence pair is a pair formed by two sentences with similar meanings expressed in different ways; the user question and an initial question that form such a pair can correspond to the same answer.
The semantic similarity model is used for calculating semantic similarity between the user question and each initial question and determining whether the input user question and the initial question can form a semantic similar sentence pair according to the semantic similarity. The semantic similarity model is a pre-training completion model, text content information and sentence structure information of the user questions and each initial question are input into the semantic similarity model, and semantic similarity scores of the user questions and the initial questions and a judgment result of whether the user questions and the initial questions form a semantic similar sentence pair or not are output by the semantic similarity model.
The semantic similarity model treats the semantic similarity judgment as a binary classification problem: for the input formed by the user question and an initial question, it compares the probability of the pair belonging to the semantically-similar class with the probability of it not belonging, yielding the judgment of whether the two form a semantically similar sentence pair. In this way, the similarity calculation is addressed effectively from the perspective of deep semantics.
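The binary decision boils down to comparing the two class probabilities. A minimal sketch, assuming the model produces one logit per class (the logits and function name are hypothetical; the real model's output layer is not specified here):

```python
import math

def is_similar_pair(logit_similar, logit_dissimilar):
    """Softmax the two class logits and compare the probability of
    'semantically similar' against 'not similar'."""
    e_s = math.exp(logit_similar)
    e_d = math.exp(logit_dissimilar)
    p_similar = e_s / (e_s + e_d)
    return p_similar > 0.5, p_similar   # (judgment, similarity score)

decision, prob = is_similar_pair(2.0, 0.5)
print(decision)  # → True
```

The returned probability also serves as the sentence-pair semantic similarity score mentioned above.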
In an optional embodiment, before determining whether an initial question and the user question form a semantically similar sentence pair by using the semantic similarity model according to the user question and the text content information and sentence structure information of each initial question, the method further includes a training process for the semantic similarity model: determining label data of training sample sentence pairs by using a pre-trained semantically-similar-sentence-pair judgment model; wherein the label data includes the semantically similar sentence pair classification attribute and the sentence-pair semantic similarity score;
A training sample sentence pair is formed by splicing two questions, and the semantically-similar-sentence-pair judgment model is the model used to determine the label data of the training sample sentence pairs. The text content information and sentence structure information of each training sample sentence pair are extracted as feature data, and the feature data and the label data are used together as training data to train the semantic similarity model, so that the model outputs the semantically similar sentence pair classification attribute and the sentence-pair semantic similarity score.
The classification attribute indicates whether the training sample sentence pair belongs to the semantically similar sentence pairs or not. The sentence-pair semantic similarity score is the probability that the training sample sentence pair is semantically similar, or the probability that it is not; one of the two is selected as the score.
Optionally, the semantic similarity model is a BERT (Bidirectional Encoder Representations from Transformers) semantic similarity model.
To determine the labels of the training samples automatically, in an optional embodiment a semantically-similar-sentence-pair judgment model is constructed, which automatically determines the label data of the training samples required for training the semantic similarity model.
The training samples of the semantically-similar-sentence-pair judgment model are constructed as follows: the first feature matching is performed on the two sample questions in a training sample sentence pair; if the matching succeeds, the pair is taken as a positive example sample sentence pair, and if it fails, as a negative example sample sentence pair. The output of the judgment model is the label data of the training samples of the semantic similarity model, namely the semantically similar sentence pair classification attribute and the sentence-pair semantic similarity score. Optionally, the judgment model is an LTR (Learning to Rank) model.
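The positive/negative sample construction above can be sketched as follows. `first_feature_match` is a hypothetical predicate standing in for the real first feature matching; the word-overlap stand-in used here is purely illustrative:

```python
def build_judgment_model_samples(sentence_pairs, first_feature_match):
    """Split training sample sentence pairs into positive examples
    (first feature matching succeeds) and negative examples (it fails)."""
    positives, negatives = [], []
    for q1, q2 in sentence_pairs:
        (positives if first_feature_match(q1, q2) else negatives).append((q1, q2))
    return positives, negatives

def toy_first_feature_match(q1, q2):
    # Stand-in for the real first-feature (language-unit frequency) match:
    # succeed when the two questions share at least two words.
    return len(set(q1.split()) & set(q2.split())) >= 2
```

Usage: feeding in one overlapping and one unrelated pair yields one positive and one negative sample, which then carry the label data for the judgment model.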
S340, selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question.
According to the technical scheme provided by this embodiment of the application, a pre-trained semantic similarity model judges whether each initial question forms a semantically similar sentence pair with the user question, and the initial questions that do are taken as candidate questions. By matching the semantic features of the user question against the initial questions, the initial questions are screened again and the search range for the answer is further narrowed. Compared with the initial questions, the resulting candidate questions are not only literally more similar to the user question but also closer to it in sentence meaning, which improves the reply accuracy of the question-answering system.
Example four
Fig. 4 is a flowchart of another question answering processing method provided in the fourth embodiment of the present application. The present embodiment is further optimized on the basis of the above-described embodiments. Specifically, the selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and using an answer associated with the target question as an answer of the user question includes: splicing the third characteristic data of the user question and the candidate question to be used as the input of a neural network model; and determining a target question in the candidate questions according to the quantity of the sub-questions of the user question output by the neural network model and the category of the sub-questions.
As shown in fig. 4, the question answering processing method includes:
S410, performing first feature matching between the user question and the pre-stored questions in the pre-stored question-answer pairs, and taking at least two successfully matched pre-stored questions as initial questions.
S420, performing second feature matching on the user question and the initial questions, and taking the successfully matched initial questions as candidate questions.
S430, splicing the third feature data of the user question and of the candidate questions to serve as the input of a neural network model.
The syntactic features, context features, and semantic similarity features are spliced and input into the neural network model as a single feature vector. The neural network model is a pre-trained natural language processing model chosen by relevant technicians according to the actual situation; it outputs the number of sub-questions in the user question and the category of each sub-question according to these features.
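A minimal sketch of the splicing step, with illustrative toy vectors (the real dimensions and values would come from the feature extractors described elsewhere in this application):

```python
def build_input_vector(syntactic, contextual, similarity):
    """Splice the three kinds of third-feature data into one flat
    feature vector, the input expected by the neural network model."""
    return list(syntactic) + list(contextual) + list(similarity)
```

For example, a 2-dimensional syntactic vector, a 3-dimensional context vector, and a 1-dimensional similarity score splice into a 6-dimensional input.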
S440, determining a target question from the candidate questions according to the quantity of the sub-questions of the user question output by the neural network model and the category of the sub-questions, and taking an answer associated with the target question as an answer of the user question.
The categories of the sub-questions are determined by technical personnel according to the actual business scenario and illustratively include financing, loan, and deposit. The same user question has different answers in different business scenarios, so in addition to the similarity between the user question and the candidate questions, the business scenario must be considered; the category of a sub-question reflects this business-scenario information.
In addition, in order to ensure that each sub-question in the user question is answered within the corresponding service scope, in an alternative embodiment, determining a target question among the candidate questions according to the number and categories of the sub-questions output by the neural network model includes: selecting, from the candidate questions, questions whose category matches that of a sub-question as target questions; and determining the number of target questions according to the number of sub-questions, so as to check whether every sub-question has a corresponding target question.
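The category-matching and coverage check above can be sketched as follows; the candidate representation (question text paired with a category label) and the category names are assumptions for illustration:

```python
def select_target_questions(candidates, sub_question_categories):
    """For each sub-question category output by the model, keep the
    candidate questions of that category as target questions, and
    record categories with no matching candidate so that sub-questions
    outside the service scope can be detected."""
    targets, uncovered = [], []
    for category in sub_question_categories:
        matched = [q for q, q_cat in candidates if q_cat == category]
        if matched:
            targets.extend(matched)
        else:
            uncovered.append(category)
    return targets, uncovered
```

A non-empty `uncovered` list signals that some sub-question of the user question has no corresponding target question.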
In order for the neural network to output the category of a question and the number of sub-questions it contains, the network is trained with a data set whose label data include the question category and the sub-question count. The training data set may be a public data set or may be constructed independently. To construct it independently, illustratively, 5000 samples are randomly drawn from the user/customer-manager question-and-answer data of a bank's human-computer interaction scenario and handed to annotators, who manually label the standard question in the common question set corresponding to each user question, as well as the number of question sentences in each sample. The consistency of the annotation results is measured with a consistency check value; on both tasks, the number of question sentences per sample and the corresponding standard question category, the check value is greater than 0.7, indicating high annotation consistency and usable corpora, from which the training data set is formed.
The common question set is a set of questions with prepared answers; it may be a publicly used question set, or it may be compiled by service personnel, using professional knowledge, from the questions users frequently ask during the marketing of financial products. Illustratively, the common question set can be expanded by manually extending each standard question into similar questions: if 100 classes of standard questions are collected, with 10 semantically similar expanded questions per class, the 1000 resulting standard question-answer pairs serve as the pre-stored question-answer pairs.
According to the technical scheme provided by this embodiment, the third feature data of the user question and of the candidate questions are spliced and used as the input of the neural network model; a target question is then determined among the candidate questions according to the number and categories of the sub-questions output by the model, and the answer associated with the target question is taken as the answer to the user question. Because both the similarity between the user question and the candidate questions and the business scenario are considered, every sub-question in the user question can be answered within the corresponding service scope, improving the answer accuracy and intelligence of the question-answering system.
Example five
Fig. 5 is a device for processing a question and answer according to the fifth embodiment of the present application, which is applicable to a case where a human-computer interaction question and answer system receives a user question and feeds back a matching answer to the user for the user question. The device can be realized by software and/or hardware, and can be integrated in electronic equipment such as an intelligent terminal.
As shown in fig. 5, the apparatus may include:
an initial question determining module 510, configured to perform first feature matching on the user question and a pre-stored question in a pre-stored question-answer pair, and take at least two pre-stored questions successfully matched as initial questions;
a candidate question determining module 520, configured to perform second feature matching on the user question and the initial question, and use the initial question successfully matched as a candidate question;
a target question determining module 530, configured to select a target question from the candidate questions according to third feature data between the user question and the candidate questions, and use an answer associated with the target question as an answer of the user question; wherein the first characteristic is different from the second characteristic.
According to the technical scheme, the user question is matched against the questions in the pre-stored question-answer pairs so that its answer is sought among known answers. The user question and the pre-stored questions undergo multiple rounds of feature matching at different levels and of different types, gradually narrowing the search range for the answer; the pre-stored question with the highest matching degree is determined as the target question, and the answer associated with it is taken as the answer to the user question. This multi-level, multi-angle feature matching ensures that every sub-question in the user question receives a corresponding answer, improving the answer accuracy and intelligence of the question-answering system and, in turn, the user experience.
Optionally, the first characteristic is a frequency of occurrence of a language unit constituting the user question in the pre-stored question, and the language unit includes a word or a word; the second feature is a semantic feature; the third characteristic data includes: syntactic, contextual and semantic similarity features.
Optionally, the initial problem determination module 510 includes:
the language unit segmentation submodule is used for segmenting the user question into independent language units;
the first similarity score calculating submodule is used for calculating a first similarity score between the user question and the pre-stored question according to the frequency of the pre-stored question in the pre-stored question-answer pair of each language unit;
and the initial problem determining submodule is used for selecting at least two pre-stored problems with the first similarity score being larger than a preset similarity threshold value as initial problems.
Optionally, the first similarity score calculating sub-module is specifically configured to:
calculating a first similarity score of the user question and the pre-stored question according to the following formula:
score(Q, faq_j) = Σ_i p_i · w_i · f(t_i, faq_j) · (k_1 + 1) / ( f(t_i, faq_j) + k_1 · (1 − b + b · |faq_j| / avgFAQ) )
w_i = log( (N − n(t_i) + 0.5) / (n(t_i) + 0.5) )
wherein t_i represents a language unit in the user question Q; faq_j represents the pre-stored question; f(t_i, faq_j) represents the frequency with which t_i occurs in the pre-stored question faq_j; k_1 and b are the first and the second adjustment factor; w_i represents the relevance weight; |faq_j| is the word length of faq_j and avgFAQ the average word length of the pre-stored questions; N represents the number of the pre-stored question-answer pairs; n(t_i) represents the number of pre-stored questions faq_j that contain t_i; and p_i is a weight representing the importance of t_i.
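As a hedged illustration, and assuming the first similarity score follows the standard BM25 form that the variable definitions above suggest (k_1, b, avgFAQ, N, and n(t_i) are the classic BM25 quantities), a minimal sketch might look like this. The default values of `k1` and `b`, the `+ 1` smoothing inside the logarithm, and the default importance weight of 1.0 are all assumptions, not values stated in this application:

```python
import math

def first_similarity_score(query_units, faq_tokens, corpus, p=None,
                           k1=1.5, b=0.75):
    """BM25-style score between the segmented user question (query_units)
    and one pre-stored question (faq_tokens), given the full list of
    tokenized pre-stored questions (corpus)."""
    N = len(corpus)
    avg_len = sum(len(doc) for doc in corpus) / N   # avgFAQ
    p = p or {}                                     # optional importance weights p_i
    score = 0.0
    for t in query_units:
        f = faq_tokens.count(t)                     # f(t_i, faq_j)
        n_t = sum(1 for doc in corpus if t in doc)  # n(t_i)
        w = math.log((N - n_t + 0.5) / (n_t + 0.5) + 1)  # relevance weight w_i (smoothed)
        denom = f + k1 * (1 - b + b * len(faq_tokens) / avg_len)
        score += p.get(t, 1.0) * w * f * (k1 + 1) / denom
    return score
```

A pre-stored question sharing language units with the user question scores higher than one sharing none, which is what the initial-question selection step relies on.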
Optionally, the performing second feature matching on the user question and the initial questions and taking the successfully matched initial questions as candidate questions includes:
judging, by using a semantic similarity model and according to the user question and the text content information and sentence structure information of each initial question, whether each initial question and the user question form a semantically similar sentence pair;
and if so, determining the initial question as a candidate question.
Optionally, the apparatus further comprises: the semantic similarity model training module is used for training the semantic similarity model before judging whether the initial problem can form a semantic similar sentence pair with the user problem by utilizing the semantic similarity model according to the user problem and the text content information and sentence structure information of each initial problem;
the semantic similarity model training module comprises: a label data determining submodule, configured to determine label data of training sample sentence pairs by using a pre-trained semantically-similar-sentence-pair judgment model; wherein the label data comprises: the semantically similar sentence pair classification attribute and the sentence-pair semantic similarity score;
the characteristic data determining submodule is used for extracting the text content information and the sentence structure information of the training sample sentence pair as characteristic data;
and the semantic similarity model training submodule is used for training the semantic similarity model by taking the feature data and the label data as training data so as to enable the semantic similarity model to output classification attributes of semantic similar sentences and sentence-to-semantic similarity scores.
Optionally, the apparatus further comprises: the semantic similar sentence pair judgment model training sample construction module is specifically used for constructing a training sample of the semantic similar sentence pair judgment model;
the semantic similar sentence pair judgment model training sample construction module comprises: a positive example sample sentence pair construction submodule, configured to perform the first feature matching on two sample problems in a training sample sentence pair, and if matching is successful, take the training sample sentence pair as a positive example sample sentence pair;
and the negative example sample sentence pair construction submodule is used for taking the training sample sentence pair as a negative example sample sentence pair if the matching fails.
Optionally, the apparatus further includes a syntactic characteristic determining module, specifically configured to determine syntactic characteristics;
the syntactic characteristic determining module includes: a syntax structure information determining submodule, configured to analyze the user question and the candidate question respectively by using a dependency syntax, so as to obtain syntax structure information; wherein the syntax structure information is information associated with semantics;
a sentence component information and sentence component combination relation extraction submodule for extracting sentence component information and sentence component combination relations of the user question and the candidate question respectively;
and the syntactic characteristic determining submodule is used for respectively vectorizing the syntactic structure information according to the sentence components and the number of the sentence component combination relations, and splicing vectorized results to serve as syntactic characteristics.
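The count-based vectorization and splicing performed by the syntactic feature submodule can be sketched as follows. The component inventory (SBV/HED/VOB, LTP-style dependency relation tags) is an illustrative assumption; the application does not name a specific tag set:

```python
def syntactic_feature(user_components, candidate_components, inventory):
    """Vectorize syntax-structure information as counts of each sentence
    component type, then splice the user-question vector and the
    candidate-question vector into one syntactic feature."""
    def counts(components):
        return [components.count(c) for c in inventory]
    return counts(user_components) + counts(candidate_components)
```

Each half of the resulting vector has one slot per component type, so the spliced feature has twice the inventory length.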
Optionally, the apparatus further comprises: a context feature determination module, specifically configured to determine a context feature;
a contextual feature determination module comprising: a context semantic information extraction submodule for extracting context semantic information of the user question and the candidate question respectively;
and a vectorization submodule, configured to vectorize the context semantic information respectively and splice the vectorization results to serve as the context features.
Optionally, the semantic similarity feature is the sentence pair semantic similarity score output by the semantic similarity model.
Optionally, the target problem determining module 530 includes:
the third characteristic data splicing sub-module is used for splicing the third characteristic data of the user question and the candidate question as the input of a neural network model;
and the target problem determining submodule is used for determining a target problem in the candidate problems according to the number of the sub-problems of the user problem output by the neural network model and the category of the sub-problems.
Optionally, the target problem determination sub-module includes:
a target question determination first unit configured to select, as a target question, a question that coincides with the category of the sub-question from among the candidate questions;
and the target problem determination second unit is used for determining the number of the target problems according to the number of the sub-problems so as to check whether the sub-problems all have corresponding target problems.
The question-answer processing device provided by the embodiment of the invention can execute the question-answer processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the question-answer processing method.
Example six
A sixth embodiment of the present application further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a question-and-answer processing method, including:
performing first feature matching on the user question and pre-stored questions in a pre-stored question-answer pair, and taking at least two pre-stored questions successfully matched as initial questions;
performing second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question;
selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question; wherein the first characteristic is different from the second characteristic.
Storage media refers to any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or in a different, second computer system connected to that computer system through a network (such as the Internet); the second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the above-described question-answering processing operation, and may also perform related operations in the question-answering processing method provided in any embodiments of the present application.
Example seven
A seventh embodiment of the present application provides an electronic device, where the question and answer processing apparatus provided in the embodiment of the present application may be integrated in the electronic device, and the electronic device may be configured in a system, or may be a device that performs part or all of functions in the system. Fig. 6 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application. As shown in fig. 6, the present embodiment provides an electronic device 600, which includes: one or more processors 620; the storage device 610 is configured to store one or more programs, and when the one or more programs are executed by the one or more processors 620, the one or more processors 620 are enabled to implement the question and answer processing method provided in the embodiment of the present application, the method includes:
performing first feature matching on the user question and pre-stored questions in a pre-stored question-answer pair, and taking at least two pre-stored questions successfully matched as initial questions;
performing second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question;
selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question; wherein the first characteristic is different from the second characteristic.
Of course, those skilled in the art can understand that the processor 620 also implements the technical solution of the question answering processing method provided in any embodiment of the present application.
The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic device 600 includes a processor 620, a storage device 610, an input device 630, and an output device 640; the number of the processors 620 in the electronic device may be one or more, and one processor 620 is taken as an example in fig. 6; the processor 620, the storage device 610, the input device 630, and the output device 640 in the electronic apparatus may be connected by a bus or other means, and are exemplified by being connected by a bus 650 in fig. 6.
The storage device 610 is a computer-readable storage medium, and can be used to store software programs, computer-executable programs, and module units, such as program instructions corresponding to the question answering processing method in the embodiment of the present application.
The storage device 610 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. In addition, the storage 610 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 610 may further include memory located remotely from the processor 620, which may be connected via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 630 may be used to receive input numbers, character information, or voice information, and to generate key signal inputs related to user settings and function control of the electronic device. The output device 640 may include a display screen, a speaker, and other electronic devices.
The question answering processing device, the medium and the electronic equipment provided in the embodiments can execute the question answering processing method provided in any embodiment of the application, and have corresponding functional modules and beneficial effects for executing the method. For technical details that are not described in detail in the above embodiments, reference may be made to the question answering method provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (15)

1. A question-answer processing method, characterized in that the method comprises:
performing first feature matching on the user question and pre-stored questions in a pre-stored question-answer pair, and taking at least two pre-stored questions successfully matched as initial questions;
performing second feature matching on the user question and the initial question, and taking the initial question successfully matched as a candidate question;
selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and taking an answer associated with the target question as an answer of the user question; wherein the first characteristic is different from the second characteristic.
2. The method according to claim 1, wherein the first characteristic is frequency of occurrence of language units constituting the user question in the pre-stored question, the language units including words or phrases; the second feature is a semantic feature; the third characteristic data includes: syntactic, contextual and semantic similarity features.
3. The method of claim 2, wherein the first feature matching of the user question with the pre-stored questions in the pre-stored question-answer pair and the successful matching of at least two of the pre-stored questions as initial questions comprises:
segmenting the user question into independent language units;
calculating a first similarity score between the user question and the pre-stored question according to the frequency of the pre-stored question in the pre-stored question-answer pair of each language unit;
and selecting at least two pre-stored questions with the first similarity score larger than a preset similarity threshold value as initial questions.
4. The method of claim 3, wherein calculating a first similarity score between the user question and the pre-stored question according to the frequency of occurrence of each pre-stored question in the pre-stored question-answer pair by each linguistic unit comprises:
calculating a first similarity score of the user question and the pre-stored question according to the following formula:
score(Q, faq_j) = Σ_i p_i · w_i · f(t_i, faq_j) · (k_1 + 1) / ( f(t_i, faq_j) + k_1 · (1 − b + b · |faq_j| / avgFAQ) )
w_i = log( (N − n(t_i) + 0.5) / (n(t_i) + 0.5) )
wherein t_i represents a language unit in the user question Q; faq_j represents the pre-stored question; f(t_i, faq_j) represents the frequency with which t_i occurs in the pre-stored question faq_j; k_1 and b are the first and the second adjustment factor; w_i represents the relevance weight; |faq_j| is the word length of faq_j and avgFAQ the average word length of the pre-stored questions; N represents the number of the pre-stored question-answer pairs; n(t_i) represents the number of pre-stored questions faq_j that contain t_i; and p_i is a weight representing the importance of t_i.
5. The method of claim 2, wherein the performing second feature matching on the user question and the initial question and taking the successfully matched initial question as a candidate question comprises:
judging whether the initial problems can form semantic similar sentence pairs with the user problems or not by utilizing a semantic similarity model according to the user problems and the text content information and sentence structure information of each initial problem;
and if so, determining the initial problem as a candidate problem.
6. The method according to claim 5, further comprising, before the semantic similarity model is used to determine whether an initial question and the user question can form a semantically similar sentence pair according to the user question and the text content information and sentence structure information of each initial question, a training process of the semantic similarity model:
determining label data of training sample sentence pairs by utilizing a pre-trained semantically-similar-sentence-pair judgment model; wherein the label data comprises: a semantically-similar-sentence classification attribute and a sentence-pair semantic similarity score;
extracting the text content information and the sentence structure information of the training sample sentence pairs as feature data;
and taking the feature data and the label data as training data to train the semantic similarity model, so that the semantic similarity model outputs the semantically-similar-sentence classification attribute and the sentence-pair semantic similarity score.
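Claim 6 is essentially a teacher-labeling step: a pre-trained judgment model supplies the labels, and handcrafted features supply the inputs. A sketch under that reading, where `teacher` and `extract_features` are hypothetical stand-ins for the pre-trained judgment model and the feature extractor, not names from the patent:

```python
def build_training_data(sentence_pairs, teacher, extract_features):
    """For each training sentence pair, obtain label data (a similar /
    not-similar class plus a similarity score) from the teacher model and
    text-content + sentence-structure features from the extractor."""
    training_data = []
    for pair in sentence_pairs:
        is_similar, score = teacher(pair)      # label data: (class, score)
        features = extract_features(pair)      # content + structure features
        training_data.append((features, (is_similar, score)))
    return training_data
```

The resulting (features, labels) tuples can then be fed to any supervised trainer for the semantic similarity model.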
7. The method according to claim 6, wherein the training samples of the semantically-similar-sentence-pair judgment model are constructed as follows:
performing the first feature matching on the two sample questions in a training sample sentence pair, and if the matching succeeds, taking the training sample sentence pair as a positive example sentence pair;
and if the matching fails, taking the training sample sentence pair as a negative example sentence pair.
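The sample construction of claim 7 can be sketched as a single partition step; `first_feature_match` below is a hypothetical predicate standing in for the lexical first-feature matcher of the earlier claims:

```python
def split_sample_pairs(pairs, first_feature_match):
    """Partition training sentence pairs: a pair whose two sample questions
    pass the first feature matching becomes a positive example, otherwise a
    negative example."""
    positives, negatives = [], []
    for a, b in pairs:
        (positives if first_feature_match(a, b) else negatives).append((a, b))
    return positives, negatives
```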
8. The method according to claim 2, further comprising a determination process of the syntactic features:
analyzing the user question and the candidate question by dependency syntax parsing to obtain syntactic structure information; wherein the syntactic structure information is information associated with semantics;
extracting the sentence component information and the sentence component combination relations of the user question and the candidate question respectively;
and vectorizing the syntactic structure information according to the numbers of sentence components and sentence component combination relations, and splicing the vectorized results to serve as the syntactic features.
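The count-then-splice vectorization of claim 8 can be sketched as below. The component and relation inventories are hypothetical; a real system would take them from its dependency parser's tag sets:

```python
from collections import Counter

# Hypothetical inventories of sentence components and combination relations.
COMPONENTS = ["subject", "predicate", "object", "attribute", "adverbial"]
RELATIONS = ["subject-predicate", "verb-object", "attribute-head"]

def syntax_feature(parse):
    """Vectorize syntactic structure information: count each sentence
    component and each component combination relation, then splice the two
    count vectors. `parse` is a list of (component, relation) pairs taken
    from a dependency parse."""
    comp = Counter(c for c, _ in parse)
    rel = Counter(r for _, r in parse)
    # Counter returns 0 for missing keys, so absent labels yield zeros.
    return [comp[c] for c in COMPONENTS] + [rel[r] for r in RELATIONS]
```

The user question's vector and the candidate question's vector would each be produced this way and concatenated into the final syntactic feature.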
9. The method of claim 2, wherein the method further comprises a context feature determination process:
extracting context semantic information of the user question and the candidate question respectively;
and respectively vectorizing the context semantic information, and splicing the vectorized results to serve as the context features.
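One common way to vectorize context semantic information, used here purely as an illustration of claim 9's splice step, is to mean-pool per-token embeddings; the `embeddings` lookup is a hypothetical stand-in, not part of the patent:

```python
def mean_pool(tokens, embeddings, dim=3):
    """Average per-token vectors into one sentence vector. `embeddings`
    maps token -> vector; `dim` must match the embedding dimensionality.
    Unknown tokens and empty input fall back to zero vectors."""
    vecs = [embeddings.get(t, [0.0] * dim) for t in tokens] or [[0.0] * dim]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def context_feature(user_tokens, cand_tokens, embeddings):
    # splice the two vectorized results into a single feature vector
    return mean_pool(user_tokens, embeddings) + mean_pool(cand_tokens, embeddings)
```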
10. The method of claim 6, wherein the semantic similarity feature is the sentence-to-semantic similarity score output by the semantic similarity model.
11. The method of claim 2, wherein selecting a target question from the candidate questions according to third feature data between the user question and the candidate questions, and using an answer associated with the target question as an answer to the user question, comprises:
splicing the third feature data of the user question and the candidate questions to serve as the input of a neural network model;
and determining target questions among the candidate questions according to the number of sub-questions of the user question and the categories of the sub-questions output by the neural network model.
12. The method of claim 11, wherein determining a target question among the candidate questions according to the number of sub-questions of the user question and the category of the sub-questions output by the neural network model comprises:
selecting, from the candidate questions, questions whose categories accord with the categories of the sub-questions as target questions;
and determining the number of the target questions according to the number of the sub-questions, so as to check whether every sub-question has a corresponding target question.
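The selection and coverage check of claims 11-12 can be sketched as follows, assuming the neural network has already produced the list of sub-question categories and that each candidate carries a category label (both are illustrative structures, not names from the patent):

```python
def select_targets(candidates, sub_questions):
    """For each predicted sub-question category, pick a candidate question
    of the matching category, then check that every sub-question found a
    corresponding target. `candidates` is a list of (question, category)
    pairs; `sub_questions` is the list of predicted categories."""
    targets = []
    for category in sub_questions:
        match = next((q for q, c in candidates if c == category), None)
        if match is not None:
            targets.append(match)
    all_covered = len(targets) == len(sub_questions)
    return targets, all_covered
```

When `all_covered` is false, some sub-question has no matching candidate, which is exactly the condition the claim's check is meant to surface.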
13. A question-answering processing apparatus characterized by comprising:
the initial question determining module is used for carrying out first feature matching on the user question and a pre-stored question in a pre-stored question-answer pair, and taking at least two pre-stored questions which are successfully matched as initial questions;
the candidate question determining module is used for performing second feature matching on the user question and the initial questions, and taking the successfully matched initial questions as candidate questions;
a target question determining module, configured to select a target question from the candidate questions according to third feature data between the user question and the candidate questions, and use an answer associated with the target question as an answer to the user question; wherein the first characteristic is different from the second characteristic.
14. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing the question-answer processing method according to any one of claims 1 to 12.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the question-answer processing method according to any one of claims 1 to 12 when executing the computer program.
CN202110349133.XA 2021-03-31 2021-03-31 Question and answer processing method and device, medium and electronic equipment Active CN112989001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349133.XA CN112989001B (en) 2021-03-31 2021-03-31 Question and answer processing method and device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112989001A true CN112989001A (en) 2021-06-18
CN112989001B CN112989001B (en) 2023-05-26

Family

ID=76338725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110349133.XA Active CN112989001B (en) 2021-03-31 2021-03-31 Question and answer processing method and device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112989001B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116204726A (en) * 2023-04-28 2023-06-02 杭州海康威视数字技术股份有限公司 Data processing method, device and equipment based on multi-mode model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920654A (en) * 2018-06-29 2018-11-30 泰康保险集团股份有限公司 A kind of matched method and apparatus of question and answer text semantic
CN111858859A (en) * 2019-04-01 2020-10-30 北京百度网讯科技有限公司 Automatic question-answering processing method, device, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN111353310B (en) Named entity identification method and device based on artificial intelligence and electronic equipment
CN110110062B (en) Machine intelligent question and answer method and device and electronic equipment
CN108829682B (en) Computer readable storage medium, intelligent question answering method and intelligent question answering device
CN112329824A (en) Multi-model fusion training method, text classification method and device
CN110175229A (en) A kind of method and system carrying out online training based on natural language
CN112036705A (en) Quality inspection result data acquisition method, device and equipment
CN112287090A (en) Financial question asking back method and system based on knowledge graph
CN111782793A (en) Intelligent customer service processing method, system and equipment
WO2023040516A1 (en) Event integration method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN111738018A (en) Intention understanding method, device, equipment and storage medium
CN112989001B (en) Question and answer processing method and device, medium and electronic equipment
CN117573985A (en) Information pushing method and system applied to intelligent online education system
CN116450855A (en) Knowledge graph-based reply generation strategy method and system for question-answering robot
Arbaaeen et al. Natural language processing based question answering techniques: A survey
Acheampong et al. Answer triggering of factoid questions: A cognitive approach
CN116562280A (en) Literature analysis system and method based on general information extraction
CN113742445B (en) Text recognition sample obtaining method and device and text recognition method and device
CN116186219A (en) Man-machine dialogue interaction method, system and storage medium
Otani et al. Large-scale acquisition of commonsense knowledge via a quiz game on a dialogue system
CN114841143A (en) Voice room quality evaluation method and device, equipment, medium and product thereof
Rosander et al. Email Classification with Machine Learning and Word Embeddings for Improved Customer Support
Dikshit et al. Automating Questions and Answers of Good and Services Tax system using clustering and embeddings of queries
CN117541044B (en) Project classification method, system, medium and equipment based on project risk analysis
WO2024098282A1 (en) Geometric problem-solving method and apparatus, and device and storage medium
CN117291192B (en) Government affair text semantic understanding analysis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant