CN117493505A - Intelligent question-answering method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117493505A
Authority
CN
China
Prior art keywords
question, processed, questions, candidate, answering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311339533.8A
Other languages
Chinese (zh)
Inventor
李东根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202311339533.8A priority Critical patent/CN117493505A/en
Publication of CN117493505A publication Critical patent/CN117493505A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G06F 40/35 Discourse or dialogue representation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides an intelligent question-answering method, device, equipment and storage medium. The method includes: acquiring a to-be-processed question; selecting, from a question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement; performing semantic analysis on the to-be-processed question and the plurality of candidate questions using a semantic matching model to obtain a target question corresponding to the to-be-processed question, where the semantic matching model is obtained by fine-tuning a pre-trained large language model; and determining, according to the target question, the answer corresponding to the to-be-processed question, thereby improving the accuracy of the intelligent question-answering result. In addition, in this scheme, fine-tuning the pre-trained large language model on a small amount of training data yields a semantic matching model with good performance and strong generalization, so that when the semantic matching model is used for intelligent question-answering, the target question corresponding to the to-be-processed question can be matched more accurately, further improving the accuracy of the intelligent question-answering result.

Description

Intelligent question-answering method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an intelligent question-answering method, device, equipment and storage medium.
Background
Intelligent question answering is one of the core technologies of human-computer interaction: for a question posed by a user, it can automatically find a matching standard question in a question-answer knowledge base and push the answer to that standard question to the user, greatly reducing the burden of manual answering. Intelligent question answering is widely applied in fields such as self-service and intelligent customer service.
A question-answer matching model can be used to perform intelligent question answering on the questions posed by users. However, existing question-answer matching models depend heavily on their training data and generalize poorly, so for some questions they cannot produce highly accurate matching results.
Disclosure of Invention
The embodiment of the invention provides an intelligent question-answering method, device, equipment and storage medium, so as to improve the accuracy of question-answer matching results.
In a first aspect, an embodiment of the present invention provides an intelligent question-answering method, where the method includes:
acquiring a to-be-processed question;
selecting, from a question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement;
performing semantic analysis on the to-be-processed question and the plurality of candidate questions using a semantic matching model to obtain a target question corresponding to the to-be-processed question, wherein the semantic matching model is obtained by fine-tuning a pre-trained large language model;
and determining an answer corresponding to the to-be-processed question according to the target question.
In a second aspect, an embodiment of the present invention provides an intelligent question-answering apparatus, including:
the acquisition module is used for acquiring a to-be-processed question;
the selection module is used for selecting, from a question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement;
the processing module is used for performing semantic analysis on the to-be-processed question and the plurality of candidate questions using a semantic matching model to obtain a target question corresponding to the to-be-processed question, wherein the semantic matching model is obtained by fine-tuning a pre-trained large language model;
and the determining module is used for determining an answer corresponding to the to-be-processed question according to the target question.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, a communication interface; wherein the memory has executable code stored thereon that, when executed by the processor, causes the processor to perform the intelligent question-answering method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-transitory machine-readable storage medium having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to at least implement the intelligent question-answering method according to the first aspect.
In the intelligent question-answering scheme provided by the embodiment of the invention, when an intelligent question-answering service is provided for a user, the to-be-processed question in the user's request is first acquired. Then, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are selected from the question-answer library, and semantic analysis is performed on the to-be-processed question and the plurality of candidate questions using a semantic matching model to obtain the target question corresponding to the to-be-processed question, where the semantic matching model is obtained by fine-tuning a pre-trained large language model and the target question is the reference standard question matched with the to-be-processed question. Finally, the answer corresponding to the to-be-processed question is determined according to the target question.
According to this scheme, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are first screened out of the question-answer library, and the semantic matching model then further screens these candidates to determine the target question corresponding to the to-be-processed question. In other words, the answer is determined through two rounds of screening, so the target question can be matched more accurately, improving the accuracy of the intelligent question-answering result. In addition, by exploiting the semantic understanding capability of a large language model, a well-performing semantic matching model with strong generalization can be obtained after fine-tuning the pre-trained model on a small amount of training data. The semantic matching model can therefore better analyze the semantics of the to-be-processed question and the candidate questions, improving the accuracy of semantic matching, so that the target question semantically closest to the user's question is selected and the accuracy of the intelligent question-answering result is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is an application scenario diagram of an intelligent question-answering method according to an embodiment of the present invention;
FIG. 1b is an application scenario diagram of an intelligent question-answering method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an intelligent question-answering method according to an embodiment of the present invention;
FIG. 3 is a flowchart of determining a target question corresponding to a to-be-processed question according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a preset prompt learning template according to an embodiment of the present invention;
FIG. 5 is an application schematic diagram of an intelligent question-answering method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an application of semantic matching processing using a semantic matching model according to an embodiment of the present invention;
FIG. 7 is a flowchart of a semantic matching model training method according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an intelligent question-answering device according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
In current intelligent question-answering schemes, for the to-be-processed question in an intelligent question-answering task, a question-answer matching model is used to match, from a question-answer library, the target question corresponding to the to-be-processed question. Specifically, the question-answer matching model first extracts the feature vectors corresponding to the key information in the to-be-processed question and in each of a plurality of reference questions in the question-answer library, and then computes the similarity among these feature vectors to determine the target question closest to the to-be-processed question. That is, the key information in the to-be-processed question is matched against the key information in the reference questions to find the closest target question.
Because the existing question-answer matching model matches on key information, it cannot produce highly accurate results for some hard-to-distinguish to-be-processed questions, so the accuracy of its matching results is low. Moreover, because the existing model is trained on pairs of similar questions, it depends heavily on its training data and cannot accurately recognize similar question pairs absent from that data, so its generalization is poor and its accuracy is low.
In order to solve these problems, the embodiment of the invention provides a new intelligent question-answering scheme. In this scheme, a pre-trained large language model is fine-tuned with a small amount of training data to obtain a well-performing semantic matching model. The trained semantic matching model then performs semantic analysis on the to-be-processed question and on a plurality of candidate questions whose similarity to it meets a requirement, so as to select the target question semantically closest to the user's question, thereby improving the accuracy of the intelligent question-answering result and better providing intelligent question-answering services to users.
For a better understanding of the scheme, before the specific implementation of the intelligent question-answering method is introduced, the scheme is described by way of example with reference to an application scenario.
The execution subject of the intelligent question-answering method provided in this embodiment may be an intelligent question-answering device. The device may be a physical robot, for example a mobile robot, an operation robot, a chat robot, or a sweeping robot, which directly provides an intelligent question-answering service to the user, as shown in FIG. 1a.
The user can directly initiate an intelligent question-answer request to the robot device by voice, keyboard, gesture, or the like. After receiving the request, the robot device first acquires the to-be-processed question. The to-be-processed question is any question requested by the user; its content and type are not specifically limited in the embodiment of the invention. Then, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are selected from the question-answer library, and semantic analysis is performed on the to-be-processed question and the candidate questions using a semantic matching model to obtain the target question that best matches the to-be-processed question. The answer corresponding to the to-be-processed question is then determined according to the target question. The semantic matching model is obtained by fine-tuning a pre-trained large language model, which fully elicits the language understanding and reasoning capabilities of the large language model, so that a well-performing semantic matching model is obtained from a small number of training samples. Finally, the robot device returns the answer corresponding to the to-be-processed question to the user as the question-answering result.
Alternatively, the intelligent question-answering device may be an intelligent question-answering service platform communicatively connected with a request end/client, as shown in FIG. 1b. The client may be any computing device with data transmission capability, for example a mobile phone, a personal computer (PC), a tablet computer, or an installed application program. When the user needs intelligent question answering, the client or request end initiates an intelligent question-answer request to the intelligent question-answering device. After receiving the request, the device acquires the to-be-processed question, selects from the question-answer library a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement, and performs semantic analysis on the to-be-processed question and the candidate questions using a semantic matching model to obtain the target question that best matches the to-be-processed question. Finally, the answer corresponding to the to-be-processed question is determined according to the target question and sent to the corresponding client or request end as the question-answering result.
In the embodiment of the invention, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are screened out of the question-answer library, and the to-be-processed question and the candidate questions are then analyzed semantically with the semantic matching model to further screen out the target question matched with the to-be-processed question, so the target question can be matched more accurately and the accuracy of the intelligent question-answering result is improved. In addition, because the semantic matching model is obtained by fine-tuning a pre-trained large language model, it has good semantic understanding capability and strong generalization; analyzing the semantics of the to-be-processed question and the candidate questions with this model improves the accuracy of semantic matching, so the target question semantically closest to the user's question is selected and the accuracy of the intelligent question-answering result is improved.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the case where there is no conflict between the embodiments, the following embodiments and features in the embodiments may be combined with each other.
Fig. 2 is a flowchart of an intelligent question-answering method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
201. Acquire the to-be-processed question.
202. Select, from a question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement.
203. Perform semantic analysis on the to-be-processed question and the plurality of candidate questions using a semantic matching model to obtain a target question corresponding to the to-be-processed question, where the semantic matching model is obtained by fine-tuning a pre-trained large language model.
204. Determine an answer corresponding to the to-be-processed question according to the target question.
When an intelligent question-answer request is processed, the to-be-processed question is first acquired. The to-be-processed question is the question for which an answer is requested. In practice, the question data carried in an intelligent question-answer request initiated by a user may be question text, a question image, question speech, or a table carrying the question content. When the intelligent question-answering device receives the request, it can preprocess the question data carried in the request and obtain the to-be-processed question from the preprocessed data.
For example, when the question data carried in the request is a query image, image recognition can be performed on the image to obtain the query content it contains, and the query content is then converted into query text to obtain the to-be-processed question. When the question data is query speech, the speech can first be converted into the corresponding query text, which is then taken as the to-be-processed question.
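The modality handling just described can be sketched as a simple dispatcher. This is only an illustration: `recognize_image` and `transcribe_speech` are hypothetical stand-ins for real OCR and speech-recognition components, which the patent does not specify.

```python
# Sketch of the preprocessing step: convert whatever modality the
# question arrives in (text, image, or speech) into plain question
# text. The converter callables are injected so that real OCR/ASR
# engines can be plugged in; they are assumptions, not part of the
# patent's disclosure.

def normalize_question(data, modality,
                       recognize_image=None, transcribe_speech=None):
    """Return the question text extracted from the request payload."""
    if modality == "text":
        return data.strip()
    if modality == "image":
        if recognize_image is None:
            raise ValueError("an OCR callable is required for images")
        return recognize_image(data).strip()
    if modality == "speech":
        if transcribe_speech is None:
            raise ValueError("an ASR callable is required for speech")
        return transcribe_speech(data).strip()
    raise ValueError(f"unsupported modality: {modality}")
```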
In addition, in practice the same question may correspond to multiple language varieties and multiple different formulations. Therefore, in another optional implementation, to further improve the accuracy of the intelligent question-answering result, language-type conversion can be performed on the to-be-processed question after it is obtained. For example, if the obtained question is text in dialect or Internet slang, language-type conversion can be applied to it to obtain the processed to-be-processed question.
After the to-be-processed question is obtained, a plurality of candidate questions whose similarity to it meets a requirement are selected from the question-answer library. The question-answer library stores a plurality of reference questions and their corresponding answers in advance, and the answer corresponding to the to-be-processed question can be determined from these stored reference questions and answers. In addition, the question-answer pairs stored in the library can be updated in time, new pairs can be added in time, and the pairs in the library can be expanded from the existing pairs according to actual conditions, so that the library contains more and more comprehensive question-answer pairs; subsequent replies to to-be-processed questions based on the library are then more accurate, improving the user experience.
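A minimal sketch of such a question-answer library, assuming a plain in-memory mapping from reference questions to their set answers (the patent does not prescribe a storage structure):

```python
# Minimal sketch of the question-answer library: reference questions
# mapped to their set answers, with a simple upsert so the library
# can be updated and expanded over time as the text describes.

class QALibrary:
    def __init__(self):
        self._pairs = {}  # reference question -> answer

    def upsert(self, question, answer):
        """Add a new question-answer pair or update an existing one."""
        self._pairs[question] = answer

    def questions(self):
        """All reference questions, used later as the retrieval pool."""
        return list(self._pairs)

    def answer_for(self, question):
        """Answer for a reference question, or None if absent."""
        return self._pairs.get(question)
```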
In the embodiment of the invention, a text similarity algorithm can be used to select, from the question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement. The text similarity algorithm may be a cosine similarity algorithm, a term frequency-inverse document frequency (TF-IDF) algorithm, the BM25 algorithm, or the like; alternatively, a text similarity model may be used to compute the text similarity between the to-be-processed question and each reference question in the library, so as to select the candidate questions.
Specifically, in an optional embodiment, selecting the candidate questions may be implemented as follows: determine a first feature vector corresponding to the to-be-processed question and a second feature vector corresponding to each of a plurality of reference questions in the question-answer library; compute the cosine of the angle between the first feature vector and each second feature vector; if the cosine between the first feature vector and the second feature vector of a first reference question meets a second preset threshold, determine that first reference question to be a candidate question whose similarity to the to-be-processed question meets the requirement, where the first reference question is any one of the plurality of reference questions. Alternatively, a machine learning model may be used to compute the similarity between the first feature vector and each second feature vector to determine the candidate questions.
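The cosine-threshold selection above can be sketched as follows. For simplicity this sketch vectorizes questions as bag-of-words term frequencies; a real system would more likely use TF-IDF weights, BM25 scores, or learned embeddings, as the text notes, and the threshold value here is purely illustrative.

```python
# Coarse candidate screening: cosine similarity between the
# to-be-processed question and each reference question, keeping
# those that meet a preset threshold.
import math
from collections import Counter

def _vector(text):
    # Bag-of-words term-frequency vector; a stand-in for the
    # feature extraction the patent leaves unspecified.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_candidates(pending, references, threshold=0.3):
    """Return reference questions whose cosine similarity with the
    pending question meets the preset threshold."""
    q = _vector(pending)
    return [r for r in references if cosine(q, _vector(r)) >= threshold]
```

In practice the threshold trades recall against the cost of the second, model-based screening stage: a lower threshold passes more candidates to the semantic matching model.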
During intelligent question-answering, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are first coarsely screened out of the question-answer library; semantic analysis is then performed on the to-be-processed question and the candidate questions with the semantic matching model, as a further screening, to obtain the target question corresponding to the to-be-processed question. The semantic matching model is obtained by fine-tuning a pre-trained large language model on a small number of training samples, which fully elicits the language understanding and reasoning capabilities of the large language model, so the trained semantic matching model can more accurately analyze the semantics of the to-be-processed question and the candidate questions and determine the target question. Finally, the answer corresponding to the to-be-processed question is determined according to the target question.
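One common way to query a fine-tuned language model for this kind of selection is to present the pending question and the numbered candidates in a prompt and parse the model's numeric reply. The patent's actual preset prompt learning template (FIG. 4) is not reproduced in this text, so the template below is purely an illustrative assumption:

```python
# Hypothetical prompt of the kind a fine-tuned large language model
# might receive for semantic matching, plus a parser mapping the
# model's reply back to a candidate question. Both the template and
# the "0 means no match" convention are assumptions for illustration.

def build_matching_prompt(pending, candidates):
    lines = [f"Question: {pending}", "Candidates:"]
    lines += [f"{i + 1}. {c}" for i, c in enumerate(candidates)]
    lines.append("Answer with the number of the candidate whose meaning "
                 "is closest to the question, or 0 if none match.")
    return "\n".join(lines)

def parse_choice(model_output, candidates):
    """Map the model's numeric reply to a candidate, or None."""
    try:
        idx = int(model_output.strip())
    except ValueError:
        return None
    if 1 <= idx <= len(candidates):
        return candidates[idx - 1]
    return None
```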
In addition, because the training process of the semantic matching model is similar to the process of using it to perform semantic analysis on a to-be-processed question and a plurality of candidate questions to determine the target question, only the process of using the semantic matching model is described here.
The target question may be the candidate question, screened from the plurality of candidate questions, that is semantically closest to the to-be-processed question, or a preset question other than the candidate questions. For example, in practice, if the semantic matching degree between each candidate question and the to-be-processed question does not meet a preset threshold, the preset question may be directly determined as the target question corresponding to the to-be-processed question. The preset question may be, for example, "This question cannot be answered for the moment", and can be designed according to the application scenario.
From the above description it is clear that two rounds of screening are performed when determining the target question corresponding to the to-be-processed question. First, a plurality of candidate questions whose similarity to the to-be-processed question meets a requirement are screened out of the question-answer library according to the text similarity between the to-be-processed question and each reference question. Then, semantic analysis is performed on the to-be-processed question and the candidate questions with the semantic matching model to determine whether one of the candidates is semantically closest to the to-be-processed question: if such a candidate exists, it is determined as the target question; if not, the preset question is determined as the target question. This further improves the accuracy of the matched target question and thus of the intelligent question-answering result.
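The two-round screening with the preset-question fallback can be sketched end to end. The scoring functions are injected: `coarse_score` stands in for the text-similarity stage and `semantic_score` for the fine-tuned semantic matching model; both, along with the threshold values and fallback text, are assumptions rather than the patent's actual models.

```python
# Sketch of the full two-stage pipeline: coarse similarity screen,
# then fine semantic screen, falling back to a preset answer when no
# candidate is a close enough semantic match.

FALLBACK = "This question cannot be answered for the moment."

def answer_question(pending, qa_pairs, coarse_score, semantic_score,
                    coarse_threshold=0.3, semantic_threshold=0.8):
    # Stage 1: coarse screen by text similarity over the library.
    candidates = [q for q in qa_pairs
                  if coarse_score(pending, q) >= coarse_threshold]
    if not candidates:
        return FALLBACK
    # Stage 2: fine screen by semantic matching over the candidates.
    best = max(candidates, key=lambda q: semantic_score(pending, q))
    if semantic_score(pending, best) < semantic_threshold:
        return FALLBACK  # preset question path: no close semantic match
    return qa_pairs[best]
```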
In addition, for a better user experience, when no candidate question whose semantic matching degree with the to-be-processed question meets the preset requirement can be screened from the plurality of candidates, the answer corresponding to the to-be-processed question can be further determined according to its semantics by using the mapping between the plurality of reference questions in the question-answer library and their set answers.
In this embodiment of the invention, the plurality of candidate questions whose similarity to the to-be-processed question meets the requirement are first screened out of the question-answer library, and the plurality of candidate questions are then further screened by using the semantic matching model to determine the target question corresponding to the to-be-processed question. That is, when determining the answer corresponding to the to-be-processed question, the two rounds of screening allow the target question to be matched more accurately, which improves the accuracy of the intelligent question-answering result. In addition, by exploiting the semantic understanding capability of a large language model, a well-performing semantic matching model can be obtained after fine-tuning the pre-trained large language model on a small amount of training data, and the resulting semantic matching model generalizes well. Semantic analysis of the to-be-processed question and the plurality of candidate questions can therefore be performed better with the semantic matching model, which improves the accuracy of semantic matching, selects the target question semantically closest to the user's question, and further improves the accuracy of the intelligent question-answering result.
The above embodiment introduces the overall process of intelligent question-answering. During intelligent question-answering, semantic analysis is performed on the to-be-processed question and the plurality of candidate questions by using a pre-trained semantic matching model, so as to match the target question corresponding to the to-be-processed question. In this embodiment of the invention, to give the semantic matching model better semantic matching capability, the semantic matching model is obtained by fine-tuning a pre-trained large language model.
The performance of the semantic matching model directly affects the finally matched target question and, in turn, the final intelligent question-answering result. How to fully exploit the language understanding and reasoning capability of the large language model while fine-tuning it into the required semantic matching model is therefore important. In this embodiment of the invention, the semantic matching model may be obtained by fine-tuning the pre-trained large language model based on prompt learning, so that the semantic matching model has better semantic matching capability.
Next, to better understand how the trained semantic matching model in the above embodiment determines the target question corresponding to the to-be-processed question, this determination process is described by way of example with reference to fig. 3.
FIG. 3 is a flowchart of determining the target question corresponding to the to-be-processed question according to an embodiment of the present invention. As shown in fig. 3, the semantic matching model is obtained by fine-tuning a pre-trained large language model based on prompt learning, and the method comprises the following steps:
301. Acquire a to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions through a preset prompt learning template, where the preset prompt learning template is used to define the content format for describing the to-be-processed task.
302. Input the to-be-processed task into the semantic matching model to obtain the target question corresponding to the to-be-processed question.
After the plurality of candidate questions whose similarity to the to-be-processed question meets the requirement are selected from the question-answer library, semantic analysis is first performed on the to-be-processed question and the plurality of candidate questions by using the semantic matching model, so as to determine whether a candidate question semantically closest to the to-be-processed question exists among the plurality of candidate questions.
In practice, however, a conventional semantic model typically performs semantic matching by converting the to-be-processed question into a feature vector representing the semantic features of its text, converting the plurality of reference questions in the question-answer library into feature vectors representing the semantic features of their texts, and computing the similarity between the feature vector of the to-be-processed question and the feature vector of each reference question in order to screen out the candidate questions most similar to the to-be-processed question. This increases the workload of the semantic matching model and makes the whole semantic matching process complex. Moreover, because such a model decides whether the to-be-processed question is semantically related to each reference question solely on the basis of the feature vectors of the questions, and because natural language is rich and subtle, the selected questions are sometimes very noisy and contain a large amount of irrelevant information. The feature vectors extracted from the texts of the questions may then be inaccurate and fail to capture the essential features of the questions, so that the final semantic matching result is not ideal, which in turn affects the intelligent question-answering result returned for the user's request.
To improve the accuracy of semantic matching, in this embodiment of the invention the pre-trained large language model may be fine-tuned based on prompt learning to obtain the semantic matching model. Because a large language model has good language understanding and reasoning capability, fine-tuning it based on prompt learning better stimulates that capability, so that the semantic matching model has better semantic understanding and semantic matching capability; at the same time, the trained semantic matching model can better understand the to-be-processed task and complete the semantic matching processing, which improves the accuracy of intelligent question-answering.
When the semantic matching model is used to perform semantic analysis on the to-be-processed question and the plurality of candidate questions whose similarity to it meets the requirement, the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions may be acquired through a preset prompt learning template, where the preset prompt learning template is used to define the content format for describing the to-be-processed task. The to-be-processed task is then input into the semantic matching model to obtain the target question corresponding to the to-be-processed question, the semantic matching model having been trained to determine the target question corresponding to the to-be-processed question.
The preset prompt learning template may comprise various elements, such as an element for defining task input information and an element for defining task description information, and may be set according to actual requirements. In implementation, the representation elements corresponding to the prompt learning template may be determined first, and the task input information and task description information corresponding to the to-be-processed task may then be acquired based on those representation elements. When a task needs to be processed, the to-be-processed question and the plurality of candidate questions can be converted into the corresponding to-be-processed task based on the preset prompt learning template; because the preset prompt learning template defines the content format for describing to-be-processed tasks, different to-be-processed tasks are described in a uniform format. For example, any task may be described uniformly as an <input, task description> tuple, or as an <input, task description, output> triplet, and so on.
In implementation, for example, using the preset prompt learning template shown in fig. 4, the to-be-processed question and the plurality of candidate questions are taken as the task input information, and the task is to answer whichever of the candidate questions is semantically most similar to the to-be-processed question; if no semantically similar question exists, the preset question is answered.
In an alternative embodiment, the to-be-processed question and the plurality of candidate questions may be spliced together as the task input information; determining the semantic matching degree between the to-be-processed question and the plurality of candidate questions serves as the task description information; and answering the candidate question whose semantic matching degree with the to-be-processed question meets a first preset threshold, or answering the preset question if no candidate question meets the first preset threshold, serves as the task output information. In addition, because the task input information contains the plurality of candidate questions together with the to-be-processed question, separators may be added between the questions when splicing them, so that the semantic matching model can better distinguish the questions and thus better understand the task input information.
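A hedged sketch of assembling such a to-be-processed task, assuming a `<SEP>` separator token and illustrative field wording (the actual template content is defined by fig. 4 and is not reproduced here):

```python
SEP = "<SEP>"  # assumed separator token between spliced questions

def build_task(query: str, candidates: list) -> str:
    """Splice the candidate questions and the to-be-processed question
    into the task input, then attach the task description in a fixed format."""
    task_input = SEP.join(candidates + [query])
    task_description = ("Answer the candidate question semantically most "
                        "similar to the last question; if none is similar, "
                        "answer the preset question.")
    return f"Input: {task_input}\nTask: {task_description}"
```

The uniform "Input/Task" layout stands in for the content format the template defines; any real deployment would use the template's own field names.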
It can be seen from the above that the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions can be acquired through the preset prompt learning template, so that each to-be-processed question and its corresponding plurality of candidate questions are described in a uniform content format, and the semantic matching model can better understand the to-be-processed task and output the target question corresponding to each to-be-processed question more accurately.
In an alternative embodiment, to increase the inference speed of the semantic matching model, the content description corresponding to the to-be-processed task may be kept as short as possible, as may the output of the semantic matching model. To this end, the questions in the task input information may be numbered so that each question carries corresponding number information; the semantic matching model then only needs to output the number information, which shortens its output to the greatest extent and increases its inference speed.
Specifically, the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions is acquired through the preset prompt learning template; the to-be-processed task is input into the semantic matching model to obtain the question number information corresponding to the to-be-processed question; and the target question corresponding to the to-be-processed question is determined according to the question number information. The task input information corresponding to the to-be-processed task may include the plurality of candidate questions, the number information corresponding to the plurality of candidate questions, and the to-be-processed question. The task description information corresponding to the to-be-processed task may be: if a candidate question whose semantic matching degree with the to-be-processed question meets the first preset threshold exists among the plurality of candidate questions, output the number information corresponding to that candidate question; and/or, if no candidate question whose semantic matching degree with the to-be-processed question meets the preset threshold exists, output the number information corresponding to the preset question, where the preset question is a set question not contained in the question-answer library.
In this embodiment of the invention, the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions is acquired through the preset prompt learning template, and the to-be-processed task is input into the semantic matching model to obtain the target question corresponding to the to-be-processed question, so that the semantic matching model can better understand the to-be-processed task and output the target question corresponding to the to-be-processed question more accurately, which improves the accuracy of the intelligent question-answering result.
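The numbering scheme can be sketched as follows, with `0` as an illustrative number for the preset question; the prompt wording and separator are assumptions:

```python
PRESET = "This question cannot be answered at present."  # illustrative preset question

def build_numbered_task(query: str, candidates: list, sep: str = "<separator>") -> str:
    """Number each candidate so the model only has to emit a short number,
    keeping its output as brief as possible."""
    numbered = sep.join(f"{i}: {q}" for i, q in enumerate(candidates, start=1))
    return (f"{numbered}\nAnswer the number of the question semantically most "
            f"similar to: {query}. If no similar question exists, answer 0.")

def resolve_number(model_output: str, candidates: list) -> str:
    """Map the model's number back to a question; 0 (or any out-of-range
    number) resolves to the preset question."""
    idx = int(model_output.strip())
    return candidates[idx - 1] if 1 <= idx <= len(candidates) else PRESET
```

Because the model emits only a number, the output length is fixed regardless of how long the candidate questions are.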
The above embodiment describes a specific implementation of determining the target question corresponding to the to-be-processed question based on the semantic matching model. In practical applications, after the target question corresponding to the to-be-processed question is determined by the semantic matching model, the answer corresponding to the to-be-processed question may be determined according to the target question.
In implementation, if the target question is a reference question in the question-answer library, the answer corresponding to that reference question can be obtained directly from the question-answer library and used as the answer corresponding to the to-be-processed question. If the target question is the preset question, the preset question itself may be returned directly as the answer corresponding to the to-be-processed question. In addition, to improve the user experience, when the target question is the preset question, the to-be-processed question may instead be input into a question-answer model to determine the corresponding answer, the question-answer model having learned the mapping relationship between different questions and different set answers. The plurality of question-answer pairs in the constructed question-answer library can be used as a training data set to train a machine learning model, so that the trained question-answer model can determine the answer corresponding to the to-be-processed question.
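This answer-determination logic can be sketched as below, where `qa_library` maps reference questions to stored answers and `qa_model` is a placeholder callable for the trained question-answer model (both names are assumptions):

```python
def determine_answer(target_question, qa_library: dict, query: str, qa_model) -> str:
    if target_question in qa_library:
        # Target is a reference question: fetch its stored answer directly.
        return qa_library[target_question]
    # Target is the preset question: fall back to the question-answer model
    # trained on the question-answer pairs of the library.
    return qa_model(query)
```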
In recent years, with the continuous development of technology, more and more users use intelligent question-answering services in more and more real scenarios. To facilitate understanding of the specific implementation of the above embodiments, the intelligent question-answering process is illustrated below in connection with a specific application scenario; in particular, reference is made to FIGS. 5-6.
When a user has an intelligent question-answering requirement, the user initiates an intelligent question-answering request to the intelligent question-answering device, the request carrying the to-be-processed question entered by the user: "Introduce Hubei University". On receiving the to-be-processed question "Introduce Hubei University", the intelligent question-answering device selects from the question-answer library a plurality of candidate questions whose similarity to the to-be-processed question meets the requirement, namely: "Introduce Hubei University of Chinese Medicine", "Introduce Hubei University", "Introduce Hubei University of Technology", "Introduce Hubei University of Arts and Science", "Profile of Hubei University", "Introduce Hubei Medical College", "Introduce the Hubei Wufang Optoelectronic Technology Park", "Introduce Hubei Engineering Institute", and "Introduce Hubei University Zhixing College".
Next, the representation elements corresponding to the preset prompt learning template are determined, and from those representation elements the task input information of the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions is determined as: 1: Introduce Hubei University of Chinese Medicine <separator> 2: Introduce Hubei University <separator> 3: Introduce Hubei University of Technology <separator> 4: Introduce Hubei University of Arts and Science <separator> 5: Profile of Hubei University <separator> 6: Introduce Hubei Medical College <separator> 7: Introduce the Hubei Wufang Optoelectronic Technology Park <separator> 8: Introduce Hubei Engineering Institute <separator> 9: Introduce Hubei University Zhixing College. The task description information is determined as: answer the number of the candidate question semantically most similar to "Introduce Hubei University"; if no semantically similar question exists, answer "0". The to-be-processed task is then taken as the input of the semantic matching model, which outputs the question number information "2" corresponding to the to-be-processed question, as shown specifically in fig. 6.
The target question corresponding to the to-be-processed question is then determined to be "Introduce Hubei University" according to the question number information output by the semantic matching model. Finally, the answer corresponding to the to-be-processed question is retrieved from the question-answer library according to the target question and returned to the corresponding user terminal as the question-answering result.
For the specific implementation processes involved in this embodiment of the invention, reference may be made to the content of the other embodiments, which is not repeated here.
The above embodiments introduce the specific implementation of determining the target question corresponding to the to-be-processed question through the semantic matching model. To facilitate understanding of the working principle of the semantic matching model, an embodiment of the invention also provides a semantic matching model training method. The training process of the semantic matching model is described by way of example with reference to the following embodiment.
FIG. 7 is a flowchart of a semantic matching model training method according to an embodiment of the present invention. Referring to fig. 7, this embodiment provides a semantic matching model training method; the execution subject of the method may be a semantic matching model training apparatus, and it is understood that the model training apparatus may be implemented as software, or as a combination of software and hardware. Specifically, the method may comprise the following steps:
701. Obtain training samples, where each training sample comprises a question and a plurality of candidate questions whose similarity to that question meets the requirement, and each training sample corresponds to a reference target question.
702. Determine the processing task corresponding to the training sample through a preset prompt learning template.
703. Input the processing task into a pre-trained large language model to obtain the predicted target question corresponding to the processing task.
704. Fine-tune the pre-trained large language model according to the predicted target question and the reference target question to obtain the semantic matching model.
When training the semantic matching model used in the embodiment shown in fig. 3, training samples are first obtained. Each training sample comprises a question and a plurality of candidate questions whose similarity to that question meets the requirement, and each training sample corresponds to a reference target question. After the training samples are obtained, the processing task corresponding to each training sample is determined through the preset prompt learning template, and the processing task is input into the pre-trained large language model to obtain the predicted target question corresponding to the processing task. Finally, the pre-trained large language model is fine-tuned according to the predicted target question and the reference target question to obtain the semantic matching model.
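Steps 701-704 can be sketched schematically as a training loop; `llm` and `fine_tune_step` are placeholders for the pre-trained large language model and its parameter-update routine, and no real model API is implied:

```python
def train_semantic_matcher(samples, llm, fine_tune_step, build_task):
    """samples: iterable of (question, candidates, reference_target) triples."""
    for question, candidates, reference_target in samples:   # step 701
        task = build_task(question, candidates)              # step 702
        predicted_target = llm(task)                         # step 703
        # Step 704: update the model wherever prediction and reference differ.
        if predicted_target != reference_target:
            fine_tune_step(llm, task, reference_target)
    return llm
```

A real prompt-learning fine-tune would of course update the model on a loss over every sample, not only on mismatches; the conditional here merely marks where supervision enters.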
In this embodiment of the invention, a question and a plurality of candidate questions whose similarity to it meets the requirement are used as training data, and the reference target question is used as supervision information. The semantic matching model trained in this way can accurately understand each to-be-processed task and accurately output the predicted target question corresponding to the to-be-processed question, so that the correct target question can be determined and the reliability of intelligent question-answering is ensured.
For the specific implementation processes involved in this embodiment of the invention, reference may be made to the content of the other embodiments, which is not repeated here.
The intelligent question-answering method provided by the embodiments of the invention can be executed in the cloud, where a number of computing nodes (cloud servers) may be deployed, each with processing resources such as computation and storage. In the cloud, a service may be provided by multiple computing nodes, although one computing node may also provide one or more services. The cloud may provide a service by exposing a service interface, which the user invokes to use the corresponding service.
For the scheme provided by the embodiments of the invention, the cloud can provide a service interface for the intelligent question-answering service. The user invokes the service interface through a terminal device to send an intelligent question-answering service request to the cloud, the request including a to-be-processed question. The cloud determines a computing node to respond to the request and uses the processing resources of that computing node to execute the following steps:
acquiring the to-be-processed question;
selecting, from the question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets the requirement;
performing semantic analysis on the to-be-processed question and the plurality of candidate questions by using a semantic matching model to obtain the target question corresponding to the to-be-processed question, where the semantic matching model is obtained by fine-tuning a pre-trained large language model;
determining the answer corresponding to the to-be-processed question according to the target question; and
sending the answer corresponding to the to-be-processed question to the terminal device.
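Composed together, the cloud-side steps above amount to a single request handler; every helper name here is a placeholder for a component described elsewhere in this document:

```python
def handle_qa_request(query, select_candidates, semantic_match, determine_answer):
    candidates = select_candidates(query)        # similarity screening against the library
    target = semantic_match(query, candidates)   # fine-tuned semantic matching model
    answer = determine_answer(target, query)     # answer lookup or fallback
    return answer                                # sent back to the terminal device
```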
For the above execution process, reference may be made to the related descriptions in the other embodiments, which are not detailed here.
The intelligent question-answering apparatus of one or more embodiments of the invention is described in detail below. Those skilled in the art will appreciate that these apparatuses may be constructed from commercially available hardware components configured through the steps taught in the present solution.
Fig. 8 is a schematic structural diagram of an intelligent question-answering device according to an embodiment of the present invention, as shown in fig. 8, where the device includes: the device comprises an acquisition module 11, a selection module 12, a processing module 13 and a determination module 14.
An acquisition module 11, configured to acquire a to-be-processed question.
A selection module 12, configured to select, from the question-answer library, a plurality of candidate questions whose similarity to the to-be-processed question meets the requirement.
A processing module 13, configured to perform semantic analysis on the to-be-processed question and the plurality of candidate questions by using a semantic matching model to obtain the target question corresponding to the to-be-processed question, where the semantic matching model is obtained by fine-tuning a pre-trained large language model.
A determining module 14, configured to determine the answer corresponding to the to-be-processed question according to the target question.
In an alternative embodiment, the semantic matching model is obtained by fine-tuning the pre-trained large language model based on prompt learning, and the processing module 13 may specifically be configured to: acquire the to-be-processed task corresponding to the to-be-processed question and the plurality of candidate questions through a preset prompt learning template, where the preset prompt learning template is used to define the content format for describing the to-be-processed task; input the to-be-processed task into the semantic matching model to obtain the question number information corresponding to the to-be-processed question; and determine the target question corresponding to the to-be-processed question according to the question number information.
In an alternative embodiment, the processing module 13 may specifically be configured to: determine the representation elements corresponding to the prompt learning template, where the representation elements comprise an element for defining task input information and an element for defining task description information; and acquire the task input information and task description information corresponding to the to-be-processed task based on the representation elements.
In an alternative embodiment, the task input information includes the plurality of candidate questions, the number information corresponding to the plurality of candidate questions, and the to-be-processed question; and the task description information is: if a candidate question whose semantic matching degree with the to-be-processed question meets a first preset threshold exists among the plurality of candidate questions, output the number information corresponding to that candidate question; and/or, if no candidate question whose semantic matching degree with the to-be-processed question meets the preset threshold exists, output the number information corresponding to the preset question, where the preset question is a question not contained in the question-answer library.
In an alternative embodiment, the determining module 14 may specifically be configured to: if the target question is the preset question, input the to-be-processed question into a question-answer model to determine the answer corresponding to the to-be-processed question, the question-answer model having learned the mapping relationship between different questions and different set answers.
In an alternative embodiment, the processing module 13 may further be configured to: obtain training samples, where each training sample comprises a question and a plurality of candidate questions whose similarity to that question meets the requirement, and each training sample corresponds to a reference target question; determine the processing task corresponding to the training sample through a preset prompt learning template; input the processing task into a pre-trained large language model to obtain the predicted target question corresponding to the processing task; and fine-tune the pre-trained large language model according to the predicted target question and the reference target question to obtain the semantic matching model.
In an alternative embodiment, the question-answer library includes a plurality of reference questions and the answers corresponding to the reference questions, and the selection module 12 may specifically be configured to: determine a first feature vector corresponding to the to-be-processed question and a second feature vector corresponding to each of the plurality of reference questions in the question-answer library; respectively calculate the cosine of the angle between the first feature vector and the second feature vector corresponding to each of the plurality of reference questions; and if the cosine of the angle between the first feature vector and the second feature vector of a first reference question meets a second preset threshold, determine the first reference question as a candidate question whose similarity to the to-be-processed question meets the requirement, where the first reference question is any one of the plurality of reference questions.
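A minimal sketch of this cosine-based first-stage selection; the feature vectors would in practice come from a text-encoding model that this document does not specify, so the inputs here are plain numeric vectors:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_candidates(query_vec, reference_vecs, threshold=0.8):
    """Keep every reference question whose angle cosine with the
    to-be-processed question meets the second preset threshold
    (the 0.8 default is an illustrative assumption)."""
    return [q for q, v in reference_vecs.items()
            if cosine(query_vec, v) >= threshold]
```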
The apparatus shown in fig. 8 may perform the steps of the intelligent question-answering method in the foregoing embodiments; for the detailed execution process and technical effects, reference is made to the descriptions in the foregoing embodiments, which are not repeated here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 9, where the electronic device may include: a processor 21, a memory 22, a communication interface 23. Wherein the memory 22 has stored thereon executable code which, when executed by the processor 21, causes the processor 21 to implement the intelligent question-answering method as in the previous embodiments.
In addition, embodiments of the present invention provide a non-transitory machine-readable storage medium having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to at least implement the intelligent question-answering method as provided in the previous embodiments.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An intelligent question-answering method, characterized by comprising the following steps:
acquiring a question to be processed;
selecting, from a question-answer library, a plurality of candidate questions whose similarity to the question to be processed satisfies a requirement;
performing semantic analysis on the question to be processed and the plurality of candidate questions by using a semantic matching model to obtain a target question corresponding to the question to be processed, wherein the semantic matching model is obtained by fine-tuning a pre-trained large language model; and
determining an answer corresponding to the question to be processed according to the target question.
2. The method of claim 1, wherein the semantic matching model is obtained by fine-tuning a pre-trained large language model based on prompt learning, and the performing semantic analysis on the question to be processed and the plurality of candidate questions by using the semantic matching model to obtain the target question corresponding to the question to be processed comprises:
constructing, through a preset prompt-learning template, a task to be processed corresponding to the question to be processed and the plurality of candidate questions, wherein the preset prompt-learning template defines a content format for describing the task to be processed;
inputting the task to be processed into the semantic matching model to obtain question number information corresponding to the question to be processed; and
determining the target question corresponding to the question to be processed according to the question number information.
3. The method according to claim 2, further comprising:
determining representation elements corresponding to the prompt-learning template, wherein the representation elements comprise an element for defining task input information and an element for defining task description information; and
acquiring the task input information and the task description information corresponding to the task to be processed based on the representation elements.
4. The method according to claim 3, wherein the task input information comprises the plurality of candidate questions, number information corresponding to the plurality of candidate questions, and the question to be processed; and
the task description information specifies that, if among the plurality of candidate questions there is a candidate question whose semantic matching degree with the question to be processed satisfies a first preset threshold, the number information corresponding to that candidate question is output; and/or
if there is no candidate question whose semantic matching degree with the question to be processed satisfies the preset threshold, number information corresponding to a preset question is output, the preset question being a question not contained in the question-answer library.
5. The method of claim 4, wherein the determining an answer corresponding to the question to be processed according to the target question comprises:
if the target question is the preset question, inputting the question to be processed into a question-answer model to determine the answer corresponding to the question to be processed, wherein the question-answer model has learned mapping relationships between different questions and different set answers.
6. The method according to claim 1, further comprising:
obtaining training samples, wherein each training sample comprises a question and a plurality of candidate questions whose similarity to that question satisfies the requirement, and each training sample corresponds to a reference target question;
determining a processing task corresponding to the training sample through a preset prompt-learning template;
inputting the processing task into a pre-trained large language model to obtain a predicted target question corresponding to the processing task; and
fine-tuning the pre-trained large language model according to the predicted target question and the reference target question to obtain the semantic matching model.
7. The method of claim 1, wherein the question-answer library comprises a plurality of reference questions and answers corresponding to the plurality of reference questions, and the selecting, from the question-answer library, a plurality of candidate questions whose similarity to the question to be processed satisfies the requirement comprises:
determining a first feature vector corresponding to the question to be processed and a second feature vector corresponding to each of the plurality of reference questions in the question-answer library;
calculating the cosine of the angle between the first feature vector and each of the second feature vectors corresponding to the plurality of reference questions; and
if the cosine of the angle between the first feature vector and the second feature vector of a first reference question satisfies a second preset threshold, determining the first reference question as a candidate question whose similarity to the question to be processed satisfies the requirement, the first reference question being any one of the plurality of reference questions.
8. An intelligent question-answering apparatus, characterized by comprising:
an acquisition module, configured to acquire a question to be processed;
a selection module, configured to select, from a question-answer library, a plurality of candidate questions whose similarity to the question to be processed satisfies a requirement;
a processing module, configured to perform semantic analysis on the question to be processed and the plurality of candidate questions by using a semantic matching model to obtain a target question corresponding to the question to be processed, wherein the semantic matching model is obtained by fine-tuning a pre-trained large language model; and
a determining module, configured to determine an answer corresponding to the question to be processed according to the target question.
9. An electronic device, characterized by comprising a memory, a processor, and a communication interface, wherein the memory stores executable code which, when executed by the processor, causes the processor to perform the intelligent question-answering method according to any one of claims 1 to 7.
10. A non-transitory machine-readable storage medium having executable code stored thereon, which, when executed by a processor of an electronic device, causes the processor to perform the intelligent question-answering method according to any one of claims 1 to 7.
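The claimed pipeline — selecting candidates by the cosine of the angle between feature vectors (claim 7), assembling a numbered prompt from a template over the candidates (claims 2–4), and mapping the model's returned number back to an answer, with a reserved number for "not in the library" that falls through to a generative question-answer model (claims 4–5) — can be sketched as follows. This is a minimal illustration only: the function names, the threshold value, the use of -1 as the preset "not found" number, and the prompt wording are assumptions for demonstration, not the patent's actual template or implementation.

```python
import math

def cosine(u, v):
    # Cosine of the angle between two feature vectors (claim 7).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_candidates(query_vec, reference, threshold=0.7):
    # reference: list of (question_text, feature_vector) pairs from the
    # question-answer library; keep those whose cosine similarity to the
    # question to be processed meets the (illustrative) second threshold.
    return [(i, q) for i, (q, v) in enumerate(reference)
            if cosine(query_vec, v) >= threshold]

def build_prompt(query, candidates):
    # A stand-in for the preset prompt-learning template: task description
    # plus task input (numbered candidates and the question to be processed).
    numbered = "\n".join(f"{i}. {q}" for i, q in candidates)
    return (
        "Task: from the numbered candidate questions below, output the number "
        "of the one matching the user question in meaning; output -1 if none "
        "matches.\n"
        f"Candidate questions:\n{numbered}\n"
        f"User question: {query}\n"
        "Answer (number only):"
    )

def resolve_answer(model_output, answers, fallback):
    # Map the model's numeric reply back to a library answer; an unknown or
    # -1 number means the target is the preset question, so the query would
    # fall through to the generative question-answer model (here, `fallback`).
    try:
        idx = int(model_output.strip())
    except ValueError:
        return fallback
    return answers.get(idx, fallback)
```

For example, with a library of two reference questions, `select_candidates` filters by similarity, `build_prompt` produces the numbered matching task for the fine-tuned model, and `resolve_answer("-1", ...)` routes an out-of-library question to the fallback model.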
CN202311339533.8A 2023-10-16 2023-10-16 Intelligent question-answering method, device, equipment and storage medium Pending CN117493505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311339533.8A CN117493505A (en) 2023-10-16 2023-10-16 Intelligent question-answering method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117493505A true CN117493505A (en) 2024-02-02

Family

ID=89681816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311339533.8A Pending CN117493505A (en) 2023-10-16 2023-10-16 Intelligent question-answering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117493505A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117875908A (en) * 2024-03-08 2024-04-12 蒲惠智造科技股份有限公司 Work order processing method and system based on enterprise management software SAAS
CN118410155A (en) * 2024-07-03 2024-07-30 宁波港信息通信有限公司 Port question-answering method based on large language model and related equipment thereof


Similar Documents

Publication Publication Date Title
CN110837550B (en) Knowledge graph-based question answering method and device, electronic equipment and storage medium
US20200301954A1 (en) Reply information obtaining method and apparatus
CN112164391B (en) Statement processing method, device, electronic equipment and storage medium
CN111428010B (en) Man-machine intelligent question-answering method and device
US20190057164A1 (en) Search method and apparatus based on artificial intelligence
CN112527998B (en) Reply recommendation method, reply recommendation device and intelligent equipment
CN112800170A (en) Question matching method and device and question reply method and device
CN117493505A (en) Intelligent question-answering method, device, equipment and storage medium
WO2020155619A1 (en) Method and apparatus for chatting with machine with sentiment, computer device and storage medium
CN110941698B (en) Service discovery method based on convolutional neural network under BERT
CN110162596B (en) Training method and device for natural language processing, automatic question answering method and device
CN116450867B (en) Graph data semantic search method based on contrast learning and large language model
CN116821301A (en) Knowledge graph-based problem response method, device, medium and computer equipment
CN118051635A (en) Conversational image retrieval method and device based on large language model
CN116881471B (en) Knowledge graph-based large language model fine tuning method and device
CN113420111A (en) Intelligent question-answering method and device for multi-hop inference problem
CN113220854A (en) Intelligent dialogue method and device for machine reading understanding
CN110413750B (en) Method and device for recalling standard questions according to user questions
CN116561271A (en) Question and answer processing method and device
CN111240787A (en) Interactive help method and system based on real scene semantic understanding
CN116431912A (en) User portrait pushing method and device
CN114328995A (en) Content recommendation method, device, equipment and storage medium
CN116992111B (en) Data processing method, device, electronic equipment and computer storage medium
CN117591658B (en) Intelligent question-answering method, device, equipment and storage medium
CN118247798B (en) OCR (optical character recognition) -oriented big data analysis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination