CN117149985B - Question and answer method, device, equipment and medium based on large model - Google Patents

Question and answer method, device, equipment and medium based on large model

Info

Publication number
CN117149985B
CN117149985B (application CN202311422458.1A)
Authority
CN
China
Prior art keywords
target
question
large model
sql
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311422458.1A
Other languages
Chinese (zh)
Other versions
CN117149985A (en)
Inventor
冯卫森
刘微
孟卫明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202311422458.1A
Publication of CN117149985A
Application granted
Publication of CN117149985B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of natural language processing, and in particular to a question-answering method, device, equipment and medium based on a large model. In the embodiment of the application, database information of a third-party database and a prompt word that prompts the large model to think about the question to be answered are added to the question-answer template, so that the large model thinks about the target question, and determines and outputs the target SQL based on the thinking result. The electronic device can search the third-party database for the target data corresponding to the target SQL output by the large model, and the large model generates reply information according to the target data, thereby realizing the question-answering process and improving the accuracy of question answering. Because the electronic device fully utilizes the data in the third-party database when replying, the reliability of the large model's reply results is greatly improved, as are the generalization capability, reliability and accuracy of the large model.

Description

Question and answer method, device, equipment and medium based on large model
Technical Field
The application relates to the technical field of natural language processing, in particular to a question-answering method, a question-answering device, question-answering equipment and question-answering media based on a large model.
Background
The existing large model exhibits remarkable generalization capability in semantic understanding, achieving a good generalization effect even with little or no labeled data.
However, the large model has an obvious defect: it is a closed model whose reply information is generated from its existing knowledge system, while much of the knowledge in specific application scenarios, such as urban cloud brain projects, is stored in specific databases such as the urban voice system. Moreover, the large model has a huge number of parameters and a high debugging cost, making it difficult to fine-tune against specific databases such as the urban voice system; as a result, the large model cannot answer questions accurately based on those databases.
Disclosure of Invention
The application provides a question-answering method, device, equipment and medium based on a large model, which are used to solve the problem in the prior art that the large model cannot answer questions against a third-party database.
In a first aspect, an embodiment of the present application provides a large model-based question-answering method, where the method includes:
acquiring a pre-configured question-answer template, and writing a target question to be answered into the question-answer template, where the question-answer template includes at least database information of a third-party database and a first prompt word for prompting the large model to think about the target question;
inputting the written question-answer template into the large model, where the large model thinks about the target question according to the database information and the first prompt word, determines a target SQL corresponding to the target question based on the thinking result, and outputs the target SQL;
searching the third-party database for target data corresponding to the target SQL output by the large model; and inputting the target data into the large model, where the large model generates and outputs reply information according to the target data.
In a second aspect, an embodiment of the present application further provides a question-answering device based on a large model, where the device includes:
a processing module, used for acquiring a pre-configured question-answer template and writing a target question to be answered into the question-answer template, where the question-answer template includes at least database information of a third-party database and a first prompt word for prompting the large model to think about the target question; and for inputting the written question-answer template into the large model, where the large model thinks about the target question according to the database information and the first prompt word, determines a target SQL corresponding to the target question based on the thinking result, and outputs the target SQL;
a searching module, used for searching the third-party database for target data corresponding to the target SQL output by the large model;
the processing module being further used for inputting the target data into the large model, where the large model generates and outputs reply information according to the target data.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor configured to implement the steps of the large model-based question-answering method according to any one of the above when executing a computer program stored in a memory.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the large model-based question-answering method according to any one of the above.
In the embodiment of the application, the electronic device acquires a pre-configured question-answer template and writes a target question to be answered into it; the question-answer template includes at least database information of the data stored in a third-party database and a first prompt word for prompting the large model to think about the target question. The written question-answer template is input into the large model; the large model thinks about the target question according to the database information and the first prompt word, determines a target SQL corresponding to the target question based on the thinking result, and outputs the target SQL. The electronic device searches the third-party database for the target data corresponding to the target SQL output by the large model, and inputs the target data into the large model, which generates and outputs reply information according to the target data. Because database information of the third-party database and a prompt word prompting the large model to think about the question to be answered are added to the question-answer template, the large model thinks about the target question and determines and outputs the target SQL based on the thinking result; the electronic device can then look up the corresponding target data, and the large model generates the reply information from it, realizing the question-answering process and improving its accuracy. By fully utilizing the data in the third-party database when replying, the reliability of the large model's reply results is greatly improved, and the generalization capability and accuracy of the large model are improved.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a question-answering process based on a large model according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a question-answer template provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a question-answering device based on a large model according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, wherein it is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the present embodiments, a "large model" may be understood as a model based on the Transformer architecture; a "large model" may also be understood as a machine learning model with a huge parameter scale and complexity, for example, a neural network model with millions to billions of parameters, or even more; a "large model" may also be understood as a deep learning model trained on large-scale training data by semi-supervised (weakly supervised), fully supervised, self-supervised or unsupervised techniques. In the embodiment of the application, the large model can handle many different tasks; training is generally performed on training data from a certain target task field, and the trained large model can usually be migrated to other task fields similar to the target task field. The existing large model exhibits remarkable generalization capability in semantic understanding, achieving a good generalization effect even with little or no labeled data.
However, the large model has obvious defects. First, the large model is a closed model, and reply information is generated based on the large model's existing knowledge system, while the knowledge in the urban cloud brain project is stored in third-party databases such as the database of the urban voice system. Second, the large model has a huge number of parameters and a high debugging cost, so it is difficult to fine-tune the large model against third-party databases such as the database of the urban voice system. Third, debugging the large model involves a large amount of data: the large model is trained on massive data, so influencing its output requires corresponding parameter optimization with a large amount of fine-tuning data. Fourth, for queries involving complex questions, the large model is prone to misunderstanding and produces disordered answers.
On the basis of the above problems, in the related art, the data of the urban cloud brain project is stored in third-party databases such as the database of the urban voice system, and these databases are frequently maintained and updated, so the large model cannot be debugged against them.
Based on this, the present application provides a question-answering method, device, equipment and medium based on a large model, in order to realize question answering based on third-party databases, such as the database of the urban voice system, and to improve the accuracy of question answering.
In the embodiment of the application, the electronic equipment acquires a pre-configured question-answer template, and writes a target question to be replied into the question-answer template; the question-answering template at least comprises database information of a third party database and a first prompt word for prompting the big model to think about the target problem; inputting the written question-answer template into the large model, wherein the large model thinks about the target problem according to the database information and the first prompt word, determines a target SQL corresponding to the target problem based on a thinking result and outputs the target SQL; searching target data corresponding to the target SQL in the third-party database according to the target SQL output by the large model; inputting the target data into the large model, and generating and outputting reply information by the large model according to the target data.
Fig. 1 is a schematic diagram of a question-answering process provided in an embodiment of the present application, where the process includes:
s101: acquiring a pre-configured question-answer template, and writing a target question to be replied into the question-answer template; the question-answering template at least comprises database information and a first prompting word for prompting the big model to think about the target problem.
The question-answering method based on the large model is applied to electronic equipment, and the electronic equipment can be a PC, a server, an urban voice system and the like.
In the embodiment of the application, the problem of generating SQL by a large model is solved by means of the design of the prompt words. Specifically, a question and answer template is pre-stored in the electronic equipment, wherein the question and answer template is stored in the electronic equipment, and the question and answer template at least comprises database information of data stored in a third party database and a first prompt word for prompting a large model to think about a target problem. In this embodiment, the third party database is an existing database of other non-large models, such as a database of an urban voice system, and the like, which is not limited herein.
In the embodiment of the application, the first prompt word in the question-answer template guides the large model to think: the large model considers what should be solved in each step, simplifying complex questions through this thinking mode so that a complex question is finally solved by solving a sequence of simple questions. The database information in the question-answer template provides an external information source: with the third-party database as the information source, the large model can determine the SQL (Structured Query Language) used for querying the third-party database according to the database information, which improves the reliability of the question-answering results output by the large model.
In the embodiment of the present application, the database information is the basic information of the third-party database, including but not limited to the table names to be queried, each field, and the specific meanings of the fields.
Specifically, in the embodiment of the application, the question-answer template stored in the electronic device contains a field for writing the target question; after the electronic device receives an input target question to be answered, it writes the target question into that field.
For example, the question-answer template is:
Database information: the table name to be queried, each field, and the specific meaning of each field in the database of the urban voice system;
Question: ……;
Thinking: ……;
SQL: …….
Wherein "Thinking" is the first prompt word; "Database information", "Question" and "SQL" are also prompt words in the question-answer template, and the database information is the database information of the data stored in the database of the urban voice system.
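The template structure above can be sketched in Python. This is a minimal illustration: the English field labels and the `fill_template` helper are illustrative stand-ins for the prompt words described in the text, not names from the patent.

```python
# Hypothetical sketch of the question-answer template described above.
# The labels mirror the prompt words ("Database information", "Question",
# "Thinking", "SQL"); the {db_info} text itself is an assumption.
TEMPLATE = """Database information: {db_info}
Question: {question}
Thinking:
SQL:"""

def fill_template(db_info: str, question: str) -> str:
    """Write the target question into the pre-configured template."""
    return TEMPLATE.format(db_info=db_info, question=question)

prompt = fill_template(
    db_info="table fengtest: index_name, index_value, unit, ...",
    question="What was the GDP of Qingdao city in 2022?",
)
```

The filled template is what S102 feeds to the large model; the empty "Thinking:" and "SQL:" fields are left for the model to complete.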
Table 1 is an example of database information provided in an embodiment of the present application:
as shown in table 1 above, the database information includes an index information lookup table (fengtest). Wherein the index information lookup table includes the fields: index name (index_name), index value (index_value), index unit (unit), index statistics type (index_type), province (province), city (city), administrative district (distribution), street (street), year (year), month (month), quarter (quarter), week (week), day (day), and hour (hour).
Wherein, the data corresponding to the index name field is a character string (string); the data corresponding to the index value field is an integer (int); the data corresponding to the index unit field is an enumeration (enum), and the enumeration values written in the field include: hundred million yuan, individual, percent, yuan, ten thousand yuan, table, ten thousand cubic meters, ten thousand weight boxes, ten hundred million kilowatt hours, ten thousand tons of standard coal, home, person, time, item, piece, ten thousand people, kilometers, strip, ton kilometers, person kilometers, vehicle, square kilometers, Mo Jijiao, hundred million cubic meters, home, ten thousand yuan, square meters, house, age; the data corresponding to the index statistics type field is an enumeration (enum), and the enumeration values written in the field include: total amount, same ratio (year-on-year), ring ratio (month-on-month); the data corresponding to the province field is a character string (string) with the fixed value "Shandong province"; the data corresponding to the city field is an enumeration (enum), and the enumeration values written in the field include: Qingdao, Jinan, Zibo, Zaozhuang, Dongying, Yantai, Weifang, Jining, Taian, Weihai, Rizhao, Linyi, Dezhou, Liaocheng, Binzhou; the data corresponding to the administrative district field is a character string (string); the data corresponding to the street field is a character string (string); the data corresponding to the year field is an enumeration (enum), and the enumeration values written in the field include: 2021, 2022, 2023; the data corresponding to the month field is an enumeration (enum), and the enumeration values written in the field include: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12; the data corresponding to the quarter field is an enumeration (enum), and the enumeration values written in the field include: 1, 2, 3, 4; the data corresponding to the week field is an enumeration (enum), and the enumeration values written in the field include: 1, 2, 3, 4, 5; the data corresponding to the day field is an integer (int); the data corresponding to the hour field is an integer (int).
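As a minimal sketch, the database information from Table 1 can be held as a schema description and rendered into the template's "database information" field. The Python names, the normalized `district` field name, and the rendering format are assumptions; the enum comments abbreviate the values listed above.

```python
# Sketch of the fengtest index-information lookup table schema from Table 1.
# Field names follow the text where recoverable; types are as described.
FENGTEST_SCHEMA = {
    "index_name": "string",
    "index_value": "int",
    "unit": "enum",        # e.g. 'hundred million yuan', 'percent', ...
    "index_type": "enum",  # 'total amount', 'same ratio', 'ring ratio'
    "province": "string",  # fixed value: 'Shandong province'
    "city": "enum",        # 'Qingdao', 'Jinan', ...
    "district": "string",
    "street": "string",
    "year": "enum",        # 2021, 2022, 2023
    "month": "enum",       # 1..12
    "quarter": "enum",     # 1..4
    "week": "enum",        # 1..5
    "day": "int",
    "hour": "int",
}

def schema_to_prompt(table: str, schema: dict) -> str:
    """Render the schema as 'database information' text for the template."""
    lines = [f"Table: {table}"] + [f"- {f} ({t})" for f, t in schema.items()]
    return "\n".join(lines)
```

Rendering the schema as plain text like this lets the same template serve any third-party table by swapping in its database information.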
Table 2 is an example of a database of the urban speech system provided in the embodiments of the present application:
as shown in table 2 above, the index name is GDP; index values of 200 hundred million, 300 hundred million and 400 hundred million; index unit is hundred million; the index statistics type is the total amount; the province is Shandong province; the cities are Qingdao city and Jinan city; years 2022 and 2023.
S102: inputting the written question-answer template into the large model, where the large model thinks about the target question according to the database information and the first prompt word, determines a target SQL corresponding to the target question based on the thinking result, and outputs the target SQL.
In the embodiment of the application, the electronic device inputs the question-answer template written with the target question into the large model. After receiving it, the large model reads the database information, the target question and the first prompt word for prompting thinking written in the template, thinks about the target question according to them, and determines a thinking result.
In the embodiment of the application, the large model thinks about the target question it needs to solve. If the target question is a complex question, it can be simplified through thinking and finally solved by solving a sequence of simple questions. Specifically, if the target question is a complex question, the thinking result of the large model is the sub-question that should be solved in the current step; if the target question is a simple question, the thinking result is to solve the simple question.
For example, in the embodiment of the present application, if the target question is a simple question, such as "What was the GDP of Qingdao city in 2022", the thinking result of the large model is "query what the GDP of Qingdao city was in 2022"; if the target question is a complex question, such as "How much GDP was in Qingdao city in 2022, and what about Jinan", the thinking result of the large model is "the first step should query what the GDP of Qingdao city was in 2022".
In the embodiment of the application, after determining the thinking result, the big model determines the SQL corresponding to the thinking result according to the thinking result and outputs the SQL.
S103: searching the third-party database for target data corresponding to the target SQL output by the large model; inputting the target data into the large model; and the large model generating and outputting reply information according to the target data.
In the embodiment of the application, the electronic device acquires the SQL output by the large model and searches the third-party database for the target data corresponding to that SQL. The electronic device then inputs the target data into the large model, so that the large model generates and outputs the reply information according to the target data.
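The lookup step can be sketched with an in-memory SQLite database standing in for the third-party database; the table layout and helper names are illustrative assumptions, with rows following Table 2.

```python
import sqlite3

# Stand-in for the third-party database (values follow Table 2);
# sqlite3 is used only to make the lookup step runnable here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fengtest (
    index_name TEXT, index_value INTEGER, unit TEXT, index_type TEXT,
    province TEXT, city TEXT, year INTEGER)""")
conn.executemany(
    "INSERT INTO fengtest VALUES (?,?,?,?,?,?,?)",
    [("GDP", 200, "hundred million", "total", "Shandong province", "Qingdao city", 2022),
     ("GDP", 300, "hundred million", "total", "Shandong province", "Jinan city", 2022),
     ("GDP", 400, "hundred million", "total", "Shandong province", "Qingdao city", 2023)])

def lookup_target_data(target_sql: str) -> list:
    """Search the third-party database for the data matching the model's SQL."""
    return conn.execute(target_sql).fetchall()

rows = lookup_target_data(
    "SELECT index_value FROM fengtest "
    "WHERE index_name='GDP' AND city='Qingdao city' AND year=2022")
# rows == [(200,)]
```

In practice the electronic device would run the target SQL against the actual third-party database rather than an in-memory stand-in; only the rows returned are fed back to the large model.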
Specifically, in the embodiment of the present application, if the target question is a simple question, or the target question is a complex question but the current thinking result is the last step of solving it, the large model generates and outputs the reply information according to the target data and the current and previous thinking results.
In the embodiment of the application, database information of the third-party database and a prompt word prompting the large model to think about the question to be answered are added to the question-answer template, so that the large model thinks about the target question, and determines and outputs the target SQL based on the thinking result. The electronic device can search the third-party database for the target data corresponding to the target SQL output by the large model, and the large model generates the reply information according to the target data, thereby realizing the question-answering process and improving the accuracy of question answering.
In order to improve the accuracy of question answering, on the basis of the above embodiment, in this embodiment of the present application, the question-answer template further includes a first field for writing the SQL generated by the large model;
the determining and outputting the target SQL corresponding to the target question based on the thinking result includes:
writing the target SQL into the first field in the question-answer template, and outputting the question-answer template with the writing completed.
In the embodiment of the application, the question-answer template further includes a first field for writing the SQL generated by the large model; after the large model determines the target SQL corresponding to the thinking result, it writes the target SQL into the first field and outputs the completed question-answer template.
For example, the question-answer template is:
Database information: the table name to be queried, each field, and the specific meaning of each field in the database of the urban voice system;
Question: ……;
Thinking: ……;
SQL: #SQL#.
Where "#SQL#" is the first field for writing the SQL generated by the large model; the large model can write the determined target SQL into it.
For example, in embodiments of the present application, the large model may output the following:
"Thinking: which problems this step should solve
SQL: the SQL that this step should generate".
Wherein the large model thinks about "which problems this step should solve", and writes the determined "SQL that this step should generate" into the first field.
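Reading the target SQL back out of the first field of the model's output might look like the following sketch; `extract_target_sql` is a hypothetical helper, assuming the target SQL is whatever the model wrote after the "SQL:" prompt word.

```python
import re

# Hypothetical parse of the large model's output: the target SQL is taken
# to be everything after the 'SQL:' prompt word (the filled-in first field).
def extract_target_sql(model_output: str) -> str:
    match = re.search(r"SQL:\s*(.+)", model_output, re.DOTALL)
    return match.group(1).strip() if match else ""

out = ("Thinking: query Qingdao city's GDP for 2022\n"
       "SQL: SELECT index_value FROM fengtest "
       "WHERE city='Qingdao city' AND year=2022;")
sql = extract_target_sql(out)
```

The extracted string is what the electronic device runs against the third-party database in S103.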
In order to improve the accuracy of question answering, on the basis of the above embodiments, in this embodiment of the present application, the question-answer template further includes a second prompt word and a second field for prompting the writing of the data corresponding to the SQL;
the inputting the target data into the large model includes:
writing the target data into the second field of the question-answer template, and inputting the written question-answer template into the large model, so that the large model reads the target data according to the second prompt word.
In the embodiment of the application, the question-answer template further includes a second field for writing the data corresponding to the SQL. After the electronic device obtains the question-answer template output by the large model, it searches the third-party database for the target data corresponding to the target SQL carried in the template, writes the found target data into the second field of the question-answer template output by the large model, and inputs the written question-answer template into the large model.
Specifically, in this embodiment of the present application, the second field further includes an "Observation" prompt word; the electronic device writes the target data into the second field, and the large model can locate the second field according to the "Observation" prompt word and read the target data from it.
In the embodiment of the application, because the large model needs to receive the target data corresponding to the SQL input by the electronic device, that is, third-party database information, "Observation" is used as the stop word when the stop words are designed. Based on this, when the large model outputs the target SQL for the first time, only the content before "Observation" is output.
The electronic device performs the related query in the third-party database according to the target SQL output by the large model, splices the output result of the large model with the queried target data, and re-inputs the result to the large model. The input form is:
Question: how much GDP was in Qingdao city in 2022, and what about Jinan;
Thinking: the first step should query what the GDP of Qingdao city was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong province' AND city='Qingdao city' AND year=2022;
Observation: 200 hundred million.
Where the content before "Observation" is the output of the large model, and "200 hundred million" is the target data.
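The splice-and-reinput step described above can be sketched as follows; the function name and the exact field labels are assumptions.

```python
# Sketch of the splice step: the model's first output stops before the
# 'Observation:' stop word, the queried target data is appended as the
# observation, and the spliced text is re-input to the large model.
def splice_observation(model_output: str, target_data: str) -> str:
    """Append the queried target data as the 'Observation' field."""
    return model_output.rstrip() + "\nObservation: " + target_data

first_output = (
    "Question: how much GDP was in Qingdao city in 2022, and what about Jinan?\n"
    "Thinking: the first step should query Qingdao city's GDP in 2022\n"
    "SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' "
    "AND city='Qingdao city' AND year=2022;")
next_input = splice_observation(first_output, "200 hundred million")
```

Because "Observation" is the stop word, the model never generates the observation itself; the electronic device always supplies it from the third-party database.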
In order to improve the accuracy of question answering, in this embodiment of the present application, the question-answer template further includes a third prompt word for prompting the large model to judge, according to the thinking result and the target data, whether the target question has been solved;
before the large model generates and outputs the reply information according to the target data, the method includes:
the large model judging whether the target question has been solved according to the thinking result, the target data and the third prompt word; and if the large model determines that the target question has been solved, the large model executing the subsequent steps of generating and outputting the reply information according to the target data.
In the embodiment of the application, the question-answer template further includes a third prompt word for prompting the large model to judge whether the target question has been solved according to the thinking result and the target data. After the large model receives the input target data, it judges whether the target question has been solved according to the thinking result, the target data and the third prompt word; if it determines that the target question has been solved, it executes the subsequent steps of generating and outputting the reply information according to the target data.
Specifically, in this embodiment of the present application, the second field further includes a "Judgment" prompt word, according to which the large model can judge whether the target question has been solved; if the large model determines that the target question has been solved, it executes the subsequent steps of generating and outputting the reply information according to the target data.
To improve the accuracy of question answering, on the basis of the above embodiments, in an embodiment of the present application the method further includes:
if the large model determines that the target problem has not been solved, repeating the following operations until the large model determines that the target problem has been solved:
the large model thinks about the target problem again according to the first prompt word and the thinking results and target data determined before the current time, and determines other thinking results; it then determines and outputs other target SQL according to the other thinking results;
according to the other target SQL output by the large model, other target data corresponding to the other target SQL is searched for in the third-party database; the other target data is input into the large model, and the large model judges whether the target problem has been solved according to the other target data, the other thinking results, the third prompt word, and the thinking results and target data determined before the current time.
In this embodiment of the application, if the target problem is a complex problem with multiple intentions, the large model can split it into several sub-problems through thinking, and determine through judgment whether the target problem has been solved. If the large model determines that the target problem has not been solved, it repeats the following operations until it determines that the target problem has been solved:
the large model thinks about the target problem again to determine other thinking results, and determines and outputs the other target SQL corresponding to those thinking results. The electronic device searches the third-party database for the other target data corresponding to the other target SQL output by the large model, and inputs the other target data into the large model; the large model then judges whether the target problem has been solved according to the other target data, the other thinking results, the third prompt word, and the thinking results and target data determined before the current time.
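The iterative loop described above (think, emit SQL, query, observe, judge, repeat) can be sketched as below. `llm` and `run_sql` are hypothetical stand-ins for the large model and the third-party database query, and the scripted demo "model" is purely illustrative:

```python
def answer(question: str, llm, run_sql, max_rounds: int = 5) -> str:
    """Run the think -> SQL -> query -> observe -> judge loop until solved."""
    transcript = f"Question: {question}"
    for _ in range(max_rounds):
        step = llm(transcript)            # one round: thinking + optional SQL + verdict
        transcript += "\n" + step["text"]
        if step.get("solved"):            # "judgment" says the question is solved
            return step["answer"]
        if step.get("sql"):               # execute the model-generated SQL, splice result
            transcript += "\nObservation: " + run_sql(step["sql"])
    return "unresolved"

# Scripted demo model (an assumption, not the patent's model): it asks for
# Qingdao's GDP, then Jinan's, then declares the question solved.
_script = iter([
    {"text": "Thinking: query Qingdao's 2022 GDP",
     "sql": "SELECT index_value FROM fengtest WHERE city='Qingdao City'"},
    {"text": "Thinking: query Jinan's 2022 GDP",
     "sql": "SELECT index_value FROM fengtest WHERE city='Jinan City'"},
    {"text": "Judgment: solved", "solved": True,
     "answer": "Jinan's 2022 GDP (400 billion) is higher than Qingdao's (200 billion)"},
])
fake_llm = lambda _transcript: next(_script)
fake_run_sql = lambda sql: "200 billion" if "Qingdao" in sql else "400 billion"

result = answer("Which is higher, Qingdao's or Jinan's 2022 GDP?", fake_llm, fake_run_sql)
```

The `max_rounds` cap is an added safeguard against a model that never judges the question solved; the patent text itself does not specify a bound.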
Fig. 2 is a schematic structural diagram of a question-answering template provided in an embodiment of the present application. As shown in Fig. 2, the structure includes database information, question, thinking, output SQL, database query, result observation, judgment, and final output. Database information prompts the basic information of the local database to be input into the large model, including the name of the table to be queried, each field, and the specific meaning of each field. Question prompts the target question. Thinking prompts a finest-granularity analysis of the previous step's problem: analyzing the target problem to be solved and determining which sub-problem the current step should address. Output SQL prompts generation of the target SQL corresponding to the previous step's thinking result. Database query prompts the large model to stop outputting after generating the SQL, so that the electronic device can call a database query function to retrieve the relevant result according to the target SQL. Result observation prompts the large model to make an observation based on the target data. Judgment prompts the large model to decide, based on the observation, whether the target problem has been completely solved: if so, the final result is output; if not, a new round of thinking begins. Final output prompts the large model to output the final result according to all the judgments.
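A minimal rendering of the Fig. 2 structure as a prompt string might look like the following. The section wording and the stop-and-wait convention are assumptions based on the description above, not the patent's exact template:

```python
# Hypothetical prompt template following the Fig. 2 structure: database
# information, question, thinking, output SQL, database query (stop), result
# observation, judgment, and final output.
QA_TEMPLATE = """Database information: {db_info}
Question: {question}
Thinking: analyse the question at the finest granularity and decide which sub-problem this step should solve
SQL: output the target SQL for this step, then stop and wait for the query result
Observation: (filled in by the electronic device with the query result)
Judgment: decide whether the target question has been completely solved
Final output: once judged solved, output the final result"""

prompt = QA_TEMPLATE.format(
    db_info="table fengtest: index_name, index_type, province, city, year, index_value",
    question="What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?",
)
```

The table schema shown for `db_info` is inferred from the SQL strings in the embodiments below.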
The embodiments of the present application combine the large model with the urban voice system and decompose complex urban-voice questions into simpler, finer-grained sub-questions. The large model's results are continuously corrected through the support of an external information source and through self-judgment, ultimately solving the problem of large-model-based data query for the urban voice system.
The following description is provided in connection with a specific embodiment:
The target question is "What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?", which contains two intentions (querying the 2022 GDP and comparing the two cities) and is therefore a complex question.
The electronic device writes the target question into the question-answering template; the filled-in template is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: ……;
SQL: ……."
The electronic device inputs the question-answering template into the large model. The large model thinks about the target problem for the first time, determines that the first thinking result is "the first step should query what Qingdao's GDP was in 2022", and determines that the SQL corresponding to the first thinking result is "SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022". Based on this, the output of the large model is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: the first step should query what Qingdao's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022"
The electronic device searches the urban voice system's database for the data corresponding to the target SQL output by the large model, splices the data onto the large model's output, and inputs the result into the large model again. The input to the large model is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: the first step should query what Qingdao's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022;
Observation: 200 billion."
After the large model receives this input, it judges whether the target problem has been completely solved. Determining that it has not, the large model performs a second round of thinking, determines that the second thinking result is "the second step should query what Jinan's GDP was in 2022", and determines and outputs the corresponding SQL "SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Jinan City' AND year=2022". Based on this, the second output of the large model is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: the first step should query what Qingdao's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022;
Observation: 200 billion;
Judgment: the user's question has not been completely solved;
Thinking: the second step should query what Jinan's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Jinan City' AND year=2022."
The electronic device searches the urban voice system's database for the data corresponding to the SQL output by the large model's second round of thinking, splices the data onto the large model's latest output, and inputs the result into the large model again. The input to the large model is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: the first step should query what Qingdao's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022;
Observation: 200 billion;
Judgment: the user's question has not been completely solved;
Thinking: the second step should query what Jinan's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Jinan City' AND year=2022;
Observation: 400 billion."
After the large model obtains this input and judges that the target problem is still not completely solved, it performs a third round of thinking: the third step should compare the 2022 GDP of Qingdao and Jinan. Based on the third thinking result, the large model determines that no SQL is needed and that the observed results can be compared directly. The output of the large model is:
"Database information: the name of the table to be queried in the database of the urban voice system, each field, and the specific meaning of each field;
Question: What was the GDP of Qingdao in 2022, and which is higher, Qingdao's or Jinan's?
Thinking: the first step should query what Qingdao's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022;
Observation: 200 billion;
Judgment: the user's question has not been completely solved;
Thinking: the second step should query what Jinan's GDP was in 2022;
SQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Jinan City' AND year=2022;
Observation: 400 billion;
Judgment: the user's question has not been completely solved;
Thinking: the third step should compare the 2022 GDP of Qingdao and Jinan;
SQL: no SQL is needed; the observed results are compared directly;
Observation: Qingdao's GDP in 2022 was 200 billion, Jinan's GDP in 2022 was 400 billion, and Jinan's GDP is higher than Qingdao's."
According to the observation result, the large model judges that the user's question has been completely solved, determines the reply result, and outputs:
"Judgment: the user's question has been completely solved;
Result: Qingdao's GDP in 2022 was 200 billion, Jinan's GDP in 2022 was 400 billion, and Jinan's GDP is higher than Qingdao's."
On the basis of the above embodiments, an example of the large model's first thinking may be "Thinking: the first step should query what Qingdao's GDP was in 2022\nSQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Qingdao City' AND year=2022".
On the basis of the above embodiments, an example of the large model's second thinking is "Judgment: the user's question has not been completely solved\nThinking: the second step should query what Jinan's GDP was in 2022\nSQL: SELECT index_value FROM fengtest WHERE index_name='GDP' AND index_type='total' AND province='Shandong Province' AND city='Jinan City' AND year=2022".
On the basis of the above embodiments, an example of the large model's third thinking is "Judgment: the user's question has not been completely solved\nThinking: the third step should compare the 2022 GDP of Qingdao and Jinan\nSQL: no SQL is needed; the observed results are compared directly".
On the basis of the above embodiments, an example of the reply information determined by the large model is "Observation: Qingdao's GDP in 2022 was 200 billion, Jinan's GDP in 2022 was 400 billion, and Jinan's GDP is higher than Qingdao's\nJudgment: the user's question has been completely solved\nResult: Qingdao's GDP in 2022 was 200 billion, Jinan's GDP in 2022 was 400 billion, and Jinan's GDP is higher than Qingdao's".
To improve the accuracy of question answering, on the basis of the above embodiments, in this embodiment of the present application, the third-party database is a database of an urban voice system.
In this embodiment of the application, the third-party database is an existing database other than the large model itself, such as a database of an urban voice system, so that the large model can answer questions based on that database.
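As an illustration only, the query step against such a third-party database could look like the sketch below. The `fengtest` schema is inferred from the SQL strings in the embodiments, and an in-memory SQLite database stands in for the unspecified real database:

```python
import sqlite3

# In-memory stand-in for the urban voice system's database; the schema is an
# assumption inferred from the target SQL shown in the embodiments.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fengtest (index_name TEXT, index_type TEXT, "
             "province TEXT, city TEXT, year INTEGER, index_value TEXT)")
conn.executemany("INSERT INTO fengtest VALUES (?, ?, ?, ?, ?, ?)", [
    ("GDP", "total", "Shandong Province", "Qingdao City", 2022, "200 billion"),
    ("GDP", "total", "Shandong Province", "Jinan City", 2022, "400 billion"),
])

def query_target_data(target_sql: str) -> str:
    """Execute the model-generated target SQL and return the target data."""
    row = conn.execute(target_sql).fetchone()
    return row[0] if row else "no data"

target_data = query_target_data(
    "SELECT index_value FROM fengtest WHERE index_name='GDP' AND "
    "index_type='total' AND city='Qingdao City' AND year=2022")
```

In a production system, model-generated SQL should of course be validated or run with read-only privileges before execution; the patent text does not address this.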
The embodiments of the present application achieve high accuracy in complex scenarios with little annotation, and make full use of third-party data when replying, which greatly improves the reliability of the large model's replies. The amount of data annotation required is low: by exploiting the generalization the large model acquires from massive data, combined with the specific characteristics of the urban voice system, strong generalization can be achieved from only a small amount of annotated data, minimizing manual annotation cost. Accuracy is high: the semantic understanding capability and chain-of-thought (CoT) technique of the large model are fully used to subdivide and decompose complex problems, solving problems the large model could not solve directly and improving overall accuracy. Reliability is high: during problem processing, the result of each step uses third-party data as its information source, fully supporting the accuracy of the final result.
Based on the foregoing embodiments, fig. 3 is a schematic structural diagram of a big model-based question answering device according to an embodiment of the present application, where the device includes:
the processing module 301 is configured to obtain a pre-configured question-answer template, and write a target question to be replied into the question-answer template; the question-answering template at least comprises database information of a third-party database and a first prompting word for prompting a large model to think about the target problem; inputting the written question-answer templates into the large model, wherein the large model thinks about the target problem according to the database information and the first prompt word, determines a target SQL corresponding to the target problem based on a thinking result and outputs the target SQL;
the searching module 302 is configured to search, according to the target SQL output by the large model, target data corresponding to the target SQL in the third party database;
the processing module 301 is further configured to input the target data into the large model, and the large model generates and outputs reply information according to the target data.
In a possible implementation manner, the question and answer template further comprises a first field for writing SQL generated by the large model;
The processing module 301 is specifically configured to write the target SQL into the first field in the question-answer template, and output the question-answer template after the writing is completed.
In a possible implementation manner, the question-answering template further comprises a second prompting word and a second field for prompting writing of data corresponding to SQL;
the processing module 301 is specifically configured to write the target data into the second field of the question-answer template, and input the written question-answer template into the large model, so that the large model reads the target data according to the second prompt word.
In a possible implementation, the question-answering template further includes a third prompt word for prompting the large model to judge, according to the thinking result and the target data, whether the target problem has been solved;
the processing module 301 is further configured such that the large model judges, according to the thinking result, the target data, and the third prompt word, whether the target problem has been solved; if the large model determines that the target problem has been solved, the large model performs the subsequent step of generating and outputting reply information according to the target data.
In a possible implementation, the processing module 301 is further configured to, if the large model determines that the target problem has not been solved, repeat the following operations until the large model determines that the target problem has been solved: the large model thinks about the target problem again according to the first prompt word and the thinking results and target data determined before the current time, and determines other thinking results; it determines and outputs other target SQL according to the other thinking results; according to the other target SQL output by the large model, other target data corresponding to the other target SQL is searched for in the third-party database; the other target data is input into the large model, and the large model judges whether the target problem has been solved according to the other target data, the other thinking results, the third prompt word, and the thinking results and target data determined before the current time.
In one possible implementation, the third party database is a database of a city voice system.
On the basis of the foregoing embodiments, the embodiment of the present application further provides an electronic device, and fig. 4 is a schematic structural diagram of the electronic device provided in the embodiment of the present application, as shown in fig. 4, including: the processor 401, the communication interface 402, the memory 403 and the communication bus 404, wherein the processor 401, the communication interface 402 and the memory 403 complete communication with each other through the communication bus 404;
The memory 403 has stored therein a computer program which, when executed by the processor 401, causes the processor 401 to perform the steps of the large model based question-answering method as provided by the embodiments described above.
Since the principle of solving the problem by the electronic device is similar to that of the question-answering method based on the large model, the implementation of the electronic device can refer to the embodiment of the method, and the repetition is omitted.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the figure shows only one bold line, but this does not mean that there is only one bus or one type of bus. The communication interface 402 is used for communication between the electronic device and other devices. The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
On the basis of the above embodiments, the embodiments of the present invention further provide a computer readable storage medium, in which a computer program executable by a processor is stored, which when executed on the processor causes the processor to implement the steps of the big model based question-answering method provided in the above embodiments.
Since the principle of solving the problem by the computer readable storage medium is similar to that of the question-answering method based on the big model, the implementation of the computer readable storage medium can refer to the embodiment of the method, and the repetition is omitted.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method of question-answering based on a large model, the method comprising:
acquiring a pre-configured question-answer template, and writing a target question to be replied into the question-answer template; the question-answering template at least comprises database information of a third-party database and a first prompting word for prompting a large model to think about the target problem;
inputting the written question-answer templates into the large model, wherein the large model thinks about the target problem according to the database information and the first prompt word, determines a target SQL corresponding to the target problem based on a thinking result and outputs the target SQL;
Searching target data corresponding to the target SQL in the third-party database according to the target SQL output by the large model; inputting the target data into the large model, and generating and outputting reply information by the large model according to the target data;
wherein said thinking about said problem comprises: and determining the problem to be solved corresponding to the current step.
2. The method of claim 1, wherein the question-answer template further comprises a first field for writing SQL generated by the large model;
the determining the target SQL corresponding to the target problem based on the thinking result and outputting comprises the following steps:
writing the target SQL into the first field in the question-answer template, and outputting the question-answer template with the written-in completion.
3. The method of claim 2, wherein the question-answering template further comprises a second prompting word and a second field for prompting writing of data corresponding to the SQL;
the inputting the target data into the large model includes:
and writing the target data into the second field of the question-answer template, and inputting the written question-answer template into the large model, so that the large model reads the target data according to the second prompt word.
4. The method according to claim 1, wherein the question-answering template further comprises a third prompt word for prompting the large model to judge, according to the thinking result and the target data, whether the target problem has been solved;
before the large model generates and outputs reply information according to the target data, the method comprises:
the large model judging, according to the thinking result, the target data, and the third prompt word, whether the target problem has been solved; and, if the large model determines that the target problem has been solved, the large model performing the subsequent steps of generating and outputting reply information according to the target data.
5. The method according to claim 4, wherein the method further comprises:
if the large model determines that the target problem is not solved, repeating the following operations until the large model determines that the target problem is solved:
the large model thinking about the target problem again according to the first prompt word and the thinking results and target data determined before the current time, and determining other thinking results; determining and outputting other target SQL according to the other thinking results;
according to the other target SQL output by the large model, searching the third-party database for other target data corresponding to the other target SQL; and inputting the other target data into the large model, the large model judging whether the target problem has been solved according to the other target data, the other thinking results, the third prompt word, and the thinking results and target data determined before the current time.
6. The method of claim 1, wherein the third party database is a database of a city voice system.
7. A large model-based question-answering apparatus, the apparatus comprising:
the processing module is used for acquiring a pre-configured question-answer template and writing a target question to be replied into the question-answer template; the question-answering template at least comprises database information of a third-party database and a first prompting word for prompting a large model to think about the target problem; inputting the written question-answer templates into the large model, wherein the large model thinks about the target problem according to the database information and the first prompt word, determines a target SQL corresponding to the target problem based on a thinking result and outputs the target SQL;
The searching module is used for searching target data corresponding to the target SQL in the third-party database according to the target SQL output by the large model;
the processing module is further used for inputting the target data into the large model, and the large model generates and outputs reply information according to the target data;
the processing module is specifically configured to determine a problem to be solved corresponding to the current step.
8. The apparatus of claim 7, wherein the question-answer template further comprises a first field for writing SQL generated by the large model;
the processing module is specifically configured to write the target SQL into the first field in the question-answer template, and output the question-answer template after the writing is completed.
9. An electronic device comprising a processor for implementing the steps of the large model based question-answering method according to any one of claims 1 to 6 when executing a computer program stored in a memory.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the large model based question-answering method according to any one of claims 1-6.
CN202311422458.1A 2023-10-31 2023-10-31 Question and answer method, device, equipment and medium based on large model Active CN117149985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311422458.1A CN117149985B (en) 2023-10-31 2023-10-31 Question and answer method, device, equipment and medium based on large model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311422458.1A CN117149985B (en) 2023-10-31 2023-10-31 Question and answer method, device, equipment and medium based on large model

Publications (2)

Publication Number Publication Date
CN117149985A CN117149985A (en) 2023-12-01
CN117149985B true CN117149985B (en) 2024-03-19

Family

ID=88906550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311422458.1A Active CN117149985B (en) 2023-10-31 2023-10-31 Question and answer method, device, equipment and medium based on large model

Country Status (1)

Country Link
CN (1) CN117149985B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555153A (en) * 2019-08-20 2019-12-10 暨南大学 Question-answering system based on domain knowledge graph and construction method thereof
KR102111735B1 (en) * 2018-11-29 2020-05-15 주식회사 솔트룩스 Automatic Question-Answering system having multiple Question-Answering modules
CN113392202A (en) * 2021-06-22 2021-09-14 中国工商银行股份有限公司 Knowledge graph-based question-answering system and method
CN114253990A (en) * 2021-11-08 2022-03-29 广州广电运通信息科技有限公司 Database query method and device, computer equipment and storage medium
WO2022088671A1 (en) * 2020-10-29 2022-05-05 平安科技(深圳)有限公司 Automated question answering method and apparatus, device, and storage medium
CN114547072A (en) * 2022-02-10 2022-05-27 招商银行股份有限公司 Method, system, equipment and storage medium for converting natural language query into SQL
CN115422323A (en) * 2022-08-23 2022-12-02 中国人民解放军国防科技大学 Intelligent intelligence question-answering method based on knowledge graph
CN115858751A (en) * 2022-11-30 2023-03-28 阳光保险集团股份有限公司 Processing method and device of table question-answer data and electronic equipment
CN116629227A (en) * 2023-07-24 2023-08-22 海信集团控股股份有限公司 Method and equipment for converting text into SQL (structured query language) sentence
CN116821103A (en) * 2023-08-29 2023-09-29 腾讯科技(深圳)有限公司 Data processing method, device, equipment and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130159404A1 (en) * 2011-12-19 2013-06-20 Nokia Corporation Method and apparatus for initiating a task based on contextual information
CN111324715B (en) * 2020-02-18 2023-07-14 北京百度网讯科技有限公司 Method and device for generating question-answering robot


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Efficient SPARQL Queries Generator for Question Answering Systems;Yi-Hui Chen et al.;《IEEE Access》;pp. 99850-99859 *
Template-based Question Answering using Recursive Neural Networks;Ram G. Athreya et al.;《2021 IEEE 15th International Conference on Semantic Computing》;pp. 195-198 *
Research on a Food Safety Intelligence Analysis and Judgment System Based on Knowledge Graph Reasoning;Ni Jinchao;《China Master's Theses Full-text Database (Engineering Science & Technology I)》;pp. B024-683 *
Research on Structured Query Generation for Complex Questions in Knowledge Base Question Answering;Zhao Man;《China Master's Theses Full-text Database (Information Science & Technology)》;pp. I138-665 *

Also Published As

Publication number Publication date
CN117149985A (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN109284363B (en) Question answering method and device, electronic equipment and storage medium
Yang et al. Gpt4tools: Teaching large language model to use tools via self-instruction
CN111309863B (en) Natural language question-answering method and device based on knowledge graph
CN110517767A (en) Aided diagnosis method, device, electronic equipment and storage medium
CN116629227B (en) Method and equipment for converting text into SQL (structured query language) sentence
CN110276081B (en) Text generation method, device and storage medium
CN117149985B (en) Question and answer method, device, equipment and medium based on large model
CN113127617A (en) Knowledge question answering method of general domain knowledge graph, terminal equipment and storage medium
CN109492086B (en) Answer output method and device, electronic equipment and storage medium
CN117370568A (en) Power grid main equipment knowledge graph completion method based on pre-training language model
CN115964465A (en) Intelligent question and answer method and device and electronic equipment
CN113050933B (en) Brain graph data processing method, device, equipment and storage medium
CN113743074A (en) Data report generation method and device based on robot process automation
CN113434658A (en) Thermal power generating unit operation question-answer generation method, system, equipment and readable storage medium
CN114201582A (en) Text data intelligent extraction method and device based on BilSTM-CRF model
CN117828041A (en) Method, device, equipment and medium for generating reply corpus based on large model
Yan et al. Teaching Quality Evaluation and Software Implementation Based on ID3 Decision Tree Algorithm
CN111859985A (en) AI customer service model testing method, device, electronic equipment and storage medium
CN117874052A (en) SQL sentence generation method, device, equipment and medium based on large model
CN117421415A (en) Data processing method, device, electronic equipment and storage medium
CN117271561B (en) SQL sentence generation method, device and equipment based on large language model
US20230039971A1 (en) Automated return evaluation with anomoly detection
CN116166942A (en) Classification model training method, device, equipment and storage medium
CN117909454A (en) Text completion method, device and equipment based on multi-round dialogue
CN117033554A (en) Data analysis method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant