CN118193733A - Method, device, electronic equipment and storage medium for generating report - Google Patents
- Publication number
- CN118193733A (application CN202410294963.0A)
- Authority
- CN
- China
- Prior art keywords
- report
- information
- language model
- query
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/353—Clustering; Classification into predefined classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/338—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/34—Browsing; Visualisation therefor
- G06F16/345—Summarisation for human users
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Machine Translation (AREA)
Abstract
The disclosure provides a method, a device, electronic equipment and a storage medium for generating a report, and relates to the technical field of artificial intelligence, in particular to deep learning and NLP. The specific implementation scheme is as follows: generating reports for user input information by adopting a plurality of report agent modules to obtain a candidate report set; rewriting at least part of the reports in the candidate report set to obtain a rewritten report, and adding the rewritten report to the candidate report set; and, in the case that the candidate report set does not include a report meeting a preset requirement, returning to the step of rewriting based on at least part of the reports in the candidate report set until the candidate report set includes a report meeting the preset requirement, and obtaining a target report based on the report meeting the preset requirement. The technical scheme of the disclosure improves the stability and diversity of the report output quality, thereby improving the usability of the report.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the fields of deep learning and NLP (Natural Language Processing).
Background
Large language models (Large Language Model, LLM) and their applications in the field of artificial intelligence have become a hotspot of technological research. One application is report generation: a large language model is typically used to decompose the task and retrieve information according to the user input, then to summarize the retrieved content and generate a report in a given report format. Due to problems such as limited model capability and insufficient or missing retrieved information, the generated report may turn out to be unusable.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for generating a report.
According to an aspect of the present disclosure, there is provided a method of generating a report, comprising:
generating reports for user input information by adopting a plurality of report agent modules to obtain candidate report sets;
rewriting at least part of reports in the candidate report set to obtain a rewritten report, and adding the rewritten report into the candidate report set;
And, in the case that the candidate report set does not include a report meeting a preset requirement, returning to the step of rewriting based on at least part of the reports in the candidate report set until the candidate report set includes a report meeting the preset requirement, and obtaining a target report based on the report meeting the preset requirement.
According to another aspect of the present disclosure, there is provided an apparatus for generating a report, including:
a generating unit, configured to generate reports for the user input information by adopting a plurality of report agent modules to obtain a candidate report set;
a rewriting unit, configured to rewrite at least part of the reports in the candidate report set to obtain a rewritten report, and to add the rewritten report to the candidate report set;
and a determining unit, configured to, in the case that the candidate report set does not include a report meeting a preset requirement, return to the step of rewriting at least part of the reports in the candidate report set until the candidate report set includes a report meeting the preset requirement, and obtain a target report based on the report meeting the preset requirement.
According to another aspect of the present disclosure, there is provided an electronic device including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
In the technical scheme of the embodiment of the disclosure, a plurality of report agent modules are adopted to respectively generate reports for user input information, at least part of the reports are rewritten, and then iterative optimization is performed to obtain the reports meeting the preset requirements, and the target reports are obtained based on the reports meeting the preset requirements. Through group intelligence, rewriting and iteration of a plurality of report agent modules, stability and diversity of report output quality are improved, and therefore usability of reports is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a report generation scheme implemented based on large model agents in the related art;
FIG. 2 is a flow diagram of a method of generating a report provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a framework of a process for determining query information in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a framework of a process for generating a first candidate report in an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a process for determining a high-quality report in an embodiment of the present disclosure;
FIG. 6A is a schematic block diagram of one manner of obtaining a rewritten report in an embodiment of the present disclosure;
FIG. 6B is a schematic block diagram of another manner of obtaining a rewritten report in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of report iterative optimization in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of report rendering in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of one example application of an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
FIG. 11 is a schematic block diagram of an apparatus for generating reports provided by an embodiment of the disclosure;
FIG. 12 is a schematic block diagram of an apparatus for generating reports provided by another embodiment of the disclosure;
FIG. 13 is a schematic block diagram of an apparatus for generating reports provided by another embodiment of the disclosure;
fig. 14 is a schematic block diagram of an example electronic device for implementing embodiments of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to facilitate understanding of the method for generating a report provided by the embodiments of the present disclosure, the following description describes related technologies of the embodiments of the present disclosure, and the following related technologies may be optionally combined with the technical solutions of the embodiments of the present disclosure, which all belong to the protection scope of the embodiments of the present disclosure.
In the related art, schemes for generating reports include a scheme implemented based on large model agents (Agents) and a scheme implemented based on rules.
A scheme implemented based on large model Agents is shown in FIG. 1. As shown in FIG. 1, the large model Agents include a research question generator 110, a search module 120, and a report generation module (report agent) 130. First, the user's input is treated as a task, which is input into the research question generator 110; a large model is used to generate query information (query), i.e., the task is divided into several sub-queries. The search module 120 searches for the sub-queries using a search engine and, after the search is completed, summarizes the retrieved documents; finally, the report generation module 130 generates the final report from the summaries. This scheme cannot be controlled precisely and, because the information is insufficient, hallucinations occur at report generation time (i.e., content inconsistent with the facts is generated).
The scheme implemented based on rules extracts the abstract, the introduction, the content of each chapter, and the references from the retrieved documents, and finally obtains a report by rule-based stitching.
Fig. 2 is a flow chart of a method of generating a report provided by an embodiment of the present disclosure. The method can be applied to an apparatus for generating a report, and the apparatus can be deployed in an electronic device. The electronic device is, for example, a stand-alone or clustered terminal, server, or other processing device. The terminal may be a mobile device, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other user equipment (UE). In some possible implementations, the method may also be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in fig. 2, the method may include the following steps S210 to S230.
S210, generating reports for the user input information by adopting a plurality of report agent modules to obtain candidate report sets.
The embodiment of the disclosure can be applied to various scenes to generate various types of reports. For example, the method can be applied to the financial field to generate daily analysis reports of investment markets; or applied to the field of knowledge base to generate a knowledge report; or applied to the academic field, generating an overview report; or applied to enterprise management to generate reports of the enterprise internal knowledge base.
The user input information is the input provided by a user who has a report generation need. The user input information may include questions posed by the user about the desired report, such as "Should I invest in product A?", "Is reselling idle merchandise profitable?", or "What is the most interesting attraction in Hainan?". Optionally, while providing the user input information or the question, the user may also provide information such as documents or images related to the user input information or the question.
In the embodiment of the disclosure, the report agent module is a module for generating a report for user input information, and the module may be a program module or a software module, for example, a set of program instructions. Alternatively, the report agent module may include large model Agents in the related art, and may also include modules using other framework structures.
Alternatively, each report agent module may output at least one report, which is referred to as a candidate report. Based on the candidate reports output by the report agent modules, a candidate report set can be obtained.
S220, rewriting at least part of reports in the candidate report set to obtain a rewriting report, and adding the rewriting report into the candidate report set.
It should be noted that, in the embodiments of the present disclosure, at least part may refer to all or part. At least some of the candidate reports in the candidate report set may include one or more candidate reports. The one or more candidate reports may be selected from a set of candidate reports based on a certain selection strategy or may be randomly sampled.
Alternatively, the large language model may be used to rewrite at least a portion of the report, for example, by instructing the large language model to rewrite the at least a portion of the report in a conversational manner, i.e., by having a conversation with the large language model, resulting in a rewritten report output by the large language model.
In the embodiment of the disclosure, if the rewriting report is added to the candidate report set, the candidate report set includes the report generated by the report agent module and the rewriting report, so that the diversity of the report in the candidate report set can be improved.
And S230, in the case that the candidate report set does not include a report meeting a preset requirement, returning to the step of rewriting at least part of the reports in the candidate report set until the candidate report set includes a report meeting the preset requirement, and obtaining a target report based on the report meeting the preset requirement.
The preset requirements may include conditions relating to the quality, number of words, format, etc. of the report, for example. In practical applications, for any report, whether the report meets the preset requirement can be determined based on a rule, or whether the report meets the preset requirement can be determined by using a model, for example, whether the report meets the preset requirement is determined by indicating a large language model in a conversational mode, or the report is evaluated by a preset evaluation model.
According to the method, when the candidate report set does not include a report meeting the preset requirement, at least part of the reports in the candidate report set need to be rewritten again. Here, the reports selected for re-rewriting may be the same as or different from the reports selected for the previous rewriting. For example, suppose the candidate report set includes reports 1 to 3 and, in the first rewriting, report 1 is selected and rewritten to obtain a rewritten report, which is added to the candidate report set; if the candidate report set still does not include a report meeting the preset requirement, a report is selected again for rewriting. Depending on the selection strategy, possible cases include: rewriting report 1 again, rewriting report 2 or 3, rewriting the previously rewritten report, and so on.
It can be understood that, after each rewriting, the rewritten report is added to the candidate report set, and then it is judged whether the candidate report set includes a report meeting the preset requirement; if so, the target report can be obtained based on the report meeting the preset requirement; if not, all or part of the reports in the set are rewritten again so as to iteratively optimize the reports in the candidate report set.
According to the method, a plurality of report agent modules are adopted to respectively generate reports for user input information, at least part of the reports are rewritten, iterative optimization is performed to obtain reports meeting preset requirements, and target reports are obtained based on the reports meeting the preset requirements. Through group intelligence, rewriting and iteration of a plurality of report agent modules, stability and diversity of report output quality are improved, and therefore usability of reports is improved.
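By way of illustration only, the overall flow of steps S210 to S230 can be sketched in Python as follows. The callables report_agents, rewrite and meets_requirements are hypothetical placeholders for the report agent modules, the rewriting step and the preset-requirement check described above; the iteration cap and the fallback return are assumptions of the sketch, not part of the disclosure.

from typing import Callable, List

def generate_target_report(
    user_input: str,
    report_agents: List[Callable[[str], str]],   # one callable per report agent module
    rewrite: Callable[[List[str]], str],          # rewrites at least part of the candidate set (S220)
    meets_requirements: Callable[[str], bool],    # preset-requirement check (quality, word count, format)
    max_iterations: int = 10,                     # safety bound; not specified by the disclosure
) -> str:
    # S210: each report agent module generates a candidate report for the user input information.
    candidates = [agent(user_input) for agent in report_agents]
    for _ in range(max_iterations):
        # S230: stop once any candidate report meets the preset requirement.
        for report in candidates:
            if meets_requirements(report):
                return report
        # S220: rewrite based on at least part of the candidate set and add the rewritten report back.
        candidates.append(rewrite(candidates))
    # Fallback when no candidate converges within the iteration budget (an assumption for the sketch).
    return candidates[-1]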
In some embodiments, the candidate report set includes a first candidate report generated by a first report agent module of the plurality of report agent modules for the user input information. Specifically, the candidate report set may include a candidate report generated by each of the plurality of report agent modules for the user input information, where the candidate report generated by the first report agent module is denoted the first candidate report. Here, the first report agent module may be any report agent module or a specific report agent module. The manner in which the first report agent module generates the first candidate report may also apply to the other report agent modules, or may differ from the manner in which the other report agent modules generate their candidate reports. That is, the plurality of report agent modules may generate their reports in the same manner or in different manners.
In the above embodiment, the manner in which the first report agent module generates the first candidate report includes:
generating N pieces of query information for the user input information, wherein N is an integer not less than 2;
obtaining summary content corresponding to at least part of query information in the N query information through retrieval;
and generating a first candidate report based on the summary content corresponding to at least part of the query information.
According to this embodiment, at least two pieces of query information are generated by expansion, the corresponding summary content is then obtained by retrieval for all or part of the generated query information, and the first candidate report is generated. This improves the diversity of the retrieved content, avoids one-sided summary content, and ensures a certain stability of the first candidate report.
In some embodiments, the manner in which the first reporting agent module generates the first candidate report further comprises:
Calling a first large language model based on a preset role analysis prompt template and the user input information to obtain a role description related to the user input information;
The role analysis prompt template is used for instructing the first large language model to output the role description in a conversational manner; the role description is used by the first report agent module for generating the N pieces of query information and/or for generating the first candidate report based on the summary content corresponding to at least part of the query information.
The steps described above may be performed before the first report agent module generates the N pieces of query information for the user input information. The first report agent module first determines the role description according to the user input information and then generates the N pieces of query information so as to generate the first candidate report; the role description may be referred to when generating the N pieces of query information and/or when generating the first candidate report based on the summary content corresponding to at least part of the query information.
In the above embodiment, the role description is acquired using the first large language model. For example, the user input information may be input into the first large language model together with the role analysis prompt template; the role analysis prompt template may include spoken guiding utterances, so that the first large language model outputs the role description based on its analysis of the template text. Here, the role analysis prompt template is a preset prompt template used for guiding role analysis.
For example, the first large language model may be used to determine a role, such as a finance agent role (Finance Agent), a business analyst role (Business Analyst Agent), or a travel agent role (Travel Agent), and different roles may have different role descriptions. A specific implementation is as follows.
The role analysis prompt template is as follows:
This task involves studying a given topic, whether it is complex or has a definite answer. Research is performed by one specific agent, defined by its type and role, each of which requires different instructions.
Agent: the agent is determined by the topic area and the name of the specific agent available to study the provided topic. Agents are classified according to their professional fields, and each agent type is associated with a respective emoji.
Examples:
Task: "Should I invest in product A?"
Response:
{
"agent": "Finance Agent",
"agent_role_sample": "You are an experienced financial analysis AI assistant. Your main objective is to compose comprehensive, intelligent, fair and systematic financial reports based on the data and trends provided."
}
Task: "Is reselling second-hand merchandise profitable?"
Response:
{
"agent": "Business Analyst Agent",
"agent_role_sample": "You are an experienced AI business analysis assistant. Your main objective is to compose comprehensive, insightful, fair and systematic business reports based on the business data, market trends and strategic analysis provided."
}
Task: "What is the most interesting attraction in Hainan?"
Response:
{
"agent": "Travel Agent",
"agent_role_sample": "You are an AI tour guide assistant for destinations around the world. Your main task is to compose engaging, insightful, fair and well-structured travel reports about a given location, covering its history, scenic spots and cultural insights."
}
It can be seen that the role analysis prompt template provides several example role-description outputs, which inform the first large language model, through spoken language, to answer with a role (agent) and a role description (agent_role_sample) according to the input task. Therefore, by taking the user input information as the task and inputting it together with the role analysis prompt template into the first large language model, the role description output by the first large language model for the user input information can be obtained.
In the above embodiment, the role description is determined by performing role analysis by using the first large language model, so that the subsequent query information generation and report generation processes are guided by using the role description, which is helpful for generating contents within the role range in the generation process, and improving the stability of the candidate report.
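By way of illustration only, the role-analysis call can be sketched in Python as follows; call_llm is a hypothetical placeholder for the first large language model, and the template string is an abbreviated stand-in for the full role analysis prompt template quoted above.

import json

ROLE_ANALYSIS_TEMPLATE = (
    "This task involves studying a given topic. Research is performed by one specific agent, "
    "defined by its type and role.\n"
    'Reply in JSON with the keys "agent" and "agent_role_sample".\n'
    'Task: "{task}"\n'
    "Response:"
)  # abbreviated stand-in for the role analysis prompt template

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat API serves as the first large language model."""
    raise NotImplementedError

def analyze_role(user_input: str) -> dict:
    # Combine the user input information (as the task) with the role analysis prompt template.
    reply = call_llm(ROLE_ANALYSIS_TEMPLATE.format(task=user_input))
    # Expected shape: {"agent": "Finance Agent", "agent_role_sample": "You are an experienced ..."}
    return json.loads(reply)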
In some embodiments, generating the N pieces of query information for the user input information includes:
searching a summary information base for a plurality of pieces of summary information matched with the user input information;
and, for each piece of summary information in the plurality of pieces of summary information, calling a second large language model based on the summary information, a preset first query prompt template and the user input information to obtain M pieces of query information corresponding to the summary information, where the first query prompt template is used for instructing the second large language model, in a conversational manner, to output M pieces of query information related to the user input information according to the summary information, and M is a positive integer not greater than N.
In the above embodiment, the query information is obtained using the second large language model. For example, the user input information, a single piece of summary information, and the first query prompt template may be input into the second large language model; the first query prompt template may include spoken guiding utterances, so that the second large language model outputs one or more pieces of query information related to the user input information based on the summary information.
For example, the first query prompt template is:
f'{context} Based on the above information, write 4 search queries to form an objective opinion on: "{question}"'
f'You must reply with a list of Chinese strings in the following format: ["query 1", "query 2", "query 3", "query 4"]'
where context is the summary information matched with the user input information, question is the user input information, and query 1 to query 4 are the 4 pieces of query information. It can be seen that, for each piece of summary information, the summary information and the user input information are combined with the first query prompt template and input into the second large language model, which guides the second large language model to reply with a plurality of pieces of query information.
According to this embodiment, the text processing capability of the second large language model can be used to expand and generate one or more pieces of query information for each piece of summary information, which improves the diversity of the query information and ensures the stability of the candidate reports. In addition, by first retrieving the summary information and then using it to generate the query information, errors in the processing result caused by inputting overly long text into the second large language model can be avoided, further improving the stability of the candidate report.
Alternatively, for each piece of summary information, the second large language model may be called based on the role description in the foregoing embodiments, the summary information, the first query prompt template and the user input information, instructing the second large language model, in a conversational manner, to output M pieces of query information related to the user input information according to the summary information and based on the role corresponding to the role description.
For example, the first query prompt template may be:
"You are an AI tour guide assistant for destinations around the world. Your main task is to compose engaging, insightful, fair and well-structured travel reports about a given location, covering its history, scenic spots and cultural insights.
f'{context} Based on the above information, write 4 search queries to form an objective opinion on: "{question}"'
f'You must reply with a list of Chinese strings in the following format: ["query 1", "query 2", "query 3", "query 4"]'"
In the embodiments of the present disclosure, the first large language model, the second large language model, the third large language model, and so on may be different models or the same model, which is not limited by the embodiments of the present disclosure. Because large language model generation involves randomness, processing the same user input information with the same large language model can yield different results; that is, even if different report agent modules use the same large language model, different candidate reports can be obtained.
In some embodiments, generating the N pieces of query information for the user input information may further include:
calling a third large language model based on the plurality of pieces of summary information, a preset second query prompt template and the user input information to obtain comprehensive query information corresponding to the plurality of pieces of summary information, where the second query prompt template is used for instructing the third large language model, in a conversational manner, to output comprehensive query information related to the user input information according to the plurality of pieces of summary information.
In the above embodiment, query information may also be obtained using a third large language model; it is understood that, according to the foregoing description, the third large language model may be the same model as or a different model from the second large language model. For example, the user input information, the plurality of pieces of summary information, and the second query prompt template may be input into the third large language model; the second query prompt template may include spoken guiding utterances, so that the third large language model outputs, over the plurality of pieces of summary information as a whole, one or more pieces of query information related to the user input information.
For example, the second query prompt template is:
"Your task is to write 4 comprehensive search queries based on the given contexts, taking all of them into account. The contexts are {{context}}. You need to consider the above information comprehensively and write 4 comprehensive search queries to form an objective opinion on: "{{question}}"
You must reply with a list of Chinese strings in the following format: ["query 1", "query 2", "query 3", "query 4"]."
Here, context is the plurality of pieces of summary information, question is the user input information, and query 1 to query 4 are the pieces of query information. It can be seen that the plurality of pieces of summary information, taken as a whole, are combined with the user input information and the second query prompt template and input into the third large language model, which guides the third large language model to reply with a plurality of pieces of query information; because these pieces of query information are obtained by integrating the plurality of pieces of summary information, they can be called comprehensive query information.
According to the above embodiment, the text processing capability of the third large language model can be used to expand and generate one or more pieces of comprehensive query information for the plurality of pieces of summary information as a whole. By combining the comprehensive query information with the query information corresponding to each piece of summary information, the diversity of the query information can be improved and the stability of the candidate reports can be ensured.
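By way of illustration only, the two query-generation paths (per-summary and comprehensive) can be sketched in Python as follows; call_llm is a hypothetical placeholder for the second and third large language models, and the template strings are abbreviated versions of the first and second query prompt templates quoted above.

import json
from typing import Callable, List

PER_SUMMARY_TEMPLATE = (
    "{context}\n"
    'Based on the above information, write {m} search queries to form an objective opinion on: "{question}"\n'
    'You must reply with a JSON list of strings: ["query 1", "query 2", ...]'
)
COMPREHENSIVE_TEMPLATE = (
    "Considering all of the following contexts together: {contexts}\n"
    'Write {m} comprehensive search queries to form an objective opinion on: "{question}"\n'
    'You must reply with a JSON list of strings: ["query 1", "query 2", ...]'
)

def generate_queries(
    user_input: str,
    summaries: List[str],                 # summary information matched in the summary information base
    call_llm: Callable[[str], str],       # injected client standing in for the second/third large language models
    m: int = 4,
) -> List[str]:
    queries: List[str] = []
    # M pieces of query information per matched summary (second large language model).
    for summary in summaries:
        prompt = PER_SUMMARY_TEMPLATE.format(context=summary, m=m, question=user_input)
        queries.extend(json.loads(call_llm(prompt)))
    # Comprehensive query information over all summaries taken as a whole (third large language model).
    prompt = COMPREHENSIVE_TEMPLATE.format(contexts="\n".join(summaries), m=m, question=user_input)
    queries.extend(json.loads(call_llm(prompt)))
    return queries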
In some embodiments of the present disclosure, a method for constructing the summary information base used in the foregoing embodiments is also provided. Specifically, in some embodiments, the above method may further include:
segmenting a document related to the user input information to obtain a plurality of segmented texts;
for each segmented text in the plurality of segmented texts, calling a fourth large language model to obtain summary information of the segmented text;
and obtaining the summary information base based on the summary information of each segmented text.
By way of example, documents related to user input information may include documents provided by a user that are associated with user input information. For example, the user may provide an associated document while entering questions related to reporting needs. Optionally, documents related to the user input information may also include documents retrieved based on the user input information.
Alternatively, for each segmented text of the document, the segmented text may be combined with a summary generation prompt template to invoke the fourth large language model and obtain the summary information of the segmented text. The summary generation prompt template is used for guiding the fourth large language model, in a conversational manner, to reply with summary information according to the segmented text.
For example, the summary generation prompt template is:
f'{chunk} Please summarize the above article in Chinese. The summary needs to be concise and general, must not contain information unrelated to the content of the article, and must be kept within 500 words.'
where chunk is the segmented text. It can be seen that, for each segmented text, the segmented text is combined with the summary generation prompt template and input into the fourth large language model, which guides the fourth large language model to reply with the summary information.
In the above embodiment, the document is first segmented into shorter segmented texts, and the fourth large language model is then called on each segmented text to generate its summary. This leverages the short-text processing capability of the fourth large language model to improve the accuracy of the summaries, avoid model hallucination, and improve the stability of the generated results.
Alternatively, after the summary information of each segmented text is obtained, a vector generation model, such as an embedding model, may be used to extract a vector for each piece of summary information, so as to construct the summary information base in the form of a vector database. Then, for the user input information, vector matching can be used to search the summary information base for a plurality of pieces of summary information matched with the user input information.
FIG. 3 is an exemplary framework diagram of the query information determination process in an embodiment of the present disclosure. As shown in fig. 3, a summary information base can be constructed by extracting summaries from the related documents. For the question contained in the user input information, matched summary information (which may also be referred to as context information) is first found in the summary information base; there may be several pieces of matched summary information. Then, query information is generated for the summary information; the generated query information includes query information corresponding to a single summary and comprehensive query information corresponding to a plurality of summaries. Sampling may then be performed on the generated query information.
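By way of illustration only, the construction of the summary information base and the vector matching can be sketched in Python as follows; summarize and embed are hypothetical placeholders for the fourth large language model (behind the summary generation prompt template) and the embedding model, and the chunk size and top_k value are assumptions of the sketch.

from typing import Callable, List, Tuple

def build_summary_base(
    documents: List[str],
    summarize: Callable[[str], str],        # fourth large language model with the summary generation prompt template
    embed: Callable[[str], List[float]],    # placeholder embedding (vector generation) model
    chunk_size: int = 2000,                 # segmented-text length; not specified by the disclosure
) -> List[Tuple[List[float], str]]:
    base = []
    for doc in documents:
        for i in range(0, len(doc), chunk_size):
            summary = summarize(doc[i:i + chunk_size])   # per-chunk summary, kept within 500 words per the template
            base.append((embed(summary), summary))
    return base

def match_summaries(
    question: str,
    base: List[Tuple[List[float], str]],
    embed: Callable[[str], List[float]],
    top_k: int = 5,                                      # number of matched summaries; an assumed value
) -> List[str]:
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0
    query_vec = embed(question)
    ranked = sorted(base, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [summary for _, summary in ranked[:top_k]]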
In some embodiments, obtaining summary content corresponding to at least part of the query information in the N query information through retrieval includes:
randomly determining L pieces of query information among the N pieces of query information, wherein L is a positive integer not greater than N;
and, for each piece of query information in the L pieces of query information, retrieving K documents related to the query information, and obtaining summary content corresponding to the query information based on summary information of the K documents.
According to the above embodiment, L pieces of query information are randomly sampled among the N pieces of query information. Then, for each sampled piece of query information, the K related documents are retrieved to generate the summary content. Report generation can therefore be based on the summary content corresponding to each of the L pieces of query information, which improves the richness of the report content and thus the usability of the report.
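By way of illustration only, the random sampling and retrieval step can be sketched in Python as follows; search is a hypothetical placeholder for whatever search engine or document index is used, and the values of L and K are assumptions of the sketch.

import random
from typing import Callable, Dict, List

def sample_and_retrieve(
    queries: List[str],
    search: Callable[[str, int], List[str]],  # placeholder: returns K documents for one piece of query information
    l: int = 3,                               # L, number of randomly sampled pieces of query information (assumed)
    k: int = 5,                               # K, number of documents retrieved per query (assumed)
) -> Dict[str, List[str]]:
    sampled = random.sample(queries, min(l, len(queries)))
    # K related documents per sampled piece of query information; their summaries are spliced downstream.
    return {query: search(query, k) for query in sampled}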
In some embodiments, the summary information of the K documents used to obtain the summary content may be determined as follows:
for each document in the K documents, segmenting the document to obtain a plurality of segmented texts of the document, and generating summary information of each segmented text in the plurality of segmented texts;
and splicing the summary information of all the segmented texts associated with the K documents to obtain the summary information of the K documents.
Alternatively, the above-mentioned method for generating the segmentation and the abstract for each document in the K documents may be implemented by referring to the method for generating the segmentation and the abstract for the document related to the user input information in the foregoing embodiment, which is not described herein.
It will be appreciated that all the segmented texts associated with the K documents include the multiple segmented texts of each document. That is, all the segmented texts contribute to the summary information of the K documents, from which the summary content corresponding to the query information is obtained, thereby improving the richness of that summary content.
In some embodiments, obtaining summary content corresponding to the query information based on summary information of K documents includes:
Calling a fifth large language model based on the summary information of the K documents, a preset summary prompt template and the user input information to obtain the summary content corresponding to the query information, where the summary prompt template is used for instructing the fifth large language model, in a conversational manner, to respond to the user input information according to the summary information of the K documents.
In the above embodiment, the summary content corresponding to the query information may be obtained using the fifth large language model. For example, the summary information of the K documents, the user input information, and the summary prompt template may be combined and input into the fifth large language model; the summary prompt template may include spoken guiding utterances, so that the fifth large language model responds to the user input information according to the summary information of the K documents.
For example, the summary prompt template is:
f'"""{chunk}""" Using the above text, briefly answer the following question: "{question}". If the question cannot be answered using the text, please briefly summarize the text instead.'
'Include all factual information, numbers, statistics, etc. (if any). Keep the answer within 500 words.'
where chunk is the summary information of the K documents, i.e., the information obtained by splicing the summary information of each segmented text, and question is the user input information, such as a question related to the report generation need. It can be seen that the summary information of the K documents, the user input information and the summary prompt template are combined and input into the fifth large language model, which guides the fifth large language model to answer the question in the user input information.
In the above embodiment, by calling the fifth large language model with the summary prompt template, the text processing capability of the fifth large language model can be used to improve the accuracy of the summary content, thereby improving the usability of the generated report.
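By way of illustration only, building the summary content for one piece of query information can be sketched in Python as follows; summarize_chunk and call_llm are hypothetical placeholders (the latter standing in for the fifth large language model), and the template string is an abbreviated version of the summary prompt template quoted above.

from typing import Callable, List

SUMMARY_PROMPT_TEMPLATE = (
    '"""{chunk}""" Using the above text, briefly answer the following question: "{question}". '
    "If the question cannot be answered from the text, briefly summarize the text instead. "
    "Include all factual information, numbers and statistics (if any). Keep the answer within 500 words."
)

def summarize_for_query(
    question: str,
    documents: List[str],                    # the K documents retrieved for this piece of query information
    summarize_chunk: Callable[[str], str],   # per-chunk summarizer, as in the summary base sketch
    call_llm: Callable[[str], str],          # stands in for the fifth large language model
    chunk_size: int = 2000,                  # assumed segmented-text length
) -> str:
    chunk_summaries = []
    for doc in documents:
        for i in range(0, len(doc), chunk_size):
            chunk_summaries.append(summarize_chunk(doc[i:i + chunk_size]))
    spliced = "\n".join(chunk_summaries)     # spliced summary information of the K documents
    return call_llm(SUMMARY_PROMPT_TEMPLATE.format(chunk=spliced, question=question))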
In some embodiments, generating the first candidate report based on the summary content corresponding to at least part of the query information includes:
determining a report outline based on at least part of the query information;
and calling a sixth large language model based on the report outline, the summary content corresponding to at least part of the query information, a preset report prompt template and the user input information to generate the first candidate report, where the report prompt template is used for instructing the sixth large language model, in a conversational manner, to respond to the user input information according to the report outline with reference to the summary content corresponding to at least part of the query information.
Here, the report outline may be used to determine the structure of the report, so that the content integrity and readability of the generated report are improved. Illustratively, the report outline may be generated using a large language model.
For example, suppose the at least part of the query information (e.g., the L pieces of query information determined by random sampling in the foregoing embodiments) includes "functions and features of artificial intelligence", "performance and accuracy of artificial intelligence", and "application fields and future development of artificial intelligence". The following information can be input into the large language model to implement outline extraction:
"1. Functions and features of artificial intelligence
2. Performance and accuracy of artificial intelligence
3. Application fields and future development of artificial intelligence
Please generate an outline for {write a report on artificial intelligence research} from the above questions, and output it in JSON form."
Based on the input information, the large language model replies with a report outline (for example, in JSON form).
For example, the report outline, the summary content corresponding to at least part of the query information, the user input information and the report prompt template may be combined and input into the sixth large language model; the report prompt template may include spoken guiding utterances, so that the sixth large language model responds to the user input information according to the structure of the report outline, with reference to the summary content.
For example, the report prompt template may be as follows:
f'"""{research_summary}""" Using the above information, answer the following question or topic in a detailed report according to the set outline {outline}: "{question}".'
'The report should focus on answering the question, be well structured and rich in content, include facts and figures (if any), be at least 2000 words, and use Markdown syntax and APA format.'
'You must form your own explicit and valid views based on the given information; do not draw general or meaningless conclusions.'
f'List all used source URLs in APA format at the end of the report.'
Here, {research_summary} is the summary content, {question} is the user input information, and {outline} is the report outline. It can be seen that the summary content, the user input information, the report outline and the report prompt template are combined and input into the sixth large language model, which guides the sixth large language model to answer the question in the user input information.
According to the embodiment, the outline is extracted first, then the candidate report is generated according to the outline by utilizing the large language model, so that the output form is prevented from being too random, and the usability of the output report is improved.
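By way of illustration only, the outline extraction and report writing steps can be sketched in Python as follows; call_llm is a hypothetical placeholder for the large language model(s), and the template strings are abbreviated versions of the outline prompt and the report prompt template quoted above.

from typing import Callable, Dict, List

OUTLINE_TEMPLATE = (
    "{queries}\n"
    'Please generate an outline for a report on "{question}" from the above questions. Output it in JSON form.'
)
REPORT_TEMPLATE = (
    '"""{research_summary}""" Using the above information and the outline {outline}, answer the following '
    'question or topic in a detailed report: "{question}". The report should be well structured, include facts '
    "and figures where available, be at least 2000 words, and use Markdown syntax and APA format. "
    "List all used source URLs in APA format at the end of the report."
)

def write_candidate_report(
    question: str,
    sampled_queries: List[str],
    summaries_per_query: Dict[str, str],     # summary content produced for each sampled piece of query information
    call_llm: Callable[[str], str],          # stands in for outline extraction and the sixth large language model
) -> str:
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(sampled_queries))
    outline = call_llm(OUTLINE_TEMPLATE.format(queries=numbered, question=question))
    research_summary = "\n\n".join(summaries_per_query.values())
    return call_llm(REPORT_TEMPLATE.format(research_summary=research_summary, outline=outline, question=question))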
FIG. 4 is an exemplary framework diagram of the first candidate report generation process in an embodiment of the disclosure. As shown in fig. 4, the user input information is first input to an intention understanding module, which outputs the role description using a large language model and provides it to a task planning module. The task planning module generates a plurality of pieces of query information based on the user input information and the role description. Specifically, the task planning module may output the plurality of pieces of query information according to the summary information found in the summary base that matches the user input information, the user input information and the role description, and sample L pieces of query information from them for the subsequent steps. Then, document retrieval and summarization are performed for each of the L pieces of query information to obtain the summary content of each piece of query information. Meanwhile, the output of the task planning module can be used for outline extraction; after the outline is extracted, a report writing module can output a candidate report by combining the summary content and the outline using a large language model.
Embodiments are provided below for exemplary illustration of report rewriting and iterative optimization, respectively, in methods of generating reports of the present disclosure.
In some embodiments, rewriting based on at least part of the reports in the candidate report set to obtain the rewritten report includes:
determining a high-quality report in the candidate report set based on the score of each report in the candidate report set;
and rewriting the high-quality report to obtain the rewritten report.
According to the above embodiment, the scope of report rewriting is the high-quality reports in the candidate report set, which may be, for example, the reports with scores higher than a preset value, or the X highest-scoring reports (X is a positive integer). Rewriting based on a high-quality report guarantees the quality of the rewritten report to a certain extent, so that a better report is added to the candidate report set after rewriting and poor reports are avoided, thereby improving the efficiency of obtaining a report meeting the preset requirement from the candidate report set and correspondingly improving the generation efficiency of the target report.
In some embodiments, the method of generating a report further comprises:
deleting reports with format errors from the candidate report set;
and, for each report in the candidate report set after the reports with format errors are deleted, calling a seventh large language model based on the report and a scoring criteria prompt template to obtain the score of the report, where the scoring criteria prompt template is used for instructing the seventh large language model, in a conversational manner, to determine the score of the report according to preset scoring criteria.
The steps of deleting the reports with format errors and scoring the reports may be performed before the step of determining the high-quality report based on the scores. For example, the determination of a high-quality report may refer to fig. 5. As shown in fig. 5, the candidate report set includes Z reports; they are first evaluated and coarsely filtered by format, and the reports with format errors are deleted. Each remaining report is then scored and ranked, and a high-quality report is finally selected.
For example, each report remaining after the coarse filtering may be combined with the scoring criteria prompt template and input into the seventh large language model. The scoring criteria prompt template directs the seventh large language model, through spoken prompt utterances, to score the report. Optionally, the seventh large language model may also be invoked based on the report, the user input information and the scoring criteria prompt template, so that it scores the report with reference to the user input information.
For example, the scoring criteria prompt template may be as follows:
"You are now given 1 report. You need to score this report strictly according to the following criteria; the more criteria it meets, the higher the score, and the score is between 0 and 10. You should output JSON whose keys are "reasons" and "total_score": {"reasons": ..., "total_score": ...}.
Score the report as follows:
1. Carefully check the report format. The report must be complete, including a title, abstract, body, references, etc.; the higher the completeness, the higher the score. This item is given a maximum of 4 points.
2. Carefully check the report content. The higher the relevance of the report content to {{query}}, the higher the score. This item is given a maximum of 4 points.
3. Carefully check the report format for headings marked with "#". This item is given a maximum of 2 points: headings without "#" are given 0 points, and headings with "#" are given 1 point.
4. Carefully check the report format. The end of a report heading must not carry any Chinese punctuation: a heading ending with Chinese punctuation is given 0 points, and a heading without it is given 1 point.
The following is the content of the report: {content}
Remember that you need to give the scoring reason for each report according to the scoring criteria, and finally give the scoring result and the final score list.
Your output needs to be in the following format:
To score this report, I will evaluate it according to the given criteria. The scoring reasons will be based on the following criteria:
1) whether it contains a title, abstract, body, references, etc.; 2) the relevance of the content to the question; 3) whether the headings carry a "#" mark; 4) whether the headings end with Chinese punctuation."
Here, {content} is the report and {query} is the user input information. It can be seen that the report, the user input information and the scoring criteria prompt template are combined and input into the seventh large language model, which guides the seventh large language model to score the report.
According to the embodiment, the high-quality report can be determined efficiently through coarse filtering and fine sorting, so that the generation efficiency of the target report is improved.
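By way of illustration only, the coarse filtering and fine ranking can be sketched in Python as follows; check_format and score_report are hypothetical placeholders for the large language model calls made with the format filtering prompt template (described below) and the scoring criteria prompt template, and the sketch assumes the model returns the JSON the prompts ask for (real code would need more defensive parsing).

import json
from typing import Callable, List

def select_quality_reports(
    question: str,
    candidates: List[str],
    check_format: Callable[[str], str],        # LLM call with the format filtering prompt template
    score_report: Callable[[str, str], str],   # LLM call with the scoring criteria prompt template
    top_x: int = 2,                            # number of high-quality reports kept; an assumed value
) -> List[str]:
    # Coarse filtering: delete reports the model judges not to be valid Markdown.
    well_formed = [r for r in candidates if json.loads(check_format(r)).get("accept") is True]
    # Fine ranking: score each remaining report (0 to 10 per the scoring criteria) and sort descending.
    scored = [(json.loads(score_report(r, question)).get("total_score", 0), r) for r in well_formed]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [report for _, report in scored[:top_x]]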
Alternatively, the above coarse filtering process may also be implemented using a large language model. For example, for each report in the candidate report set, a large language model may be invoked based on the report and a format filtering prompt template. The format filtering prompt template is used for guiding the large language model, in a conversational manner, to recognize reports with format errors.
For example, the following format filtering prompt template may be set:
"You are now given 1 report. You need to judge whether the report is in Markdown format and give the reason.
You need to output the reason for your judgment and the judgment result, i.e., whether or not the report is in Markdown format.
Your output should be in JSON form, including two keys, one being "reasons" and the other being "accept".
If you consider the report to be in Markdown form, "accept" takes the value true; if you consider the report not to be in Markdown form, "accept" takes the value false. You need to judge whether the report is in Markdown form and give the reasons: {"reasons": ..., "accept": ...}.
Report: {{report}}"
In some embodiments, rewriting the high-quality report to obtain the rewritten report includes:
calling an eighth large language model based on the high-quality report and a preset suggestion prompt template to obtain a rewriting suggestion for the high-quality report, where the suggestion prompt template is used for instructing the eighth large language model, in a conversational manner, to provide a rewriting suggestion for the high-quality report based on an editor role;
and obtaining the rewritten report based on the rewriting suggestion and the high-quality report.
For example, the high-quality report may be combined with the suggestion prompt template and input into the eighth large language model; the suggestion prompt template directs the eighth large language model, through spoken sentences, to output the rewriting suggestion based on the editor role.
For example, the suggestion prompt template is as follows:
"you are a name edit.
You are assigned a task edit to draft written by a non-expert.
Please analyze each paragraph of the report, if this draft is good enough for release, either please accept it, or send it to revision while attaching notes of the problematic paragraphs guiding the revision. You should send the appropriate revision notes please output in json format:
If revision is required, output is in the following format: { "accept": false, "notes": strip enumerates revision suggestions of problematic paragraphs. "}
Otherwise, outputting: { "accept": true, "notes": "}";
According to this embodiment, the eighth large language model outputs rewriting suggestions in the editing role, so that it can output accurate suggestions within the scope of that role, thereby providing useful guidance for the rewriting process, improving the quality of the rewritten report, and improving the efficiency of iterative optimization.
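For illustration only, the editor-agent call may be sketched as follows, assuming a hypothetical call_llm helper and the JSON shape suggested by the template above; the simplified template text is not the disclosed prompt.

```python
# Sketch of the editor-agent step: ask the eighth large language model, acting
# in the editing role, whether a high-quality report needs revision.
import json
from typing import Callable, Optional

SUGGESTION_TEMPLATE = (
    "You are an editor reviewing a draft written by a non-expert. If the draft "
    "is good enough, accept it; otherwise attach revision notes. "
    'Reply in JSON as {{"accept": true or false, "notes": "..."}}.\n'
    "Draft: {report}"
)

def get_rewrite_suggestion(report: str,
                           call_llm: Callable[[str], str]) -> Optional[str]:
    """Return revision notes, or None when the editor accepts the draft."""
    reply = json.loads(call_llm(SUGGESTION_TEMPLATE.format(report=report)))
    return None if reply.get("accept") else reply.get("notes", "")
```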
In some embodiments, obtaining the rewritten report based on the rewriting suggestion and the high-quality report includes:
Calling a ninth large language model based on the high-quality report, the rewriting suggestion and a preset first rewriting prompt template to obtain the rewritten report; the first rewriting prompt template is used for instructing the ninth large language model to refer to the rewriting suggestion in a conversational manner and rewrite the high-quality report based on the composer role.
For example, the high-quality report, the rewriting suggestion, and the first rewriting prompt template may be combined and input into the ninth large language model, and the first rewriting prompt template directs the ninth large language model to rewrite the high-quality report, based on the composer role, through conversational sentences.
For example, the first rewriting prompt template may be as follows:
"You are a professional writer. You have been assigned by the editor to revise a draft written by a non-expert. You may choose whether to follow the editor's remarks, as the case may be.
Use Chinese output; only local modifications to the draft are allowed, and rewriting the draft from scratch is not allowed. "
According to this embodiment, the ninth large language model rewrites the high-quality report in the composer role, so that it can produce a better-quality report through that role, improving the quality of the rewritten report and the efficiency of iterative optimization.
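A corresponding sketch of the rewriter step is given below; the template wording, helper name, and signature are assumptions for illustration.

```python
# Sketch of the rewriter step: the ninth large language model, acting as the
# writer, locally revises the high-quality report with reference to the notes.
# `call_llm` is a hypothetical helper.
from typing import Callable

REWRITE_TEMPLATE = (
    "You are a professional writer. Revise the draft below locally, following "
    "the editor's notes where appropriate; do not rewrite it from scratch.\n"
    "Editor notes: {notes}\nDraft: {report}"
)

def rewrite_report(report: str, notes: str,
                   call_llm: Callable[[str], str]) -> str:
    return call_llm(REWRITE_TEMPLATE.format(notes=notes, report=report))
```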
Fig. 6A is a schematic block diagram of one manner of obtaining the rewritten report described above. As shown in fig. 6A, the high-quality report is input to an editor agent module (Editor Agent), whose main purpose is to analyze the report content and put forward suggestions for evolving it. The suggestions and the high-quality report are then input to a rewriter agent module (Rewriter Agent), which locally rewrites the content of the individual report based on the report content and the evolution suggestions.
In some embodiments, the preset requirements include: the eighth large language model outputs no rewriting suggestion.
Optionally, after the rewritten report is added to the candidate report set, the rewritten report may be combined with the suggestion prompt template and input into the eighth large language model to obtain the suggestion result it outputs; if the eighth large language model outputs no suggestion, the rewritten report can be determined to be a report meeting the preset requirement, and the iterative optimization is stopped. It will be appreciated that, since the model had already output suggestions for the high-quality reports in the candidate report set before the previous rewrite, if the model also outputs suggestions for the rewritten report, it can be considered to have output suggestions for all reports, and no report in the set meets the preset requirement. Therefore, evaluating only the rewritten report for suggestions is sufficient to confirm whether the set includes a report meeting the preset requirement.
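One possible reading of this stopping criterion is sketched below; the helper signatures and the bounded loop are assumptions added for illustration and are not part of the disclosure.

```python
from typing import Callable, List, Optional

def meets_preset_requirement(rewritten: str,
                             get_suggestion: Callable[[str], Optional[str]]) -> bool:
    """True when the editor model outputs no rewriting suggestion for the report."""
    return get_suggestion(rewritten) is None

def iterate_until_accepted(candidates: List[str],
                           rewrite_once: Callable[[List[str]], str],
                           get_suggestion: Callable[[str], Optional[str]],
                           max_rounds: int = 10) -> Optional[str]:
    """Keep rewriting until some rewritten report draws no suggestion from the editor."""
    for _ in range(max_rounds):
        rewritten = rewrite_once(candidates)      # produce a new rewritten report
        candidates.append(rewritten)              # add it to the candidate report set
        if meets_preset_requirement(rewritten, get_suggestion):
            return rewritten                      # report meeting the preset requirement
    return None  # bounded loop is an added safeguard, not part of the disclosure
```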
According to the embodiment, the efficiency of report iteration optimization can be improved.
The disclosed embodiments also provide another way to obtain the rewritten report. Specifically, in some embodiments, rewriting the high-quality report to obtain a rewritten report includes:
performing paragraph exchange based on at least two high-quality reports to obtain a cross report;
Calling a tenth large language model based on the cross report and a preset second rewriting prompt template to obtain the rewritten report; wherein the second rewriting prompt template is used to instruct the tenth large language model to rewrite the cross report based on the composer role in a conversational manner.
Fig. 6B is a schematic block diagram of one manner of obtaining the rewritten report described above. As shown in fig. 6B, at least two high-quality reports are input to a cross agent module (Cross Over Agent), whose main purpose is to segment the at least two high-quality reports by topic (typically into large sections) and, after segmentation, select sections to exchange, thereby obtaining a cross report. The rewriter agent module (Rewriter Agent) then rewrites the cross report in a polishing manner, so that the paragraph transitions are smoother and more natural.
For example, a tenth large language model may be invoked based on the cross report with the following second rewriting prompt template:
"You are a professional writer. You have been assigned by the editor to modify, as required, a draft written by a non-expert. Please polish the content of the draft so that the paragraph transitions are smoother and more natural. Use Chinese output; only small local modifications to the draft are allowed, and rewriting the draft from scratch is not allowed. "
According to this embodiment, a cross report that integrates information from different high-quality reports can be obtained through crossover, so that the rewritten report differs in form from the original reports while quality is ensured, improving the efficiency of iterative optimization.
It can be understood that the two ways of obtaining the rewritten report may be implemented alternatively or in combination; for example, different reports may be generated in both ways, and the generated reports may be added to the candidate report set for iterative optimization.
FIG. 7 is a schematic diagram of report iterative optimization in an embodiment of the present disclosure, in which the rewritten report is obtained using a combination of the editor agent and the rewriter agent. As shown in fig. 7, the ranking agent module (Ranking Agent) is responsible for eliminating reports that do not meet the format requirements and for scoring and ranking the reports, ensuring the quality and quantity of the report population; the editor agent module (Editor Agent) is responsible for evaluating each paragraph and proposing improvements, i.e., planning the rewrite (the basis for the rewrite may also be obtained by rules in combination with the cross agent module); and the rewriter agent module (Rewriter Agent) executes the rewrite, i.e., performs a partial rewrite of a single report or a rewrite after paragraph exchange, so as to carry out iterative optimization.
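A minimal sketch of one such evolution round is given below; every callable (rank, suggest, rewrite) stands in for the corresponding agent module, and the keep_top parameter is an assumption.

```python
from typing import Callable, List, Optional, Tuple

def evolve_once(population: List[str],
                rank: Callable[[List[str]], List[Tuple[str, float]]],
                suggest: Callable[[str], Optional[str]],
                rewrite: Callable[[str, str], str],
                keep_top: int = 3) -> List[str]:
    """One round: rank the population, keep the best, rewrite those with editor notes."""
    ranked = sorted(rank(population), key=lambda item: item[1], reverse=True)
    survivors = [report for report, _ in ranked[:keep_top]]
    offspring = []
    for report in survivors:
        notes = suggest(report)
        if notes:                               # editor proposed changes: rewrite locally
            offspring.append(rewrite(report, notes))
    return survivors + offspring                # candidate report set for the next round
```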
In some embodiments, obtaining the target report based on the report meeting the preset requirements includes:
Calling an eleventh large language model to polish the report meeting the preset requirements to obtain the target report.
For example, as shown in fig. 8, the following polishing process may be performed on a report that meets the preset requirements (see the sketch after this list):
1. Split the report into paragraphs and expand each paragraph using a large model;
2. Add an abstract at the beginning of the report;
3. Add keywords at the beginning of the report;
4. Add references at the end of the report and within the sentences.
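The polishing steps listed above may be sketched, purely for illustration, as follows; the paragraph-splitting rule, the simplified reference handling, and the call_llm helper are assumptions.

```python
from typing import Callable, List

def polish_report(report: str, references: List[str],
                  call_llm: Callable[[str], str]) -> str:
    """Expand paragraphs, then add an abstract, keywords, and a reference list."""
    paragraphs = [p for p in report.split("\n\n") if p.strip()]
    expanded = [call_llm("Expand this paragraph with more detail:\n" + p)
                for p in paragraphs]
    body = "\n\n".join(expanded)
    abstract = call_llm("Write a short abstract for this report:\n" + body)
    keywords = call_llm("List 3-5 keywords for this report:\n" + body)
    # In-sentence citation insertion is omitted; references are only appended here.
    ref_block = "\n".join(f"[{i + 1}] {ref}" for i, ref in enumerate(references))
    return (f"Abstract: {abstract}\nKeywords: {keywords}\n\n"
            f"{body}\n\nReferences:\n{ref_block}")
```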
According to this embodiment, after a report meeting the preset requirements is obtained, the large language model is further used to polish it, improving the generation quality of the target report.
Fig. 9 is a schematic diagram of an application example of an embodiment of the present disclosure. As shown in fig. 9, the method of generating a report of an embodiment of the present disclosure includes three phases: population initialization, evolution and update, and polish-based generation. The population initialization stage uses a plurality of report agent modules to generate candidate reports respectively. The evolution and update stage optimizes and updates the reports through the three interactions of ranking, editing, and rewriting. Finally, the report is polished in the polish-based generation stage to obtain the target report. It can be seen that multiple agents are combined in the above application example for report generation.
Fig. 10 is a schematic diagram of an application scenario of an embodiment of the present disclosure. As shown in fig. 10, in this application scenario, a plurality of agents generate a plurality of reports for the user input information, and a target report is then obtained through genetic evolution and polishing. In practical applications, vectors may be generated from the corpus in advance to construct a summary information base in vector form, which facilitates report generation by the agents and retrieval of related documents from the document database.
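For illustration, the vector-form summary information base mentioned above could be built and queried roughly as sketched below; the embed function is a hypothetical embedding helper, and no specific vector library is implied.

```python
import math
from typing import Callable, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_summary_index(summaries: List[str],
                        embed: Callable[[str], List[float]]) -> List[Tuple[str, List[float]]]:
    """Embed every document summary once, ahead of time."""
    return [(summary, embed(summary)) for summary in summaries]

def search_summaries(query: str,
                     index: List[Tuple[str, List[float]]],
                     embed: Callable[[str], List[float]],
                     top_k: int = 5) -> List[str]:
    """Return the summaries whose vectors are closest to the query vector."""
    query_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [summary for summary, _ in ranked[:top_k]]
```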
According to the above scheme, multiple agents are used to generate reports: the randomness of large-model generation is exploited so that multiple reports are produced, and one report is then obtained from them through swarm intelligence, improving the stability and diversity of the quality of the report output. In addition, a hierarchical database-building method is introduced, and the summary database is used as context in the planning stage, reducing the hallucinations that invalid query information would otherwise introduce into report generation. Moreover, by adopting a genetic algorithm (an iterative optimization strategy) and multi-agent cooperation, errors can be revised, low-quality reports can be filtered out, and higher-quality reports can be generated.
In accordance with an embodiment of the present disclosure, the present disclosure further provides an apparatus for generating a report, and fig. 11 shows a schematic block diagram of an apparatus for generating a report provided in an embodiment of the present disclosure, as shown in fig. 11, the apparatus includes:
A generating unit 1110, configured to generate reports for the user input information by using a plurality of report agent modules, so as to obtain candidate report sets;
A rewriting unit 1120, configured to rewrite at least part of the reports in the candidate report set to obtain a rewritten report, and add the rewritten report to the candidate report set;
And a determining unit 1130, configured to, in a case where the candidate report set does not include a report meeting the preset requirement, return to the step of rewriting based on at least a part of the reports in the candidate report set until the candidate report set includes a report meeting the preset requirement, and obtain a target report based on the report meeting the preset requirement.
In some embodiments, the candidate report set includes a first candidate report generated by a first reporting agent of the plurality of reporting agents for the user input information;
as shown in fig. 12, the generation unit 1110 includes:
an information generation subunit 1210 configured to generate N pieces of query information for the user input information; wherein N is an integer not less than 2;
a summary generation subunit 1220, configured to obtain summary content corresponding to at least some query information in the N query information through retrieval;
the report generating subunit 1230 is configured to generate a first candidate report based on at least a portion of the summary content corresponding to the query information.
In some embodiments, as shown in fig. 12, the generating unit 1110 further includes a role generation subunit 1240, where the role generation subunit 1240 is configured to:
Calling a first large language model based on a preset role analysis prompt template and user input information to obtain role description related to the user input information;
The role analysis prompt template is used for indicating the first large language model to output role description in a conversational mode; the role description is used for generating N pieces of query information by the first report agent module and/or is used for generating a first candidate report by the first report agent module based on summary content corresponding to at least part of the query information.
In some embodiments, the information generation subunit 1210 is further configured to:
searching a plurality of summary information matched with user input information in a summary information base;
for each abstract information in the plurality of abstract information, calling a second large language model based on the abstract information, a preset first query prompt template and user input information to obtain M query information corresponding to the abstract information; the first query prompting template is used for indicating the second large language model to output M pieces of query information related to user input information according to the abstract information in a conversational mode; m is a positive integer not greater than N.
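A rough sketch of this per-summary query generation is given below, assuming a hypothetical call_llm helper and a line-per-query reply format (an assumption, not the disclosed behavior).

```python
from typing import Callable, List

FIRST_QUERY_TEMPLATE = (
    "Based on the summary below, propose {m} search queries related to the "
    "user request.\nUser request: {query}\nSummary: {summary}"
)

def generate_queries(user_input: str, summaries: List[str], m: int,
                     call_llm: Callable[[str], str]) -> List[str]:
    """Collect M query strings per matched summary, as described above."""
    queries: List[str] = []
    for summary in summaries:
        reply = call_llm(FIRST_QUERY_TEMPLATE.format(m=m, query=user_input,
                                                     summary=summary))
        queries.extend(line.strip() for line in reply.splitlines() if line.strip())
    return queries
```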
In some embodiments, the information generation subunit 1210 is further configured to:
Calling a third large language model based on the plurality of abstract information, a preset second query prompting template and user input information to obtain comprehensive query information corresponding to the plurality of abstract information; the second query prompting template is used for indicating the third large language model to output comprehensive query information related to the user input information according to the plurality of abstract information in a conversational mode.
In some embodiments, as shown in fig. 13, the apparatus further includes a digest-library construction unit 1310, where the digest-library construction unit 1310 is configured to:
segmenting a document related to user input information to obtain a plurality of segmented texts;
Aiming at each segmented text in the plurality of segmented texts, calling a fourth large language model to obtain abstract information of each segmented text;
and obtaining a summary information base based on the summary information of each segmented text.
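For illustration, the summary-library construction may be sketched as follows; the fixed-size chunking rule and the call_llm helper are assumptions.

```python
from typing import Callable, Dict, List

def segment(document: str, chunk_chars: int = 1000) -> List[str]:
    """Fixed-size character chunking; the chunk size is an illustrative assumption."""
    return [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]

def build_summary_base(documents: List[str],
                       call_llm: Callable[[str], str]) -> Dict[str, str]:
    """Map a chunk identifier to the model-generated summary of that chunk."""
    base: Dict[str, str] = {}
    for doc_idx, document in enumerate(documents):
        for chunk_idx, chunk in enumerate(segment(document)):
            key = f"doc{doc_idx}-chunk{chunk_idx}"
            base[key] = call_llm("Summarize this passage in two sentences:\n" + chunk)
    return base
```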
In some embodiments, the summary generation subunit 1220 is further configured to:
randomly determining L pieces of inquiry information in the N pieces of inquiry information; wherein L is a positive integer not greater than N;
For each query message in the L query messages, K documents related to the query message are obtained through retrieval, and summary contents corresponding to the query message are obtained based on summary information of the K documents.
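A sketch of this random selection and retrieval step is given below; retrieve and summarize are hypothetical helpers standing in for the document search and the summary generation described above.

```python
import random
from typing import Callable, Dict, List

def gather_summary_content(queries: List[str], num_queries: int, num_docs: int,
                           retrieve: Callable[[str, int], List[str]],
                           summarize: Callable[[str], str]) -> Dict[str, List[str]]:
    """Pick L of the N queries at random and collect K document summaries for each."""
    chosen = random.sample(queries, min(num_queries, len(queries)))
    content: Dict[str, List[str]] = {}
    for query in chosen:
        documents = retrieve(query, num_docs)     # K documents related to the query
        content[query] = [summarize(doc) for doc in documents]
    return content
```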
In some embodiments, the summary generation subunit 1220 is further configured to:
Aiming at each document in K documents, segmenting the document to obtain a plurality of segmented texts of the document, and generating abstract information of each segmented text in the plurality of segmented texts;
and splicing the summary information of the K documents based on the summary information of all the segmentation texts associated with the K documents.
In some embodiments, the summary generation subunit 1220 is further configured to:
Calling a fifth large language model based on summary information of K documents, a preset summary prompt template and user input information to obtain summary content corresponding to the query information; the summary prompt template is used for indicating the fifth large language model to respond to user input information according to the summary information of the K documents in a conversational mode.
In some embodiments, report generating subunit 1230 is also configured to:
determining a report schema based at least in part on the query information;
Calling a sixth large language model based on the report outline, the summary content corresponding to at least part of the query information, a preset report prompt template, and the user input information, to generate the first candidate report; the report prompt template is used for instructing the sixth large language model to refer, in a conversational manner, to the summary content corresponding to at least part of the query information and to respond to the user input information according to the report outline.
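Finally, the outline-guided report generation may be sketched as follows; the template text and helper names are illustrative assumptions.

```python
from typing import Callable, Dict, List

REPORT_TEMPLATE = (
    "Using the outline and the reference summaries, answer the user request "
    "as a structured report.\nUser request: {query}\nOutline:\n{outline}\n"
    "Reference summaries:\n{summaries}"
)

def generate_candidate_report(user_input: str, outline: List[str],
                              summary_content: Dict[str, List[str]],
                              call_llm: Callable[[str], str]) -> str:
    """Draft one candidate report from the outline and the gathered summary content."""
    summaries = "\n".join(s for group in summary_content.values() for s in group)
    prompt = REPORT_TEMPLATE.format(query=user_input,
                                    outline="\n".join(outline),
                                    summaries=summaries)
    return call_llm(prompt)
```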
In some embodiments, the rewrite unit 1120 is further to:
determining a high-quality report in the candidate report set based on the score of each report in the candidate report set;
and rewriting the high-quality report to obtain the rewritten report.
In some embodiments, the rewrite unit 1120 is further to:
deleting reports of format errors from the candidate report set;
For each report in the candidate report set after deleting the report with the format error, calling a seventh large language model based on the report and a scoring standard prompt template to obtain the score of the report; the scoring standard prompting template is used for indicating the seventh large language model to determine the score of the report according to the preset scoring standard in a conversational mode.
In some embodiments, the rewrite unit 1120 is further to:
Calling an eighth large language model based on the high-quality report and a preset suggestion prompt template to obtain a rewriting suggestion for the high-quality report; the suggestion prompt template is used for instructing the eighth large language model to provide rewriting suggestions for the high-quality report, based on the editing role, in a conversational manner;
Obtaining the rewritten report based on the rewriting suggestion and the high-quality report.
In some embodiments, the rewrite unit 1120 is further to:
Calling a ninth large language model based on the high-quality report, the rewriting suggestion, and a preset first rewriting prompt template to obtain the rewritten report; the first rewriting prompt template is used for instructing the ninth large language model to refer to the rewriting suggestion in a conversational manner and rewrite the high-quality report based on the composer role.
In some embodiments, the preset requirements include: the eighth large language model has no rewrite suggestions for premium reports.
In some embodiments, the rewrite unit 1120 is further to:
performing paragraph exchange based on at least two high-quality reports to obtain a cross report;
Calling a tenth large language model based on the cross report and a preset second rewriting prompt template to obtain the rewritten report; wherein the second rewriting prompt template is used to instruct the tenth large language model to rewrite the cross report based on the composer role in a conversational manner.
In some embodiments, the determining unit 1130 is further configured to:
Calling an eleventh large language model to polish the report meeting the preset requirements to obtain the target report.
For descriptions of specific functions and examples of each module and sub-module of the apparatus in the embodiments of the present disclosure, reference may be made to the related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the related user personal information all conform to the regulations of related laws and regulations, and the public sequence is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 14 shows a schematic block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the apparatus 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the respective methods and processes described above, for example, a method of generating a report. For example, in some embodiments, the method of generating a report may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into RAM 1403 and executed by computing unit 1401, one or more steps of the method of generating a report described above may be performed. Alternatively, in other embodiments, computing unit 1401 may be configured to perform the method of generating the report by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (37)
1. A method of generating a report, comprising:
generating reports for user input information by adopting a plurality of report agent modules to obtain candidate report sets;
Rewriting at least part of reports in the candidate report set to obtain a rewritten report, and adding the rewritten report into the candidate report set;
And returning to the step of rewriting at least part of the reports based on the candidate report set under the condition that the candidate report set does not comprise the reports meeting the preset requirements, until the candidate report set comprises the reports meeting the preset requirements, and obtaining a target report based on the reports meeting the preset requirements.
2. The method of claim 1, wherein the candidate report set comprises a first candidate report generated by a first reporting agent of the plurality of reporting agents for the user input information;
the manner in which the first reporting agent module generates the first candidate report includes:
Generating N pieces of inquiry information aiming at the user input information; wherein N is an integer not less than 2;
obtaining summary content corresponding to at least part of query information in the N query information through retrieval;
and generating the first candidate report based on the summary content corresponding to the at least part of query information.
3. The method of claim 2, wherein the manner in which the first reporting agent module generates the first candidate report further comprises:
Calling a first large language model based on a preset role analysis prompt template and the user input information to obtain a role description related to the user input information;
the role analysis prompt template is used for indicating the first large language model to output the role description in a conversational mode; the role description is used for the first report agent module to generate the N pieces of query information and/or used for the first report agent module to generate the first candidate report based on summary content corresponding to at least part of query information.
4. A method according to claim 2 or 3, wherein the generating N query information for the user input information comprises:
searching a plurality of pieces of abstract information matched with the user input information in an abstract information base;
For each piece of summary information in the plurality of pieces of summary information, calling a second large language model based on the summary information, a preset first query prompt template and the user input information to obtain M pieces of query information corresponding to the summary information; the first query prompting template is used for indicating the second large language model to output M pieces of query information related to the user input information according to the abstract information in a dialogue mode; m is a positive integer not greater than N.
5. The method of claim 4, wherein the generating N query information for the user input information further comprises:
Calling a third large language model based on the plurality of abstract information, a preset second query prompting template and the user input information to obtain comprehensive query information corresponding to the plurality of abstract information; the second query prompting template is used for indicating the third large language model to output comprehensive query information related to the user input information according to the plurality of abstract information in a conversational mode.
6. The method of claim 4 or 5, further comprising:
Segmenting a document related to the user input information to obtain a plurality of segmented texts;
Aiming at each segmented text in the plurality of segmented texts, calling a fourth large language model to obtain abstract information of each segmented text;
And obtaining the abstract information base based on the abstract information of each segmented text.
7. The method according to any one of claims 2-6, wherein the obtaining, by retrieving, summary content corresponding to at least part of the N pieces of query information includes:
Randomly determining L inquiry information in the N inquiry information; wherein L is a positive integer not greater than N;
And aiming at each query message in the L query messages, obtaining K documents related to the query message through retrieval, and obtaining summary content corresponding to the query message based on the summary information of the K documents.
8. The method of claim 7, wherein the determining the summary information of the K documents includes:
Aiming at each document in the K documents, segmenting the document to obtain a plurality of segmented texts of the document, and generating abstract information of each segmented text in the plurality of segmented texts;
and based on the summary information of all the segmentation texts associated with the K documents, splicing to obtain the summary information of the K documents.
9. The method of claim 8, wherein the obtaining summary content corresponding to the query information based on the summary information of the K documents includes:
Calling a fifth large language model based on the summary information of the K documents, a preset summary prompt template and the user input information to obtain summary content corresponding to the query information; the summary prompt template is used for indicating the fifth large language model to respond to the user input information according to the summary information of the K documents in a conversational mode.
10. The method of any of claims 2-9, wherein the generating the first candidate report based on summary content corresponding to the at least partial query information comprises:
determining a report schema based on the at least partial query information;
calling a sixth large language model based on the report outline, the summary content corresponding to at least part of query information, a preset report prompt template and the user input information to generate the first candidate report; the report prompting template is used for indicating the sixth large language model to refer to summary content corresponding to at least part of query information in a conversational mode and responding to the user input information according to the report outline.
11. The method of any of claims 1-10, wherein the rewriting based on at least part of the reports in the candidate report set to obtain a rewritten report comprises:
Determining a high-quality report in the candidate report set based on the score of each report in the candidate report set;
And rewriting the high-quality report to obtain the rewritten report.
12. The method of claim 11, further comprising:
deleting a report of a format error in the candidate report set;
for each report in the candidate report set after deleting the report with the format error, calling a seventh large language model based on the report and a scoring standard prompt template to obtain the score of the report; the scoring standard prompting template is used for indicating the seventh large language model to determine the score of the report according to a preset scoring standard in a conversational mode.
13. The method of claim 11 or 12, wherein the rewriting the high-quality report to obtain the rewritten report comprises:
Calling an eighth large language model based on the high-quality report and a preset suggestion prompt template to obtain a rewriting suggestion for the high-quality report; wherein the suggestion prompt template is used for instructing the eighth large language model to provide a rewriting suggestion of the high-quality report based on an editing role in a conversational manner;
and obtaining the rewritten report based on the rewriting suggestion and the high-quality report.
14. The method of claim 13, wherein the obtaining the rewritten report based on the rewriting suggestion and the high-quality report comprises:
Calling a ninth large language model based on the high-quality report, the rewriting suggestion and a preset first rewriting prompt template to obtain the rewritten report; wherein the first rewriting prompt template is used to instruct the ninth large language model to rewrite the high-quality report based on a composer role with reference to the rewriting suggestion in a conversational manner.
15. The method of claim 13 or 14, wherein the preset requirements include: the eighth large language model has no rewriting suggestion output.
16. The method of any of claims 11-15, wherein the rewriting the high-quality report to obtain the rewritten report comprises:
performing paragraph exchange based on at least two high-quality reports to obtain a cross report;
Calling a tenth large language model based on the cross report and a preset second rewriting prompt template to obtain the rewritten report; wherein the second rewriting prompt template is used to instruct the tenth large language model to rewrite the cross report based on a composer role in a conversational manner.
17. The method according to any one of claims 1-16, wherein the deriving a target report based on the report meeting a preset requirement comprises:
calling an eleventh large language model to polish the report meeting the preset requirement to obtain the target report.
18. An apparatus for generating a report, comprising:
the generating unit is used for generating reports for the user input information by adopting a plurality of report agent modules to obtain candidate report sets;
a rewriting unit, configured to rewrite at least part of the reports in the candidate report set to obtain a rewritten report, and add the rewritten report to the candidate report set;
And the determining unit is used for returning to the step of rewriting at least part of reports in the candidate report set until the candidate report set comprises the report meeting the preset requirement and obtaining a target report based on the report meeting the preset requirement under the condition that the candidate report set does not comprise the report meeting the preset requirement.
19. The apparatus of claim 18, wherein the candidate report set comprises a first candidate report generated by a first reporting agent of the plurality of reporting agents for the user input information;
the generation unit includes:
An information generation subunit, configured to generate N pieces of query information for the user input information; wherein N is an integer not less than 2;
A summary generation subunit, configured to obtain summary content corresponding to at least part of the query information in the N query information through retrieval;
and the report generation subunit is used for generating the first candidate report based on the summary content corresponding to the at least part of query information.
20. The apparatus of claim 19, wherein the generation unit further comprises a role generation subunit to:
Calling a first large language model based on a preset role analysis prompt template and the user input information to obtain a role description related to the user input information;
the role analysis prompt template is used for indicating the first large language model to output the role description in a conversational mode; the role description is used for the first report agent module to generate the N pieces of query information and/or used for the first report agent module to generate the first candidate report based on summary content corresponding to at least part of query information.
21. The apparatus of claim 19 or 20, wherein the information generation subunit is further configured to:
searching a plurality of pieces of abstract information matched with the user input information in an abstract information base;
For each piece of summary information in the plurality of pieces of summary information, calling a second large language model based on the summary information, a preset first query prompt template and the user input information to obtain M pieces of query information corresponding to the summary information; the first query prompting template is used for indicating the second large language model to output M pieces of query information related to the user input information according to the abstract information in a dialogue mode; m is a positive integer not greater than N.
22. The apparatus of claim 21, wherein the information generation subunit is further configured to:
Calling a third large language model based on the plurality of abstract information, a preset second query prompting template and the user input information to obtain comprehensive query information corresponding to the plurality of abstract information; the second query prompting template is used for indicating the third large language model to output comprehensive query information related to the user input information according to the plurality of abstract information in a conversational mode.
23. The apparatus according to claim 21 or 22, further comprising a digest-library construction unit for:
Segmenting a document related to the user input information to obtain a plurality of segmented texts;
Aiming at each segmented text in the plurality of segmented texts, calling a fourth large language model to obtain abstract information of each segmented text;
And obtaining the abstract information base based on the abstract information of each segmented text.
24. The apparatus of any of claims 19-23, wherein the summary generation subunit is further to:
Randomly determining L inquiry information in the N inquiry information; wherein L is a positive integer not greater than N;
And aiming at each query message in the L query messages, obtaining K documents related to the query message through retrieval, and obtaining summary content corresponding to the query message based on the summary information of the K documents.
25. The apparatus of claim 24, wherein the summary generation subunit is further configured to:
Aiming at each document in the K documents, segmenting the document to obtain a plurality of segmented texts of the document, and generating abstract information of each segmented text in the plurality of segmented texts;
and based on the summary information of all the segmentation texts associated with the K documents, splicing to obtain the summary information of the K documents.
26. The apparatus of claim 25, wherein the summary generation subunit is further configured to:
Calling a fifth large language model based on the summary information of the K documents, a preset summary prompt template and the user input information to obtain summary content corresponding to the query information; the summary prompt template is used for indicating the fifth large language model to respond to the user input information according to the summary information of the K documents in a conversational mode.
27. The apparatus of any of claims 19-26, wherein the report generating subunit is further to:
determining a report schema based on the at least partial query information;
calling a sixth large language model based on the report outline, the summary content corresponding to at least part of query information, a preset report prompt template and the user input information to generate the first candidate report; the report prompting template is used for indicating the sixth large language model to refer to summary content corresponding to at least part of query information in a conversational mode and responding to the user input information according to the report outline.
28. The apparatus of any of claims 18-27, wherein the rewrite unit is further to:
Determining a high-quality report in the candidate report set based on the score of each report in the candidate report set;
And rewriting the high-quality report to obtain the rewritten report.
29. The apparatus of claim 28, wherein the rewrite unit is further to:
deleting a report of a format error in the candidate report set;
for each report in the candidate report set after deleting the report with the format error, calling a seventh large language model based on the report and a scoring standard prompt template to obtain the score of the report; the scoring standard prompting template is used for indicating the seventh large language model to determine the score of the report according to a preset scoring standard in a conversational mode.
30. The apparatus of claim 28 or 29, wherein the rewriting unit is further to:
Calling an eighth large language model based on the high-quality report and a preset suggestion prompt template to obtain a rewriting suggestion for the high-quality report; wherein the suggestion prompt template is used for instructing the eighth large language model to provide a rewriting suggestion of the high-quality report based on an editing role in a conversational manner;
and obtaining the rewritten report based on the rewriting suggestion and the high-quality report.
31. The apparatus of claim 30, wherein the rewrite unit is further to:
Calling a ninth large language model based on the high-quality report, the rewriting suggestion and a preset first rewriting prompt template to obtain the rewritten report; wherein the first rewriting prompt template is used to instruct the ninth large language model to rewrite the high-quality report based on a composer role with reference to the rewriting suggestion in a conversational manner.
32. The apparatus of claim 30 or 31, wherein the preset requirements include: the eighth large language model has no rewriting suggestion output.
33. The apparatus of any of claims 28-32, wherein the rewrite unit is further to:
performing paragraph exchange based on at least two high-quality reports to obtain a cross report;
Calling a tenth large language model based on the cross report and a preset second rewriting prompt template to obtain the rewritten report; wherein the second rewriting prompt template is used to instruct the tenth large language model to rewrite the cross report based on a composer role in a conversational manner.
34. The apparatus of any one of claims 18-33, wherein the determining unit is further configured to:
calling an eleventh large language model to polish the report meeting the preset requirement to obtain the target report.
35. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-17.
36. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-17.
37. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-17.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410294963.0A (CN118193733A) | 2024-03-14 | 2024-03-14 | Method, device, electronic equipment and storage medium for generating report
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410294963.0A (CN118193733A) | 2024-03-14 | 2024-03-14 | Method, device, electronic equipment and storage medium for generating report
Publications (1)
Publication Number | Publication Date
---|---
CN118193733A (en) | 2024-06-14
Family
ID=91401050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202410294963.0A (CN118193733A, pending) | Method, device, electronic equipment and storage medium for generating report | 2024-03-14 | 2024-03-14
Country Status (1)
Country | Link
---|---
CN (1) | CN118193733A (en)
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |