CN117807982A - Data computing method, device, equipment and medium based on large language model - Google Patents

Data computing method, device, equipment and medium based on large language model

Info

Publication number
CN117807982A
Authority
CN
China
Prior art keywords
processed
data
processing
text
language model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311619029.3A
Other languages
Chinese (zh)
Inventor
刘敏
杜兆臣
刘微
田羽慧
杨成喆
宋骋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd filed Critical Hisense Group Holding Co Ltd
Priority to CN202311619029.3A
Publication of CN117807982A
Legal status: Pending

Landscapes

  • Machine Translation (AREA)

Abstract

The present application relates to the field of artificial intelligence technologies, and in particular to a data computing method, apparatus, device, and medium based on a large language model. The large language model determines the processing requirement of the text to be processed and the data to be processed corresponding to that requirement, and groups the data to be processed according to the maximum amount of data that can be operated on at one time and the amount of data to be processed: when the amount of data to be processed exceeds the maximum, the data are divided into several groups and operated on group by group, which reduces the per-operation load and yields a first processing result satisfying the processing requirement. Whether the processing requirement is the last one is then judged; if so, the first processing result is determined to be the target processing result of the text to be processed; if not, the first processing result is sent to the next processing requirement as its data to be processed. The computational accuracy of the large language model is thereby improved. The technical scheme is reliable, robust, and generalizable, and accords with trustworthiness characteristics.

Description

Data computing method, device, equipment and medium based on large language model
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a data computing method, apparatus, device, and medium based on a large language model.
Background
With the rapid development of models based on decoder-only Transformer variants, large language models are gradually replacing traditional natural language models and have achieved good results in practical applications. Large language models handle most simple text-analysis tasks well, but for computational tasks they often cannot accurately compute results over larger amounts of data. At present, therefore, computational tasks with a relatively large data volume can only be handled with traditional natural language models. As technology develops, large language models will be applied in more and more scenarios, so how to improve their computing capability is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a data computing method, apparatus, device, and medium based on a large language model, which are used to solve the prior-art problem that the computational accuracy of a large language model is low for computational tasks involving a large amount of data.
In a first aspect, the present application provides a data processing method based on a large language model, the method including:
acquiring a text to be processed;
inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement;
the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement;
judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
In a second aspect, the present application provides a data processing apparatus based on a large language model, the apparatus comprising:
the acquisition module is used for acquiring the text to be processed;
the processing module is used for inputting the text to be processed into the large language model, and the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement; the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement; judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
In a third aspect, the present application further provides an electronic device comprising a processor for implementing the steps of the large language model based data processing method according to any one of the above when executing a computer program stored in a memory.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of a large language model based data processing method as described in any one of the above.
In the embodiment of the application, the text to be processed is input into the large language model, which determines the processing requirement corresponding to the text and the data to be processed corresponding to that requirement, and groups the data to be processed according to the maximum amount of data that can be operated on at one time and the amount of data to be processed; that is, when the amount of data to be processed is greater than the maximum, the operation is performed over multiple groups, reducing the per-operation load of the large language model and yielding a first processing result satisfying the processing requirement. Whether the processing requirement is the last one is then judged; if so, the first processing result is determined to be the target processing result of the text to be processed; if not, the first processing result is sent to the next processing requirement as its data to be processed. The computational accuracy of the large language model is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a data processing process based on a large language model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a processing requirement according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a mind map according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a data processing procedure according to an embodiment of the present application;
fig. 5 is a schematic diagram of processing a question-answer text according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a data processing apparatus based on a large language model according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The most direct approach for a city-indicator question-answering system based on a large language model is to use the model to convert the user's question into SQL, query the desired data from the database, and compute accordingly. Data in the database are typically stored at month granularity in time and district granularity in place. For the text to be processed "What was the tax revenue of the Laoshan district of Qingdao city in January 2023?", a single piece of data can be queried from the database; since that datum already answers the user's question, the reply can be generated directly from it. For "What was the tax revenue of Qingdao city in 2023?", however, the database query returns multiple pieces of data, which may need to be summed when generating the reply. For computational questions such as summation, proportion, and ranking, the accuracy of direct computation by the large language model is not high. Computational questions resemble the following examples: for the text "What is the total tax revenue of each district of Qingdao city in 2022", the corresponding calculation category is summation; for "What proportion of Qingdao city's 2022 tax revenue comes from the Laoshan district", the category is proportion; for "Where does the Laoshan district's 2022 tax revenue rank within Qingdao city", the category is ranking; for "How much higher is Qingdao city's total revenue this year than last year", the category is change.
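The query-then-aggregate flow described above can be sketched as follows. This is an illustrative sketch only: the `tax_revenue` table, its columns, and the helper name are assumptions, not schemas from the patent.

```python
# Hypothetical sketch of the city-indicator QA flow: the question is
# translated into SQL, district-granularity rows are fetched, and a
# multi-row result is summed into a city-level answer.
import sqlite3

def answer_tax_question(conn, city, year):
    """Sum district-level revenue rows into a city-level total (illustrative schema)."""
    sql = "SELECT SUM(revenue) FROM tax_revenue WHERE city = ? AND year = ?"
    (total,) = conn.execute(sql, (city, year)).fetchone()
    return total

# Usage with an in-memory example database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tax_revenue (city TEXT, district TEXT, year INT, revenue REAL)")
conn.executemany(
    "INSERT INTO tax_revenue VALUES (?, ?, ?, ?)",
    [("Qingdao", "Laoshan", 2023, 120.5), ("Qingdao", "Shinan", 2023, 98.7)],
)
print(answer_tax_question(conn, "Qingdao", 2023))  # ≈ 219.2
```

In this sketch the SQL engine performs the summation; the patent's point is precisely that when the raw rows are handed to the large language model for arithmetic instead, accuracy drops, motivating the grouping scheme below.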
In order to improve the accuracy of calculation of a large language model, the embodiment of the application provides a data calculation method, a device, equipment and a medium based on the large language model, wherein a text to be processed is acquired in the method; inputting the text to be processed into a large language model, and determining the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement by the large language model; the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement; judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
Fig. 1 is a schematic flow chart of a data processing process based on a large language model according to an embodiment of the present application, as shown in fig. 1, the process includes the following steps:
s101: and acquiring a text to be processed.
The data processing method based on the large language model is applied to electronic equipment, and the electronic equipment can be a server, a PC, a mobile terminal and the like.
In this embodiment of the present application, a text to be processed may be obtained. The text to be processed may be input to the electronic device by its user, either as text or as voice: the user may type the text to be processed on a keyboard connected to the electronic device, or speak it to the device, in which case a voice collection module of the electronic device collects the speech of the text to be processed, performs voice recognition on it, and uses the recognized text as the text to be processed. The text to be processed may also be sent to the electronic device by another electronic device connected to it, or obtained by the electronic device performing image recognition on an image.
Illustratively, the text to be processed acquired in the embodiment of the present application may be "help me calculate the sum of the following 1000 numbers, where the numbers are respectively: 1, 15, 1225, 154, …, 1552, 145, 142".
It should be noted that, in the embodiment of the present application, the method for obtaining the text to be processed and the specific content of the text to be processed are not limited, and may be configured by those skilled in the art according to needs.
S102: inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement.
After the text to be processed is obtained, it can be input into a large language model, which processes it to obtain the target processing result corresponding to the text. A large language model can be understood as a model based on the Transformer architecture; it may also be understood as a machine learning model with a huge parameter scale and complexity, for example a neural network model with millions to hundreds of billions of parameters; it may also be understood as a deep learning model trained on large-scale training data through semi-supervised (weakly supervised), fully supervised, self-supervised, or unsupervised techniques. In the embodiment of the present application, the large language model can handle a number of different tasks; it is generally trained on training data from a certain target task field, and the trained model can usually be migrated for use in other task fields similar to the target field.
Because the large language model has strong language analysis processing capability, after receiving the text to be processed, the large language model can determine the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement. That is, it is determined what processing needs to be performed on the text to be processed, and which data needs to be used when the processing is performed. In this embodiment of the present application, the large language model may decompose the text to be processed, which may be understood as decomposing a complex problem into a plurality of simple sub-problems, that is, decomposing the text to be processed into one or more sub-texts, where each sub-text corresponds to a processing requirement, and different data to be processed corresponding to different processing requirements.
Illustratively, suppose the text to be processed is "What proportion of Shandong province's 2022 tax revenue comes from Qingdao city". After receiving the text to be processed, the large language model may decompose it into a plurality of sub-texts, each corresponding to a processing requirement. The sub-texts are, respectively: how much tax revenue Qingdao city had in 2022, how much tax revenue Shandong province had in 2022, and what proportion of Shandong province's tax revenue Qingdao city's 2022 tax revenue accounts for. The processing requirement corresponding to "how much tax revenue Qingdao city had in 2022" may be to query the 2022 tax revenue of each district of Qingdao city and compute the sum, so the data to be processed for that requirement are the 2022 tax revenues of the districts of Qingdao city; the processing requirement corresponding to "how much tax revenue Shandong province had in 2022" is to query the 2022 tax revenue of each city in Shandong province and compute the sum, so the data to be processed are the tax revenues of the cities in Shandong province; the processing requirement corresponding to "what proportion Qingdao city's 2022 tax revenue accounts for" is to obtain Qingdao city's 2022 tax revenue and Shandong province's 2022 tax revenue and compute the ratio, so the data to be processed for that requirement are those two tax revenues.
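The decomposition into ordered processing requirements can be represented as follows. This is only an illustrative data structure; the class and field names are assumptions, not structures specified by the patent.

```python
# Illustrative representation of a complex question decomposed into
# processing requirements, where later requirements consume the
# results of earlier ones.
from dataclasses import dataclass, field

@dataclass
class ProcessingRequirement:
    name: str                                   # identifies the sub-text
    operation: str                              # e.g. "sum", "ratio"
    depends_on: list = field(default_factory=list)

requirements = [
    ProcessingRequirement("qingdao_2022_total", "sum"),
    ProcessingRequirement("shandong_2022_total", "sum"),
    ProcessingRequirement("qingdao_share", "ratio",
                          depends_on=["qingdao_2022_total", "shandong_2022_total"]),
]
# The last requirement in execution order yields the target result.
print(requirements[-1].name)  # qingdao_share
```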
S103: and the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement.
After the processing requirements corresponding to the text to be processed and the corresponding data to be processed are determined, if a plurality of processing requirements are determined, the large language model can group and operate on the data to be processed according to the maximum amount of data that can be operated on at one time and the amount of data to be processed corresponding to each processing requirement. The maximum number is preconfigured and can be understood as the maximum amount of data that the large language model can accurately compute in a single pass. Suppose that statistics over a large amount of data show that the large language model processes 20 pieces of data at a time with 98% accuracy and 19 pieces at a time with 100% accuracy; then 19 can be determined as the maximum amount of data for an operation. It should be noted that the maximum number can be configured by those skilled in the art as needed, and is not limited in the embodiments of the present application.
In this embodiment of the present application, the data to be processed corresponding to any processing requirement may be grouped randomly: if the maximum number is 8, then 8 items may be randomly selected from the data to be processed as the first group, another 8 items randomly selected from the remaining unselected data as the second group, and so on.
After the groups corresponding to the data to be processed are obtained, an operation can be performed on each group to obtain the first processing result satisfying the processing requirement. Which operation is performed on each group depends on the processing requirement. For example, when the processing requirement is summation, summation can be performed on each group corresponding to the requirement; when the processing requirement is sorting, the data to be processed within each group can be sorted. The specific operation corresponding to each processing requirement may be preconfigured, or determined in real time by the large language model based on its powerful language analysis capability.
S104: judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
According to the above example, the execution sequence exists between the processing demands, and since the large language model has strong language analysis processing capability, the large language model can determine the execution sequence of each processing demand when decomposing and determining the processing demands. After determining the first processing result of the processing requirement corresponding to the text to be processed, it may be determined whether the processing requirement is the last requirement corresponding to the text to be processed.
If so, the first processing result can be determined to be the target processing result of the text to be processed; otherwise, further processing of the corresponding data is needed to determine the target processing result, so the first processing result is sent to the next processing requirement after this one as its data to be processed.
Specifically, assume the text to be processed asks for the quotient of A+B and B+C. Then the first processing requirement corresponding to the text to be processed is to sum A and B, the second is to sum B and C, and the third is to take the quotient of the two sums. Analysis of the three requirements shows that the first and second are in a parallel relation, and the results of the first and second requirements serve as the data to be processed for the third, so the third processing requirement is the next processing requirement after the first and second. FIG. 2 is a schematic diagram of processing requirements according to an embodiment of the present application. As shown in FIG. 2, after the first processing result of the first processing requirement is determined, it is sent to the third processing requirement; in parallel, the first processing result of the second processing requirement may be determined and sent to the third processing requirement. After the third processing requirement receives the two first processing results, it takes them as its data to be processed, determines its own first processing result from them, and that result is determined to be the target processing result of the text to be processed.
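The execution order of FIG. 2 can be sketched as a tiny pipeline; the function name and the concrete values are illustrative assumptions.

```python
# Sketch of the FIG. 2 dependency structure: the first two processing
# requirements run independently, and the third consumes both of
# their first processing results.
def run_pipeline(a, b, c):
    r1 = a + b        # first processing requirement
    r2 = b + c        # second processing requirement (parallel to r1)
    return r1 / r2    # third requirement: quotient of the two results

print(run_pipeline(2, 4, 6))  # 0.6
```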
In the embodiment of the application, the text to be processed is input into the large language model, which determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to that requirement, and groups the data to be processed according to the maximum amount of data that can be operated on at one time and the amount of data to be processed; that is, when the amount of data to be processed is greater than the maximum, the operation is performed over multiple groups, reducing the per-operation load and yielding a first processing result satisfying the processing requirement. Whether the processing requirement is the last one corresponding to the text to be processed is then judged; if so, the first processing result is determined to be the target processing result of the text to be processed; if not, the first processing result is sent to the next processing requirement as its data to be processed. The computational accuracy of the large language model is thereby improved.
In order to further improve accuracy of data processing, in the embodiment of the present application, after the text to be processed is input to the large language model, before the large language model determines a processing requirement corresponding to the text to be processed, the method further includes:
Performing intention recognition on the text to be processed to obtain a target intention;
if the target intention is the calculation type intention, continuing to execute the step of determining the processing requirement corresponding to the text to be processed by the subsequent large language model.
Since the large language model can process various types of texts, the text to be processed input into the large language model can be of any type, in the embodiment of the present application, after the text to be processed is acquired, before the text to be processed is input into the large language model, intention recognition can be performed on the text to be processed, so as to obtain a target intention corresponding to the text to be processed. And judging whether the target intention is a calculation intention, if so, processing corresponding data based on the data processing provided by the application, that is, continuously executing the step of determining the processing requirement corresponding to the text to be processed by the subsequent large language model.
If the target intention is a non-computational class intention, the text to be processed may be processed normally.
Specifically, when determining the target intention of the text to be processed, the target intention may be determined based on a pre-trained intention recognition model, or the large language model may analyze the text to be processed to determine the target intention. How to perform intention recognition on a text is prior art; the embodiments of the present application do not limit it, and it can be configured as needed by those skilled in the art.
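The intention gate described above can be sketched as a simple router. The keyword rule here is purely an illustrative stand-in for a trained intention recognition model or the large language model's own analysis; the keyword list and route labels are assumptions.

```python
# Hedged sketch of the intention gate: only computational-class texts
# enter the grouped-computation pipeline; other texts are processed
# normally.
COMPUTE_KEYWORDS = ("sum", "total", "ratio", "rank", "how much", "calculate")

def is_computational(text):
    t = text.lower()
    return any(k in t for k in COMPUTE_KEYWORDS)

def route(text):
    return "grouped-computation" if is_computational(text) else "normal-processing"

print(route("Calculate the sum of the following numbers"))  # grouped-computation
print(route("Tell me about Qingdao"))                       # normal-processing
```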
In order to further improve accuracy of data processing, in the embodiments of the present application, the grouping operation is performed on the data to be processed according to a maximum amount of data required for operation and the amount of the data to be processed by the large language model, so as to obtain a first processing result satisfying the processing requirement, where the first processing result includes:
the large language model groups the data to be processed according to the maximum number and the number of the data to be processed to obtain data groups, wherein the number of the data included in any data group is not more than the maximum number;
performing operation corresponding to the processing requirements on each data set to obtain a second processing result corresponding to each data set respectively;
judging whether the number of the data groups obtained by grouping is a plurality of data groups or not;
if not, the second processing result is determined to be a first processing result meeting the processing requirement.
In the embodiment of the present application, when determining the first processing result satisfying the processing requirement, the large language model may group the data to be processed according to the maximum number and the number of the data to be processed, obtaining a plurality of data groups, where the number of data included in any data group is not greater than the maximum number.
In one possible implementation, when determining the data groups, the large language model may splice the data to be processed corresponding to the processing requirement with a first prompt text to obtain a first target text; after the first target text is obtained, the large language model processes it to determine the data groups. To further improve the accuracy with which the large language model processes the text, in the embodiment of the present application the first target text may also include a first example text, which can be understood as an example, prepared in advance, of how the large language model should perform the grouping.
Illustratively, the first target text obtained by stitching may include the following:
<Instruction>Divide the following list of numbers into several sub-lists of maximum length 8: the first list contains the first 8 numbers, the second list contains the 9th–16th numbers, and so on. The format is as follows:
{{
"List 1":[32.3,44,30,50,70.5,32,23,54.6],
"List 2":[26.8,91,42,34,57.6,23.4,33,45.5],
"List 3":[26,9,48,31,59.6,25,23.4,54.2],
"List 4":[69,60,73.5,66,54]
}}</Instruction>
<Example>
Input:[39.5,45.9,32.4,29.1,21.6,48.7,55.3,30.2,43.8,50.6,57.9,26.4,24.8,54.2,22.5,40.3,47.1,34.7,31.0,53.6,23.7,35.6,42.4,51.8,27.3,59.5,36.9,28.2,37.8,44.6]
Output:
{{
"List 1":[39.5,45.9,32.4,29.1,21.6,48.7,55.3,30.2],
"List 2":[43.8,50.6,57.9,26.4,24.8,54.2,22.5,40.3],
"List 3":[47.1,34.7,31.0,53.6,23.7,35.6,42.4,51.8],
"List 4":[27.3,59.5,36.9,28.2,37.8,44.6]
}}
</Example>
Input:{input}
In the above first target text, the content between <Instruction> and </Instruction> can be understood as the first prompt text; the text between <Example> and </Example> can be understood as the first example text; and the input following Input: can be understood as the data to be processed.
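The splicing of prompt text, example text, and data can be sketched as follows. Only the <Instruction>/<Example>/Input: layout follows the excerpt above; the helper name and argument layout are assumptions.

```python
# Sketch of splicing a target text: instruction (prompt text) +
# worked example (example text) + the data to be processed.
def build_first_target_text(instruction, example_in, example_out, data):
    return (
        f"<Instruction>{instruction}</Instruction>\n"
        f"<Example>\nInput:{example_in}\nOutput:\n{example_out}\n</Example>\n"
        f"Input:{data}"
    )

prompt = build_first_target_text(
    "Divide the following list of numbers into sub-lists of maximum length 8.",
    "[1, 2, 3]", '{"List 1":[1, 2, 3]}', "[32.3, 44, 30]",
)
print(prompt.startswith("<Instruction>"))  # True
```

The same splicing pattern applies to the second target text below, with the group's data in the Input: slot.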
After each data set is obtained, an operation corresponding to the processing requirement can be performed on each data set, so that a second processing result corresponding to each data set is obtained.
In the embodiment of the present application, the operation corresponding to the processing requirement may be determined based on a trained semantic recognition model; alternatively, the large language model may determine the operation corresponding to the processing requirement based on its own language processing capability. In embodiments of the present application, the arithmetic operations may include summation, subtraction, division, sorting, and the like.
In the embodiment of the present application, a second prompt text is stored in advance for each operation, and the second prompt text is used to prompt the large language model as to which operation to perform on the data in the data group. When the large language model performs the operation corresponding to the processing requirement on each data group, for each data group it may splice the data included in the data group, the second prompt text, and a second example text pre-stored for the corresponding processing requirement, thereby obtaining a second target text. The second example text can be understood as an example provided to the large language model in advance of how to perform the corresponding arithmetic operation.
Illustratively, the second target text obtained by splicing may include the following:
<Instruction>You are a calculator. Add up the following numbers, giving the summation process and the final sum:
{{
"List 1":[32.3,44,30,50,70.5,32,23,54.6],
}}</Instruction>
<Example>
Input:[32,44,30,50,70.5,69,60,73.5]
Output:32+44+30+50+70.5+69+60+73.5=429.
</Example>
Input:{input}
In the above second target text, the content included between <Instruction> and </Instruction> can be understood as the second prompt text; the text between <Example> and </Example> can be understood as the second example text; and the {input} following Input can be understood as the data included in the corresponding data group.
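Setting the prompt mechanics aside, the grouping and per-group summation that these target texts ask the model to perform can be reproduced locally as a sketch. The function names are illustrative, and the choice of 8 as the maximum amount of data per call is an assumption drawn from the examples above.

```python
# Local stand-in for the grouping and the per-group operation described above.

MAX_GROUP = 8  # assumed maximum amount of data handled per model call

def group_data(data, max_n=MAX_GROUP):
    """Split the data to be processed into data groups of at most max_n items."""
    return [data[i:i + max_n] for i in range(0, len(data), max_n)]

def second_results(groups, op=sum):
    """Apply the operation (here: summation) to each data group."""
    return [op(g) for g in groups]

groups = group_data(list(range(1, 21)))  # 20 numbers -> groups of 8, 8, 4
results = second_results(groups)         # [36, 100, 74]
```

Each element of `results` corresponds to a second processing result of one data group; in the patent's scheme the summation itself would be carried out by the model via the second target text rather than by local code.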
After the second processing results corresponding to the data groups are obtained, it can be judged whether the grouping produced a plurality of data groups. If there are a plurality of data groups, a further operation needs to be performed on the second processing results corresponding to the data groups, as described below. If there are not a plurality of data groups, only one result has been obtained, and the second processing result of the unique data group can be directly determined as the first processing result satisfying the processing requirement.
If a plurality of data groups are obtained by grouping, the data to be processed of the processing requirement is updated with the second processing results corresponding to the data groups; that is, the second processing results corresponding to the data groups are taken again as the data to be processed of the processing requirement, and the grouping operation is performed again on the updated data to be processed.
Specifically, after determining that a plurality of data groups were obtained by grouping, the large language model may aggregate the second processing results corresponding to the data groups and group the aggregated second processing results. To this end, the large language model may splice a third prompt text, the second processing result corresponding to each data group, and a third example text to obtain a third target text. The third prompt text is used to prompt the large language model to aggregate the second processing results and group the aggregated second processing results.
Illustratively, the third target text obtained by splicing may include the following:
<Instruction>Combine the numbers entered below into lists in order, 8 numbers per list, with each number contained in only one list, resulting in at least one combined list of at most 8 numbers each.
</Instruction>
<Example>
Input1:123.5 Input2:233.5 Input3:562 Input4:234 Input5:156 Input6:123.5
Input7:233.5 Input8:562 Input9:234 Input10:156
Output:out_list1:[123.5,233.5,562,234,156,123.5,233.5,562].
out_list2:[234,156]
</Example>
Input:{input}
In the above third target text, the content included between <Instruction> and </Instruction> can be understood as the third prompt text; the text between <Example> and </Example> can be understood as the third example text; and the {input} following Input can be understood as the second processing results of the data groups.
And after the large language model processes the third target text, a new data set can be obtained, and after the new data set is obtained, a second processing result corresponding to the new data set can be continuously determined.
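The regroup-and-recompute cycle described above — operate on each group, aggregate the second processing results, regroup, and repeat until a single result remains — can be sketched deterministically as follows. The name `reduce_by_grouping` is hypothetical, and a plain Python `sum` stands in for the operation that the model would be prompted to perform.

```python
def reduce_by_grouping(data, op=sum, max_n=8):
    """Repeat the group-operate-aggregate cycle until a single first
    processing result remains (op stands in for the prompted operation)."""
    while len(data) > 1:
        groups = [data[i:i + max_n] for i in range(0, len(data), max_n)]
        data = [op(g) for g in groups]  # second processing results
    return data[0]

total = reduce_by_grouping(list(range(1, 101)))  # 100 numbers -> 5050
```

For 100 numbers with a maximum of 8 per group, the loop runs three rounds (13 groups, then 2, then 1), which mirrors the hierarchical reduction the third target texts drive in the patent's scheme.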
It should be noted that, if the large language model needs to calculate the ratio of two numbers, because the amount of calculation involved is relatively small, the large language model may directly splice the two numbers with a preset fourth prompt text and a fourth example text to obtain a fourth target text, and process the fourth target text to obtain the final result.
Illustratively, the fourth target text obtained by splicing may include the following:
<Instruction>For the two numbers entered below, divide Input1 by Input2 and give the result as a percentage.
</Instruction>
<Example>
Input1:500.6
Input2:2145
Output:500.6/2145=23.34%
</Example>
Input:{input}
In the fourth target text described above, the content included between <Instruction> and </Instruction> can be understood as the fourth prompt text; the text between <Example> and </Example> can be understood as the fourth example text; and the {input} following Input can be understood as the two numbers whose ratio is to be calculated, namely Input1 and Input2.
In order to further improve accuracy of data processing, in the foregoing embodiments, after the obtaining the first processing result satisfying the processing requirement, before determining whether the processing requirement is the last processing requirement corresponding to the text to be processed, the method further includes:
Judging whether the number of the first processing results of the processing requirements obtained currently reaches a preset number threshold;
if yes, selecting a target first processing result from the first processing results of the preset quantity threshold, and continuously executing the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed.
In order to further improve the accuracy of data processing, in the embodiment of the present application, the step of determining the first processing result may be repeated multiple times. After a first processing result satisfying the processing requirement is determined, it may be judged whether the number of first processing results currently obtained for the processing requirement reaches a preset number threshold, that is, whether the determination of the first processing result has been repeated the preset number of times. If the number of obtained first processing results reaches the preset number threshold, a target first processing result is selected from the preset-threshold number of first processing results, and the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed continues to be executed. In the embodiment of the present application, when the target first processing result is selected, the mode among the preset-threshold number of first processing results may be determined as the target first processing result.
Specifically, assume that the preset number threshold is 5, first processing result 1 is 50, first processing result 2 is 50, first processing result 3 is 47, first processing result 4 is 50, and first processing result 5 is 50. Since 5 first processing results have been obtained, it can be determined that the number of first processing results reaches the preset number threshold of 5. The mode of the 5 first processing results, 50, may be determined as the target first processing result, and first processing result 3 may be determined to be an erroneous result.
If the number of the first processing results obtained at present does not reach the preset number threshold, continuing to execute the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
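The mode-based selection over repeated runs can be sketched as follows; `select_target_result` is a hypothetical helper name, and the tie-breaking behavior (first-encountered value wins) is an implementation detail of this sketch, not something stated in the patent.

```python
from collections import Counter

def select_target_result(first_results):
    """Pick the mode of the repeated first processing results as the target;
    ties are broken by first occurrence in this sketch."""
    value, _count = Counter(first_results).most_common(1)[0]
    return value

target = select_target_result([50, 50, 47, 50, 50])  # -> 50, matching the example
```

This majority-vote step is what lets the scheme discard an occasional erroneous model output such as the 47 above.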
For ease of understanding, the data processing process of the large language model can be understood visually by means of a thinking state diagram. The thinking state diagram is a directed graph comprising a plurality of nodes, where each node represents a thinking state, that is, a solution, and the edges between nodes represent the dependency relationships between thoughts.
Assuming that the text to be processed is "What proportion of Shandong Province's tax revenue in 2022 came from Qingdao City?", the large language model may, after receiving the text to be processed, decompose it into a plurality of sub-texts, respectively:
Sub-text 1: what is tax revenue in the Qingdao city 2022?
Sub-text 2: what is tax revenue in 2022 in Shandong province?
Sub-text 3: what is tax revenue of the Qingdao city 2022 in proportion to the Shandong province?
For both sub-text 1 and sub-text 2, a plurality of data to be processed may be queried from the database. The data to be processed corresponding to sub-text 1 may be denoted list1: [a1, a2, …, am], where m indicates that m pieces of data to be processed were queried in the database based on sub-text 1, a1 represents the first piece of data to be processed, a2 represents the second piece, and so on. The data to be processed corresponding to sub-text 2 may be denoted list2: [b1, b2, …, bm]. After list1 and list2 are obtained, the grouping operation can be performed on list1 and list2 to obtain the first processing results corresponding to sub-text 1 and sub-text 2 respectively; these first processing results serve as the data to be processed corresponding to sub-text 3, and the two first processing results are then subjected to operation processing, thereby obtaining the target processing result of the text to be processed.
The above data processing process is described below with reference to fig. 3, which is a schematic diagram of a thinking state diagram provided in an embodiment of the present application. Fig. 3 shows in detail the grouping operation on the data to be processed in list1; the grouping operation on the data to be processed in list2 is similar in principle and is therefore not described again. As shown in fig. 3, the data to be processed in list1 is first grouped according to the maximum amount of data required for operation, resulting in n data groups, with n greater than 1. After the n data groups are obtained, the operation corresponding to sub-text 1 is performed on each data group, that is, the sum of the data to be processed in each data group is calculated; this sum is the second processing result. The sum is calculated repeatedly, 3 times for each data group, and among the 3 second processing results corresponding to each data group, the mode is determined as the target first processing result of the corresponding data group. Since there are currently n data groups, the target first processing results corresponding to the data groups can be aggregated again to obtain list1', where list1' includes n pieces of data to be processed, and the grouping operation is then performed based on these n pieces of data. Assuming that m data groups are determined based on the n pieces of data to be processed, with m greater than 1, m target first processing results can then be obtained through the operation.
After the m target first processing results are obtained, the target first processing results corresponding to the current data groups are aggregated again to obtain list1'', where list1'' includes m pieces of data to be processed, and the grouping operation is then performed based on these m pieces of data. Assuming that F data groups are determined based on the m pieces of data to be processed, where F=1, one target first processing result can be obtained through the operation, and this target first processing result can be determined as the first processing result corresponding to sub-text 1.
For list2, a grouping operation similar to that for list1 may be performed, which is not described again in the embodiment of the present application.
After the first processing results corresponding to the sub-text 1 and the sub-text 2 are obtained, the operation corresponding to the sub-text 3 can be performed based on the two first processing results, so that the target processing result of the text to be processed is obtained. In the embodiment of the present application, the operation corresponding to the sub-text 3 is a percentage of the first processing result of the sub-text 1 to the first processing result of the sub-text 2.
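Putting the example together, the end-to-end computation — grouped sums for sub-text 1 and sub-text 2, then the sub-text 3 percentage — can be reproduced deterministically as a sketch. The function names are illustrative, and local arithmetic stands in for the operations the model performs via prompts in the described system.

```python
def grouped_sum(data, max_n=8):
    """Grouped-reduction stand-in for the first processing result of a
    query-type sub-text (sub-text 1 or sub-text 2)."""
    while len(data) > 1:
        data = [sum(data[i:i + max_n]) for i in range(0, len(data), max_n)]
    return data[0]

def ratio_answer(city_values, province_values):
    """Sub-text 3: the city total as a percentage of the province total."""
    return f"{grouped_sum(city_values) / grouped_sum(province_values) * 100:.2f}%"

answer = ratio_answer([100.0] * 10, [100.0] * 40)  # "25.00%"
```

With these toy inputs the city total (1000.0) is 25.00% of the province total (4000.0); in the patent's pipeline both totals would come from database queries followed by the prompted grouped summation.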
Fig. 4 is a schematic diagram of a data processing procedure provided in an embodiment of the present application. As shown in fig. 4, after a user's text to be processed is received, the text to be processed is input into the large language model. The large language model may decompose the text to be processed, determine the processing requirements and the data to be processed corresponding to each processing requirement, and construct a thinking state diagram based on the determined processing requirements and data to be processed, that is, determine the processing content corresponding to each step according to predefined processing steps. For each node and each connection between nodes in the thinking state diagram, the corresponding operation step can be determined; the large language model may determine a target text based on the pre-stored prompt text and the data to be processed corresponding to each node, and process the corresponding target text to obtain the corresponding processing result. After each processing result is obtained, the generated results may be evaluated with respect to quality and applicability in order to select a final result.
On the basis of the foregoing embodiments, a process of data processing based on a large language model is described below in connection with a specific embodiment. Fig. 5 is a schematic diagram of processing a question-answer text provided in an embodiment of the present application. As shown in fig. 5, after a question text input by a user is received, intention recognition may be performed on the question text to obtain a target intention corresponding to the question text, and it is determined whether the target intention is a calculation-type intention or another intention. If the target intention is a calculation-type intention, the question text can be input into the large language model, and the large language model decomposes the question text to obtain sub-question 1, sub-question 2, …, sub-question N, where a sub-question is equivalent to a sub-text in the above embodiments. After the sub-questions are obtained, it is judged for each sub-question whether it is a calculation-type question or a query-type question; if a sub-question is a query-type question, the SQL statement corresponding to the sub-question is determined based on the text2SQL function of the large language model, so as to obtain the data to be processed corresponding to the sub-question. After the data to be processed are obtained, the corresponding operation processing can be performed on them; how to perform the operation processing on the data to be processed has been described in detail in the above embodiments and is not repeated here. During the operation processing, a corresponding prompt can be determined based on the idea of the thinking state diagram, so as to obtain an answer corresponding to the question text.
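The sub-problem routing in this flow — query-type sub-problems fetch data via text2SQL, calculation-type sub-problems go to the grouped arithmetic — might be dispatched roughly as follows. All names here are hypothetical stand-ins, and the query path is stubbed rather than actually generating or executing SQL.

```python
def handle_query(subproblem):
    """Stub for the query path: in the described system this would use the
    model's text2SQL function and execute the generated SQL to fetch the
    data to be processed; here it returns an empty result."""
    _sql = f"-- SQL that would be generated from: {subproblem}"
    return []

def handle_calculation(data):
    """Stand-in for the grouped arithmetic performed via prompts."""
    return sum(data)

def route_subproblem(kind, payload):
    """Dispatch each sub-problem to the query path or the calculation path."""
    if kind == "query":
        return handle_query(payload)
    return handle_calculation(payload)

result = route_subproblem("calculation", [1, 2, 3])  # -> 6
```

In a full implementation the query path would feed its fetched data back into the calculation path, forming the per-node pipeline sketched in fig. 5.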
The above technical scheme is reliable, robust, and capable of generalization, and conforms to the characteristics of trustworthiness.
Based on the foregoing embodiments, fig. 6 is a schematic structural diagram of a data processing apparatus based on a large language model according to an embodiment of the present application, where the apparatus includes:
an obtaining module 601, configured to obtain a text to be processed;
the processing module 602 is configured to input the text to be processed into the large language model, where the large language model determines a processing requirement corresponding to the text to be processed and data to be processed corresponding to the processing requirement; the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement; judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
In a possible implementation manner, the processing module 602 is specifically configured to perform intent recognition on the text to be processed to obtain a target intent; if the target intention is the calculation type intention, continuing to execute the step of determining the processing requirement corresponding to the text to be processed by the subsequent large language model.
In a possible implementation manner, the processing module 602 is specifically configured to group, by the large language model, the data to be processed according to the maximum number and the number of data to be processed to obtain data groups, where the number of data included in any data group is not greater than the maximum number; perform the operation corresponding to the processing requirement on each data group, so as to obtain a second processing result corresponding to each data group; judge whether a plurality of data groups are obtained by grouping; and if not, determine the second processing result as a first processing result meeting the processing requirement.
In a possible implementation manner, the processing module 602 is specifically configured to update the data to be processed of the processing requirement by using the second processing result corresponding to each data group if the number of the data groups obtained by grouping is multiple, and continue to execute the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
In a possible implementation manner, the processing module 602 is specifically configured to determine whether the number of the currently obtained first processing results of the processing requirement reaches a preset number threshold; if yes, selecting a target first processing result from the first processing results of the preset quantity threshold, and continuously executing the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed.
In a possible implementation manner, the processing module 602 is specifically configured to, if the number of the currently obtained first processing results of the processing requirement does not reach a preset number threshold, continue to execute the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
On the basis of the foregoing embodiments, an electronic device is further provided in the embodiments of the present application, and fig. 7 is a schematic structural diagram of the electronic device provided in the embodiments of the present application, as shown in fig. 7, including: a processor 701, a communication interface 702, a memory 703 and a communication bus 704, wherein the processor 701, the communication interface 702 and the memory 703 communicate with each other through the communication bus 704;
The memory 703 stores a computer program that, when executed by the processor 701, causes the processor 701 to perform the steps of:
acquiring a text to be processed;
inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement;
the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement;
judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
In a possible implementation, the processor 701 is further configured to: performing intention recognition on the text to be processed to obtain a target intention;
if the target intention is the calculation type intention, continuing to execute the step of determining the processing requirement corresponding to the text to be processed by the subsequent large language model.
In a possible implementation, the processor 701 is further configured to: the large language model groups the data to be processed according to the maximum number and the number of the data to be processed to obtain data groups, wherein the number of the data included in any data group is not more than the maximum number;
performing operation corresponding to the processing requirements on each data set to obtain a second processing result corresponding to each data set respectively;
judging whether a plurality of data groups are obtained by grouping;
if not, the second processing result is determined to be a first processing result meeting the processing requirement.
In a possible implementation, the processor 701 is further configured to: and if the number of the data groups obtained by grouping is a plurality of, updating the data to be processed of the processing requirement by using a second processing result corresponding to each data group, and continuously executing the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
In a possible implementation, the processor 701 is further configured to: judging whether the number of the first processing results of the processing requirements obtained currently reaches a preset number threshold;
If yes, selecting a target first processing result from the first processing results of the preset quantity threshold, and continuously executing the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed.
In a possible implementation, the processor 701 is further configured to: if the number of the first processing results of the processing requirements obtained at present does not reach a preset number threshold, continuing to execute the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
Since the principle of solving the problem of the electronic device is similar to that of the data processing method based on the large language model, the implementation of the electronic device can refer to the embodiment of the method, and the repetition is omitted.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The communication bus may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus. The communication interface 702 is used for communication between the electronic device and other devices. The memory may include a random access memory (Random Access Memory, RAM) or a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
On the basis of the above embodiments, the embodiments of the present invention further provide a computer readable storage medium, in which a computer program executable by a processor is stored, which when executed on the processor causes the processor to implement the steps of:
acquiring a text to be processed;
inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement;
the large language model performs grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement;
judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; if not, the first processing result is used as the data to be processed of the next processing requirement after the processing requirement and is sent to the next processing requirement.
In a possible implementation manner, after the text to be processed is input into the large language model, before the large language model determines the processing requirement corresponding to the text to be processed, the method further includes:
performing intention recognition on the text to be processed to obtain a target intention;
if the target intention is the calculation type intention, continuing to execute the step of determining the processing requirement corresponding to the text to be processed by the subsequent large language model.
In one possible implementation manner, the performing, by the large language model, the grouping operation on the data to be processed according to the maximum amount of data required for operation and the amount of the data to be processed, so as to obtain a first processing result meeting the processing requirement, includes:
the large language model groups the data to be processed according to the maximum number and the number of the data to be processed to obtain data groups, wherein the number of the data included in any data group is not more than the maximum number;
performing operation corresponding to the processing requirements on each data set to obtain a second processing result corresponding to each data set respectively;
judging whether a plurality of data groups are obtained by grouping;
If not, the second processing result is determined to be a first processing result meeting the processing requirement.
In one possible embodiment, the method further comprises:
and if the number of the data groups obtained by grouping is a plurality of, updating the data to be processed of the processing requirement by using a second processing result corresponding to each data group, and continuously executing the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
In a possible implementation manner, after the first processing result satisfying the processing requirement is obtained, before the determining whether the processing requirement is the last processing requirement corresponding to the text to be processed, the method further includes:
judging whether the number of the first processing results of the processing requirements obtained currently reaches a preset number threshold;
if yes, selecting a target first processing result from the first processing results of the preset quantity threshold, and continuously executing the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed.
In one possible embodiment, the method further comprises:
If the number of the first processing results of the processing requirements obtained at present does not reach a preset number threshold, continuing to execute the step of grouping the data to be processed according to the maximum number and the number of the data to be processed.
Since the principle of solving the problem by the above-mentioned computer readable storage medium is similar to that of the data processing method based on the large language model, the implementation of the above-mentioned computer readable storage medium can refer to the embodiment of the method, and the repetition is omitted.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method for processing data based on a large language model, the method comprising:
acquiring a text to be processed;
inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement;
the large language model groups the data to be processed according to the maximum amount of data required for a single operation and the amount of the data to be processed, and performs the operation on the resulting groups, so as to obtain a first processing result satisfying the processing requirement;
judging whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determining the first processing result as a target processing result of the text to be processed; and if not, taking the first processing result as the data to be processed of the next processing requirement following the processing requirement.
2. The method of claim 1, wherein after inputting the text to be processed into the large language model and before the large language model determines the processing requirement corresponding to the text to be processed, the method further comprises:
performing intention recognition on the text to be processed to obtain a target intention;
if the target intention is a calculation-type intention, continuing to execute the subsequent step of determining, by the large language model, the processing requirement corresponding to the text to be processed.
3. The method of claim 1, wherein the grouping, by the large language model, of the data to be processed according to the maximum amount of data required for a single operation and the amount of the data to be processed to obtain the first processing result satisfying the processing requirement comprises:
grouping, by the large language model, the data to be processed according to the maximum amount and the amount of the data to be processed to obtain data groups, wherein the amount of data included in any data group does not exceed the maximum amount;
performing the operation corresponding to the processing requirement on each data group to obtain a second processing result corresponding to each data group;
judging whether a plurality of data groups are obtained by the grouping;
if not, determining the second processing result as the first processing result satisfying the processing requirement.
4. A method according to claim 3, characterized in that the method further comprises:
if a plurality of data groups are obtained by the grouping, updating the data to be processed of the processing requirement with the second processing results corresponding to the data groups, and continuing to execute the step of grouping the data to be processed according to the maximum amount and the amount of the data to be processed.
5. The method according to claim 1, wherein after the first processing result satisfying the processing requirement is obtained and before the judging whether the processing requirement is the last processing requirement corresponding to the text to be processed, the method further comprises:
judging whether the number of first processing results currently obtained for the processing requirement reaches a preset number threshold;
if yes, selecting a target first processing result from the first processing results of the preset number threshold, and continuing to execute the step of judging whether the processing requirement is the last processing requirement corresponding to the text to be processed.
6. The method of claim 5, wherein the method further comprises:
if the number of first processing results currently obtained for the processing requirement does not reach the preset number threshold, continuing to execute the step of grouping the data to be processed according to the maximum amount and the amount of the data to be processed.
7. A large language model based data processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring the text to be processed;
the processing module is used for inputting the text to be processed into the large language model, wherein the large language model determines the processing requirement corresponding to the text to be processed and the data to be processed corresponding to the processing requirement; groups the data to be processed according to the maximum amount of data required for a single operation and the amount of the data to be processed, and performs the operation on the resulting groups, so as to obtain a first processing result satisfying the processing requirement; judges whether the processing requirement is the last processing requirement corresponding to the text to be processed; if yes, determines the first processing result as a target processing result of the text to be processed; and if not, takes the first processing result as the data to be processed of the next processing requirement following the processing requirement.
8. The apparatus of claim 7, wherein the processing module is specifically configured to perform intention recognition on the text to be processed to obtain a target intention; and if the target intention is a calculation-type intention, continue to execute the subsequent step of determining, by the large language model, the processing requirement corresponding to the text to be processed.
9. An electronic device comprising a processor for implementing the steps of the large language model based data processing method according to any one of claims 1-6 when executing a computer program stored in a memory.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the large language model based data processing method according to any one of claims 1-6.
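The method recited in claims 1-6 can be illustrated with a short sketch. All names below (`grouped_reduce`, `self_consistent`, `process_text`) are illustrative assumptions, and the deterministic aggregation callable `op` merely stands in for what would, in the claimed method, be a prompted invocation of the large language model; this is a sketch of the grouping and chaining logic, not the claimed implementation:

```python
from typing import Callable, List

def grouped_reduce(values: List[float],
                   op: Callable[[List[float]], float],
                   max_per_call: int) -> float:
    """Hierarchically reduce `values` with `op`, never passing more than
    `max_per_call` items to a single operation (claims 3-4)."""
    while len(values) > 1:
        # Split into data groups of at most max_per_call items each.
        groups = [values[i:i + max_per_call]
                  for i in range(0, len(values), max_per_call)]
        # One "second processing result" per group; these become the new
        # data to be processed until a single group remains (claim 4).
        values = [op(g) for g in groups]
    return values[0]

def self_consistent(values: List[float],
                    op: Callable[[List[float]], float],
                    max_per_call: int,
                    n_runs: int = 3) -> float:
    """Repeat the reduction until a preset number threshold of first
    processing results is reached, then select the most frequent one as
    the target first processing result (claims 5-6)."""
    results = [grouped_reduce(values, op, max_per_call) for _ in range(n_runs)]
    return max(set(results), key=results.count)

def process_text(requirements: List[Callable[[List[float]], float]],
                 data: List[float],
                 max_per_call: int) -> float:
    """Chain processing requirements: each first processing result becomes
    the data to be processed of the next requirement (claim 1)."""
    for op in requirements:
        data = [self_consistent(data, op, max_per_call)]
    return data[0]
```

With a deterministic `op` such as `sum`, `process_text([sum], list(range(1, 11)), 4)` groups the ten values into batches of at most four, sums each batch, and then sums the partial results; with an actual model call, the repeated runs in `self_consistent` guard against occasional arithmetic errors in individual responses.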
CN202311619029.3A 2023-11-29 2023-11-29 Data computing method, device, equipment and medium based on large language model Pending CN117807982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311619029.3A CN117807982A (en) 2023-11-29 2023-11-29 Data computing method, device, equipment and medium based on large language model

Publications (1)

Publication Number Publication Date
CN117807982A 2024-04-02

Family

ID=90424314


Country Status (1)

Country Link
CN (1) CN117807982A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination