CN117827178A - Automatic code generation method - Google Patents

Automatic code generation method

Info

Publication number
CN117827178A
Authority
CN
China
Prior art keywords
code
question
prompt words
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410239437.4A
Other languages
Chinese (zh)
Inventor
李天国
龙榜
杨芷柳
刘新
许刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Farben Information Technology Co., Ltd.
Original Assignee
Shenzhen Farben Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Farben Information Technology Co., Ltd.
Priority to CN202410239437.4A
Publication of CN117827178A
Legal status: Pending (current)

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention provides an automatic code generation method in which a plurality of question-answering models are configured and several different sets of prompt words are set for each model. Each set of prompt words is debugged, and the sets that pass debugging are stored in a database. When a question instruction is received from a user, a set of prompt words is loaded from the database for questioning, the question information is input into the question-answering model corresponding to those prompt words, and the code output by the model is obtained. When the code is qualified, it is returned to the user; when the code is unqualified, another set of prompt words is loaded from the database and the question is asked again. When the code generated under every set of prompt words in the database is unqualified, default information is returned to the user. Because the method can apply several different sets of prompt words to the same question-answering model and can also query multiple question-answering models, the prompt words are more diversified. The quality of the generated code is quantified, so the answer quality of each question-answering model can be judged and the low-quality answers of the prior art are avoided.

Description

Automatic code generation method
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to an automatic code generation method.
Background
In current automatic code generation methods, a user asks a question according to prompt words, and a question-answering model automatically generates the corresponding code from the question information. However, the prior art usually adopts a single technique, such as structured prompt words or few-sample prompt words, together with only one set of prompt-word templates; the prompt words are therefore monotonous, and very low-quality answers easily appear.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides an automatic code generation method that combines a plurality of question-answering models with several different sets of prompt words for questioning, so that the prompt words are more diversified and the quality of the generated code is improved.
An automatic code generation method, comprising:
configuring a plurality of question-answering models;
setting a plurality of different sets of prompt words for each question-answering model;
debugging each set of prompt words, and storing the sets that pass debugging into a database;
when a question instruction of a user is received, loading a set of prompt words from the database for questioning, inputting the question information into the question-answering model corresponding to the prompt words, and obtaining the code output by the model;
when the code is qualified, returning the code to the user;
when the code is unqualified, loading another set of prompt words from the database and asking again;
and when the code generated under every set of prompt words in the database is unqualified, returning default information to the user.
Further, the prompt words comprise a plurality of sub-items;
the sub-items include, but are not limited to: template name, application model, application function, priority, template framework, role, task, goal, requirement, context, example, whether to add a chain of thought, and whether to enable the template.
Further, debugging each set of prompt words specifically comprises:
entering debugging content into the sub-items of the prompt words;
entering debugging values for the built-in variables, which include, but are not limited to, the input question, development language, answer language, source language, and target language;
inputting the entered prompt words and built-in variables into the corresponding question-answering model;
receiving the code returned by the question-answering model;
and, when an instruction marking the code as qualified is received, determining that the prompt words pass debugging.
Further, storing the prompt words that pass debugging into the database specifically comprises:
setting a priority for the prompt words that pass debugging;
judging whether the priority duplicates a priority already in the database;
if yes, resetting the priority of the prompt words;
if not, storing the prompt words and the corresponding priority into the database.
Further, the method for loading the prompt words comprises:
loading the prompt words in order of priority from high to low.
Further, the method for loading the prompt words for questioning comprises:
processing the loaded prompt words;
and receiving the question information entered according to the processed prompt words.
Further, processing the loaded prompt words specifically comprises:
judging whether the prompt words use a standard framework;
if yes, processing the prompt words with the standard framework;
if not, obtaining the actual values of the built-in variables and replacing the values of the built-in variables in the question-answering model with the actual values.
Further, the standard framework comprises a structured framework or a few-sample framework.
Further, the method for judging whether the code is qualified comprises:
when the length of the code is less than a length threshold, defining the code as unqualified;
when the length of the code is greater than the length threshold, extracting each sub-code segment of the code in turn, and obtaining a code specification score and a code security score for each segment;
when the code specification score or the code security score of any sub-code segment is lower than a score threshold, defining the code as unqualified;
and when the code specification scores and the code security scores of all sub-code segments are greater than or equal to the score threshold, defining the code as qualified.
Further, the method further comprises:
setting a plurality of code specification items and a plurality of code security items for different development languages, wherein each code specification item and each code security item is provided with a score;
and, when a sub-code segment violates a code specification item or a code security item, deducting the corresponding score from the code specification score or the code security score of that segment.
According to the above technical scheme, the automatic code generation method provided by the invention can apply several different sets of prompt words to the same question-answering model and can also query multiple question-answering models, so the prompt words are more diversified. The method also quantifies the quality of the generated code, so the answer quality of each question-answering model can be judged and answers meeting the quality standard can be selected, which avoids the low-quality answers of the prior art and improves the quality of the generated code.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
Fig. 1 is a flowchart of an automatic code generation method provided in an embodiment.
Fig. 2 is a flowchart of a prompt-word debugging method according to an embodiment.
Fig. 3 is an interface diagram of prompt-word debugging according to an embodiment.
Fig. 4 is another interface diagram of prompt-word debugging according to an embodiment.
Fig. 5 is a flowchart of a prompt-word processing method according to an embodiment.
Fig. 6 is a flowchart of a prompt-word processing method under a structured framework according to an embodiment.
Fig. 7 is a flowchart of a prompt-word processing method under a few-sample framework according to an embodiment.
Fig. 8 is a flowchart of a code quality judging method according to an embodiment.
Detailed Description
Embodiments of the technical scheme of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and thus are merely examples, and are not intended to limit the scope of the present invention. It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention pertains.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Examples:
An automatic code generation method, referring to Fig. 1, comprising:
configuring a plurality of question-answering models;
setting a plurality of different sets of prompt words for each question-answering model;
debugging each set of prompt words, and storing the sets that pass debugging into a database;
when a question instruction of a user is received, loading a set of prompt words from the database for questioning, inputting the question information into the question-answering model corresponding to the prompt words, and obtaining the code output by the model;
when the code is qualified, returning the code to the user;
when the code is unqualified, loading another set of prompt words from the database and asking again;
and when the code generated under every set of prompt words in the database is unqualified, returning default information to the user.
In this embodiment, the question-answering model may be an existing large language model. One or more different sets of prompt words can be set for each question-answering model; for example, the prompt words may be set manually for each accessed model, and each set may be implemented with a different prompting technique. The prompt words are mainly used to prompt the user on how to enter requirements. After the prompt words are configured, each set is debugged; the sets that pass debugging are stored into the database, and sets that fail debugging are not used. Debugging mainly judges the effect of the prompt words, and only the sets whose effect is satisfactory are stored into the database.
In this embodiment, when a user asks a question, a question instruction is initiated: a set of prompt words is loaded from the database for questioning, the question information is input into the question-answering model corresponding to those prompt words, and the code output by the model is obtained. If the code quality is unqualified, another set of prompt words is loaded from the database and the question is asked again. When the code generated under every set of prompt words in the database is unqualified, default information is returned to the user, for example, "Sorry, I am temporarily unable to answer your question."
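As an illustrative, non-limiting Python sketch of this loop (all names are hypothetical, since the embodiment does not specify an implementation, and the helper functions are passed in as callables):

DEFAULT_REPLY = "Sorry, I am temporarily unable to answer your question."

def generate_code(user_question, prompt_db, models, build_question, is_qualified):
    # prompt_db: stored prompt-word sets; models: name -> callable(question) -> code.
    # Try each stored set of prompt words in order of priority (1 = highest).
    for prompt in sorted(prompt_db, key=lambda p: p["priority"]):
        ask = models[prompt["application_model"]]         # model bound to this set
        question = build_question(prompt, user_question)  # hypothetical prompt assembly
        code = ask(question)
        if is_qualified(code):                            # length and score check, sketched further below
            return code
    # Every stored set of prompt words produced unqualified code.
    return DEFAULT_REPLY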
With this automatic code generation method, several different sets of prompt words can be used on the same question-answering model, and multiple question-answering models can be queried, so the prompt words are more diversified. The method also quantifies the quality of the generated code, so the answer quality of each question-answering model can be judged and answers meeting the quality standard can be selected, which avoids the low-quality answers of the prior art and improves the quality of the generated code.
Further, in some embodiments, the prompt words comprise a plurality of sub-items;
the sub-items include, but are not limited to: template name, application model, application function, priority, template framework, role, task, goal, requirement, context, example, whether to add a chain of thought, and whether to enable the template.
In this embodiment, the left parts of Figs. 3 and 4 show the sub-items set for the prompt words. A set of prompt words can comprise a plurality of sub-items, and which sub-items are set is determined by the actual requirements of the user, as in the record sketched below.
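As an illustrative, non-limiting example, one stored record of prompt words built from these sub-items might look like the following Python dictionary (all field names and values are hypothetical):

prompt_template = {
    "template_name": "python-codegen-structured",
    "application_model": "model-A",        # which question-answering model the set binds to
    "application_function": "code generation",
    "priority": 1,                         # 1 = highest of the five levels
    "template_framework": "structured",    # or "few_sample"
    "role": "You are a senior Python developer.",
    "task": "Generate code that satisfies: {userquest}",
    "add_chain_of_thought": False,
    "enabled": True,
}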
Further, in some embodiments, debugging each set of prompt words specifically comprises:
entering debugging content into the sub-items of the prompt words;
entering debugging values for the built-in variables, which include, but are not limited to, the input question, development language, answer language, source language, and target language;
inputting the entered prompt words and built-in variables into the corresponding question-answering model;
receiving the code returned by the question-answering model;
and, when an instruction marking the code as qualified is received, determining that the prompt words pass debugging.
In this embodiment, referring to Fig. 2, when prompt words are debugged, the question-answering model corresponding to the prompt words to be debugged is first selected. Debugging content is entered into the sub-items of the prompt words, and debugging starts after the debugging values of the built-in variables are entered. The built-in variables include the input question {userquest}, the development language {development language}, the answer language {answer language}, the source language {srcLang}, and the target language {dstLang}. When debugging starts, the fully entered prompt words and built-in variables are input into the corresponding question-answering model, and the code returned by the model is received. When an instruction marking the code as qualified is received, i.e., the code returned under these prompt words is satisfactory, the prompt words pass debugging. Whether the debugging effect of the prompt words is satisfactory can be judged manually. For example, Figs. 3 and 4 show the debugging procedure for one set of prompt words: the left parts of Figs. 3 and 4 are the configured prompt words, and the right parts are the output of the question-answering model.
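As an illustrative, non-limiting sketch of the built-in variable substitution used during debugging (the placeholder keys follow the names listed above; the function and debug values are hypothetical):

def fill_builtin_variables(template: str, values: dict) -> str:
    # Replace each {name} placeholder with its debugging value.
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
    return template

debug_values = {
    "userquest": "Write a function that reverses a string",
    "development language": "Python",
    "answer language": "English",
    "srcLang": "Java",
    "dstLang": "Python",
}
print(fill_builtin_variables(
    "Task: {userquest}\nDevelopment language: {development language}",
    debug_values))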
Further, in some embodiments, storing the prompt words that pass debugging into the database specifically comprises:
setting a priority for the prompt words that pass debugging;
judging whether the priority duplicates a priority already in the database;
if yes, resetting the priority of the prompt words;
if not, storing the prompt words and the corresponding priority into the database.
In this embodiment, referring to Fig. 2, priorities are set for the prompt words that pass debugging; for example, five levels 1 to 5 may be used, with 1 the highest and 5 the lowest, i.e., the method allows up to five sets of prompt words. When prompt words carrying a priority are stored, the method judges whether that priority duplicates a priority already in the database; if not, the prompt words are stored normally, and if so, the priority is reset before storage. This ensures that no two sets of prompt words in the database share the same priority.
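As an illustrative, non-limiting sketch of this storage check (names hypothetical):

def store_prompt(prompt_db: list, prompt: dict) -> bool:
    # Reject the requested priority if another stored set already uses it;
    # the caller must then reset the priority and try again.
    if prompt["priority"] in {p["priority"] for p in prompt_db}:
        return False
    prompt_db.append(prompt)
    return True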
Further, in some embodiments, the method for loading the prompt words comprises:
loading the prompt words in order of priority from high to low.
In this embodiment, the prompt words are loaded in order of priority from high to low, and the corresponding question-answering model is accessed according to the loaded prompt words. For example, the prompt words with priority 1 are loaded first; if the quality of the code they generate is unsatisfactory, the prompt words with priority 2 are loaded, and so on.
Further, in some embodiments, the method for loading the prompt words for questioning comprises:
processing the loaded prompt words;
and receiving the question information entered according to the processed prompt words.
In this embodiment, when the prompt words are loaded, the method first processes them and then receives the question information entered according to the processed prompt words. Referring to Fig. 5, processing the loaded prompt words specifically comprises: judging whether the prompt words use a standard framework; if yes, processing the prompt words with the standard framework; if not, obtaining the actual values of the built-in variables and replacing the values of the built-in variables in the question-answering model with the actual values. The standard framework is, for example, a structured framework or a few-sample framework.

Fig. 6 illustrates the processing of prompt words under the structured framework, where different processing is applied according to the values of the different sub-items. The sub-items are defined as a JSON array; the method judges whether a sub-item exists and, if none exists, returns the processed prompt words. If sub-items remain, the name and value of each sub-item are obtained and processed according to that name; for example, the role sub-item is processed into text of the form "You are {}", the task sub-item into "Your task is {}", and so on.

Fig. 7 illustrates the processing of prompt words under the few-sample framework, which mainly assembles complete, fluent prompt words from several input and output examples. The sub-items are again defined as a JSON array; the method judges whether a sub-item exists and, if none exists, returns the processed prompt words. If a sub-item exists, its name, type, and value are obtained, and the name is judged to be background, instruction, example, or other. For background and instruction, the value of the sub-item is appended to the total prompt words; sub-items with other names are not processed. For the example sub-item, the text "please refer to the following examples for the output" is first appended to the total prompt words; all sub-items of the example are then obtained in a loop, and related content is appended according to their names: when the name of a sub-item is output, "Output: {}\n\n" is appended to the total prompt words, and when the name is input, "Input: {}\n" is appended.
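As an illustrative, non-limiting sketch of the few-sample assembly just described (the sub-item names and connecting phrases are reconstructions from the description, not the patent's exact strings):

def build_few_sample_prompt(sub_items: list) -> str:
    prompt = ""
    examples = []
    for item in sub_items:                      # item: {"name": ..., "value": ...}
        if item["name"] in ("background", "instruction"):
            prompt += item["value"] + "\n"      # appended directly to the total prompt words
        elif item["name"] == "example":
            examples.append(item["value"])      # value: {"input": ..., "output": ...}
        # sub-items with other names are not processed
    if examples:
        prompt += "Please refer to the following examples for the output:\n"
        for ex in examples:
            prompt += "Input: " + ex["input"] + "\n"
            prompt += "Output: " + ex["output"] + "\n\n"
    return prompt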
Further, in some embodiments, the method for judging whether the code is qualified comprises:
when the length of the code is less than a length threshold, defining the code as unqualified;
when the length of the code is greater than the length threshold, extracting each sub-code segment of the code in turn, and obtaining a code specification score and a code security score for each segment;
when the code specification score or the code security score of any sub-code segment is lower than a score threshold, defining the code as unqualified;
and when the code specification scores and the code security scores of all sub-code segments are greater than or equal to the score threshold, defining the code as qualified.
In this embodiment, referring to Fig. 8, after the code output by the question-answering model is obtained, the method treats very short code as poor quality: when the code length is less than the length threshold, the code is defined as unqualified. The length threshold can be set according to the user's actual situation, for example to 64. For longer code, each sub-code segment is extracted, and a code specification score and a code security score are obtained for each segment. When the code specification score or the code security score of any segment is lower than the score threshold, the code is defined as unqualified; when the scores of all segments are greater than or equal to the score threshold, the code quality is good and the code is defined as qualified. The score thresholds for code specification and code security may be the same or different and can also be set according to the user's actual situation; for example, both thresholds are set to 80 points out of a full 100, i.e., code allowed to be output may lose at most 20 points on specification and at most 20 points on security. When the code specification score or the code security score of a segment is below 80 points, its quality is poor.
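As an illustrative, non-limiting sketch of this qualification check (the segment extractor and the two scoring functions are passed in as callables because the description does not specify their implementation; the threshold values are the examples given above):

LENGTH_THRESHOLD = 64   # example value from the description
SCORE_THRESHOLD = 80    # example value from the description, out of 100

def is_qualified(code, extract_sub_codes, spec_score, security_score):
    # Very short code is treated as poor quality outright.
    if len(code) < LENGTH_THRESHOLD:
        return False
    # Every sub-code segment must reach the threshold on both scores.
    for sub_code in extract_sub_codes(code):
        if spec_score(sub_code) < SCORE_THRESHOLD:
            return False
        if security_score(sub_code) < SCORE_THRESHOLD:
            return False
    return True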
Further, in some embodiments, the method further comprises:
setting a plurality of code specification items and a plurality of code security items for different development languages, wherein each code specification item and each code security item is provided with a score;
and, when a sub-code segment violates a code specification item or a code security item, deducting the corresponding score from the code specification score or the code security score of that segment.
In this embodiment, referring to Fig. 8, the method sets a plurality of code specification items and a plurality of code security items for different development languages: code specification comprises the specification items, and code security comprises the security items. Each item is assigned a score. If a sub-code segment violates a code specification item, the score of that item is deducted from the segment's code specification score; if it violates a code security item, the score of that item is deducted from the segment's code security score.
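As an illustrative, non-limiting sketch of this deduction-based scoring (the rule format and the sample rules are hypothetical):

def deduction_score(sub_code: str, rules: list) -> int:
    # rules: [{"check": callable returning True on a violation, "score": int}, ...]
    score = 100                       # full mark before deductions
    for rule in rules:
        if rule["check"](sub_code):   # the rule is violated
            score -= rule["score"]    # deduct that rule's score
    return max(score, 0)

# A tiny illustrative specification rule set for Python.
python_spec_rules = [
    {"check": lambda c: "\t" in c, "score": 5},   # tabs used instead of spaces
    {"check": lambda c: any(len(line) > 120 for line in c.splitlines()), "score": 5},
]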
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit of the invention and are intended to be included within the scope of the appended claims and description.

Claims (10)

1. An automatic code generation method, comprising:
configuring a plurality of question-answering models;
setting a plurality of different sets of prompt words for each question-answering model;
debugging each set of prompt words, and storing the prompt words that pass debugging into a database;
when a question instruction of a user is received, loading a set of prompt words from the database for questioning, inputting the question information into the question-answering model corresponding to the prompt words, and obtaining the code output by the question-answering model;
when the code is qualified, returning the code to the user;
when the code is unqualified, loading another set of prompt words from the database and asking again;
and when the codes generated by all prompt words in the database are unqualified, returning default information to the user.
2. The automatic code generation method according to claim 1, wherein
the prompt words comprise a plurality of sub-items;
the sub-items include, but are not limited to: template name, application model, application function, priority, template framework, role, task, goal, requirement, context, example, whether to add a chain of thought, and whether to enable the template.
3. The automatic code generation method according to claim 2, wherein debugging each set of prompt words specifically comprises:
entering debugging content into the sub-items of the prompt words;
entering debugging values for the built-in variables, wherein the built-in variables include, but are not limited to, the input question, development language, answer language, source language, and target language;
inputting the entered prompt words and built-in variables into the corresponding question-answering model;
receiving the code returned by the question-answering model;
and, when an instruction marking the code as qualified is received, determining that the prompt words pass debugging.
4. The automatic code generation method according to claim 1, wherein storing the prompt words that pass debugging into the database specifically comprises:
setting a priority for the prompt words that pass debugging;
judging whether the priority duplicates a priority already in the database;
if yes, resetting the priority of the prompt words;
if not, storing the prompt words and the corresponding priority into the database.
5. The automatic code generation method according to claim 4, wherein the method for loading the prompt words comprises:
loading the prompt words in order of priority from high to low.
6. The automatic code generation method according to claim 3, wherein the method for loading the prompt words for questioning comprises:
processing the loaded prompt words;
and receiving the question information entered according to the processed prompt words.
7. The automatic code generation method according to claim 6, wherein processing the loaded prompt words specifically comprises:
judging whether the prompt words use a standard framework;
if yes, processing the prompt words with the standard framework;
if not, obtaining the actual values of the built-in variables and replacing the values of the built-in variables in the question-answering model with the actual values.
8. The automatic code generation method according to claim 7, wherein the standard framework comprises a structured framework or a few-sample framework.
9. The automatic code generation method according to claim 1, wherein the method for judging whether the code is qualified comprises:
when the length of the code is less than a length threshold, defining the code as unqualified;
when the length of the code is greater than the length threshold, extracting each sub-code segment of the code in turn, and obtaining a code specification score and a code security score for each segment;
when the code specification score or the code security score of any sub-code segment is lower than a score threshold, defining the code as unqualified;
and when the code specification scores and the code security scores of all sub-code segments are greater than or equal to the score threshold, defining the code as qualified.
10. The automatic code generation method according to claim 9, further comprising:
setting a plurality of code specification items and a plurality of code security items for different development languages, wherein each code specification item and each code security item is provided with a score;
and, when a sub-code segment violates a code specification item or a code security item, deducting the corresponding score from the code specification score or the code security score of that segment.
CN202410239437.4A 2024-03-04 2024-03-04 Automatic code generation method Pending CN117827178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410239437.4A CN117827178A (en) 2024-03-04 2024-03-04 Automatic code generation method

Publications (1)

Publication Number Publication Date
CN117827178A true CN117827178A (en) 2024-04-05

Family

ID=90523022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410239437.4A Pending CN117827178A (en) 2024-03-04 2024-03-04 Code automatic generation method

Country Status (1)

Country Link
CN (1) CN117827178A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110377497A (en) * 2019-05-27 2019-10-25 深圳壹账通智能科技有限公司 Code detection method, device, computer installation and storage medium
US11853196B1 (en) * 2019-09-27 2023-12-26 Allstate Insurance Company Artificial intelligence driven testing
CN116560642A (en) * 2023-05-11 2023-08-08 中国工商银行股份有限公司 Code generation method and device, electronic equipment and storage medium
CN116991990A (en) * 2023-07-04 2023-11-03 上海识装信息科技有限公司 Program development assisting method, storage medium and device based on AIGC
CN116894188A (en) * 2023-07-17 2023-10-17 北京有竹居网络技术有限公司 Service tag set updating method and device, medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赖茂生 (Lai Maosheng) et al., "Research on Web Users' Search Entrances and Navigation Behavior" (网络用户的搜索入口与跳转行为研究), Information Studies: Theory & Application (情报理论与实践), No. 04, 30 April 2009 (2009-04-30) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination