CN116501306B - Method for generating interface document code based on natural language description - Google Patents

Method for generating interface document code based on natural language description

Info

Publication number
CN116501306B
CN116501306B (application CN202310776692.8A)
Authority
CN
China
Prior art keywords
interface document
interface
prompt
natural language
document code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310776692.8A
Other languages
Chinese (zh)
Other versions
CN116501306A (en)
Inventor
刘昊臻
徐超
崔嘉杰
王飞扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunliu Technology Guangzhou Co ltd
Shenzhen Yinyun Information Technology Co ltd
Original Assignee
Yunliu Technology Guangzhou Co ltd
Shenzhen Yinyun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunliu Technology Guangzhou Co ltd and Shenzhen Yinyun Information Technology Co ltd
Priority to CN202310776692.8A
Publication of CN116501306A
Application granted
Publication of CN116501306B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/70 Software maintenance or management
    • G06F8/73 Program documentation
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Document Processing Apparatus (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a method for generating interface document code based on a natural language description, comprising the following steps: acquiring a natural language description input by a user, preprocessing it, and obtaining a plurality of interface-document generation prompt words; vectorizing the prompt words to obtain prompt-word vector representations; collecting and formatting interface document code samples that conform to the OpenAPI 3 rules; constructing and training a language model, and inputting the prompt-word vector representations into the trained model to generate interface document code; and converting the format of the generated code to obtain an interface document and creating a file to store it. By generating interface document code from natural language descriptions, the invention greatly improves the efficiency of writing interface documents, reduces developers' workload and error rate, and can generate interface document code conforming to different frameworks and language specifications according to the natural language description input by the user.

Description

Method for generating interface document code based on natural language description
Technical Field
The invention belongs to the field of computer system design, and in particular relates to a method for generating interface document code based on a natural language description.
Background
In the software development process, an interface document is an essential artifact: it is a standardized description of the communication between the components of a software system and can be used to guide developers in writing code. Interface documents have traditionally been written by hand, which is inefficient, error-prone, and involves a great deal of repetitive work. With the development of natural language processing technology, generating interface document code from natural language descriptions has become a research hotspot.
Although some researchers have attempted to generate interface document code from natural language descriptions, the prior art suffers from the following deficiencies: 1) accuracy is low and ambiguity arises easily; 2) extensibility is poor, with only specific languages or frameworks supported; 3) the generated interface document code cannot be adjusted automatically according to the description input by the user, so it lacks adaptivity. Therefore, there is a need for a method and system for generating interface document code from natural language descriptions with high accuracy, good scalability, and adaptivity.
Disclosure of Invention
The invention aims to provide a method for generating interface document codes based on natural language description, so as to solve the problems in the prior art.
To achieve the above object, the present invention provides a method for generating interface document code based on a natural language description, comprising:
acquiring a natural language description input by a user, preprocessing the natural language description, and obtaining a plurality of interface-document generation prompt words;
vectorizing the plurality of interface-document generation prompt words to obtain prompt-word vector representations;
collecting and formatting interface document code samples that conform to preset rules;
constructing a language model, training it, and inputting the prompt-word vector representations into the trained language model to generate interface document code;
and converting the format of the interface document code to obtain an interface document, and creating a file for storage.
Optionally, the preprocessing includes: adding additional prompt words to the natural language description, removing useless words, segmenting the description from which the useless words have been removed to obtain a plurality of phrases, classifying the phrases by part of speech, and extracting, according to the parts of speech of the phrases, the key phrases related to interface document generation as the interface-document generation prompt words.
Optionally, the additional prompt words include specific terms, keywords, and structural elements defined in a preset specification.
Optionally, the methods for vectorizing the interface-document generation prompt words include pre-trained word vector model conversion and adaptive learning method conversion.
Optionally, the pre-trained word vector model conversion process includes: constructing a word vector model, training it on a corpus, and obtaining the vector representations of the plurality of interface-document generation prompt words through the trained word vector model.
Optionally, the adaptive learning method conversion process includes: constructing a neural network architecture that contains a word embedding layer, optimizing the network by back propagation during training, and, after training is completed, converting each of the interface-document generation prompt words into a continuous low-dimensional vector representation through the word embedding layer.
Optionally, the process of generating the interface document code further includes: constructing and training the language model with a deep learning framework, inputting the prompt-word vector representations into the language model, splitting them into a plurality of discrete tokens, and executing an inference process to obtain the corresponding interface document code.
The invention has the technical effects that:
the invention mainly aims to provide a method and a system for generating interface document codes based on natural language description, so as to improve the efficiency of software development and reduce the error rate. The invention solves the problems of low efficiency, easy error, large repeated workload and the like of the traditional interface document writing mode, greatly improves the efficiency of interface document writing and reduces the workload and error rate of developers by adopting the mode of generating the interface document codes through natural language description. In addition, the method and the system have expandability and self-adaption, and can generate interface document codes conforming to different frameworks and language specifications according to natural language descriptions input by users. Therefore, the invention has wide application prospect and commercial value.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one shown herein.
In a first embodiment of the present invention, as shown in FIG. 1, a method for generating interface document code based on a natural language description is provided, comprising the following modules and steps:
the preprocessing module can process natural language description input by a user, remove useless words, segmentation and part-of-speech labels, and extract key word groups as prompt words generated by interface documents. These preprocessing steps help to improve the accuracy of the generated results and the degree to which they meet the needs of the user.
Adding additional hints limits the scope and format of the generated content: in the preprocessing stage, additional prompt words can be added to limit the generated interface document content to conform to openapi3 specification. These additional hints may include specific terms, keywords, or necessary structural elements defined in the openapi3 specification to ensure that the generated interface document complies with the specification.
And (5) removing useless words: in the natural language description entered by the user, some insignificant words, such as articles, prepositions, etc., may be included. The preprocessing module removes these useless words to reduce noise and extract the core description information.
Word segmentation: and dividing the description subjected to the useless word removal processing into words or phrases to form phrases. This step helps to better understand the structure and semantics of sentences in subsequent processing.
Part of speech tagging: part-of-speech tags are added to each phrase so that subsequent processing can better understand the structure of the sentence. The part-of-speech labels can classify different types of phrases, such as nouns, verbs, adjectives, and the like, and help understand the roles and relationships of the phrases in generating interface documents.
Extracting key word groups as prompting words generated by interface documents: and extracting the key phrase related to the interface document generation by the preprocessing module according to the key phrase and the part-of-speech label in the user description, and taking the key phrase as an input prompt word for language model reasoning in the subsequent step. These key phrases may be used to instruct the interface document generation module to generate an interface document that meets the needs of the user.
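As an illustration of this preprocessing pipeline (not the patented method's actual implementation), the following is a minimal Python sketch that removes stop words, segments the text, tags parts of speech with NLTK, and keeps nouns and verbs as candidate prompt words; the stop-word list and the noun/verb filter are assumptions chosen for the example, and the NLTK tokenizer and tagger data are assumed to be installed.

```python
import re
import nltk  # assumes the punkt tokenizer and averaged_perceptron_tagger data are downloaded

# Hypothetical stop-word list; a real system would use a fuller, language-specific list.
STOP_WORDS = {"a", "an", "the", "of", "to", "for", "on", "in", "is", "and", "with"}

def extract_prompt_words(description: str) -> list[str]:
    """Remove useless words, segment, POS-tag, and keep key phrases (nouns/verbs)."""
    tokens = nltk.word_tokenize(description)
    tokens = [t for t in tokens if t.lower() not in STOP_WORDS and re.match(r"\w", t)]
    tagged = nltk.pos_tag(tokens)  # e.g. [('Create', 'VB'), ('GET', 'NNP'), ...]
    # Keep nouns and verbs as the interface-document generation prompt words.
    return [word for word, tag in tagged if tag.startswith(("NN", "VB"))]

if __name__ == "__main__":
    desc = ("Create a GET request to access user data on the API server; "
            "the returned data format is JSON.")
    print(extract_prompt_words(desc))
```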
Prompt word vectorization module: converts the prompt words obtained by the preprocessing module into vector representations that a machine can understand and process. These vector representations are used by the subsequent language model inference module to generate interface document code conforming to the OpenAPI 3 rules.
a. The prompt words obtained by the preprocessing module are converted into vector representations by utilizing a pre-trained word vector model or an adaptive learning method:
the goal of hint word vectorization is to convert each hint word into a vector representation so that a computer can understand and process them. The vectorization is selected using a pre-trained word vector model or an adaptive learning method. The pre-trained word vector model is trained on a large corpus, and can capture semantic relations among words. The self-adaptive learning method uses the neural network model to train according to your data, and can be better adapted to specific tasks.
b. For the pre-trained word vector model, the model is loaded and the prompt words are mapped to the corresponding word vectors:
First, the pre-trained word vector model is loaded; such models are typically distributed as binary or text files and are loaded into memory for use. For each prompt word, its vector representation is obtained by querying the pre-trained word vector model. These vectors usually have a fixed dimension, and during training the model has learned to map similar words to nearby points in the vector space.
c. For the adaptive learning method, the prompt word is converted into a vector representation using a neural network model:
if the adaptive learning method is selected, the prompt word is converted into a vector representation by using a neural network model. In this approach, a neural network architecture is designed that includes a word embedding layer (Word Embedding Layer). The word embedding layer is responsible for converting the vocabulary into a continuous low-dimensional vector representation. The neural network may be optimized by back propagation during training to enable the vector representation of the cue words to better adapt to the task.
The language model inference module generates interface document code conforming to the OpenAPI 3 rules from the pre-trained language model; this code can then be used to produce an interface document file.
a. Preparing training data: to train the language model, interface document code samples conforming to the OpenAPI 3 rules are collected. These samples may come from published API documents or other available resources. The samples are formatted into an input form suitable for the language model, for example converted into text or token sequences.
b. Training the language model: once the training data is ready, a language model is built with an appropriate deep learning framework (e.g., TensorFlow or PyTorch). The language model may be based on a recurrent neural network (e.g., LSTM or GRU) or on a Transformer variant. The model is trained on the prepared data, and its parameters are optimized by minimizing a loss function (e.g., cross-entropy loss) so that it can better predict interface document code.
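A compact PyTorch sketch of one possible realization of such a language model and a single training step; the LSTM architecture, vocabulary size, and dummy batch are placeholders standing in for tokenized OpenAPI 3 code samples, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class DocCodeLM(nn.Module):
    """Small LSTM language model over interface-document code tokens."""
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden_states)             # (batch, seq_len, vocab_size)

vocab_size = 5000                                   # assumed vocabulary size
model = DocCodeLM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (4, 33))       # dummy tokenized code samples
inputs, targets = batch[:, :-1], batch[:, 1:]       # next-token prediction setup

logits = model(inputs)
loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```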
c. Loading the trained language model and performing inference: after training is completed, the saved model is loaded into memory for code generation. The prompt word vectors are input into the loaded language model and the inference process is executed. The language model generates interface document code conforming to the OpenAPI 3 rules from the input prompt words. Inference may proceed by generating tokens one by one or by sampling tokens.
"Token" here denotes a mark or unit of text; in natural language processing and machine learning, a token is the basic unit of text processing, typically obtained by splitting a piece of text into individual words or symbols.
When generating interface document code, "token" refers to splitting the input prompt-word representation into a series of discrete tokens that the language model processes one by one. The split may follow specific rules, such as splitting on spaces or punctuation. Each token represents a portion of the input text and may be a word, a character, or a special symbol. These tokens are fed into the language model, which runs the inference process to generate the corresponding interface document code.
By decomposing the input prompt-word representation into tokens, the language model can better understand and process the input text and thus generate accurate interface document code.
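As a trivial illustration of such rule-based splitting (splitting on whitespace and punctuation is an assumption for the example; real systems often use subword tokenizers):

```python
import re

def split_into_tokens(text: str) -> list[str]:
    """Split text into word, number, and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

print(split_into_tokens("GET /api/users, Authorization: Token {token}"))
# ['GET', '/', 'api', '/', 'users', ',', 'Authorization', ':', 'Token', '{', 'token', '}']
```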
d. Controlling the diversity and accuracy of the generated code: the parameters of the language model can be adjusted to control the diversity and accuracy of the generated code. One important parameter is the temperature, which controls the degree of randomness during generation. Higher temperature values yield more randomness and diversity, while lower values tend to produce more accurate but relatively conservative code. Setting the temperature appropriately balances the diversity and accuracy of the generated code against the requirements at hand.
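A minimal sketch of temperature-controlled, token-by-token sampling from a trained model's logits; the model interface follows the DocCodeLM sketch above and is an assumption for illustration.

```python
import torch

@torch.no_grad()
def generate(model, start_ids: torch.Tensor, max_new_tokens: int = 50,
             temperature: float = 0.8) -> torch.Tensor:
    """Generate tokens one by one; lower temperature yields more conservative output."""
    ids = start_ids
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :]                       # logits for the next token
        probs = torch.softmax(logits / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)   # sample one token
        ids = torch.cat([ids, next_id], dim=1)
    return ids
```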
The code generation module converts the interface document code produced by the language model into YAML format and creates a file to save the interface document. The resulting YAML file, conforming to the OpenAPI 3 specification, describes the API interface, including the structure, parameters, and paths of requests and responses, for developers and other stakeholders to review and use.
a. Converting the interface document code into YAML format: after the language model inference module generates the interface document code, the code generation module converts it into YAML. YAML is a human-readable data serialization format commonly used to represent interface documents. Code is written to process the output of the language model and convert it into YAML, ensuring that the conversion is consistent with the OpenAPI 3 specification, including the correct structure, fields, and attributes.
b. Creating a file to store the interface document: after the interface document code has been converted into YAML, a file is created to save the interface document. An appropriate naming convention and directory structure are chosen to facilitate managing and maintaining the interface document files. Typically, one file holds the document of a single interface, or the documents of several interfaces are organized in the same file; the scheme is chosen according to project requirements and conventions.
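A minimal sketch of this conversion and storage step using PyYAML; the dictionary content, file naming scheme, and output directory are illustrative assumptions.

```python
from pathlib import Path
import yaml

def save_interface_document(doc: dict, name: str, out_dir: str = "docs/api") -> Path:
    """Serialize an OpenAPI 3 document dict to YAML, one file per interface."""
    path = Path(out_dir) / f"{name}.yaml"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w", encoding="utf-8") as f:
        yaml.safe_dump(doc, f, sort_keys=False, allow_unicode=True)
    return path

document = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {},   # to be filled with the language model's generated paths
}
print(save_interface_document(document, "users"))
```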
In experiments, a set of 10 randomly generated natural language descriptions, covering different API functions and interface specifications, was tested with the method of this embodiment. Manually written interface document code was compared with the code generated by the method, and the quality and accuracy of the generated code were evaluated using the following indicators:
Interface specification compliance: whether the generated interface document code conforms to the OpenAPI 3 specification, and how it differs from the manually written interface document code.
Document integrity: whether the generated interface document code contains a complete description of the API functions and the request/response parameters.
Code consistency: whether the generated interface document code is internally consistent, including the request method, request path, request parameters, request headers, and response parameters.
Error rate: whether the generated interface document code contains errors, such as an incorrect request path or missing or erroneous parameters.
The experimental results show that the method can efficiently and accurately generate interface document code conforming to the OpenAPI 3 standard; compared with manually written interface document code, the generated code shows higher compliance, integrity, and consistency, and a lower error rate.
Data 1: interface for creating a list of acquired goods "
Data 2: interface for adding a delete order "
Data 3: what is the interface to update the user information? "
Data 4: "Add an interface for acquiring article content"
Data 5: "how does an interface to get a list of users be created? "
Data 6: "I need an interface to create an order"
Data 7: interface for adding an uploaded picture "
Data 8: "how to create an interface for updating the content of an article? "
Data 9: "Add an interface for acquiring detail of goods"
Data 10: "create a delete user interface"
Assume the following natural language input description:
"create a GET request, access user data on API server. The URL of the API server is https:// example. Com/API/users, the request should include an Authorization header with a value of Token { Token }, and the returned data format is JSON. "
Preprocessing module: the program preprocesses the natural language description and obtains the following generation prompt words: create, GET request, API server, URL, Authorization header, value, Token, return, data format, JSON.
Prompt word vectorization: the prompt words are converted into the corresponding vector representations by the word vectorization module, so that the language model can recognize them and perform inference.
Inference: the program inputs the vectors into the trained language model for inference and, based on the pre-trained model, generates interface document code conforming to the OpenAPI 3 standard, including the request method GET, the request path /api/users, a request header containing Authorization with the value Token {token}, the return format JSON, and other information.
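For illustration only (this is not the patent's recorded output), the generated document for this example might resemble the following structure, shown as a Python dict compatible with the storage sketch above; the title, version, and schema details are assumptions.

```python
users_api_doc = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],
    "components": {
        "securitySchemes": {
            # The "Authorization: Token {token}" header is modeled as an apiKey scheme.
            "tokenAuth": {"type": "apiKey", "in": "header", "name": "Authorization"}
        }
    },
    "paths": {
        "/users": {
            "get": {
                "summary": "Get user data",
                "security": [{"tokenAuth": []}],
                "responses": {
                    "200": {
                        "description": "User data returned in JSON format",
                        "content": {"application/json": {"schema": {"type": "object"}}},
                    }
                },
            }
        }
    },
}
```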
In this way, the program can quickly generate interface document code conforming to the OpenAPI 3 specification, avoiding the workload and errors of traditional manual writing of interface documents and improving the efficiency and accuracy of interface document writing. The technical scheme is applicable to different frameworks and language specifications and therefore has broad applicability and commercial value.
The method of this embodiment improves the efficiency of writing interface documents: traditional manual writing of interface documents is inefficient and consumes a great deal of time and labor; by generating the interface document code from a natural language description, the method avoids the work of writing the interface document by hand and improves writing efficiency.
The method of this embodiment reduces errors in writing interface documents: manual writing of interface documents is prone to omissions and mistakes; language model inference can generate interface document code conforming to the OpenAPI 3 rules from the natural language description input by the user, which avoids the errors of manual writing and improves the accuracy of the interface documents.
The method of this embodiment is adaptive and extensible: prompt word vectorization and language model inference are adaptive and extensible, and can generate interface document code for different frameworks and language specifications, giving the method broad applicability and flexibility.
The method solves the problems of the traditional way of writing interface documents, namely low efficiency, error-proneness, and a large amount of repetitive work; by generating interface document code from natural language descriptions, it improves the efficiency of interface document writing and reduces developers' workload and error rate. In addition, the method and system are extensible and adaptive, and can generate interface document code conforming to different frameworks and language specifications according to the natural language description input by the user. The invention therefore has broad application prospects and commercial value.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (1)

1. A method for generating interface document code based on natural language description, comprising the steps of:
acquiring a natural language description input by a user, preprocessing the natural language description, and obtaining a plurality of interface-document generation prompt words;
vectorizing the plurality of interface-document generation prompt words to obtain prompt-word vector representations;
collecting and formatting interface document code samples that conform to preset rules;
constructing a language model, training it, and inputting the prompt-word vector representations into the trained language model to generate interface document code; in the process of generating the interface document code, the input prompt-word vector representation is split into a plurality of discrete tokens so that the language model can process them one by one, the splitting following specific rules including, but not limited to, splitting on spaces or punctuation marks;
obtaining the language model based on a recurrent neural network or a Transformer model;
converting the format of the interface document code to obtain an interface document, and creating a file for storage;
the preprocessing comprises: adding additional prompt words to the natural language description, removing useless words, segmenting the natural language description from which the useless words have been removed to obtain a plurality of phrases, classifying the phrases by part of speech, and extracting, according to the parts of speech of the phrases, the key phrases related to interface document generation as the interface-document generation prompt words;
the additional prompt words comprise specific terms, keywords, and structural elements defined in a preset specification;
the methods for vectorizing the interface-document generation prompt words comprise pre-trained word vector model conversion and adaptive learning method conversion;
the pre-trained word vector model conversion process comprises: constructing a word vector model, training the word vector model on a corpus, and obtaining, in turn, the vector representations of the plurality of interface-document generation prompt words through the trained word vector model;
the adaptive learning method conversion process comprises: constructing a neural network architecture comprising a word embedding layer, optimizing the neural network by back propagation during training, and, after training is completed, converting each of the plurality of interface-document generation prompt words into a continuous low-dimensional vector representation through the word embedding layer;
the process of generating the interface document code further comprises: constructing and training the language model with a deep learning framework, inputting the prompt-word vector representations into the language model, splitting them into a plurality of discrete tokens by the language model, and executing an inference process to obtain the corresponding interface document code.
CN202310776692.8A 2023-06-29 2023-06-29 Method for generating interface document code based on natural language description Active CN116501306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310776692.8A CN116501306B (en) 2023-06-29 2023-06-29 Method for generating interface document code based on natural language description

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310776692.8A CN116501306B (en) 2023-06-29 2023-06-29 Method for generating interface document code based on natural language description

Publications (2)

Publication Number Publication Date
CN116501306A (en) 2023-07-28
CN116501306B (en) 2024-03-26

Family

ID=87317033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310776692.8A Active CN116501306B (en) 2023-06-29 2023-06-29 Method for generating interface document code based on natural language description

Country Status (1)

Country Link
CN (1) CN116501306B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117193889B (en) * 2023-08-02 2024-03-08 上海澜码科技有限公司 Construction method of code example library and use method of code example library
CN117032722B (en) * 2023-08-18 2024-04-26 上海澜码科技有限公司 Code generation method based on API (application program interface) document
CN116820429B (en) * 2023-08-28 2023-11-17 腾讯科技(深圳)有限公司 Training method and device of code processing model, electronic equipment and storage medium
CN117009249A (en) * 2023-09-15 2023-11-07 天津赛象科技股份有限公司 Test method, system and medium for automatically generating interface use cases and codes
CN117111916A (en) * 2023-10-19 2023-11-24 天津赛象科技股份有限公司 Automatic interface code generation method and system based on AI and modularized framework

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510762B1 (en) * 2011-10-12 2013-08-13 Google Inc. Generate custom client library samples based on a machine readable API description
CN110489110A (en) * 2019-08-20 2019-11-22 腾讯科技(深圳)有限公司 A kind of code generating method and device based on deep learning
CN110955416A (en) * 2019-10-12 2020-04-03 平安普惠企业管理有限公司 Interface document generation method, device, equipment and computer storage medium
CN114168190A (en) * 2020-09-11 2022-03-11 腾讯科技(深圳)有限公司 Interface document generation method and device, computer equipment and storage medium
CN115202640A (en) * 2022-07-26 2022-10-18 上海交通大学 Code generation method and system based on natural semantic understanding
CN115390806A (en) * 2022-09-06 2022-11-25 大连理工大学 Software design mode recommendation method based on bimodal joint modeling
CN115576536A (en) * 2022-11-11 2023-01-06 中信百信银行股份有限公司 Method and system for automatically generating interface document by analyzing byte codes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11893385B2 (en) * 2021-02-17 2024-02-06 Open Weaver Inc. Methods and systems for automated software natural language documentation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Zhen, "A survey of automatic code generation and code context analysis" (代码自动生成及代码上下文分析研究综述), Data Communications (数据通信), No. 02, 2020-04-28, full text. *
Chao Lemen (朝乐门), Principles and Practice of Data Analysis: Based on Classic Algorithms and Python Programming (数据分析原理与实践), China Machine Press, 2022, full text. *
Heima Programmer (黑马程序员), Python Data Analysis and Application: From Data Acquisition to Visualization (Python数据分析与应用), China Railway Publishing House, 2019, full text. *

Also Published As

Publication number Publication date
CN116501306A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN116501306B (en) Method for generating interface document code based on natural language description
CN112541337B (en) Document template automatic generation method and system based on recurrent neural network language model
CN110442880B (en) Translation method, device and storage medium for machine translation
CN116661805B (en) Code representation generation method and device, storage medium and electronic equipment
CN111814451A (en) Text processing method, device, equipment and storage medium
CN117056531A (en) Domain knowledge driven large language model fine tuning method, system, equipment and storage medium
CN110633456B (en) Language identification method, language identification device, server and storage medium
CN111859950A (en) Method for automatically generating lecture notes
CN115497477A (en) Voice interaction method, voice interaction device, electronic equipment and storage medium
CN107622047B (en) Design decision knowledge extraction and expression method
CN110633468B (en) Information processing method and device for object feature extraction
CN111831624A (en) Data table creating method and device, computer equipment and storage medium
CN116644168A (en) Interactive data construction method, device, equipment and storage medium
CN114254657B (en) Translation method and related equipment thereof
CN115906835A (en) Chinese question text representation learning method based on clustering and contrast learning
CN112015891A (en) Method and system for classifying messages of network inquiry platform based on deep neural network
CN116484010B (en) Knowledge graph construction method and device, storage medium and electronic device
CN116720502B (en) Aviation document information extraction method based on machine reading understanding and template rules
CN114238070B (en) Test script generation method and system based on semantic recognition
Todosiev et al. The Conceptual Modeling System Based on Metagraph Approach
CN115965028A (en) Feature extraction method, feature extraction device, electronic equipment and storage medium
Chebanyuk Multilingual Question-Driven Approach and Software System to Obtaining Information from Texts
CN117216226A (en) Knowledge positioning method, device, storage medium and equipment
CN117669532A (en) Improved WMD text similarity calculation method and device
CN117688123A (en) Method and device for generating document structure tree

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant