CN111459491B - Code recommendation method based on tree neural network - Google Patents

Code recommendation method based on tree neural network

Info

Publication number
CN111459491B
CN111459491B CN202010185481.3A CN202010185481A
Authority
CN
China
Prior art keywords
code
recommended
annotation
information
abstract syntax
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010185481.3A
Other languages
Chinese (zh)
Other versions
CN111459491A (en)
Inventor
陶传奇
林锴
黄志球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202010185481.3A priority Critical patent/CN111459491B/en
Publication of CN111459491A publication Critical patent/CN111459491A/en
Application granted granted Critical
Publication of CN111459491B publication Critical patent/CN111459491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/43Checking; Contextual analysis
    • G06F8/436Semantic checking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/447Target code generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a code recommendation method based on a tree neural network, which comprises the following steps: preprocessing a data set formed from code blocks and constructing a training set; for each sample in the training set, processing the code above the line to be recommended, parsing it into an abstract syntax tree, extracting the path information from the root node to the line to be recommended, and adjusting the sample's abstract syntax tree and annotation information; training an AST-Transformer model, obtained by improving the Transformer model, which takes the abstract syntax tree, the annotation information, and the path from the root node to the recommendation point as input and the recommended code line sequence as output; and extracting the code written by the user together with its related annotation information, and predicting the next possible code. The method improves the Transformer model to fit the tree structure and uses the annotation information in the code, better helping the model learn semantic information and make high-quality recommendations.

Description

Code recommendation method based on tree neural network
Technical Field
The invention belongs to the field of code recommendation of intelligent software development, and particularly relates to a code recommendation method based on a tree neural network.
Background
With the continuous development of the Internet of Things, the number of users keeps growing and users' demands on software functionality keep increasing, so the scale of software development keeps expanding; in addition, the continuous evolution of programming languages further raises the difficulty for developers. During software development, developers often need to search for other developers' code through search engines and similar means as a reference for completing their own tasks, but the reference code obtained from a search engine rarely fits the code they have already written. Developers usually have to read their own code and modify the reference code into the form they want, which costs them considerable time and energy and greatly lengthens the software development cycle.
At present, deep learning is developing rapidly and is gradually being applied to the software development process; some deep learning methods, such as recurrent neural networks and the Transformer model, have been applied to code recommendation. However, these models rarely capture the structural information in code sufficiently, and they suffer from information loss on long sequences, which makes the recommended code inaccurate and of limited help to developers. A code block can also be represented as an abstract syntax tree, a structure that reflects the structural information well; by improving the Transformer model to fit the abstract syntax tree structure, the problem of insufficient capture of structural information can be solved, the loss of long-sequence information can be alleviated, and the recommendation accuracy can be improved.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a code recommendation method based on a tree neural network, to solve the problems that traditional deep learning methods cannot capture code structure information well and lose long-sequence information; the method improves the Transformer model to fit the tree structure and uses the annotation information in the code, better helping the model learn semantic information and make high-quality recommendations.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the invention relates to a code recommendation method based on a tree neural network, which comprises the following steps:
step 1) preprocessing a data set formed by code blocks, randomly selecting a code line to be recommended from the code blocks, using a code line sequence to be recommended as a label, and using codes above the code line to be recommended and related annotation information thereof as attributes to construct a training set;
step 2) processing the code part above the code line to be recommended of each sample in the training set, analyzing the code part into an abstract syntax tree, extracting path information from a root node to the code line to be recommended, and adjusting the abstract syntax tree and annotation information of the samples;
step 3) training an AST-Transformer model obtained by improving a Transformer model, taking an abstract syntax tree, annotation information and path information from a root node to a recommendation point as input, and taking a recommended code line sequence as output;
and step 4) extracting the code written by the user and its related annotation information, and predicting the next possible code by using the trained AST-Transformer model obtained in step 3) in combination with a beam search algorithm.
Further, the step 1) specifically includes:
11) filtering out of the data set all code blocks of five lines or fewer;
12) performing lexical analysis on the codes to form word sequences, calculating similarity among code blocks, detecting repeated codes, and removing the repeated codes;
13) randomly selecting a line from the code block as a code line to be recommended, using the upper code of the code line to be recommended and the annotation information related to the code line to be recommended as input features, and using a triple form of < the upper code, the annotation information and the code line to be recommended > as a sample to construct a training set.
Further, the step 2) specifically includes:
21) analyzing the codes above the code line to be recommended obtained in the step 13) to form an abstract syntax tree, and representing each node value into a vector form by using a hump detection and subword embedding mode to obtain an abstract syntax tree represented by a vector;
22) representing each word into a vector form by using a word2vec word embedding mode for the annotation information corresponding to each sample in the training set to obtain a vector sequence corresponding to the annotation information;
23) and extracting the path information from the root node to the code line to be recommended from the abstract syntax tree in the step 21).
Further, the AST-Transformer model in step 3) specifically includes:
31) an AST-Transformer encoder part comprising two encoders: a tree encoder, which receives as input the vector-represented abstract syntax tree obtained in step 21); and an ordinary encoder, which takes the vector sequence corresponding to the annotation information as input;
32) an AST-Transformer decoder part comprising two sub-layers: an annotation decoder sublayer, which takes two linear transformations of the output of the encoder corresponding to the annotation information as key and value, with the query obtained from the decoder's output; and an encoder sublayer corresponding to the abstract syntax tree, which takes two linear transformations of the tree encoder's output as key and value and the output of the annotation decoder sublayer as query; the AST-Transformer also uses the vector representation of the path from the root node to the recommendation point as the first input of the decoder.
Further, the step 4) specifically includes:
41) taking the information written by the user as the code information, and obtaining the vector-form abstract syntax tree by using the methods of step 2) and step 31); if the line above the current position is an annotation, selecting that annotation as the annotation information; otherwise, if the function containing the current position has an annotation, taking that annotation as the annotation information; and obtaining the vector sequence by using the method of step 22);
42) and taking the abstract syntax tree and the vector sequence in the vector form obtained in the step 41) as the input of the trained AST-Transformer, and obtaining a recommended code line through a beam search algorithm.
The invention has the beneficial effects that:
The invention utilizes the Transformer machine translation model from natural language processing, adjusts the Transformer model to fit the structural characteristics of programming languages, learns the semantics of the existing context during code development, and generates a next line of high-quality code, which is convenient for developers; it has the following advantages:
(1) The invention utilizes the tree structure and the self-attention mechanism to overcome the loss of information on long sequences suffered by traditional deep learning models (such as recurrent neural network models), and can still learn the semantic information of longer code fragments well.
(2) The invention improves the Transformer model. The traditional Transformer targets natural language, whereas a programming language differs from natural language in having standardized structural information; directly training a traditional Transformer model on a code sequence cannot capture this structural information sufficiently. The code is therefore represented in abstract syntax tree form and the tree is learned by the Transformer, so that structural information is captured and highly accurate recommendation results are generated.
Drawings
FIG. 1 is a process architecture diagram of the present invention.
FIG. 2 is a diagram showing the overall structure of the AST-Transformer in the present invention.
FIG. 3 is an overall structure diagram of the ordinary encoder.
FIG. 4 is an overall structure diagram of the encoder based on the abstract syntax tree.
FIG. 5 is an overall structure diagram of the decoding layer.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
Referring to fig. 1, a code recommendation method based on a tree neural network according to the present invention includes the following steps:
Step 1) preprocessing a data set consisting of code blocks, i.e., detecting and deleting duplicate code and short code; randomly selecting a line to be recommended from each code block, taking the sequence of the line to be recommended as the label, and taking the code above it together with its related annotation information as the attributes to construct a training set. Wherein:
11) filtering out of the data set all code blocks of five lines or fewer;
12) performing lexical analysis on the code blocks to form word sequences; for each pair of word sequences <tokens_A, tokens_B>, calculating the similarity:
Similarity(tokens_A, tokens_B) = |tokens_A ∩ tokens_B| / (|tokens_A| + |tokens_B| − |tokens_A ∩ tokens_B|)

where |tokens_A ∩ tokens_B| denotes the number of words the two word sequences have in common, |tokens_A| denotes the number of words in word sequence tokens_A, and |tokens_B| denotes the number of words in word sequence tokens_B. When Similarity(tokens_A, tokens_B) exceeds a defined threshold θ, the pair of code blocks corresponding to the word sequences is considered duplicated. The code set is clustered, and only one copy of each duplicated code block is retained;
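The duplicate-detection step above can be sketched as follows. This is a hedged reconstruction: the patent's formula is given only as an image, so reading |tokens_A ∩ tokens_B| as the number of distinct shared words (a Jaccard-style measure) is an assumption.

```python
def similarity(tokens_a, tokens_b):
    """Jaccard-style similarity between two token sequences (sketch; the
    exact formula in the patent is an image and is assumed here)."""
    set_a, set_b = set(tokens_a), set(tokens_b)
    common = len(set_a & set_b)
    return common / (len(set_a) + len(set_b) - common)

# Pairs whose similarity exceeds a threshold theta are treated as duplicates.
theta = 0.8
a = ["int", "gap", "=", "1", ";"]
b = ["int", "gap", "=", "1", ";"]
print(similarity(a, b))  # identical sequences -> 1.0
```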
13) Randomly selecting a line from the code block as the line to be recommended and, according to its position, taking the code above it as an input feature. At the same time, annotation information related to the line to be recommended is selected as an input feature: the annotation on the line immediately above the line to be recommended is preferred (if it exists), since it is the most relevant to that line; if it does not exist, the annotation of the function containing the line is selected; and finally the annotation of the enclosing class is considered. Finally, the training set is constructed with triples of the form <above code, annotation information, line to be recommended> as samples;
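The annotation-selection priority just described (previous line, then enclosing function, then enclosing class) amounts to a simple fallback chain; the function and argument names below are illustrative, not from the patent:

```python
def pick_annotation(prev_line_comment, func_comment, class_comment):
    """Return the most local available annotation for the line to be
    recommended, following the priority order given in step 13)."""
    for comment in (prev_line_comment, func_comment, class_comment):
        if comment:
            return comment
    return ""  # no annotation available at any level

print(pick_annotation("", "This method implements Generic Shell Sort.", ""))
```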
[Code example image: a generic Shell Sort method in Java, with the line to be recommended underlined; the code is reproduced in the triple below.]
For this code example, in which the line to be recommended is underlined, the following triple can be constructed according to the above rule:
<
public<T extends Comparable<T>>T[]sort(T[]array){int length=array.length;int gap=1;while(gap<length/3){,
This method implements Generic Shell Sort.,
gap=3*gap+1
>。
Step 2) processing the code above the line to be recommended for each sample in the training set: parsing it into an abstract syntax tree, extracting the path information from the root node to the line to be recommended, and adjusting the sample's abstract syntax tree and annotation information.
21) Parsing the code above the line to be recommended obtained in step 13) to form the corresponding abstract syntax tree, and representing each node value in vector form by means of camel-case splitting and subword embedding (the subwords are embedded with word2vec), obtaining the vector-represented abstract syntax tree vecAST.
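The camel-case ("hump") splitting of node values into subwords might look like the following sketch; the exact splitting rules for digits and acronyms are not specified in the patent, so this regex is an assumption:

```python
import re

def split_camel_case(identifier):
    """Split an identifier such as 'getArrayLength' into lowercase subwords,
    a common preprocessing step before subword embedding."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)
    return [p.lower() for p in parts]

print(split_camel_case("getArrayLength"))  # ['get', 'array', 'length']
```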
22) Representing each word of the annotation information corresponding to each sample in the training set in vector form by word2vec word embedding, obtaining the vector sequence V = (v1, v2, ..., vm) corresponding to the annotation information.
23) Extracting the path information from the root node to the line to be recommended from the abstract syntax tree of step 21).
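A minimal sketch of this path extraction on a toy dictionary-based AST follows; the real method parses source code into an AST, and the node layout here is assumed for illustration:

```python
def root_path(node, target, path=None):
    """Return the list of node types from the root down to `target`
    (identity comparison), or None if `target` is not in the tree."""
    path = (path or []) + [node["type"]]
    if node is target:
        return path
    for child in node.get("children", []):
        found = root_path(child, target, path)
        if found:
            return found
    return None

# A toy AST: MethodDecl -> Block -> WhileStmt; the recommendation point
# sits at the while statement.
leaf = {"type": "WhileStmt"}
tree = {"type": "MethodDecl", "children": [{"type": "Block", "children": [leaf]}]}
print(root_path(tree, leaf))  # ['MethodDecl', 'Block', 'WhileStmt']
```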
Step 3) training the AST-Transformer model obtained by improving the Transformer model, as shown in FIG. 2, taking the abstract syntax tree, the annotation information, and the path information from the root node to the line to be recommended as input, and the recommended line sequence as output. Wherein:
31) The AST-Transformer encoder part, which comprises two encoders. One is the tree encoder (as shown in FIG. 4), which receives the abstract syntax tree vecAST obtained in step 21) and outputs the vector sequence Output_tree formed by the representations of all non-terminal nodes. The tree encoder learns the vector representation of each non-terminal node through an ordinary encoder substructure (see FIG. 3) and a data compression substructure, in the order of a post-order traversal of the tree; the encoder substructure constructs its input sequence X from the value of the non-terminal node and the vector representations of its child nodes:
X = (v_parent, p_child1, p_child2, ..., p_childn)

where v_parent is the value of the non-terminal node (in vector form) and p_child1, ..., p_childn are the vector representations of its child nodes.
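The post-order construction of X can be sketched as follows, with toy stand-ins for the node-value embedding and the encoder/compression substructures, which the patent does not specify in detail:

```python
import numpy as np

def encode_tree(node, embed, encoder):
    """Post-order sketch of the tree encoder: each non-terminal's
    representation is computed from its own value vector plus the
    already-encoded child vectors, i.e. X = (v_parent, p_child1, ...)."""
    child_reps = [encode_tree(c, embed, encoder) for c in node.get("children", [])]
    x = [embed(node["type"])] + child_reps   # the input sequence X
    return encoder(np.stack(x))              # compress X into one vector

dim = 4
embed = lambda value: np.ones(dim) * len(value)  # toy embedding by name length
encoder = lambda X: X.mean(axis=0)               # toy compression: mean-pool
tree = {"type": "Block", "children": [{"type": "WhileStmt"}]}
rep = encode_tree(tree, embed, encoder)
print(rep.shape)  # (4,)
```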
The other is the ordinary encoder (as shown in FIG. 3), which is composed of four substructures including a multi-head self-attention model, a feed-forward network, and normalization models; it takes the vector sequence V = (v1, v2, ..., vm) corresponding to the annotation information as input and produces the sequence of outputs corresponding to each word as the encoder output Output_comment.
32) The AST-Transformer decoder part, which consists of two decoder sublayers (see FIG. 5): the annotation decoder sublayer and the encoder sublayer corresponding to the abstract syntax tree. Each decoder sublayer consists of six substructures: a multi-head masked self-attention model, a multi-head self-attention model, a feed-forward network, and three normalization models. The annotation decoder sublayer uses two linear transformations of the annotation encoder output Output_comment as key and value, while the query is obtained by a linear transformation of the output generated by the decoder; the encoder sublayer corresponding to the abstract syntax tree uses two linear transformations of the tree encoder output Output_tree as key and value, with a linear transformation of the output of the annotation decoder sublayer as query. The AST-Transformer also uses the vector representation v_path of the path from the root node to the recommendation point as the first input of the decoder.
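The key/value/query wiring of the two decoder sublayers can be illustrated with single-head scaled dot-product attention; the real model is multi-headed with masking and normalization, and all weight matrices below are random stand-ins:

```python
import numpy as np

def attention(query, key, value):
    """Scaled dot-product attention (single head, no mask), used here only
    to show how the two decoder sublayers are wired."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

rng = np.random.default_rng(0)
d = 8
dec_state   = rng.normal(size=(3, d))   # decoder output so far
out_comment = rng.normal(size=(5, d))   # Output_comment from the ordinary encoder
out_tree    = rng.normal(size=(6, d))   # Output_tree from the tree encoder
Wq1, Wk1, Wv1, Wq2, Wk2, Wv2 = (rng.normal(size=(d, d)) for _ in range(6))

# Sublayer 1: key and value from the comment encoder, query from the decoder.
h = attention(dec_state @ Wq1, out_comment @ Wk1, out_comment @ Wv1)
# Sublayer 2: key and value from the tree encoder, query from sublayer 1.
out = attention(h @ Wq2, out_tree @ Wk2, out_tree @ Wv2)
print(out.shape)  # (3, 8)
```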
Step 4) extracting the code written by the user and its related annotations in the actual recommendation process, and predicting the next possible code by using the AST-Transformer model trained in step 3) together with the beam search algorithm;
41) In the actual recommendation process, the information written by the user is taken as the code information, and the vector-form abstract syntax tree vecAST is obtained by using the methods of step 2) and step 31); if the line above the current position is an annotation, that annotation is selected as the annotation information; otherwise the annotation of the function containing the current position is used as the annotation information; the vector sequence V = (v1, v2, ..., vm) is then obtained by using the method of step 22).
42) Taking the vector-form abstract syntax tree and the vector sequence obtained in step 41) as the input of the trained AST-Transformer, and obtaining the recommended code line through the beam search algorithm.
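The beam search in step 42) can be sketched independently of the model by searching over a fixed table of per-step token probabilities; in the actual method these probabilities would come from the trained AST-Transformer:

```python
import math

def beam_search(step_probs, width=2):
    """Minimal beam search: step_probs[t] maps each candidate token to its
    probability at step t; beams keep (token sequence, log-probability)."""
    beams = [([], 0.0)]
    for probs in step_probs:
        candidates = [(seq + [tok], score + math.log(p))
                      for seq, score in beams for tok, p in probs.items()]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0][0]

# Toy per-step distributions (values are illustrative, not model outputs).
steps = [{"gap": 0.7, "int": 0.3}, {"=": 0.9, "+": 0.1}]
print(beam_search(steps))  # ['gap', '=']
```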
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (2)

1. A code recommendation method based on a tree neural network is characterized by comprising the following steps:
step 1) preprocessing a data set formed by code blocks, randomly selecting a code line to be recommended from the code blocks, using a code line sequence to be recommended as a label, and using codes above the code line to be recommended and related annotation information thereof as attributes to construct a training set;
step 2) processing the code part above the code line to be recommended of each sample in the training set, analyzing the code part into an abstract syntax tree, extracting path information from a root node to the code line to be recommended, and adjusting the abstract syntax tree and annotation information of the samples;
step 3) training an AST-Transformer model obtained by improving a Transformer model, taking an abstract syntax tree, annotation information and path information from a root node to a recommendation point as input, and taking a recommended code line sequence as output;
step 4) extracting the code written by the user and the related annotation information thereof, and predicting the next possible code by using the trained AST-transformer model obtained in the step 3) and combining with a beam search algorithm;
the step 1) specifically comprises the following steps:
11) filtering out of the data set all code blocks of five lines or fewer;
12) performing lexical analysis on the codes to form word sequences, calculating similarity among code blocks, detecting repeated codes, and removing the repeated codes;
13) randomly selecting a line from a code block as a code line to be recommended, using an upper code of the code line to be recommended and annotation information related to the code line to be recommended as input characteristics, and using a triple form of < the upper code, the annotation information and the code line to be recommended > as a sample to construct a training set;
the step 2) specifically comprises the following steps:
21) analyzing the codes above the code line to be recommended obtained in the step 13) to form an abstract syntax tree, and representing each node value into a vector form by using a hump detection and subword embedding mode to obtain an abstract syntax tree represented by a vector;
22) representing each word into a vector form by using a word2vec word embedding mode for the annotation information corresponding to each sample in the training set to obtain a vector sequence corresponding to the annotation information;
23) extracting path information from the root node to a code line to be recommended from the abstract syntax tree in the step 21);
the AST-Transformer model in the step 3) specifically comprises the following steps:
31) an AST-Transformer encoder part comprising two encoders: a tree encoder, which receives as input the vector-represented abstract syntax tree obtained in step 21); and an ordinary encoder, which takes the vector sequence corresponding to the annotation information as input;
32) an AST-Transformer decoder part comprising two sub-layers: an annotation decoder sublayer, which takes two linear transformations of the output of the encoder corresponding to the annotation information as key and value, with the query obtained from the decoder's output; and an encoder sublayer corresponding to the abstract syntax tree, which takes two linear transformations of the tree encoder's output as key and value and the output of the annotation decoder sublayer as query; the AST-Transformer also uses the vector representation of the path from the root node to the recommendation point as the first input of the decoder.
2. The tree neural network-based code recommendation method according to claim 1, wherein the step 4) specifically comprises:
41) taking the information written by the user as the code information, and obtaining the vector-form abstract syntax tree by using the methods of step 2) and step 31); if the line above the current position is an annotation, selecting that annotation as the annotation information; otherwise, if the function containing the current position has an annotation, taking that annotation as the annotation information; and obtaining the vector sequence by using the method of step 22);
42) and taking the abstract syntax tree and the vector sequence in the vector form obtained in the step 41) as the input of the trained AST-Transformer, and obtaining a recommended code line through a beam search algorithm.
CN202010185481.3A 2020-03-17 2020-03-17 Code recommendation method based on tree neural network Active CN111459491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010185481.3A CN111459491B (en) 2020-03-17 2020-03-17 Code recommendation method based on tree neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010185481.3A CN111459491B (en) 2020-03-17 2020-03-17 Code recommendation method based on tree neural network

Publications (2)

Publication Number Publication Date
CN111459491A CN111459491A (en) 2020-07-28
CN111459491B true CN111459491B (en) 2021-11-05

Family

ID=71683181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010185481.3A Active CN111459491B (en) 2020-03-17 2020-03-17 Code recommendation method based on tree neural network

Country Status (1)

Country Link
CN (1) CN111459491B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112035165B (en) * 2020-08-26 2023-06-09 山谷网安科技股份有限公司 Code clone detection method and system based on isomorphic network
CN112114791B (en) * 2020-09-08 2022-03-25 南京航空航天大学 Code self-adaptive generation method based on meta-learning
CN112162775A (en) * 2020-10-21 2021-01-01 南通大学 Java code annotation automatic generation method based on Transformer and mixed code expression
CN112397155B (en) * 2020-12-01 2023-07-28 中山大学 Single-step reverse synthesis method and system
CN112925563B (en) * 2021-02-24 2022-01-04 南通大学 Code reuse-oriented source code recommendation method
CN113296784B (en) * 2021-05-18 2023-11-14 中国人民解放军国防科技大学 Container base mirror image recommendation method and system based on configuration code characterization
CN114461196B (en) * 2022-02-21 2022-09-27 广州图创计算机软件开发有限公司 Intelligent auxiliary method and system for software development
CN116521133A (en) * 2023-06-02 2023-08-01 北京比瓴科技有限公司 Software function safety requirement analysis method, device, equipment and readable storage medium
CN117648079B (en) * 2024-01-29 2024-05-14 浙江阿里巴巴机器人有限公司 Task processing, code completion, code question answering and task processing model training method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664237A (en) * 2018-05-14 2018-10-16 北京理工大学 It is a kind of that method is recommended based on heuristic and neural network non-API member
CN108717470A (en) * 2018-06-14 2018-10-30 南京航空航天大学 A kind of code snippet recommendation method with high accuracy
CN108717423A (en) * 2018-04-24 2018-10-30 南京航空航天大学 A kind of code segment recommendation method excavated based on deep semantic
CN108733359A (en) * 2018-06-14 2018-11-02 北京航空航天大学 A kind of automatic generation method of software program
CN109522011A (en) * 2018-10-17 2019-03-26 南京航空航天大学 A kind of code line recommended method of context depth perception live based on programming
CN109614103A (en) * 2018-10-19 2019-04-12 北京硅心科技有限公司 A kind of code completion method and system based on character
CN109739494A (en) * 2018-12-10 2019-05-10 复旦大学 A kind of API based on Tree-LSTM uses code building formula recommended method
CN109783079A (en) * 2018-12-21 2019-05-21 南京航空航天大学 A kind of code annotation generation method based on program analysis and Recognition with Recurrent Neural Network
CN109799990A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Source code annotates automatic generation method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10671355B2 (en) * 2018-01-21 2020-06-02 Microsoft Technology Licensing, Llc. Code completion with machine learning
US20200034707A1 (en) * 2018-07-27 2020-01-30 drchrono inc. Neural Network Encoders and Decoders for Physician Practice Optimization
US11157384B2 (en) * 2019-06-27 2021-10-26 Intel Corporation Methods, systems, articles of manufacture and apparatus for code review assistance for dynamically typed languages
CN110442514B (en) * 2019-07-11 2024-01-12 扬州大学 Method for realizing defect repair recommendation based on learning algorithm
CN110750240A (en) * 2019-08-28 2020-02-04 南京航空航天大学 Code segment recommendation method based on sequence-to-sequence model

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109799990A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Source code annotates automatic generation method and system
CN108717423A (en) * 2018-04-24 2018-10-30 南京航空航天大学 A kind of code segment recommendation method excavated based on deep semantic
CN108664237A (en) * 2018-05-14 2018-10-16 北京理工大学 It is a kind of that method is recommended based on heuristic and neural network non-API member
CN108717470A (en) * 2018-06-14 2018-10-30 南京航空航天大学 A kind of code snippet recommendation method with high accuracy
CN108733359A (en) * 2018-06-14 2018-11-02 北京航空航天大学 A kind of automatic generation method of software program
CN109522011A (en) * 2018-10-17 2019-03-26 南京航空航天大学 A kind of code line recommended method of context depth perception live based on programming
CN109614103A (en) * 2018-10-19 2019-04-12 北京硅心科技有限公司 A kind of code completion method and system based on character
CN109739494A (en) * 2018-12-10 2019-05-10 复旦大学 A kind of API based on Tree-LSTM uses code building formula recommended method
CN109783079A (en) * 2018-12-21 2019-05-21 南京航空航天大学 A kind of code annotation generation method based on program analysis and Recognition with Recurrent Neural Network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on Automatic Detection and Repair Methods for Java API Documentation Defects; Wang Changzhi; China Master's Theses Full-text Database, Information Science & Technology; 2020-02-15; full text *
Natural Software Revisited; Musfiqur Rahman, Dharani Palani, Peter C. Rigby; 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE); 2019-08-26; full text *
SENSORY: Leveraging Code Statement Sequence Information for Code Snippets Recommendation; Lei Ai, Zhiqiu Huang, Weiwei Li, Yu Zhou, Yaoshen Yu; 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC); 2019-07-09; full text *
SLAMPA: Recommending Code Snippets with Statistical Language Model; Shufan Zhou, Hao Zhong, Beijun Shen; 2018 25th Asia-Pacific Software Engineering Conference (APSEC); 2019-05-23; full text *
Research on Code Recommendation Based on Program Analysis and Neural Network Language Models; Zhang Junnan; China Master's Theses Full-text Database, Information Science & Technology; 2018-01-15; full text *

Also Published As

Publication number Publication date
CN111459491A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111459491B (en) Code recommendation method based on tree neural network
US11657230B2 (en) Referring image segmentation
WO2019085779A1 (en) Machine processing and text correction method and device, computing equipment and storage media
CN111709518A (en) Method for enhancing network representation learning based on community perception and relationship attention
CN113221571B (en) Entity relation joint extraction method based on entity correlation attention mechanism
CN113657123A (en) Mongolian aspect level emotion analysis method based on target template guidance and relation head coding
CN113190219A (en) Code annotation generation method based on recurrent neural network model
CN115983274B (en) Noise event extraction method based on two-stage label correction
CN116661805B (en) Code representation generation method and device, storage medium and electronic equipment
CN113761893A (en) Relation extraction method based on mode pre-training
CN112507337A (en) Implementation method of malicious JavaScript code detection model based on semantic analysis
CN115391563B (en) Knowledge graph link prediction method based on multi-source heterogeneous data fusion
CN115438709A (en) Code similarity detection method based on code attribute graph
CN116402066A (en) Attribute-level text emotion joint extraction method and system for multi-network feature fusion
CN117521672A (en) Method for generating continuous pictures by long text based on diffusion model
CN114880307A (en) Structured modeling method for knowledge in open education field
CN117251522A (en) Entity and relationship joint extraction model method based on latent layer relationship enhancement
KR20220066554A (en) Method, apparatus and computer program for buildding knowledge graph using qa model
CN112270358A (en) Code annotation generation model robustness improving method based on deep learning
CN116663567A (en) Aspect-level emotion triplet extraction method and system based on semantic enhancement double encoders
CN116258147A (en) Multimode comment emotion analysis method and system based on heterogram convolution
CN115934883A (en) Entity relation joint extraction method based on semantic enhancement and multi-feature fusion
CN113296784B (en) Container base mirror image recommendation method and system based on configuration code characterization
CN115422945A (en) Rumor detection method and system integrating emotion mining
CN115169363A (en) Knowledge-fused incremental coding dialogue emotion recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant