US11262984B2 - Multi-lingual line-of-code completion system - Google Patents

Multi-lingual line-of-code completion system Download PDF

Info

Publication number
US11262984B2
US11262984B2 (application US16/680,328)
Authority
US
United States
Prior art keywords
neural
source code
token
tokens
attention
Prior art date
Legal status
Active, expires
Application number
US16/680,328
Other versions
US20210034335A1 (en)
Inventor
Alexey Svyatkovskiy
Shengyu Fu
Neelakantan Sundaresan
Shao Kun Deng
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date
Filing date
Publication date
Assigned to Microsoft Technology Licensing, LLC (assignment of assignors' interest). Assignors: Shao Kun Deng; Shengyu Fu; Neelakantan Sundaresan; Alexey Svyatkovskiy
Priority to US16/680,328 (granted as US11262984B2)
Application filed by Microsoft Technology Licensing LLC
Priority to CN202080054713.XA (published as CN114585999A)
Priority to EP20750843.3A (granted as EP4007951B1)
Priority to PCT/US2020/037102 (published as WO2021021322A2)
Publication of US20210034335A1
Priority to US17/580,609 (granted as US11809842B2)
Publication of US11262984B2
Application granted
Priority to US18/232,326 (published as US20240028306A1)
Legal status: Active
Adjusted expiration

Classifications

    • G Physics; G06 Computing, calculating or counting; G06F Electric digital data processing; G06N Computing arrangements based on specific computational models
    • G06F 8/33 Intelligent editors (arrangements for software engineering; creation or generation of source code)
    • G06F 8/35 Creation or generation of source code, model driven
    • G06F 8/42 Syntactic analysis (transformation of program code; compilation)
    • G06F 8/71 Version control; configuration management
    • G06F 8/75 Structural analysis for program understanding
    • G06F 16/9027 Trees (indexing; data structures therefor; storage structures)
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/0454
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G06N 5/02 Knowledge representation; symbolic representation
    • G06N 5/04 Inference or reasoning models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06N 20/00 Machine learning

Definitions

  • Software development environments are often used to aid software developers (i.e., users, programmers, etc.) to develop program code.
  • the software development environment may include a source code editor and other tools that a developer utilizes to write and test their programs.
  • Some software development environments include a code completion feature that provides assistance while the developer is editing code by automatically presenting a list of possible candidates based on one or more characters (e.g., letters, symbols, etc.) that a developer has typed into a source code editor. A popup menu may appear with several suggested code elements that the developer may utilize. This assistance is beneficial since it speeds up the development time and reduces common errors, such as typos.
  • the automatic code completion feature may be problematic when the code completion system does not recognize an out-of-vocabulary code element, requires a lot of memory, takes too long to generate a list of candidates, and/or generates a list of candidates that are not relevant.
  • a multi-lingual line-of-code completion system is used to generate the most likely candidates to complete a line of source code during a source code editing session.
  • a predicted string of characters to complete the line of code may include various types of elements, such as, local variables, methods, arguments, keywords, and delimiters arranged in an ordered sequence.
  • the system uses a model to predict the ordered sequence which is trained using a conditional language modeling objective on a large unsupervised dataset that includes source code programs written in different programming languages (e.g., C, Java, Python, C++).
  • Each source code program in the training dataset need not be written in the same programming language.
  • the training dataset may be composed of numerous source code programs, each of which may be written in a different programming language.
  • Each source code program in the training dataset is encoded into a sequence composed of tokens and/or subtokens.
  • the frequently-used elements in a programming language are encoded into tokens and the less frequently-occurring elements are encoded into combinations of characters referred to as subtokens. This reduces the need to store a large vocabulary and provides better accuracy for out-of-vocabulary tokens.
  • the multi-lingual line-of-code completion system is based on a neural transformer model.
  • the neural transformer model is comprised of multiple decoder blocks.
  • a decoder block includes a multi-head self-attention layer coupled to a multi-layer one-dimensional convolutional neural network. Layer normalization is applied before and after the multi-head self-attention layer in order to reduce the training time of the neural transformer model.
  • a beam search is used to generate candidate sequences.
  • the beam search uses the top k subtokens/tokens, identified from each iteration of the neural transformer model, to expand a partial candidate sequence of tokens/subtokens likely to complete a line of source code.
  • the beam search generates a search tree but only keeps the top k nodes at each inference level to expand the search. The search ends when the end-of-line token appears as the most probable prediction.
  • FIG. 1 illustrates an exemplary code completion system having a training phase that generates a neural transformer model and an inference phase that uses the neural transformer model to predict one or more candidate sequences to complete a line-of-code.
  • FIGS. 2A-2B are schematic diagrams illustrating an exemplary system and method to train the neural transformer model for line-of-code completion.
  • FIG. 3 is a schematic diagram illustrating an exemplary architecture of the transformer block shown in FIG. 2 .
  • FIGS. 4A-4B are schematic diagrams illustrating an exemplary architecture of the inference phase.
  • FIGS. 5A-5B are flow diagrams illustrating an exemplary method for training the neural transformer model for code completion.
  • FIGS. 6A-6B are flow diagrams illustrating an exemplary method for utilizing the neural transformer model in the inference phase.
  • FIG. 7 is a schematic diagram illustrating an exemplary user interface showing code completion candidates for a line of code in an exemplary source code program.
  • FIG. 8 is a schematic diagram illustrating an exemplary beam search that generates a search tree of candidate sequences.
  • FIG. 9 is a block diagram illustrating an operating environment.
  • a line of source code may consist of various elements (e.g., keywords, delimiters, variables, methods, constants, operators, etc.) that are combined in a particular order in accordance with the grammar of the underlying programming language to form an expression.
  • the line of source code may be a method invocation, a program statement, a definition, an expression, and so forth.
  • a line of source code does not include a blank line or a comment line and ends with an end-of-line character.
  • the code completion tool uses a neural network machine learning model to predict the next string of code elements to complete a line of source code.
  • a line of source code refers to a physical line of source code that ends with an end-of-line character and which excludes blank lines and comment lines.
  • the model is trained on an unsupervised dataset that may include source code from different programming languages (i.e., multi-lingual). Unsupervised learning draws inferences from datasets consisting of input data without labeled responses.
  • a vocabulary is formed from these datasets that includes tokens and/or subtokens found in the source code files.
  • a token is a single element in the grammar of a programming language such as a keyword, variable name, operator, delimiter, etc.
  • a subtoken is a portion of a token, at a granularity in between a whole token and a single character.
  • the subtokens are used to account for rare or unseen tokens (i.e., out-of-vocabulary tokens) that may appear in a target source code program.
  • the use of the subtokens allows the model to learn and generate the out-of-vocabulary tokens.
  • Byte pair encoding is a data compression technique in which most frequently co-occurring pairs of Unicode characters throughout the training source code dataset are substituted with an out-of-vocabulary character.
  • the byte pair encoding results in an extraction of token/subtokens in sequences of frequently co-occurring Unicode characters.
  • byte pair encoding is used to extract ordered sequences of Unicode characters to form tokens and subtokens from a syntactic representation of the source code programs of the training dataset.
  • An ordered sequence represents a source code fragment having T tokens/subtokens.
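  • As a concrete illustration of the byte pair encoding step described above, the toy routine below learns merge rules over character sequences. It is a generic BPE sketch written for this description (function and variable names are illustrative), not the tokenizer used by the system.
```python
from collections import Counter

def learn_bpe_merges(token_counts, num_merges):
    """Toy byte pair encoding: repeatedly merge the most frequent adjacent
    pair of symbols across the corpus. token_counts maps a token string
    (e.g., an identifier) to its frequency in the training corpus."""
    vocab = {tuple(tok): count for tok, count in token_counts.items()}
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += count
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        new_vocab = {}
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])   # apply the merge
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = count
        vocab = new_vocab
    return merges

# Frequently occurring tokens tend to end up as whole vocabulary entries, while
# rare tokens remain split into subtokens (e.g., "reduce" -> "red", "uce").
corpus = {"return": 50, "reduce": 2, "square": 2}
print(learn_bpe_merges(corpus, num_merges=8))
```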
  • the ordered sequences of tokens/subtokens are translated into token/subtoken embeddings and positional embeddings which are vector representations of a source code fragment.
  • the neural network machine learning model is a multi-layer transformer model.
  • a transformer is a neural network architecture that handles dependencies between its input and output with attention and convolution and without using recurrent neural networks (RNN) (e.g., long short-term memory (LSTM) network).
  • a shortcoming of an RNN-based system is the sequential nature of the RNN, where each hidden state relies on the previous hidden state. This makes RNN-based systems hard to parallelize and unable to take advantage of fast computing devices, such as graphics processing units. Furthermore, RNN-based systems struggle to retain long-range dependencies within long input and output sequences.
  • the transformer overcomes these obstacles with attention. Attention is a mechanism that identifies which parts of an input sequence are relevant to each token/subtoken in the output sequence. The attention mechanism allows the transformer to access the entire input sequence all at once.
  • a transformer may act as an encoder or a decoder where the encoder maps an input sequence of symbol representations to a sequence of continuous representations and the decoder generates an output sequence of symbols from the sequence of continuous representations.
  • the encoder-decoder architecture is not a good fit for conditional code generation or code completion tasks and is better suited for machine translation and patch generation type tasks.
  • a variant of the transformer model is used that is composed of decoder blocks having masked self-attention and convolutional layers.
  • a beam search is used to generate one or more candidate sequences to complete a line of source code.
  • the beam search uses the probability distribution generated by the neural transformer model to identify the top k tokens/subtokens likely to be the next token or subtoken in a candidate sequence.
  • the beam search expands the search by instantiating new partial sequences using each of the selected tokens/subtokens identified by the neural transformer model's probability distribution.
  • the search continues generating new partial sequences from the top k tokens/subtokens identified by the output distributions from the neural transformer model until the search ends. The search may end when the end-of-line token appears as the most probable next token.
  • a multi-layer transformer-decoder neural network model with multi-head self-attention is utilized to estimate this probability distribution for a source code corpus using an unsupervised autoregressive (AR) technique.
  • the modeling objective is to maximize the following likelihood: Σ_i log P(m_i | m_0, m_1, . . . , m_{i−1}; Θ), where m_i is the i-th token/subtoken in the ordered sequence and Θ denotes the model parameters.
  • the parameters may include attention lengths, the number of attention heads, the number of decoder blocks, embedding dimensions, embedding matrices, and the number of hidden units per layer which are trained using a stochastic gradient descent optimization procedure.
  • n is the number of layers
  • T is the ordered sequence length
  • W e is the token/subtoken embedding matrix
  • W p is the position embedding matrix
  • e represents an embedding for a token/subtoken in the vocabulary
  • p represents an embedding for a position of a token/subtoken.
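  • The definitions above (n, T, W_e, W_p, e, p) correspond to a standard GPT-style transformer-decoder formulation; as an illustrative assumption rather than a quotation from the patent, the computation can be written as:
```latex
% Assumed GPT-style reconstruction; the exact form in the patent may differ.
\begin{aligned}
  h_0 &= U\,W_e + W_p,\\
  h_l &= \text{transformer\_block}(h_{l-1}), \qquad l = 1, \dots, n,\\
  P(m_t \mid m_0, \dots, m_{t-1}) &= \operatorname{softmax}\!\big(h_n\,W_e^{\top}\big),
\end{aligned}
```
  • Here U denotes the ordered input sequence of T token/subtoken indices (one row per position) and h_l is the hidden state after the l-th of the n decoder layers.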
  • FIG. 1 illustrates a block diagram of an exemplary code completion system 100 in which various aspects of the invention may be practiced.
  • system 100 includes a training phase 102 which trains a transformer model 122 and an inference phase 104 that utilizes the transformer model 122 in a line-of-code completion system.
  • the training phase 102 may utilize a source code repository 106 , a source code extraction component 108 , a syntactic analyzer 112 , a token/subtoken sequence extraction component 116 , and a model training and validation component 120 .
  • the training phase 102 pre-trains a transformer model from a diverse corpus of unlabeled source code programs. This is referred to as unsupervised learning since the model draws inferences from the input data without labeled responses.
  • the source code extraction component 108 extracts selected source code programs 110 from the source code repository 106 to obtain the training and validation datasets.
  • the source code repository 106 may be a file archive and web hosting facility that stores large amounts of source code either privately or publicly.
  • the source code repository 106 can be structured as a version control system, such as GIT, Mercurial, etc.
  • the source code programs residing in the source code repository 106 vary and may be written in different programming languages.
  • the source code extraction component 108 obtains several selected source code programs 110 which may be written in the same or different programming languages.
  • a programming language utilizes a context-free grammar that is a set of rules that describe all possible strings in a formal programming language.
  • the selected source code programs 110 can come from different domains, such as without limitation, scientific computing, web development, dataflow programming, machine learning, and the like.
  • a syntactic analyzer 112 transforms each of the selected source code programs 110 into a concrete syntax tree 114 .
  • the concrete syntax tree 114 represents the source code text in the parsed form.
  • the concrete syntax tree 114 may also be a parse tree.
  • the syntactic analyzer 112 may be a parser, part of a front-end compiler, part of a language compiler, or part of a compilation tool.
  • a concrete syntax tree 114 represents the syntactic structure of a program in a hierarchical or tree structure.
  • the concrete syntax tree 114 is an n-ary tree data structure that includes nodes that represent a construct in the grammar of the programming language of a program.
  • the concrete syntax tree 114 includes one root node, multiple internal nodes, and multiple terminal nodes.
  • the terminal nodes represent the tokens.
  • a token is a symbol that represents an operand or an operator.
  • the concrete syntax tree 114 differs from an abstract syntax tree where the terminal nodes represent operands.
  • the concrete syntax tree 114 for a selected source code program 110 is passed to the token/subtoken sequence extraction component 116 .
  • the token/subtoken sequence extraction component 116 parses the concrete syntax tree 114 of each source code program and outputs a sequence of T tokens and/or subtokens.
  • the token/subtoken sequence extraction component 116 performs byte pair encoding to extract frequently-occurring tokens and to extract subtokens from less-occurring tokens.
  • a subtoken is a portion of a token.
  • the token “reduce” has been split into the subtokens “red” and “uce” and the token “square” has been split into the subtokens “squ” and “are”.
  • the T-ordered sequences of tokens are then mapped into numeric vectors and then into an embedding.
  • An embedding is a learned representation for the text-based tokens/subtokens where tokens or subtokens that have a common meaning have a common representation.
  • the token/subtoken embedding represents the learned representation for the token/subtoken.
  • the transformer model does not read each token/subtoken sequentially and as such, has no knowledge of the token/subtoken's position in a sequence without additional position information.
  • the position embedding is used to embed position information about a token/subtoken's position in a sequence into the transformer model.
  • the token/subtoken embeddings are input into the model training and validation component 120 .
  • the neural transformer model 122 is used in the inference phase 104 of the code completion system.
  • the inference phase 104 may be embodied as a function or feature integrated into a source code editor, integrated development environment (IDE), and/or stand-alone application.
  • Code completion may be embodied as a tool or feature that can be an add-on, plug-in, extension and/or component of a source code editor and/or IDE.
  • the inference phase 104 includes a source code editor 130 , a code completion component 142 , and the model 122 .
  • a source code editor 130 may include a user interface 132 and a parser 134 .
  • the user interface 132 includes a set of features or functions for developing (e.g., writing, editing, testing) a source code program.
  • the user interface 132 may utilize a pop-up window to present a list of possible candidates 136 for completion thereby allowing a developer to browse through the candidates and to select one from the list.
  • the candidates may appear inline with the current source code line as the user is typing characters into the source code program.
  • the parser 134 reads the characters entered into a source code program through the source code editor 130 and generates a corresponding concrete syntax tree 140 .
  • the parser 134 also updates the concrete syntax tree 140 as the developer creates and edits the source code in the source code editor 130 .
  • the user interface 132 will request candidates to complete the current line of source code.
  • the user interface may detect that the user has entered a particular character or string of characters and automatically initiate a request for candidates to complete a line-of-code. This character is referred to as a marker character.
  • the user interface 132 will then send a request 138 for candidates from the code completion component 142 to present to the developer.
  • the user may request candidates by entering a particular keystroke or sequence of keystrokes, such as the combination of the CTRL key with the whitespace key.
  • the system may automatically display, in a dimmed color, a single top candidate at the end of the current source code line regardless of a marker character.
  • the system builds and continuously updates a tree of candidates in the background regardless of whether the user decides to trigger the candidate or not.
  • the candidate is automatically displayed in the user interface when the user has been idle for a period of time. If the user wants to accept the candidate, the user may type in a particular keystroke or combination of keystrokes (e.g., CTRL and I) to accept the candidate. In this case, the cursor position will advance to the end of the suggested code sequence and the dimmed color of the candidate code will change to the normal color of the code. If the user does not want to use the candidate, the candidate disappears when the user continues typing. In this case, the system would refine the code sequence based on the pre-fix filter of the tree of candidates based on the newly typed code.
  • the code completion component 142 tracks the characters that are input into the source code editor and services requests for candidates to complete a line of source code.
  • the code completion component uses the model 122 to generate candidates based on the current context of the source code in the editor.
  • the candidates are ranked according to their respective probability with the candidates having the highest probability at the top.
  • a select number of candidates 136 is then returned to the source code editor 130 and displayed in the user interface 132 .
  • FIG. 1 shows components of the system in one aspect of an environment in which various aspects of the invention may be practiced.
  • the exact configuration of the components shown in FIG. 1 may not be required to practice the various aspects, and variations in the configuration shown in FIG. 1 and in the type of components may be made without departing from the spirit or scope of the invention.
  • the training phase 102 may be executed in one computing environment and the inference phase 104 may be executed in the same computing environment or in a separate computing environment as the training phase 102 .
  • the various computing environments are described in further detail below.
  • FIG. 2A illustrates further details of the components and process 200 used to train the neural transformer model.
  • the source code extraction component 108 obtains source code programs for use as the training and validation datasets.
  • Each selected source code file 202 is parsed into a concrete syntax tree 204 by a syntactic analyzer 112 .
  • the concrete syntax tree 204 is traversed by the token/subtoken sequence extraction component 116 .
  • the token/subtoken sequence extraction component 116 may utilize a tokenizer 206 to extract tokens from each line of source code represented by the concrete syntax tree.
  • byte pair encoding is used as the tokenizer 206 .
  • Byte pair encoding is used to build a vocabulary of tokens/subtokens. Although its name uses the word “byte”, byte pair encoding operates on Unicode code points and not byte sequences. This encoding technique partitions less-occurring tokens into subtokens and the more frequently occurring tokens are left intact.
  • T is 1024, with each sequence consisting of 1024 tokens/subtokens and representing a particular context of the source code program.
  • the sequences from the various source code programs are then input to the model training and validation component 210 .
  • Neural networks are trained iteratively, making multiple passes over the training dataset before converging to a minimum.
  • An epoch represents the entire training dataset passed forwards and backwards through the neural network once. Since the training dataset is very large, it is partitioned into smaller batches. The training is iterative and the entire dataset is passed through the neural network in multiple iterations. Each training iteration includes forward propagation, loss calculation, backpropagation steps followed by updating the weights.
  • the neural network has multiple layers so that more detailed relationships within the data are learned as well as how the features interact with each other on a non-linear level.
  • the model architecture, training procedure, data normalization and vocabulary encoding procedures are hyperparameters that are tailored to meet a particular objective. The values of the hyperparameters influence how the parameters are learned.
  • the hyperparameters may include the following:
    (1) token/subtoken and position embedding layers of dimensions 30000×768 and 1024×768, respectively;
    (2) twelve transformer blocks, each consisting of two convolutions, masked self-attention, and layer normalization layers;
    (3) training procedure: auto-regressive, with a cross-entropy loss optimization objective; a sequence length of 1024 tokens/subtokens; a mini-batch size of 8; 8 gradient accumulation steps per weight update; the Adam stochastic optimization procedure; and a learning rate of 0.0001;
    (4) data normalization procedure: normalize all string and numerical literals, keeping the ten most frequent; and
    (5) vocabulary encoding procedure: extract a joint subtoken vocabulary from the multi-lingual code corpus using byte-pair encoding; preserve the ten most frequent string and numerical literals by encoding them as a single token during the byte-pair encoding procedure; and introduce special control flow tokens to denote end-of-line.
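  • The hyperparameter values listed above can be gathered into a single configuration object; the sketch below is illustrative only, and the key names are assumptions rather than identifiers from the patent.
```python
# Illustrative configuration mirroring the hyperparameters listed above.
CONFIG = {
    "vocab_size": 30000,             # token/subtoken embedding rows (30000 x 768)
    "max_sequence_length": 1024,     # position embedding rows (1024 x 768), sequence length T
    "embedding_dim": 768,
    "num_decoder_blocks": 12,        # each: two convolutions, masked self-attention, layer norm
    "training": {
        "objective": "auto-regressive, cross-entropy loss",
        "mini_batch_size": 8,
        "gradient_accumulation_steps": 8,
        "optimizer": "Adam",
        "learning_rate": 1e-4,
    },
    "data_normalization": "normalize string/numeric literals, keep the ten most frequent",
    "vocabulary_encoding": "joint multi-lingual subtoken vocabulary via byte-pair encoding",
}
```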
  • the training dataset is partitioned into batches with each batch of sequences running through the training process.
  • the sequences are initially transformed into numeric vectors and then embeddings.
  • An embedding is a mapping of discrete categorical variables to a vector of continuous numbers.
  • the token/subtoken embeddings represent the tokens and/or subtokens in a sequence and the positional embeddings represents the order of a token/subtoken in a sequence.
  • Each token/subtoken embedding 212 and its corresponding positional embedding 214 are combined to form a context tensor 216 .
  • a tensor is a mathematical object that has indices and components that follow certain transformation rules. The tensor is a partially defined computation. It is a generalization of vectors and matrices and represented as an n-dimensional array. The tensor in this instance represents a context of a source code program.
  • the size of the context tensor 216 is T × the size of the embedding vector (e.g., the embedding size), where T is the length of the token/subtoken sequence.
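  • A minimal sketch of forming the context tensor from the token/subtoken and positional embeddings follows; PyTorch is used for illustration, and element-wise summation is an assumption about how the two embeddings are combined.
```python
import torch
import torch.nn as nn

# Illustrative sizes taken from the hyperparameters listed earlier.
vocab_size, max_len, embed_dim = 30000, 1024, 768
token_embedding = nn.Embedding(vocab_size, embed_dim)      # token/subtoken embedding matrix
position_embedding = nn.Embedding(max_len, embed_dim)      # position embedding matrix

def make_context_tensor(token_ids):
    """token_ids: (batch, T) tensor of token/subtoken indices.
    Returns a (batch, T, embed_dim) context tensor."""
    positions = torch.arange(token_ids.size(1)).unsqueeze(0)    # (1, T)
    return token_embedding(token_ids) + position_embedding(positions)

context = make_context_tensor(torch.randint(0, vocab_size, (8, 1024)))
print(context.shape)   # torch.Size([8, 1024, 768]) -> T x embedding size per sample
```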
  • the token/subtoken embeddings 212 are learned together with the parameters of the neural transformer model.
  • the output hidden state of the neural transformer model 228 is then multiplied by the linear projection matrix A 230.
  • the hidden state vector h_T ∈ R^{d_h} encodes the information learned by the neural transformer model 224 from the context tensors 216.
  • a bias vector b_1 is added to produce the unnormalized predictions, which are then normalized using the softmax function 236.
  • the neural transformer model 224 may be composed of one or more transformer blocks 226 A, 226 B.
  • a transformer block 226 may be configured with encoder and decoder blocks and/or with only decoder blocks.
  • FIG. 3 shows one aspect of the neural transformer model 224 configured with multiple decoder blocks 306 A, 306 N.
  • a decoder block 306 A, 306 N may include a first normalization layer 308 , followed by a masked self-attention layer 310 , followed by a second normalization layer 312 , and two layers of a one-dimensional convolutional neural network 314 A, 314 B.
  • Layer normalization normalizes the inputs across the features; the mean and standard deviation are computed across the feature dimensions.
  • Each token/subtoken flows through all the decoder blocks 306 A, 306 N along its own path.
  • the masked self-attention layer 310 allows the neural network to focus on certain features or inputs. Attention is described in "Attention Is All You Need," by Vaswani et al., in the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, Calif., as "mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key."
  • the masked self-attention layer 310 consists of two or more attention heads, 316 A, 316 B.
  • the multi-head self-attention heads run through the scaled dot product attention multiple times in parallel.
  • Each attention head 316 A, 316 B operates on a portion of the context tensor 302 .
  • Attention head 316A operates on a first segment 318A and attention head 316B operates on a second segment 318B.
  • each attention head 316A, 316B consists of a query matrix 320A, 320B and a key matrix 322A, 322B, both of dimension T × d_x, where T is the code sequence length and d_x is the embedding dimension.
  • the dot product is generated from the query matrix 320 with all the keys from the key matrix 322 , with the softmax function applied to obtain the weights, W 0 . . . W T , 324 A, 324 B, on the values resulting in a respective value matrix 326 A, 326 B.
  • the resulting values from the two value matrices are then concatenated 328 and then linearized 330 .
  • the concatenation layer 328 takes the T × d_v dimensional value matrices from each attention head to form a T × d_v dimensional matrix.
  • Layer normalization 312 is then applied to the output of the masked self-attention layer 310 .
  • the output of layer normalization 312 is then applied to the first neural network layer.
  • the output of the neural network at the last temporal step T is the hidden state vector h_T 228, which encodes information learned by the transformer blocks 226A, 226B relevant to the tokens/subtokens.
  • the hidden state vector h_T ∈ R^{d_h} 228 encodes information learned by the neural transformer model from the context tensors.
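  • The decoder block described above (layer normalization, masked multi-head self-attention, a second layer normalization, and two one-dimensional convolutional layers) can be approximated with the PyTorch sketch below; the residual connections and layer sizes are assumptions for illustration, not details taken from the patent.
```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Sketch of one decoder block: layer norm, masked self-attention,
    layer norm, then two one-dimensional convolutional layers."""
    def __init__(self, embed_dim=768, num_heads=12, conv_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.conv1 = nn.Conv1d(embed_dim, conv_dim, kernel_size=1)
        self.conv2 = nn.Conv1d(conv_dim, embed_dim, kernel_size=1)

    def forward(self, x):                       # x: (batch, T, embed_dim)
        T = x.size(1)
        # Causal mask: each position may only attend to itself and earlier positions.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.norm1(x)
        attended, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attended                        # residual connection (assumption)
        h = self.norm2(x)
        h = self.conv2(torch.relu(self.conv1(h.transpose(1, 2)))).transpose(1, 2)
        return x + h                            # residual connection (assumption)
```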
  • the inference phase utilizes a beam search to find the most likely candidate sequences.
  • a beam search iteratively generates tokens/subtokens by invoking the neural transformer model.
  • the output of the neural transformer model is a matrix of token probabilities for each position in a candidate sequence.
  • the beam search concentrates on the k most probable tokens at each iteration to get the best path to the most likely candidate sequence.
  • each of the k most probable tokens are concatenated with the tokens in the preceding iterations to form a partial candidate sequence.
  • a beam search uses a breadth-first search to build a search tree.
  • the search tree is composed of nodes at one or more inference levels. Each node represents a probability distribution generated by the neural transformer model for the tokens/subtokens in the model vocabulary. At each level, only the top k tokens/subtokens having the highest probabilities from the output distribution generated by the neural transformer model are expanded to the next inference level.
  • the variable k is preconfigured and also referred to as the beam width.
  • Each of the k subtokens/tokens is then expanded into a search that updates the current context sequence with the selected subtoken/token to input into the neural transformer model to generate an additional probability distribution for the next token in a sequence. This process is repeated until the end of a line token is predicted as being the next likely token candidate.
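  • A minimal sketch of the beam search loop described above is shown below. The model interface (a callable returning a token-to-probability mapping) and the end-of-line marker are hypothetical stand-ins used only for illustration.
```python
import heapq
import math

END_OF_LINE = "<EOL>"   # hypothetical end-of-line token

def beam_search(model, context, k=4, max_steps=64):
    """model(seq) is assumed to return a dict mapping each vocabulary
    token/subtoken to its probability of being the next element."""
    beams = [(0.0, list(context))]             # (log-probability, partial candidate)
    finished = []
    for _ in range(max_steps):
        candidates = []
        for log_p, seq in beams:
            probs = model(seq)
            # expand each beam with its top-k next tokens/subtokens
            for tok, p in heapq.nlargest(k, probs.items(), key=lambda kv: kv[1]):
                candidates.append((log_p + math.log(max(p, 1e-12)), seq + [tok]))
        # keep only the top-k partial sequences at this inference level
        beams = heapq.nlargest(k, candidates, key=lambda c: c[0])
        open_beams = []
        for log_p, seq in beams:
            (finished if seq[-1] == END_OF_LINE else open_beams).append((log_p, seq))
        beams = open_beams
        if not beams:                          # every surviving path ended with <EOL>
            break
    return sorted(finished + beams, key=lambda c: c[0], reverse=True)[:k]
```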
  • Turning to FIG. 4A, there are shown the components of the inference phase 400.
  • a code snippet 402 is entered into a source code editor which is transformed into a corresponding concrete syntax tree 404 .
  • the concrete syntax tree 404 is traversed, by a tokenizer 406 , to extract tokens and/or subtokens. Ordered sequences of length T are formed and vectorized 408 .
  • the beam search 410 uses the context vector 408 to initiate an inference process 412 using the probability distribution P_0 . . . P_|V| generated from the neural transformer model, where |V| is the size of the token/subtoken vocabulary.
  • the beam search 410 ends when the end-of-line token is selected as the most likely candidate to complete a partial candidate sequence.
  • FIG. 4B illustrates an exemplary search process 412 .
  • An embedding vector for each token and subtoken in a sequence 408 is obtained from the token/subtoken embedding matrix 428 and its corresponding positional vector from the positional embedding matrix 430 .
  • the token/subtoken embedding vector and its corresponding positional embedding vector are combined to form a context tensor 432 which is input into the neural transformer model 434 .
  • the output of the neural transformer model 434 is the vector with components h_0 . . . h_{d_h} 436.
  • the output of the transformer is multiplied by the linear projection layer 438 to generate the predicted embedding vectors 440 .
  • the token/subtoken embedding vectors 448 are used as the output classification matrix to generate the unnormalized predictions, or logits, V_0 . . . V_|V| 442.
  • the logits 442 are normalized using the softmax function 444 to generate the softmax prediction 446, P_0 . . . P_|V|.
  • FIGS. 5A-5B illustrate an exemplary method 500 for training the neural transformer model for code completion.
  • a set of hyperparameters is selected randomly.
  • a hyperparameter is a parameter associated with the neural network model architecture, the training algorithms, and data normalization, and is set before the start of the model training.
  • a hyperparameter is not learned by the deep learning or neural network.
  • the hyperparameters are selected at random from a set of categorical values or, for real valued hyperparameters like learning rate, drawn at random from a given range. Hyperparameters are tuned based on the performance of the neural transformer model when tested using the validation dataset.
  • the training of the neural transformer model is a computationally intensive effort which requires parallel data processing.
  • One or more clusters may be used to train the neural transformer model where each cluster contains a set of loosely or tightly coupled computers (e.g., processors, processing units, cores) that perform the same task simultaneously under the control of distributed controller.
  • Each computer works off the same copy of the neural transformer model and uses distributed data parallel training algorithms to synchronize the processing between the clusters.
  • the neural transformer model is trained using batching where the training dataset is partitioned into batches of a certain size and processed before the model is updated.
  • the size of a batch must be greater than or equal to one and less than or equal to the number of samples in the training dataset.
  • one or more source code repositories 106 are searched for source code programs. Each source code program may be written in the same or in different programming languages.
  • the source code repositories 106 can be widely-used code repositories, such as GitHub, internal code repositories, and/or combinations thereof.
  • the source code extraction component 108 extracts a number and type of source code programs that meet an intended objective, such as source code programs that are accessed frequently, source code programs that utilize a particular function (e.g., database operations, computer graphics programs, asynchronous methods, etc.), and the like. These source code programs are used to generate training and validation datasets (collectively, block 502 ).
  • Each selected source code program 110 is then parsed and/or compiled by the compilation component 112 to produce a concrete syntax tree (block 504 ).
  • Byte pair encoding is used to generate an ordered sequence of tokens/subtokens representing a context of the source code program.
  • the serialized sequence of syntax nodes and tokens is obtained from traversing the concrete syntax tree.
  • the concrete syntax tree is traversed in depth first order (i.e., depth first search, depth first traversal).
  • depth first traversal starts at a root node and traverses the tree in a single path until it reaches a terminal or leaf node. The traversal then backtracks until it can choose another path to traverse. This process is repeated until all nodes are visited.
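  • A sketch of this depth-first traversal over a concrete syntax tree is shown below; the node attributes (children, text) are hypothetical, chosen only to illustrate the traversal order.
```python
def extract_tokens(root):
    """Pre-order, depth-first traversal that collects the tokens held in
    the terminal (leaf) nodes of a concrete syntax tree."""
    tokens, stack = [], [root]
    while stack:
        node = stack.pop()
        if not node.children:                  # terminal node: holds a token
            tokens.append(node.text)
        else:
            # push children in reverse so the leftmost child is visited first
            stack.extend(reversed(node.children))
    return tokens
```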
  • the token/subtoken sequences are transformed into numeric vectors. (Collectively, block 506 ).
  • a portion of the sequences are used as the training dataset and another portion is used as the validation dataset.
  • the training dataset is partitioned into epochs and then the sequences in each epoch are partitioned into batches.
  • Each sequence in each batch (block 510 ) in each epoch (block 508 ) is then used to train the neural transformer model (block 514 ).
  • Initial values are generated for the token/subtoken and position embeddings of each sequence, which are then used to form a context tensor (block 512).
  • a first layer normalization is applied to the context tensor (block 522 ) followed by masked self-attention (block 524 ).
  • the output of the masked self-attention is input into a second layer normalization (block 526 ).
  • the output of the second layer normalization is input into the first one-dimensional convolutional neural network layer (block 528 ).
  • the output of the first one-dimensional convolutional neural network layer is then input into the second one-dimensional convolutional neural network layer (block 530 ).
  • the neural networks are trained iteratively, making multiple passes over the training dataset before converging to a minimum.
  • Each training iteration includes forward propagation (blocks 528 - 530 ), loss calculation (block 532 ), backpropagation steps (block 534 ) followed by updating the weights by calculating the weight gradients (block 536 ).
  • the loss function estimates the loss or error which is used to compare how good or bad the predicted results are.
  • a categorical cross-entropy loss function is used. Once the loss is calculated, it is propagated backwards to the hidden layer that contributed directly to the output. In backpropagation, the partial derivatives of the loss function with respect to the trainable parameters are determined. The weight gradients are calculated as the difference between the old values and the new values of the weights. The weights are adjusted to make the loss as close as possible to zero using a gradient descent technique.
  • a Stochastic Gradient Descent (SGD) method is the optimization algorithm used to find the values of parameters of the function that minimizes the loss function.
  • a backpropagation through time (BPTT) algorithm may be used to update the weights.
  • N_ACCUM is the gradient accumulation frequency and, in one aspect, has a value of 8.
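  • The forward pass, cross-entropy loss, backpropagation, and gradient accumulation steps described above can be sketched as follows; the tensor shapes and the model/optimizer interfaces are assumptions for illustration.
```python
import torch.nn.functional as F

N_ACCUM = 8   # gradient accumulation frequency noted above

def train_epoch(model, batches, optimizer):
    """One epoch: model(inputs) is assumed to return logits of shape
    (batch, T, vocab_size); targets holds the next token/subtoken ids."""
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(batches, start=1):
        logits = model(inputs)                                    # forward propagation
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1))               # loss calculation
        (loss / N_ACCUM).backward()                               # backpropagation
        if step % N_ACCUM == 0:                                   # accumulate, then update
            optimizer.step()                                      # weight update
            optimizer.zero_grad()
```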
  • the parameters include the token/subtoken embeddings, the positional embeddings which are stored in a respective embedding matrix.
  • Other parameters include the parameters of the attention layers and the convolutional layers.
  • the neural transformer model is validated. Before the neural transformer model is trained, a set of hyperparameters is selected randomly and then tuned to achieve a desired performance. The neural transformer model is tested using a validation dataset to determine the appropriate hyperparameters settings to achieve a desired goal. When the desired goal is not achieved, one or more hyperparameters are adjusted and the training is repeated until the target goal is achieved (collectively, block 518 ).
  • Evaluation metrics are used to test the quality of the candidate recommendations.
  • a top-k accuracy method and the mean reciprocal rank (MRR) are used to perform the evaluation.
  • Top-k accuracy is defined as: Acc(top-k) = N_top-k / Q.
  • MRR is defined as: MRR = (1/Q) Σ_{i=1}^{Q} 1/rank_i.
  • N top-k denotes the number of relevant recommendations in the top k suggestions
  • Q represents the total number of test data samples
  • rank i is the prediction rank of a recommendation
  • top-1 indicates how often the top recommendation is correct
  • top-5 accuracy indicates how often the top five recommendations in the list contain the candidate the user is looking for.
  • the MRR captures the rank of the result, thus providing information outside of the top candidate.
  • a larger value of the MRR indicates the overall smaller rank numbers of correct recommendations. (collectively, block 518 ).
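  • Both metrics can be computed in a few lines once the rank of the correct candidate (or None when it is absent) has been recorded for each test sample; the sketch below is illustrative.
```python
def top_k_accuracy(ranks, k):
    """ranks[i] is the 1-based rank of the correct candidate for test sample i,
    or None if it did not appear in the recommendations."""
    hits = sum(1 for r in ranks if r is not None and r <= k)
    return hits / len(ranks)

def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

ranks = [1, 3, None, 2, 1]           # illustrative prediction ranks
print(top_k_accuracy(ranks, k=5))    # 0.8
print(mean_reciprocal_rank(ranks))   # (1 + 1/3 + 0 + 1/2 + 1) / 5 ~= 0.567
```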
  • FIGS. 6A-6B illustrate an exemplary method 600 , 608 of line-of-code completion utilizing the neural transformer model.
  • code completion is performed in a development environment such as a source code editor 130 .
  • the source code editor 130 is configured to interact with a code completion component 142 that performs a beam search that utilizes the neural transformer model.
  • the source code editor 130 performs a background parsing process that monitors the characters input into the source code editor and continuously parses the source code to update the concrete syntax tree representing the source code of the current line of code (block 602).
  • the user interface 132 of the source code editor 130 detects a request for candidate sequences to finish the current line of source code.
  • the user may request candidates by entering a particular keystroke or sequence of keystrokes, such as the combination of the CTRL key with the whitespace key.
  • the system may automatically display, in a dimmed color, a single top candidate at the end of the current source code line regardless of a marker character.
  • the system builds and continuously updates a tree of candidates in the background regardless of whether the user decides to trigger the candidate or not.
  • the candidate is automatically displayed in the user interface when the user has been idle for a period of time. If the user wants to accept the candidate, the user may type in a particular keystroke or combination of keystrokes (e.g., CTRL and I) to accept the candidate.
  • the cursor position will advance to the end of the suggested code sequence and the dimmed color of the candidate code will change to the normal color of the code. If the user does not want to use the candidate, the candidate disappears when the user continues typing. In this case, the system would refine the code sequence based on the pre-fix filter of the tree of candidates based on the newly typed code. (Collectively, block 604 ).
  • the concrete syntax tree is parsed to extract tokens/subtokens from the current code segment. Embeddings are obtained from the token/subtoken embedding matrix and the positional matrix. A context tensor is generated from the embeddings. (Collectively, block 606 ).
  • a beam search is then performed until the probability distribution indicates that the next likely token is the end-of-line token (block 608 ).
  • the beam search uses the neural transformer model with the context tensor to generate a probability distribution for the token/subtoken vocabulary (block 614 ). If the probability distribution indicates that the next likely token is the end-of-line token, then the beam search is finished (block 616 —yes) and the top k candidate sequences are output (block 618 ).
  • the top k tokens/subtokens to complete a partial sequence are selected (block 620 ).
  • Each of the selected tokens/subtokens is then placed into a respective context vector and run through the neural transformer model again along a separate data path.
  • the context vector utilizes the selected token/subtoken in the current context vector with the last token/subtoken removed.
  • the new context vector will consist of T tokens/subtokens, with the selected token/subtoken c_k added to the beginning of the sequence and the last token/subtoken removed from the sequence. If the current context vector consists of the token/subtoken sequence c_0, c_1, . . . , c_T, then the new context vector will consist of c_k, c_0, c_1, . . . , c_{T−1}. (Collectively, block 622).
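  • A one-line sketch of this context update is shown below, with plain Python lists standing in for the context vector.
```python
def next_context(current, selected_token):
    """Prepend the selected token/subtoken c_k and drop the last element,
    keeping the context length at T (as described above)."""
    return [selected_token] + current[:-1]

# [c0, c1, c2, c3] with selected token ck becomes [ck, c0, c1, c2]
print(next_context(["c0", "c1", "c2", "c3"], "ck"))   # ['ck', 'c0', 'c1', 'c2']
```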
  • the beam search keeps track of the generated sequences in the search tree and returns the top candidate sequences to the user interface component for display to the user (block 610 ).
  • a user may select one of the candidates which is then input into the source code program to complete the line of source code (block 612 ). Alternatively, the user may disregard the candidate sequences and continue typing. The process is repeated (blocks 602 - 612 ) until the user closes the source code program, exits the source code editor or terminates the code completion tool.
  • In FIG. 7, there is shown a source code program being edited in a source code editor.
  • the user interface shows lines 10 - 36 of the source code program 702 .
  • the pop-up window 704 contains five candidate sequences to complete the line of code at line 36 .
  • the five candidates 706 - 714 are shown in a ranked order from highest probability to least probability. Each candidate is an ordered sequence of tokens that is likely to complete the expression of line 36 .
  • FIG. 8 is an illustration of a search tree 800 generated from a beam search for the source code snippet shown in FIG. 7 .
  • the search tree 800 tracks all states generated by the neural transformer model in the nodes of the search tree.
  • the beam width is set to four (4).
  • the beam search generates a root node 816 with a probability distribution for each token/subtoken in the vocabulary.
  • the top four tokens/subtokens are then selected, which are, “tf”, “gradient”, “gan”, and “gd”.
  • Each selected token is added to a separate context vector which is then used in a subsequent execution of the neural transformer model.
  • the probability distribution resulting from each invocation of the neural transformer model 818 A- 818 D is shown for each of the token/subtokens in the second inference level 804 .
  • FIG. 8 shows search tree 800 resulting from the first seven inference levels, 802 , 804 , 806 , 808 , 810 , 812 , 814 .
  • the candidate sequence tf.train.AdamOptimizer (learning_rate is composed of tokens/subtokens tf inferred in the root node 816 , the token/subtoken “.” inferred from a node 818 A in the second inference level 804 , the token/subtoken train inferred from node 820 at the third inference level 806 , the token/subtoken “.” inferred from node 822 at the fourth inference level 808 , the token/subtoken AdamOptimizer inferred from node 824 at the fifth inference level 810 , the token/subtoken “(” inferred from a node 826 at the sixth inference level 812 , and the token/subtoken learning inferred from node 828 at the seventh inference level 814 .
  • FIG. 9 illustrates an exemplary operating environment 900 in which one or more computing devices 902 is used to train the neural transformer model and a second computing device 904 uses the neural transformer model for code completion.
  • the aspects disclosed herein are not constrained to any particular configuration of devices. Any one of the computing devices 902, 904 may utilize the neural transformer model in its own code completion system, and computing device 904 may generate and test the neural transformer model as well.
  • Computing devices 902 may be configured as a cloud service that generates the neural transformer model as a service for other code completion systems. It should be noted that the operating environment is not limited to any particular configuration and other configurations are possible.
  • the computing devices 902 , 904 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, or combination thereof.
  • the operating environment 900 may be configured in a network environment, a distributed environment, a multi-processor environment, or a stand-alone computing device having access to remote or local storage devices.
  • the computing devices 902, 904 may include one or more processors 908, 940, one or more communication interfaces 910, 942, one or more storage devices 912, 944, one or more input/output devices 914, 946, and one or more memory devices 916, 948.
  • a processor 908 , 940 may be any commercially available or customized processor and may include dual microprocessors and multi-processor architectures.
  • a communication interface 910 , 942 facilitates wired or wireless communications between the computing device 902 , 904 and other devices.
  • a storage device 912 , 944 may be computer-readable medium that does not contain propagating signals, such as modulated data signals transmitted through a carrier wave.
  • Examples of a storage device 912 , 944 include without limitation RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, all of which do not contain propagating signals, such as modulated data signals transmitted through a carrier wave.
  • the input/output devices 914 , 946 may include a keyboard, mouse, pen, voice input device, touch input device, display, speakers, printers, etc., and any combination thereof.
  • a memory device 916 , 948 may be any non-transitory computer-readable storage media that may store executable procedures, applications, and data.
  • the computer-readable storage media does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. It may be any type of non-transitory memory device (e.g., random access memory, read-only memory, etc.), magnetic storage, volatile storage, non-volatile storage, optical storage, DVD, CD, floppy disk drive, etc. that does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
  • a memory 916 , 948 may also include one or more external storage devices or remotely located storage devices that do not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
  • Computing device 904 may utilize an integrated development environment (IDE) 954 that allows a user (e.g., developer, programmer, designer, coder, etc.) to design, code, compile, test, run, edit, debug or build a program, set of programs, web sites, web applications, and web services in a computer system.
  • Software programs can include source code files created in one or more source code languages (e.g., Visual Basic, Visual J#, C++, C#, J#, JavaScript, APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk, and the like).
  • the IDE 954 may provide a native code development environment or may provide a managed code development that runs on a virtual machine or may provide a combination thereof.
  • the IDE 954 may provide a managed code development environment using the .NET framework. It should be noted that this operating embodiment is not constrained to providing the source code development services through an IDE and that other tools may be utilized instead, such as a stand-alone source code editor and the like.
  • a user can create and/or edit the source code program files 952 according to known software programming techniques and the specific logical and syntactical rules associated with a particular source language via a user interface 958 and a source code editor 956 in the IDE 954 . Thereafter, the source code program files 952 can be compiled via a compilation component 960 generating data structures representing the syntactic structure and semantic model of the source code.
  • the memory device 948 of computing device 904 may contain instructions, components, and data.
  • a component is a software program that performs a specific function and is otherwise known as a module, program, and/or application.
  • the memory device 948 may include an operating system 950 , one or more source code program files 952 , an IDE 954 that may include a source code editor 956 , a user interface 958 , a compilation component 960 , a code completion component 962 and a neural transformer model 964 and other applications and data 966 .
  • the memory device 916 of the computing devices 902 may include an operating system 918 , a source code extraction component 920 , a token/subtoken sequence extraction component 922 , a syntactic analyzer 924 , a model training and testing component 926 , a neural transformer model 928 , a source code repository 930 , and other applications and data 932 .
  • the computing devices 902 , 904 may be communicatively coupled via a network 909 .
  • the network 909 may be configured as an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan network (MAN), the Internet, portions of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a wireless network, a WiFi® network, or any other type of network or combination of networks.
  • the network 909 may employ a variety of wired and/or wireless communication protocols and/or technologies.
  • Various generations of different communication protocols and/or technologies that may be employed by a network may include, without limitation, Global System for Mobile Communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access 2000, (CDMA-2000), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), Universal Mobile Telecommunications System (UMTS), Evolution-Data Optimized (Ev-DO), Worldwide Interoperability for Microwave Access (WiMax), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB), Wireless Application Protocol (WAP), User Datagram Protocol (UDP), Transmission Control Protocol/Internet Protocol (TCP/IP), any portion of the Open Systems Interconnection (OSI) model protocols, Session Initiated Protocol/Real-Time
  • a system comprising one or more processors and a memory that stores one or more programs that are configured to be executed by the one or more processors.
  • the one or more programs including instructions that: track a sequence of characters entered into a line of a source code program during an editing session; and at a position in the line of the source code program, generate a candidate sequence to complete the line of source code using a neural transformer model, wherein the neural transformer model is trained on an unsupervised dataset of source code programs written in one or more different programming languages.
  • the system includes further instructions that when executed by the one or more processors: initiate a beam search to build a search tree to generate the candidate sequence, wherein the search tree includes one or more nodes at one or more inference levels, each node represents an output probability distribution for a set of tokens of a vocabulary of the neural transformer model, wherein the output probability distribution is generated from the neural transformer model, each node expands k tokens/subtokens to a next inference level.
  • the beam search iteratively expands the search tree by invoking the neural transformer model to predict a next token given a sequence of tokens representing a partial candidate to complete the line-of-code.
  • the neural transformer model is composed of only decoder blocks.
  • the neural transformer model includes at least one decoder block having a masked self-attention layer.
  • the neural transformer model includes at least one one-dimensional convolutional neural network layer.
  • the system tracks the sequence of characters entered into the line of the source code program by obtaining a sequence of tokens/subtokens representing a current context of the line of code and finding token/subtoken embedding vectors and positional embedding vectors for the sequence of tokens/subtokens.
  • the token/subtoken embedding vectors and the positional embedding vectors are pre-trained.
  • the system includes instructions that input the token/subtoken embedding vectors and positional embedding vectors into the neural transformer model.
  • the neural transformer model generates a probability distribution for the tokens/subtokens of a model vocabulary.
  • a method comprising: monitoring each token input into a line-of-code of a source code program during a source code development session; iteratively executing a beam search to generate token candidates to complete the line-of-code as a new token is input into the line-of-code, wherein the beam search generates a token candidate using a matrix of token probabilities generated from a neural transformer model; concatenating the token candidates into candidate sequences to complete the line-of-code; and outputting at least one candidate sequence upon detection of a marker character input in the line-of-code during the source code development session.
  • the method further comprises invoking the neural transformer model to predict a next token given a context vector representing a context of the line-of-code including the new token.
  • the neural transformer model includes a self-attention layer and a convolutional neural network.
  • the self-attention layer is preceded by layer normalization and layer normalization is applied to the outputs of the self-attention layer.
  • the neural transformer model utilizes token embeddings and positional embeddings representing a context of the line-of-code, wherein the token embeddings and the positional embeddings are pre-trained.
  • the monitoring of each token input into the source code program further comprises: parsing the input into a concrete syntax tree; performing byte pair encoding to extract tokens from the concrete syntax tree; and concatenating ordered sequences of tokens of length T.
  • a device comprising at least one processor coupled to a memory device.
  • the at least one processor is configured to: extract one or more ordered sequences of tokens from a plurality of source code programs, wherein an ordered sequence of tokens represents a context of a segment of source code from a select one of the plurality of source code programs; and utilize the ordered sequences of tokens to train a neural transformer model to predict a next token to complete a partial sequence of tokens, wherein the partial sequence of tokens is used to produce a candidate sequence of tokens that complete a line-of-code in a target source code program, wherein the neural transformer model includes an attention layer and at least one convolutional neural network layer.
  • the ordered sequence of tokens includes one or more subtokens.
  • the neural transformer block is a decoder-only transformer.
  • at least two of the plurality of source code programs are written in a different programming language and the ordered sequences of tokens are an unsupervised training dataset.
  • the neural transformer model generates a matrix of token probabilities that are used to predict a next token to succeed in a predicted candidate sequence.

Abstract

A code completion tool uses a neural transformer model to generate candidate sequences to complete a line of source code. The neural transformer model is trained using a conditional language modeling objective on a large unsupervised dataset that includes source code programs written in several different programming languages. The neural transformer model is used within a beam search that predicts the most likely candidate sequences for a code snippet under development.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit of the earlier filed provisional application having Ser. No. 62/881,736 filed on Aug. 1, 2019.
BACKGROUND
Software development environments are often used to aid software developers (i.e., users, programmers, etc.) to develop program code. The software development environment may include a source code editor and other tools that a developer utilizes to write and test their programs. Some software development environments include a code completion feature that provides assistance while the developer is editing code by automatically presenting a list of possible candidates based on one or more characters (e.g., letters, symbols, etc.) that a developer has typed into a source code editor. A popup menu may appear with several suggested code elements that the developer may utilize. This assistance is beneficial since it speeds up the development time and reduces common errors, such as typos.
However, the automatic code completion feature may be problematic when the code completion system does not recognize an out-of-vocabulary code element, requires a lot of memory, takes too long to generate a list of candidates, and/or generates a list of candidates that are not relevant.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
A multi-lingual line-of-code completion system is used to generate the most likely candidates to complete a line of source code during a source code editing session. A predicted string of characters to complete the line of code may include various types of elements, such as, local variables, methods, arguments, keywords, and delimiters arranged in an ordered sequence. The system uses a model to predict the ordered sequence which is trained using a conditional language modeling objective on a large unsupervised dataset that includes source code programs written in different programming languages (e.g., C, Java, Python, C++).
Each source code program in the training dataset need not be written in the same programming language. The training dataset may be composed of numerous source code programs, each of which may be written in a different programming language. Each source code program in the training dataset is encoded into a sequence composed of tokens and/or subtokens. The frequently-used elements in a programming language are encoded into tokens and the less frequently-occurring elements are encoded into combinations of characters referred to as subtokens. This reduces the need to store a large vocabulary and provides better accuracy for out-of-vocabulary tokens.
The multi-lingual line-of-code completion system is based on a neural transformer model. In one aspect, the neural transformer model is comprised of multiple decoder blocks. A decoder block includes a multi-head self-attention layer coupled to a multi-layer one-dimensional convolutional neural network. Layer normalization is applied before and after the multi-head self-attention layer in order to reduce the training time of the neural transformer model.
A beam search is used to generate candidate sequences. The beam search uses the top k subtokens/tokens, identified from each iteration of the neural transformer model, to expand a partial candidate sequence of tokens/subtokens likely to complete a line of source code. The beam search generates a search tree but only keeps the top k nodes at each inference level to expand the search. The search ends when the end-of-line token appears as the most probable prediction.
These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates an exemplary code completion system having a training phase that generates a neural transformer model and an inference phase that uses the neural transformer model to predict one or more candidate sequences to complete a line-of-code.
FIGS. 2A-2B are schematic diagrams illustrating an exemplary system and method to train the neural transformer model for line-of-code completion.
FIG. 3 is a schematic diagram illustrating an exemplary architecture of the transformer block shown in FIG. 2.
FIGS. 4A-4B are schematic diagrams illustrating an exemplary architecture of the inference phase.
FIGS. 5A-5B are flow diagrams illustrating an exemplary method for training the neural transformer model for code completion.
FIGS. 6A-6B are flow diagrams illustrating an exemplary method for utilizing the neural transformer model in the inference phase.
FIG. 7 is a schematic diagram illustrating an exemplary user interface showing code completion candidates for a line of code in an exemplary source code program.
FIG. 8 is a schematic diagram illustrating an exemplary beam search that generates a search tree of candidate sequences.
FIG. 9 is a block diagram illustrating an operating environment.
DETAILED DESCRIPTION
Overview
The subject matter disclosed pertains to the generation of candidates to automatically complete a line of source code in a program development environment. Code completion is a tool that attempts to predict the next string of characters that a developer (e.g., user, end-user, programmer, etc.) may type into a source code editor. A line of source code may consist of various elements (e.g., keywords, delimiters, variables, methods, constants, operators, etc.) that are combined in a particular order in accordance with the grammar of the underlying programming language to form an expression. The line of source code may be a method invocation, a program statement, a definition, an expression, and so forth. A line of source code does not include a blank line or a comment line and ends with an end-of-line character.
The code completion tool uses a neural network machine learning model to predict the next string of code elements to complete a line of source code. A line of source code refers to a physical line of source code that ends with an end-of-line character and which excludes blank lines and comment lines. The model is trained on an unsupervised dataset that may include source code from different programming languages (i.e., multi-lingual). Unsupervised learning draws inferences from datasets consisting of input data without labeled responses. A vocabulary is formed from these datasets that includes tokens and/or subtokens found in the source code files. A token is a single element in the grammar of a programming language such as a keyword, variable name, operator, delimiter, etc.
Unlike a natural language (e.g., English, etc.), programmers use, at times, arbitrary, complex and long names to represent a variable, function or other code elements which may result in an extremely large vocabulary for the model when a large number of source code programs are used to train the model. To reduce the size of the vocabulary, less-frequently occurring tokens are split into subtokens. A subtoken is a portion of a token that is in between a token and a single character. The subtokens are used to account for rare or unseen tokens (i.e., out-of-vocabulary tokens) that may appear in a target source code program. The use of the subtokens allows the model to learn and generate the out-of-vocabulary tokens.
Byte pair encoding is a data compression technique in which most frequently co-occurring pairs of Unicode characters throughout the training source code dataset are substituted with an out-of-vocabulary character. When applied recursively, the byte pair encoding results in an extraction of token/subtokens in sequences of frequently co-occurring Unicode characters. In one aspect, byte pair encoding is used to extract ordered sequences of Unicode characters to form tokens and subtokens from a syntactic representation of the source code programs of the training dataset. An ordered sequence represents a source code fragment having T tokens/subtokens. The ordered sequences of tokens/subtokens are translated into token/subtoken embeddings and positional embeddings which are vector representations of a source code fragment.
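For illustration only, the following Python sketch shows how a token that is absent from the vocabulary might be greedily split into known subtokens; the toy vocabulary and the longest-match strategy are assumptions made for this example, not the patented implementation.

    # Hypothetical subtoken vocabulary produced by byte pair encoding (assumption for the example).
    VOCAB = {"loss", "=", "tf", ".", "red", "uce", "_", "sum", "(", ")", "squ", "are"}

    def split_into_subtokens(token, vocab=VOCAB):
        """Greedily split an out-of-vocabulary token into known subtokens."""
        if token in vocab:
            return [token]
        pieces, start = [], 0
        while start < len(token):
            # Take the longest prefix of the remaining characters found in the vocabulary.
            for end in range(len(token), start, -1):
                if token[start:end] in vocab:
                    pieces.append(token[start:end])
                    start = end
                    break
            else:
                # Fall back to a single character when no subtoken matches.
                pieces.append(token[start])
                start += 1
        return pieces

    print(split_into_subtokens("reduce"))  # ['red', 'uce']
    print(split_into_subtokens("square"))  # ['squ', 'are']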
In one aspect, the neural network machine learning model is a multi-layer transformer model. A transformer is a neural network architecture that handles dependencies between its input and output with attention and convolution and without using recurrent neural networks (RNN) (e.g., long short-term memory (LSTM) network). A shortcoming of a RNN-based system is the sequential nature of the RNN where each hidden state relies on the previous hidden state. This makes the RNN-based systems hard to parallelize and unable to take advantage of fast computing devices, such as graphics processing units. Furthermore, RNN-based systems cannot learn long-range dependencies within the input and output sequences for long periods. The transformer overcomes these obstacles with attention. Attention is a mechanism that identifies which parts of an input sequence are relevant to each token/subtoken in the output sequence. The attention mechanism allows the transformer to access the entire input sequence all at once.
A transformer may act as an encoder or a decoder where the encoder maps an input sequence of symbol representations to a sequence of continuous representations and the decoder generates an output sequence of symbols from the sequence of continuous representations. The encoder-decoder architecture is not a good fit for conditional code generation or code completion tasks and is better suited for machine translation and patch generation type tasks. A variant of the transformer model is used that is composed of decoder blocks having masked self-attention and convolutional layers.
A beam search is used to generate one or more candidate sequences to complete a line of source code. The beam search uses the probability distribution generated by the neural transformer model to identify the top k tokens/subtokens likely to be the next token or subtoken in a candidate sequence. The beam search expands the search by instantiating new partial sequences using each of the selected tokens/subtokens identified by the neural transformer model's probability distribution. The search continues generating new partial sequences from the top k tokens/subtokens identified by the output distributions from the neural transformer model until the search ends. The search may end when the end-of-line token appears as the most probable next token.
The task of line-of-code sequence completion is to predict a sequence of response tokens/subtokens, $m_t$, $t = 0 \ldots N$, conditioned on an ordered sequence of tokens/subtokens $c_t$, $t = 0 \ldots T$, corresponding to a context of code snippet C, as the product of conditional probabilities by estimating a conditional probability distribution P(Output|Input) as follows: $P(m_0, m_1, \ldots, m_N \mid c_0, c_1, \ldots, c_T) = \prod_{i=1}^{N} P(m_i \mid c_0, c_1, \ldots, c_T, m_0, \ldots, m_{i-1})$. In one aspect, a multi-layer transformer-decoder neural network model with multi-head self-attention is utilized to estimate this probability distribution for a source code corpus using an unsupervised autoregressive (AR) technique. The modeling objective is to maximize the following likelihood:
$\sum_i \log P(m_i \mid c_0, c_1, \ldots, c_T, m_{i-k}, m_{i-k+1}, \ldots, m_{i-1}; \Theta),$
where k is the size of the context window, and the conditional probability P is modeled using a neural transformer model with parameters Θ. The parameters may include attention lengths, the number of attention heads, the number of decoder blocks, embedding dimensions, embedding matrices, and the number of hidden units per layer which are trained using a stochastic gradient descent optimization procedure.
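Assuming a hypothetical next_token_probs function that returns a probability distribution over the model vocabulary for a given prefix, this autoregressive objective can be sketched in a few lines of Python; the sketch is illustrative only.

    import numpy as np

    def log_likelihood(context_ids, target_ids, next_token_probs):
        """Sum of log P(m_i | context, m_0..m_{i-1}) over the target tokens.

        next_token_probs(prefix) is assumed to return a probability
        distribution over the model vocabulary for the next token.
        """
        total, prefix = 0.0, list(context_ids)
        for token_id in target_ids:
            probs = next_token_probs(prefix)     # shape: (vocab_size,)
            total += np.log(probs[token_id])
            prefix.append(token_id)              # condition on tokens generated so far
        return total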
The multi-layer transformer decoder produces an output distribution over the tokens/subtokens as follows:
$h_0 = U \cdot W_e + W_p,$
$h_l = \mathrm{transformer\_block}(h_{l-1}), \quad l = 1 \ldots n,$
$P(C) = \mathrm{softmax}(h_n \cdot W_e^T),$
where $C = (c_{-k}, c_{-k+1}, \ldots, c_{-1})$ is the context vector of tokens/subtokens, n is the number of layers, T is the ordered sequence length, $W_e$ is the token/subtoken embedding matrix and $W_p$ is the position embedding matrix, e represents an embedding for a token/subtoken in the vocabulary, and p represents an embedding for a position of a token/subtoken.
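These three equations can be mirrored by the following numpy sketch; the stub transformer_block, the random weights, and the small dimensions are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    V, T, d = 100, 8, 16            # assumed vocabulary size, context length, embedding size

    W_e = rng.normal(size=(V, d))   # token/subtoken embedding matrix
    W_p = rng.normal(size=(T, d))   # position embedding matrix

    def transformer_block(h):
        # Stand-in for a real decoder block (attention and convolutional layers).
        return h

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    context = rng.integers(0, V, size=T)   # token/subtoken ids c_{-k} ... c_{-1}
    U = np.eye(V)[context]                 # one-hot rows for the context

    h = U @ W_e + W_p                      # h_0 = U·W_e + W_p
    for _ in range(12):                    # h_l = transformer_block(h_{l-1}), l = 1..n
        h = transformer_block(h)
    P = softmax(h @ W_e.T)                 # P(C) = softmax(h_n·W_e^T), one row per position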
Attention now turns to a further discussion of the system, devices, components, and methods utilized in the code completion system.
Machine Learning Code Completion System
FIG. 1 illustrates a block diagram of an exemplary code completion system 100 in which various aspects of the invention may be practiced. As shown in FIG. 1, system 100 includes a training phase 102 which trains a transformer model 122 and an inference phase 104 that utilizes the transformer model 122 in a line-of-code completion system. The training phase 102 may utilize a source code repository 106, a source code extraction component 108, a syntactic analyzer 112, a token/subtoken sequence extraction component 116, and a model training and validation component 120.
In one aspect, the training phase 102 pre-trains a transformer model from a diverse corpus of unlabeled source code programs. This is referred to as unsupervised learning since the model draws inferences from the input data without labeled responses. The source code extraction component 108 extracts selected source code programs 110 from the source code repository 106 to obtain the training and validation datasets. The source code repository 106 may be a file archive and web hosting facility that stores large amounts of source code either privately or publicly. The source code repository 106 can be structured as a version control system, such as GIT, Mercurial, etc. The source code programs residing in the source code repository 106 vary and may be written in different programming languages.
The source code extraction component 108 obtains several selected source code programs 110 which may be written in the same or different programming languages. A programming language utilizes a context-free grammar that is a set of rules that describe all possible strings in a formal programming language. The selected source code programs 110 can come from different domains, such as without limitation, scientific computing, web development, dataflow programming, machine learning, and the like.
A syntactic analyzer 112 transforms each of the selected source code programs 110 into a concrete syntax tree 114. The concrete syntax tree 114 represents the source code text in the parsed form. The concrete syntax tree 114 may also be a parse tree. The syntactic analyzer 112 may be a parser, part of a front-end compiler, part of a language compiler, or part of a compilation tool. A concrete syntax tree 114 represents the syntactic structure of a program in a hierarchical or tree structure. The concrete syntax tree 114 is an n-ary tree data structure that includes nodes that represent a construct in the grammar of the programming language of a program. The concrete syntax tree 114 includes one root node, multiple internal nodes, and multiple terminal nodes. The terminal nodes represent the tokens. A token is a symbol that represents an operand or an operator. The concrete syntax tree 114 differs from an abstract syntax tree where the terminal nodes represent operands.
The concrete syntax tree 114 for a selected source code program 110 is passed to the token/subtoken sequence extraction component 116. The token/subtoken sequence extraction component 116 parses the concrete syntax tree 114 of each source code program and outputs a sequence of T tokens and/or subtokens. In one aspect, the token/subtoken sequence extraction component 116 performs byte pair encoding to extract frequently-occurring tokens and to extract subtokens from less-occurring tokens. A subtoken is a portion of a token.
For example, the following line of source code:
loss=tf.reduce_sum(tf.square(linear_model−y))
can be partitioned into the following sequence of tokens/subtokens, each of which is separated by the character “|”:
loss|=|tf|.|red|uce|_|sum|(|tf|.|squ|are|(|linear|_|model|−|y|)|)|
In this example, the token “reduce” has been split into the subtokens “red” and “uce” and the token “square” has been split into the subtokens “squ” and “are”.
The T-ordered sequences of tokens are then mapped into numeric vectors and then into an embedding. An embedding is a learned representation for the text-based tokens/subtokens where tokens or subtokens that have a common meaning have a common representation. There is an embedding for each token/subtoken in the vocabulary and a position embedding. The token/subtoken embedding represents the learned representation for the token/subtoken. The transformer model does not read each token/subtoken sequentially and as such, has no knowledge of the token/subtoken's position in a sequence without additional position information. The position embedding is used to embed position information about a token/subtoken's position in a sequence into the transformer model. The token/subtoken embeddings are input into the model training and validation component 120.
The neural transformer model 122 is used in the inference phase 104 of the code completion system. In one or more aspects, the inference phase 104 may be embodied as a function or feature integrated into a source code editor, integrated development environment (IDE), and/or stand-alone application. Code completion may be embodied as a tool or feature that can be an add-on, plug-in, extension and/or component of a source code editor and/or IDE. In one aspect, the inference phase 104 includes a source code editor 130, a code completion component 142, and the model 122.
In one aspect, a source code editor 130 may include a user interface 132 and a parser 134. The user interface 132 includes a set of features or functions for developing (e.g., writing, editing, testing) a source code program. The user interface 132 may utilize a pop-up window to present a list of possible candidates 136 for completion thereby allowing a developer to browse through the candidates and to select one from the list. Alternatively, the candidates may appear inline with the current source code line as the user is typing characters into the source code program.
The parser 134 reads the characters entered into a source code program through the source code editor 130 and generates a corresponding concrete syntax tree 140. The parser 134 also updates the concrete syntax tree 140 as the developer creates and edits the source code in the source code editor 130.
At certain points in the editing process, the user interface 132 will request candidates to complete the current line of source code. The user interface may detect that the user has entered a particular character or string of characters and automatically initiate a request for candidates to complete a line-of-code. This character is referred to as a marker character. In one aspect, the marker character may be an equal sign “=” or a period “.” The user interface 132 will then send a request 138 for candidates from the code completion component 142 to present to the developer. Alternatively, the user may request candidates by entering a particular keystroke or sequence of keystrokes, such as the combination of the CTRL key with the whitespace key.
In yet another aspect, the system may automatically display, in a dimmed color, a single top candidate at the end of the current source code line regardless of a marker character. The system builds and continuously updates a tree of candidates in the background regardless of whether the user decides to trigger the candidate or not. The candidate is automatically displayed in the user interface when the user has been idle for a period of time. If the user wants to accept the candidate, the user may type in a particular keystroke or combination of keystrokes (e.g., CTRL and I) to accept the candidate. In this case, the cursor position will advance to the end of the suggested code sequence and the dimmed color of the candidate code will change to the normal color of the code. If the user does not want to use the candidate, the candidate disappears when the user continues typing. In this case, the system would refine the code sequence based on the pre-fix filter of the tree of candidates based on the newly typed code.
The code completion component 142 tracks the characters that are input into the source code editor and services requests for candidates to complete a line of source code. The code completion component uses the model 122 to generate candidates based on the current context of the source code in the editor. The candidates are ranked according to their respective probability with the candidates having the highest probability at the top. A select number of candidates 136 is then returned to the source code editor 130 and displayed in the user interface 132.
It should be noted that FIG. 1 shows components of the system in one aspect of an environment in which various aspects of the invention may be practiced. However, the exact configuration of the components shown in FIG. 1 may not be required to practice the various aspects and variations in the configuration shown in FIG. 1 and the type of components may be made without departing from the spirit or scope of the invention. For example, the training phase 102 may be executed in one computing environment and the inference phase 104 may be executed in the same computing environment or in a separate computing environment as the training phase 102. The various computing environments are described in further detail below.
Attention now turns to FIG. 2A which illustrates further details of the components and process 200 used to train the neural transformer model. Referring to FIGS. 1 and 2A, the source code extraction component 108 obtains source code programs for use as the training and validation datasets. Each selected source code file 202 is parsed into a concrete syntax tree 204 by a syntactic analyzer 112. The concrete syntax tree 204 is traversed by the token/subtoken sequence extraction component 116. The token/subtoken sequence extraction component 116 may utilize a tokenizer 206 to extract tokens from each line of source code represented by the concrete syntax tree.
In one aspect, byte pair encoding is used as the tokenizer 206. Byte pair encoding is used to build a vocabulary of tokens/subtokens. Although its name uses the word “byte”, byte pair encoding operates on Unicode code points and not byte sequences. This encoding technique partitions less-occurring tokens into subtokens and the more frequently occurring tokens are left intact.
The tokens and subtokens of each line of source code are then aggregated into an ordered sequence of token/subtokens consisting of T token/subtokens 208. In one aspect, T is 1024 tokens with each sequence consisting of 1024 token/subtokens and representing a particular context of the source code program. The sequences from the various source code programs are then input to the model training and validation component 210.
Neural networks are trained iteratively, making multiple passes over the training dataset before converging to a minimum. An epoch represents the entire training dataset passed forwards and backwards through the neural network once. Since the training dataset is very large, it is partitioned into smaller batches. The training is iterative and the entire dataset is passed through the neural network in multiple iterations. Each training iteration includes forward propagation, loss calculation, backpropagation steps followed by updating the weights.
The neural network has multiple layers so that more detailed relationships within the data are learned as well as how the features interact with each other on a non-linear level. The model architecture, training procedure, data normalization and vocabulary encoding procedures are hyperparameters that are tailored to meet a particular objective. The values of the hyperparameters influence how the parameters are learned.
In one aspect, the hyperparameters may include the following: (1) token/subtoken and position embedding layers of dimensions: 30000×768, and 1024×768 respectively; (2) twelve transformer blocks, with each block consisting of two convolutions, masked self-attention and layer normalization layers; (3) for the training procedure: auto-regressive, with a cross-entropy loss optimization objective; the sequence length is 1024 tokens/subtokens; the mini-batch size is 8; the gradient accumulation steps for each weight update is 8; the Adam stochastic optimization procedure is used to train the neural network; and the learning rate is 0.0001; (4) the data normalization procedure: normalize all string and numerical literals, keeping the ten most frequent; and (5) the vocabulary encoding procedure: extract a joint subtoken vocabulary from the multi-lingual code corpus using byte-pair encoding, preserve the ten most frequent string and numerical literals by encoding them as a single token during the byte-pair encoding procedure, and introduce special control flow tokens to denote end-of-line, end-of-file, dedent, and indent symbols.
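For illustration, these hyperparameters might be collected into a configuration dictionary such as the following sketch; the key names are assumptions, while the values restate the list above.

    # Hypothetical configuration mirroring the hyperparameters listed above.
    HYPERPARAMETERS = {
        "token_embedding_shape": (30000, 768),    # vocabulary size x embedding dimension
        "position_embedding_shape": (1024, 768),  # sequence length x embedding dimension
        "num_decoder_blocks": 12,                 # each: 2 convolutions, masked self-attention, layer norm
        "sequence_length": 1024,
        "mini_batch_size": 8,
        "gradient_accumulation_steps": 8,
        "optimizer": "Adam",
        "learning_rate": 1e-4,
        "loss": "cross_entropy",
        "preserved_literals": 10,                 # most frequent string/numeric literals kept intact
        "special_tokens": ["<end-of-line>", "<end-of-file>", "<dedent>", "<indent>"],
    }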
The training dataset is partitioned into batches with each batch of sequences running through the training process. The sequences are initially transformed into numeric vectors and then embeddings. An embedding is a mapping of discrete categorical variables to a vector of continuous numbers. There is a token/subtoken embedding 212 and a positional embedding 214 for each sequence. The token/subtoken embeddings represent the tokens and/or subtokens in a sequence and the positional embeddings represents the order of a token/subtoken in a sequence.
  • Initially, random values are used for the initial values of each token/subtoken embedding and positional embedding. Thereafter, the neural transformer model 224 learns the values for each embedding. Upon the completion of the training phase, the embeddings for each token/subtoken and the position embeddings are saved into respective matrices 218, 220 for later use in the inference phase. There is a token/subtoken embedding matrix, $W_e$, 218 that contains an embedding vector for each token/subtoken $C_i$, $i = 0 \ldots V$, and a positional embedding matrix, $W_p$, 220 that contains an embedding vector $P_j$, $j = 0 \ldots T$, for each position, where V is the size of the vocabulary and T is the length of the token/subtoken sequence.
Each token/subtoken embedding 212 and its corresponding positional embedding 214 are combined to form a context tensor 216. A tensor is a mathematical object that has indices and components that follow certain transformation rules. The tensor is a partially defined computation. It is a generalization of vectors and matrices and represented as an n-dimensional array. The tensor in this instance represents a context of a source code program. The size of the context tensor 216 is T×size of the embedding vector (e.g., embedding size), where T is the length of the token/subtoken sequence.
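A minimal numpy sketch of forming a context tensor from the token/subtoken and positional embedding matrices follows; the matrix names and sizes are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    V, T, d = 100, 8, 16                           # assumed vocabulary size, sequence length, embedding size
    token_embeddings = rng.normal(size=(V, d))     # W_e: one row per token/subtoken
    position_embeddings = rng.normal(size=(T, d))  # W_p: one row per position

    sequence = rng.integers(0, V, size=T)          # token/subtoken ids for one source code fragment
    context_tensor = token_embeddings[sequence] + position_embeddings  # shape (T, d)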
  • Turning to FIG. 2B, the token/subtoken embeddings 212 are learned together with the parameters of the neural transformer model. The output hidden state of the neural transformer model 228 is then multiplied by the linear projection matrix A 230. The linear projection matrix A is defined as $A = a_{ij} \in \mathbb{R}^{d_h \times d_x}$. The hidden state vector $h_T \in \mathbb{R}^{d_h}$ encodes information learned by the neural transformer model 224 from the context tensors 216. Finally, a probability distribution for each token/subtoken $P_{|V|}$ 238 is generated by computing the unnormalized logits predictions 234 as $y_k = \sum_j l_{kj} \, l_j^{\mathrm{pred}} + b_k$, where $b_k$, $k = 0 \ldots |V|-1$, is the bias vector, and then normalizing them using the softmax function 236.
  • Turning back to FIG. 2A, the neural transformer model 224 may be composed of one or more transformer blocks 226A, 226B. Referring to FIG. 3, a transformer block 226 may be configured with encoder and decoder blocks and/or with only decoder blocks. FIG. 3 shows one aspect of the neural transformer model 224 configured with multiple decoder blocks 306A, 306N. A decoder block 306A, 306N may include a first normalization layer 308, followed by a masked self-attention layer 310, followed by a second normalization layer 312, and two layers of a one-dimensional convolutional neural network 314A, 314B.
The training of a neural network is a time-consuming task. In order to reduce the training time, layer normalization is used. Layer normalization normalizes the inputs across the features. The mean and standard deviation is computed across the feature dimensions. There is a first layer normalization 308 that precedes the masked self-attention layer 310 and a second layer normalization 312 that follows the masked self-attention layer 310.
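A schematic Python sketch of this layer ordering follows, with identity functions standing in for the learned layers; it only illustrates the data flow described above and is not the patented implementation.

    import numpy as np

    def decoder_block(x, layer_norm_1, masked_self_attention, layer_norm_2, conv1d_a, conv1d_b):
        """Apply the layers of one decoder block in the order described above."""
        h = layer_norm_1(x)            # first layer normalization precedes the attention layer
        h = masked_self_attention(h)   # multi-head masked self-attention
        h = layer_norm_2(h)            # second layer normalization follows the attention layer
        h = conv1d_a(h)                # first one-dimensional convolutional layer
        return conv1d_b(h)             # second one-dimensional convolutional layer

    # Identity stand-ins just to show the data flow; real layers would be learned.
    identity = lambda t: t
    out = decoder_block(np.zeros((1024, 768)), identity, identity, identity, identity, identity)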
Each token/subtoken flows through all the decoder blocks 306A, 306N along its own path. The masked self-attention layer 310 allows the neural network to focus on certain features or inputs. Attention is described in “Attention Is All You Need,” by Vaswani et al., in 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, Calif., as “mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.”
  • In one aspect of the disclosure, the masked self-attention layer 310 consists of two or more attention heads, 316A, 316B. The multi-head self-attention heads run through the scaled dot product attention multiple times in parallel. Each attention head 316A, 316B operates on a portion of the context tensor 302. Attention head 316A operates on a first segment 318A and attention head 316B operates on a second segment 320A. Each attention head 316A, 316B operates on an input sequence $x = (x_1, \ldots, x_n)$ of n elements and computes a new sequence of the same length $z = (z_1, \ldots, z_n)$. Each output element $z_i$ is computed as a weighted sum of linearly transformed input elements:
$z_i = \sum_{j=1}^{n} \alpha_{ij} (x_j W^V).$
  • Each weight coefficient $\alpha_{ij}$ is computed using the softmax function:
$\alpha_{ij} = \dfrac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}},$
  • where $e_{ij}$ is the scaled dot product
$e_{ij} = \dfrac{(x_i W^Q)(x_j W^K)^T}{\sqrt{d_z}}.$
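A numpy sketch of these equations for a single attention head follows; the random weights and the causal mask over future positions (consistent with the masked self-attention described above) are assumptions made for the example.

    import numpy as np

    def masked_self_attention(x, W_Q, W_K, W_V):
        """Scaled dot-product self-attention with a causal mask, per the equations above."""
        T, _ = x.shape
        d_z = W_K.shape[1]
        q, k, v = x @ W_Q, x @ W_K, x @ W_V
        e = (q @ k.T) / np.sqrt(d_z)                              # e_ij = (x_i W^Q)(x_j W^K)^T / sqrt(d_z)
        e = np.where(np.tril(np.ones((T, T))) == 1, e, -np.inf)   # mask future positions
        alpha = np.exp(e - e.max(axis=-1, keepdims=True))
        alpha = alpha / alpha.sum(axis=-1, keepdims=True)         # alpha_ij via softmax over e_ij
        return alpha @ v                                          # z_i = sum_j alpha_ij (x_j W^V)

    rng = np.random.default_rng(2)
    T, d_x, d_z = 6, 16, 8
    x = rng.normal(size=(T, d_x))
    z = masked_self_attention(x, rng.normal(size=(d_x, d_z)),
                              rng.normal(size=(d_x, d_z)), rng.normal(size=(d_x, d_z)))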
  • The input into each attention head 316A, 316B consists of a query matrix 320A, 320B and a key matrix 322A, 322B, both of dimension T×$d_x$, where T is the code sequence length and $d_x$ is the embedding dimension. The dot product is generated from the query matrix 320 with all the keys from the key matrix 322, with the softmax function applied to obtain the weights, $W_0 \ldots W_T$, 324A, 324B, on the values resulting in a respective value matrix 326A, 326B. The resulting values from the two value matrices are then concatenated 328 and then linearized 330. The concatenation layer 328 takes T×$d_v$ dimensional matrices from each attention head to form a T×$d_v$ dimensional matrix. The linear layer 330 takes the output of the concatenation layer 328 and applies a linear transformation according to: $\mathrm{output} = \mathrm{input} \cdot W^T + b$, where the input is a T×$d_v$ matrix, W is a $d_x$×$d_v$ dimensional matrix, b is a T×$d_x$ dimensional matrix, and output is the T×$d_x$ dimensional matrix obtained as a result of matrix multiplication and addition.
  • Layer normalization 312 is then applied to the output of the masked self-attention layer 310. The output of layer normalization 312 is then applied to the first neural network layer. In one aspect, there are two neural network layers, with each layer consisting of a one-dimensional convolutional neural network. Given an input tensor of dimensions $(d_x, T)$ and a convolutional kernel g, the 1D convolution operation is defined as: $\mathrm{output} = \mathrm{bias} + \sum_{k=0}^{d_x-1} g(\cdot, k) * \mathrm{input}(k)$, where the operation $*$ is the sliding dot-product operation.
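An illustrative numpy sketch of a 1D convolution as a sliding dot product over an input of shape (d_x, T) follows; the single output channel and the kernel width are assumptions made for the example.

    import numpy as np

    def conv1d_single_channel(x, g, bias=0.0):
        """1D convolution over an input of shape (d_x, T) with kernel g of shape (d_x, width)."""
        d_x, T = x.shape
        _, width = g.shape
        out = np.empty(T - width + 1)
        for t in range(T - width + 1):
            # Sliding dot product across all d_x input channels.
            out[t] = bias + np.sum(g * x[:, t:t + width])
        return out

    rng = np.random.default_rng(3)
    x = rng.normal(size=(16, 10))      # (d_x, T)
    g = rng.normal(size=(16, 3))       # kernel spanning all input channels, width 3
    y = conv1d_single_channel(x, g)    # shape (8,)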
  • Turning to FIG. 2B, the output of the neural network at the last temporal step T is the hidden state vector $h_T$ 228, which encodes information learned by the transformer blocks 226A, 226B relevant to the tokens/subtokens. The output hidden state of the neural transformer model 224 is then multiplied by the linear projection matrix A 230, defined as $A = a_{ij} \in \mathbb{R}^{d_h \times d_x}$. The hidden state vector $h_T \in \mathbb{R}^{d_h}$ 228 encodes information learned by the neural transformer model from the context tensors. Finally, a probability distribution for each token/subtoken $P_{|V|}$ 238 is generated by computing the unnormalized logits predictions 234 as $y_k = \sum_j l_{kj} \, l_j^{\mathrm{pred}} + b_k$, where $b_k$, $k = 0 \ldots |V|-1$, is the bias vector, and then normalizing them using the softmax function 236.
Attention now turns to a description of the components of the model used in the inference phase. The inference phase utilizes a beam search to find the most likely candidate sequences. A beam search iteratively generates tokens/subtokens by invoking the neural transformer model. The output of the neural transformer model is a matrix of token probabilities for each position in a candidate sequence. The beam search concentrates on the k most probable tokens at each iteration to get the best path to the most likely candidate sequence. At each iteration, each of the k most probable tokens are concatenated with the tokens in the preceding iterations to form a partial candidate sequence.
A beam search uses a breadth-first search to build a search tree. The search tree is composed of nodes at one or more inference levels. Each node represents a probability distribution generated by the neural transformer model for the tokens/subtokens in the model vocabulary. At each level, only the top k tokens/subtokens having the highest probabilities from the output distribution generated by the neural transformer model are expanded to the next inference level. The variable k is preconfigured and also referred to as the beam width. Each of the k subtokens/tokens is then expanded into a search that updates the current context sequence with the selected subtoken/token to input into the neural transformer model to generate an additional probability distribution for the next token in a sequence. This process is repeated until the end of a line token is predicted as being the next likely token candidate.
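A simplified Python sketch of such a beam search follows; the hypothetical next_token_probs function, the beam width, the scoring by summed log probability, the sliding-window context update, and the step limit are all assumptions made for the example.

    import math

    def beam_search(context, next_token_probs, end_of_line_id, k=4, max_steps=32):
        """Expand the k most probable tokens at each inference level until end-of-line is most likely."""
        beams = [(0.0, list(context), [], False)]    # (log probability, context, generated tokens, done)
        for _ in range(max_steps):
            candidates = []
            for score, ctx, generated, done in beams:
                if done:
                    candidates.append((score, ctx, generated, True))
                    continue
                probs = next_token_probs(ctx)        # distribution over the model vocabulary
                best = max(range(len(probs)), key=probs.__getitem__)
                if best == end_of_line_id:           # end-of-line is the most probable prediction
                    candidates.append((score, ctx, generated, True))
                    continue
                for token_id in sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:k]:
                    candidates.append((score + math.log(probs[token_id]),
                                       ctx[1:] + [token_id],     # simple sliding-window context update
                                       generated + [token_id], False))
            beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:k]  # keep only the top k nodes
            if all(done for *_, done in beams):
                break
        return [generated for _, _, generated, _ in beams]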
Turning to FIG. 4A, there is shown components of the inference phase 400. A code snippet 402 is entered into a source code editor which is transformed into a corresponding concrete syntax tree 404. The concrete syntax tree 404 is traversed, by a tokenizer 406, to extract tokens and/or subtokens. Ordered sequences of length T are formed and vectorized 408.
The beam search 410 uses the context vector 408 to initiate an inference process 412 using the probability distribution generated from the neural transformer model, P0 . . . P|V| (block 412). If the probability distribution indicates that an end-of-line token is the most likely token to follow in a partial candidate sequence (block 416—yes), then the top k candidate sequences are output (block 418). Otherwise, the beam search 410 takes the top k states or tokens/subtokens identified from the probability distribution generated by the neural transformer model in the inference process (block 420). A new context vector is generated for each of the k states, c1, . . . ck, using the new token/subtoken in the context vector (blocks 422A, 422B). The new context vectors are then input into the inference process (blocks 422A, 422B, 412). The beam search 410 ends when the end-of-line token is selected as the most likely candidate to complete a partial candidate sequence.
FIG. 4B illustrates an exemplary search process 412. An embedding vector for each token and subtoken in a sequence 408 is obtained from the token/subtoken embedding matrix 428 and its corresponding positional vector from the positional embedding matrix 430. The token/subtoken embedding vector and its corresponding positional embedding vector are combined to form a context tensor 432 which is input into the neural transformer model 434.
The output of the neural transformer model 434 is the vector with components $h_0 \ldots h_{d_h}$ 436. The output of the transformer is multiplied by the linear projection layer 438 to generate the predicted embedding vectors 440. The token/subtoken embedding vectors 448 are used as the output classification matrix to generate the unnormalized predictions or logits $V_0 \ldots V_{|V|}$ 442. The logits 442 are normalized using the softmax function 444 to generate the softmax prediction 446 $P_0 \ldots P_{|V|}$.
Methods
Attention now turns to description of the various exemplary methods that utilize the system and device disclosed herein. Operations for the aspects may be further described with reference to various exemplary methods. It may be appreciated that the representative methods do not necessarily have to be executed in the order presented, or in any particular order, unless otherwise indicated. Moreover, various activities described with respect to the methods can be executed in serial or parallel fashion, or any combination of serial and parallel operations. In one or more aspects, the method illustrates operations for the systems and devices disclosed herein.
FIGS. 5A-5B illustrate an exemplary method 500 illustrating usage of a neural transformer model for code completion. Before the neural transformer model is trained, a set of hyperparameters is selected randomly. A hyperparameter is a parameter associated with the neural network model architecture, the training algorithms, and data normalization, and is set before the start of the model training. A hyperparameter is not learned by the deep learning or neural network. The hyperparameters are selected at random from a set of categorical values or, for real valued hyperparameters like learning rate, drawn at random from a given range. Hyperparameters are tuned based on the performance of the neural transformer model when tested using the validation dataset.
The training of the neural transformer model is a computationally intensive effort which requires parallel data processing. One or more clusters may be used to train the neural transformer model where each cluster contains a set of loosely or tightly coupled computers (e.g., processors, processing units, cores) that perform the same task simultaneously under the control of a distributed controller. Each computer works off the same copy of the neural transformer model and uses distributed data parallel training algorithms to synchronize the processing between the clusters.
The neural transformer model is trained using batching where the training dataset is partitioned into batches of a certain size and processed before the model is updated. The size of a batch must be more than or equal to one and less than or equal to the number of samples in the training dataset.
Referring to FIGS. 1 and 5A, one or more source code repositories 106 are searched for source code programs. Each source code program may be written in the same or in different programming languages. The source code repositories 106 can be widely-used code repositories, such as GitHub, internal code repositories, and/or combinations thereof. The source code extraction component 108 extracts a number and type of source code programs that meet an intended objective, such as source code programs that are accessed frequently, source code programs that utilize a particular function (e.g., database operations, computer graphics programs, asynchronous methods, etc.), and the like. These source code programs are used to generate training and validation datasets (collectively, block 502).
Each selected source code program 110 is then parsed and/or compiled by the compilation component 112 to produce a concrete syntax tree (block 504).
Byte pair encoding is used to generate an ordered sequence of tokens/subtokens representing a context of the source code program. The serialized sequence of syntax nodes and tokens is obtained from traversing the concrete syntax tree. In one aspect, the concrete syntax tree is traversed in depth first order (i.e., depth first search, depth first traversal). A depth first traversal starts at a root node and traverses the tree in a single path until it reaches a terminal or leaf node. The traversal then backtracks until it can choose another path to traverse. This process is repeated until all nodes are visited. Next, the token/subtoken sequences are transformed into numeric vectors. (Collectively, block 506).
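As an illustration, a depth-first traversal that collects tokens from terminal nodes can be sketched as follows; the Node class is a hypothetical stand-in for a concrete syntax tree node and is not part of the described system.

    class Node:
        """Hypothetical stand-in for a concrete syntax tree node."""
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    def depth_first_tokens(root):
        """Yield terminal-node values (tokens) in depth-first order."""
        stack = [root]
        while stack:
            node = stack.pop()
            if not node.children:
                yield node.value                       # terminal nodes carry the tokens
            else:
                stack.extend(reversed(node.children))  # visit children left-to-right

    tree = Node("call", [Node("tf"), Node("."), Node("reduce_sum"), Node("("), Node("x"), Node(")")])
    print(list(depth_first_tokens(tree)))              # ['tf', '.', 'reduce_sum', '(', 'x', ')']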
A portion of the sequences are used as the training dataset and another portion is used as the validation dataset. The training dataset is partitioned into epochs and then the sequences in each epoch are partitioned into batches. Each sequence in each batch (block 510) in each epoch (block 508) is then used to train the neural transformer model (block 514). Initial values are generated for the token/subtoken and position embeddings of each sequence which are then used to form a context tensor (block 512).
Referring now to FIG. 5B, a first layer normalization is applied to the context tensor (block 522) followed by masked self-attention (block 524). The output of the masked self-attention is input into a second layer normalization (block 526). The output of the second layer normalization is input into the first one-dimensional convolutional neural network layer (block 528). The output of the first one-dimensional convolutional neural network layer is then input into the second one-dimensional convolutional neural network layer (block 530).
The neural networks are trained iteratively, making multiple passes over the training dataset before converging to a minimum. Each training iteration includes forward propagation (blocks 528-530), loss calculation (block 532), backpropagation steps (block 534) followed by updating the weights by calculating the weight gradients (block 536).
The loss function estimates the loss or error which is used to compare how good or bad the predicted results are. In one aspect, a categorical cross-entropy loss function is used. Once the loss is calculated, it is propagated backwards to the hidden layer that contributed directly to the output. In backpropagation, the partial derivatives of the loss function with respect to the trainable parameters are determined. The weight gradients are calculated as the difference between the old values and the new values of the weights. The weights are adjusted to make the loss as close as possible to zero using a gradient descent technique. In one aspect, a Stochastic Gradient Descent (SGD) method is the optimization algorithm used to find the values of parameters of the function that minimizes the loss function. A backpropagation through time (BPTT) algorithm may be used to update the weights.
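A toy numpy sketch of one forward/loss/backward/update cycle for a single linear output layer with a categorical cross-entropy loss follows; it illustrates the training steps above, not the full transformer, and the dimensions and learning rate are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(4)
    V, d = 50, 16
    W = rng.normal(scale=0.1, size=(d, V))   # trainable weights of a toy output layer
    h = rng.normal(size=(1, d))              # hidden state for one position
    target = 7                               # index of the correct next token
    learning_rate = 0.0001

    # Forward propagation and categorical cross-entropy loss.
    logits = h @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    loss = -np.log(probs[0, target])

    # Backpropagation: gradient of the loss with respect to W for softmax plus cross-entropy.
    grad_logits = probs.copy()
    grad_logits[0, target] -= 1.0
    grad_W = h.T @ grad_logits

    # Weight update via gradient descent.
    W -= learning_rate * grad_W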
Referring back to FIG. 5A, at the completion of each batch, the parameters of the neural network are updated at a preconfigured frequency denoted as NACCUM (block 516). NACCUM is a gradient accumulation frequency and in one aspect has a value of 8. The parameters include the token/subtoken embeddings, the positional embeddings which are stored in a respective embedding matrix. Other parameters include the parameters of the attention layers and the convolutional layers.
Next, the neural transformer model is validated. Before the neural transformer model is trained, a set of hyperparameters is selected randomly and then tuned to achieve a desired performance. The neural transformer model is tested using a validation dataset to determine the appropriate hyperparameters settings to achieve a desired goal. When the desired goal is not achieved, one or more hyperparameters are adjusted and the training is repeated until the target goal is achieved (collectively, block 518).
Evaluation metrics are used to test the quality of the candidate recommendations. In one aspect, top-k accuracy and the mean reciprocal rank (MRR) are used to perform the evaluation. Top-k accuracy is defined as:
$\mathrm{Acc}(k) = \dfrac{N_{\mathrm{top}\text{-}k}}{Q},$
and MRR is defined as:
$\mathrm{MRR} = \dfrac{1}{Q} \sum_{i=1}^{Q} \dfrac{1}{\mathrm{rank}_i},$
where $N_{\mathrm{top}\text{-}k}$ denotes the number of relevant recommendations in the top k suggestions, Q represents the total number of test data samples, and $\mathrm{rank}_i$ is the prediction rank of a recommendation.
Accuracy in the top-1 indicates how often the top recommendation is correct, while the top-5 accuracy indicates how often the top five recommendations in the list contain the candidate the user is looking for. The MRR captures the rank of the result, thus providing information outside of the top candidate. A larger value of the MRR indicates the overall smaller rank numbers of correct recommendations. (Collectively, block 518).
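A short Python sketch of computing these metrics from ranked recommendation lists follows; the sample data is made up purely to illustrate the formulas.

    def top_k_accuracy(ranked_lists, expected, k):
        """Fraction of test samples whose expected completion appears in the top k suggestions."""
        hits = sum(1 for recs, truth in zip(ranked_lists, expected) if truth in recs[:k])
        return hits / len(expected)

    def mean_reciprocal_rank(ranked_lists, expected):
        """Average of 1/rank of the expected completion (0 contribution if it is absent)."""
        total = 0.0
        for recs, truth in zip(ranked_lists, expected):
            if truth in recs:
                total += 1.0 / (recs.index(truth) + 1)
        return total / len(expected)

    # Made-up example: three test samples with ranked candidate lists.
    ranked = [["a", "b", "c"], ["x", "y", "z"], ["m", "n", "o"]]
    truth = ["a", "z", "q"]
    print(top_k_accuracy(ranked, truth, k=1))     # 0.333...
    print(mean_reciprocal_rank(ranked, truth))    # (1/1 + 1/3 + 0) / 3 = 0.444...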
Upon completion of the model validation, the model is ready to be deployed in a code completion system (block 520). Attention now turns to a discussion of an exemplary method illustrating an inference phase using the neural transformer model in a code completion system.
FIGS. 6A-6B illustrate an exemplary method 600, 608 of line-of-code completion utilizing the neural transformer model. Referring to FIGS. 1 and 6A, code completion is performed in a development environment such as a source code editor 130. The source code editor 130 is configured to interact with a code completion component 142 that performs a beam search that utilizes the neural transformer model. The source code editor 130 performs a background parsing process that monitors the characters input into the source code editor and continuously parses the source code to update the concrete syntax tree representing the source code of the current line of code (block 602).
The user interface 132 of the source code editor 130 detects a request for candidate sequences to finish the current line of source code. The request may be initiated by a marker character, such as an equal sign “=”, in which the code completion system will provide candidate sequences to complete the rest of the expression after the equal sign. (Collectively, block 604).
Alternatively, the user may request candidates by entering a particular keystroke or sequence of keystrokes, such as the combination of the CTRL key with the whitespace key. In yet another aspect, the system may automatically display, in a dimmed color, a single top candidate at the end of the current source code line regardless of a marker character. The system builds and continuously updates a tree of candidates in the background regardless of whether the user decides to trigger the candidate or not. The candidate is automatically displayed in the user interface when the user has been idle for a period of time. If the user wants to accept the candidate, the user may type in a particular keystroke or combination of keystrokes (e.g., CTRL and I) to accept the candidate. In this case, the cursor position will advance to the end of the suggested code sequence and the dimmed color of the candidate code will change to the normal color of the code. If the user does not want to use the candidate, the candidate disappears when the user continues typing. In this case, the system would refine the code sequence based on the pre-fix filter of the tree of candidates based on the newly typed code. (Collectively, block 604).
Upon detection of the request for a candidate sequence, the concrete syntax tree is parsed to extract tokens/subtokens from the current code segment. Embeddings are obtained from the token/subtoken embedding matrix and the positional matrix. A context tensor is generated from the embeddings. (Collectively, block 606).
A beam search is then performed until the probability distribution indicates that the next likely token is the end-of-line token (block 608).
Referring to FIG. 6B, the beam search uses the neural transformer model with the context tensor to generate a probability distribution for the token/subtoken vocabulary (block 614). If the probability distribution indicates that the next likely token is the end-of-line token, then the beam search is finished (block 616—yes) and the top k candidate sequences are output (block 618).
Otherwise, the top k tokens/subtokens to complete a partial sequence are selected (block 620).
Each of the selected tokens/subtokens is then input into a respective context vector and has a separate data path through the neural transformer model again. The context vector utilizes the selected token/subtoken in the current context vector with the last token/subtoken removed. The new context vector will consist of T token/subtokens with the selected token/subtoken ck added to the beginning of the sequence and the last token/subtoken removed from the sequence. If the current context vector consists of a token/subtoken sequence consisting of c0, c1, . . . , cT, then the new context vector will consist of ck, c0, c1, . . . , cT−1. (Collectively, block 622).
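A small Python sketch of this context-vector update follows; the token identifiers are placeholders used only for illustration.

    def update_context(context, selected_token):
        """Prepend the selected token/subtoken and drop the last one, keeping the length fixed."""
        # If context = [c0, c1, ..., cT], the new context is [ck, c0, c1, ..., cT-1].
        return [selected_token] + context[:-1]

    context = ["c0", "c1", "c2", "c3"]
    print(update_context(context, "ck"))   # ['ck', 'c0', 'c1', 'c2']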
Referring back to FIG. 6A, the beam search keeps track of the generated sequences in the search tree and returns the top candidate sequences to the user interface component for display to the user (block 610). A user may select one of the candidates which is then input into the source code program to complete the line of source code (block 612). Alternatively, the user may disregard the candidate sequences and continue typing. The process is repeated (blocks 602-612) until the user closes the source code program, exits the source code editor or terminates the code completion tool.
Line-of-Code Completion Example
Attention now turns to an exemplary user interface display for a code completion tool using the techniques described herein. Turning to FIG. 7, there is shown a source code program being edited in a source code editor. The user interface shows lines 10-36 of the source code program 702. A pop-up window 704 appears at line 36 after the "=" character is input by a user. The pop-up window 704 contains five candidate sequences to complete the line of code at line 36. The five candidates 706-714 are shown ranked from highest to lowest probability. Each candidate is an ordered sequence of tokens that is likely to complete the expression at line 36.
FIG. 8 is an illustration of a search tree 800 generated from a beam search for the source code snippet shown in FIG. 7. The search tree 800 tracks all states generated by the neural transformer model in the nodes of the search tree. In this example, the beam width is set to four (4). At the first inference level or execution of the neural transformer model 802, the beam search generates a root node 816 with a probability distribution for each token/subtoken in the vocabulary. The top four tokens/subtokens are then selected, which are, “tf”, “gradient”, “gan”, and “gd”. Each selected token is added to a separate context vector which is then used in a subsequent execution of the neural transformer model. The probability distribution resulting from each invocation of the neural transformer model 818A-818D is shown for each of the token/subtokens in the second inference level 804.
The top four tokens/subtokens are then selected from each node in the second inference level 804, from which a new context vector is generated. A third invocation of the neural transformer model is made with new nodes generated from each selected token/subtoken from the second inference level 804, which is shown in the third inference level 806. This process is repeated until the search ends. FIG. 8 shows the search tree 800 resulting from the first seven inference levels, 802, 804, 806, 808, 810, 812, 814.
As shown in FIG. 8, the candidate sequence tf.train.AdamOptimizer(learning_rate is composed of the token/subtoken tf inferred in the root node 816, the token/subtoken "." inferred from node 818A in the second inference level 804, the token/subtoken train inferred from node 820 at the third inference level 806, the token/subtoken "." inferred from node 822 at the fourth inference level 808, the token/subtoken AdamOptimizer inferred from node 824 at the fifth inference level 810, the token/subtoken "(" inferred from node 826 at the sixth inference level 812, and the token/subtoken learning inferred from node 828 at the seventh inference level 814.
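The candidate shown in FIG. 8 can be read off the search tree by walking from a leaf back to the root and concatenating the tokens/subtokens along the way. The Python sketch below assumes a simple node structure with a parent pointer and is illustrative only.

class Node:
    def __init__(self, token, parent=None):
        self.token = token      # token/subtoken inferred at this node
        self.parent = parent    # node at the previous inference level

def candidate_from_leaf(leaf):
    tokens = []
    node = leaf
    while node is not None:
        tokens.append(node.token)
        node = node.parent
    return "".join(reversed(tokens))

# Path through inference levels 802-814 of FIG. 8.
node = Node("tf")
for token in [".", "train", ".", "AdamOptimizer", "(", "learning"]:
    node = Node(token, parent=node)
print(candidate_from_leaf(node))   # tf.train.AdamOptimizer(learning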
Exemplary Operating Environment
Attention now turns to a discussion of an exemplary operating environment. FIG. 9 illustrates an exemplary operating environment 900 in which one or more computing devices 902 are used to train the neural transformer model and a second computing device 904 uses the neural transformer model for code completion. However, it should be noted that the aspects disclosed herein are not constrained to any particular configuration of devices. Any one of the computing devices 902, 904 may utilize the neural transformer model in its own code completion system, and computing device 904 may generate and test the neural transformer model as well. Computing devices 902 may be configured as a cloud service that generates the neural transformer model as a service for other code completion systems. It should be noted that the operating environment is not limited to any particular configuration and other configurations are possible.
The computing devices 902, 904 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a workstation, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, or any combination thereof. The operating environment 900 may be configured in a network environment, a distributed environment, a multi-processor environment, or a stand-alone computing device having access to remote or local storage devices.
The computing devices 902, 904 may include one or more processors 908, 940, one or more communication interfaces 910, 942, one or more storage devices 912, 944, one or more input/output devices 914, 946, and one or more memory devices 916, 948. A processor 908, 940 may be any commercially available or customized processor and may include dual microprocessors and multi-processor architectures. A communication interface 910, 942 facilitates wired or wireless communications between the computing device 902, 904 and other devices. A storage device 912, 944 may be a computer-readable medium that does not contain propagating signals, such as modulated data signals transmitted through a carrier wave. Examples of a storage device 912, 944 include, without limitation, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, and magnetic disk storage, all of which do not contain propagating signals, such as modulated data signals transmitted through a carrier wave. There may be multiple storage devices 912, 944 in the computing devices 902, 904. The input/output devices 914, 946 may include a keyboard, mouse, pen, voice input device, touch input device, display, speakers, printers, etc., and any combination thereof.
A memory device 916, 948 may be any non-transitory computer-readable storage media that may store executable procedures, applications, and data. The computer-readable storage media does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. It may be any type of non-transitory memory device (e.g., random access memory, read-only memory, etc.), magnetic storage, volatile storage, non-volatile storage, optical storage, DVD, CD, floppy disk drive, etc. that does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. A memory 916, 948 may also include one or more external storage devices or remotely located storage devices that do not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
Computing device 904 may utilize an integrated development environment (IDE) 954 that allows a user (e.g., developer, programmer, designer, coder, etc.) to design, code, compile, test, run, edit, debug or build a program, set of programs, web sites, web applications, and web services in a computer system. Software programs can include source code files created in one or more source code languages (e.g., Visual Basic, Visual J#, C++, C#, J#, JavaScript, APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk and the like). The IDE 954 may provide a native code development environment, may provide a managed code development environment that runs on a virtual machine, or may provide a combination thereof. The IDE 954 may provide a managed code development environment using the .NET framework. It should be noted that this operating environment is not constrained to providing the source code development services through an IDE and that other tools may be utilized instead, such as a stand-alone source code editor and the like.
A user can create and/or edit the source code program files 952 according to known software programming techniques and the specific logical and syntactical rules associated with a particular source language via a user interface 958 and a source code editor 956 in the IDE 954. Thereafter, the source code program files 952 can be compiled via a compilation component 960, generating data structures representing the syntactic structure and semantic model of the source code.
The memory device 948 of computing device 904 may contain instructions, components, and data. A component is a software program that performs a specific function and is otherwise known as a module, program, and/or application. The memory device 948 may include an operating system 950, one or more source code program files 952, an IDE 954 that may include a source code editor 956, a user interface 958, a compilation component 960, a code completion component 962 and a neural transformer model 964 and other applications and data 966.
The memory device 916 of the computing devices 902 may include an operating system 918, a source code extraction component 920, a token/subtoken sequence extraction component 922, a syntactic analyzer 924, a model training and testing component 926, a neural transformer model 928, a source code repository 930, and other applications and data 932.
The computing devices 902, 904 may be communicatively coupled via a network 909. The network 909 may be configured as an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a wireless network, a WiFi® network, or any other type of network or combination of networks.
The network 909 may employ a variety of wired and/or wireless communication protocols and/or technologies. Various generations of different communication protocols and/or technologies that may be employed by a network may include, without limitation, Global System for Mobile Communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access 2000 (CDMA-2000), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), Universal Mobile Telecommunications System (UMTS), Evolution-Data Optimized (Ev-DO), Worldwide Interoperability for Microwave Access (WiMax), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB), Wireless Application Protocol (WAP), User Datagram Protocol (UDP), Transmission Control Protocol/Internet Protocol (TCP/IP), any portion of the Open Systems Interconnection (OSI) model protocols, Session Initiated Protocol/Real-Time Transport Protocol (SIP/RTP), Short Message Service (SMS), Multimedia Messaging Service (MMS), or any other communication protocols and/or technologies.
Conclusion
A system is disclosed comprising one or more processors and a memory that stores one or more programs that are configured to be executed by the one or more processors. The one or more programs including instructions that: track a sequence of characters entered into a line of a source code program during an editing session; and at a position in the line of the source code program, generate a candidate sequence to complete the line of source code using a neural transformer model, wherein the neural transformer model is trained on an unsupervised dataset of source code programs written in one or more different programming languages.
The system includes further instructions that when executed by the one or more processors: initiate a beam search to build a search tree to generate the candidate sequence, wherein the search tree includes one or more nodes at one or more inference levels, each node represents an output probability distribution for a set of tokens of a vocabulary of the neural transformer model, wherein the output probability distribution is generated from the neural transformer model, each node expands k tokens/subtokens to a next inference level. The beam search iteratively expands the search tree by invoking the neural transformer model to predict a next token given a sequence of tokens representing a partial candidate to complete the line-of-code.
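A hedged sketch of this expansion loop is given below; model stands in for the decoder-only neural transformer with attention and is assumed to return a probability for each vocabulary entry given a token sequence, and all other names and limits are illustrative.

import heapq, math

def beam_search(model, context, vocab, k, end_of_line, max_levels=16):
    beams = [(0.0, list(context))]                       # (log-probability, partial candidate)
    for _ in range(max_levels):                          # one iteration per inference level
        expanded = []
        for score, seq in beams:
            probs = model(seq)                           # probability distribution over the vocabulary
            top_k = heapq.nlargest(k, range(len(vocab)), key=lambda i: probs[i])
            for i in top_k:                              # each node expands k tokens/subtokens
                expanded.append((score + math.log(probs[i] + 1e-12), seq + [vocab[i]]))
        beams = heapq.nlargest(k, expanded, key=lambda b: b[0])
        if all(seq[-1] == end_of_line for _, seq in beams):
            break                                        # stop once every beam predicts end-of-line
    return beams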
In one aspect, the neural transformer model is composed of only decoder blocks. The neural transformer model includes at least one decoder block having a masked self-attention layer. The neural transformer model includes at least one one-dimensional convolutional neural network layer.
The system tracks the sequence of characters entered into the line of the source code program by obtaining a sequence of tokens/subtokens representing a current context of the line of code and finding token/subtoken embedding vectors and positional embedding vectors for the sequence of tokens/subtokens. The token/subtoken embedding vectors and the positional embedding vectors are pre-trained.
The system includes instructions that input the token/subtoken embedding vectors and positional embedding vectors into the neural transformer model. The neural transformer model generates a probability distribution for the tokens/subtokens of a model vocabulary.
A method is disclosed comprising: monitoring each token input into a line-of-code of a source code program during a source code development session; iteratively executing a beam search to generate token candidates to complete the line-of-code as a new token is input into the line-of-code, wherein the beam search generates a token candidate using a matrix of token probabilities generated from a neural transformer model; concatenating the token candidates into candidate sequences to complete the line-of-code; and outputting at least one candidate sequence upon detection of a marker character input in the line-of-code during the source code development session.
The method further comprises invoking the neural transformer model to predict a next token given a context vector representing a context of the line-of-code including the new token.
In one aspect, the neural transformer model includes a self-attention layer and a convolutional neural network. The self-attention layer is preceded by layer normalization and layer normalization is applied to the outputs of the self-attention layer. The neural transformer model utilizes token embeddings and positional embeddings representing a context of the line-of-code, wherein the token embeddings and the positional embeddings are pre-trained.
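A compact PyTorch sketch of a decoder block with this arrangement is shown below; the dimensions, residual connections, and kernel sizes are assumptions made for illustration and do not reproduce the patented model.

import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_conv=2048):
        super().__init__()
        self.ln_pre = nn.LayerNorm(d_model)            # layer normalization preceding self-attention
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln_post = nn.LayerNorm(d_model)           # layer normalization applied to the attention outputs
        self.conv1 = nn.Conv1d(d_model, d_conv, kernel_size=1)   # one-dimensional convolutional layers
        self.conv2 = nn.Conv1d(d_conv, d_model, kernel_size=1)

    def forward(self, x):
        T = x.size(1)
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln_pre(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask)  # masked self-attention
        h = self.ln_post(x + attn_out)
        c = self.conv2(torch.relu(self.conv1(h.transpose(1, 2)))).transpose(1, 2)
        return h + c

block = DecoderBlock()
print(block(torch.randn(1, 8, 512)).shape)   # torch.Size([1, 8, 512])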
In another aspect, the monitoring of each token input into the source code program further comprises: parsing the input into a concrete syntax tree; performing byte pair encoding to extract tokens from the concrete syntax tree; and concatenating ordered sequences of tokens of length T.
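The preparation step recapped above can be illustrated with the Python sketch below, in which a toy longest-match splitter stands in for byte pair encoding against a learned subword vocabulary and the ordered subtokens are concatenated into sequences of length T; all names and the sample vocabulary are assumptions.

def to_subtokens(tokens, vocabulary):
    # Greedily split each extracted token into the longest subtokens found in the vocabulary,
    # falling back to single characters when no entry matches.
    subtokens = []
    for tok in tokens:
        i = 0
        while i < len(tok):
            for j in range(len(tok), i, -1):            # longest-match first
                if tok[i:j] in vocabulary or j == i + 1:
                    subtokens.append(tok[i:j])
                    i = j
                    break
    return subtokens

def to_sequences(subtokens, T):
    # Concatenate the ordered subtokens into fixed-length sequences of length T.
    return [subtokens[i:i + T] for i in range(0, len(subtokens) - T + 1, T)]

vocab = {"Adam", "Optimizer", "learning", "_rate"}
tokens = ["AdamOptimizer", "(", "learning_rate", ")"]
print(to_sequences(to_subtokens(tokens, vocab), T=4))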
A device is disclosed comprising at least one processor coupled to a memory device. The at least one processor is configured to: extract one or more ordered sequences of tokens from a plurality of source code programs, wherein an ordered sequence of tokens represents a context of a segment of source code from a select one of the plurality of source code programs; and utilize the ordered sequences of tokens to train a neural transformer model to predict a next token to complete a partial sequence of tokens, wherein the partial sequence of tokens is used to produce a candidate sequence of tokens that complete a line-of-code in a target source code program, wherein the neural transformer model includes an attention layer and at least one convolutional neural network layer.
In one aspect, the ordered sequence of tokens includes one or more subtokens. The neural transformer block is a decoder-only transformer. In some aspects, at least two of the plurality of source code programs are written in a different programming language and the ordered sequences of tokens are an unsupervised training dataset. In some aspects, the neural transformer model generates a matrix of token probabilities that are used to predict a next token to succeed in a predicted candidate sequence.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

What is claimed:
1. A system comprising:
one or more processors; and
a memory that stores one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions that:
track a sequence of characters entered into a partially-formed line of a source code program during an editing session, wherein the source code program is written in a first programming language; and
at a position in the partially-formed line of the source code program, generate a candidate sequence to complete the partially-formed line of source code using a decoder-only neural transformer model with attention given the tracked sequence of characters, wherein the decoder-only neural transformer model with attention is trained on an unsupervised dataset of source code programs written in a plurality of different programming languages.
2. The system of claim 1, wherein the one or more programs include further instructions that when executed by the one or more processors:
initiate a beam search to build a search tree to generate the candidate sequence, wherein the search tree includes one or more nodes at one or more inference levels, each node represents an output probability distribution for a set of tokens of a vocabulary of the decoder-only neural transformer model with attention, wherein the output probability distribution is generated from the decoder-only neural transformer model with attention, each node expands k tokens/subtokens to a next inference level.
3. The system of claim 2, wherein the beam search iteratively expands the search tree by invoking the decoder-only neural transformer model with attention to predict a next token given a sequence of tokens representing a partial candidate to complete the partially-formed line-of-code.
4. The system of claim 1, wherein the decoder-only neural transformer model with attention includes at least one decoder block having a masked self-attention layer and a convolutional neural network layer.
5. The system of claim 1, wherein the decoder-only neural transformer model with attention includes at least one one-dimensional convolutional neural network layer.
6. The system of claim 1, wherein track the sequence of characters entered into the partially-formed line of the source code program further comprises:
obtain a sequence of tokens/subtokens representing a current context of the partially-formed line of code; and
find token/subtoken embedding vectors and positional embedding vectors for the sequence of tokens/subtokens, wherein the token/subtoken embedding vectors and the positional embedding vectors are generated from training the decoder-only neural transformer model with attention.
7. The system of claim 6, wherein the one or more programs include further instructions that when executed by the one or more processors:
input the token/subtoken embedding vectors and positional embedding vectors into the decoder-only neural transformer model with attention, wherein the decoder-only neural transformer model with attention generates a probability distribution for the tokens/subtokens of a model vocabulary.
8. The system of claim 1, wherein the plurality of different programming languages does not include the first programming language.
9. A method, comprising:
monitoring each token input into a partially-formed line-of-code of a source code program during a source code development session;
iteratively executing a beam search to generate token candidates to complete the line-of-code as a new token is input into the partially-formed line-of-code, wherein the beam search generates a token candidate using token probabilities generated from a decoder-only neural transformer model with attention trained on a plurality of multi-lingual source code programs;
concatenating the token candidates into candidate sequences to complete the partially-formed line-of-code; and
outputting at least one candidate sequence upon detection of a marker character input in the partially-formed line-of-code during the source code development session.
10. The method of claim 9, further comprising:
invoking the decoder-only neural transformer model to predict a next token given a context vector representing a context of the partially-formed line-of-code including the new token.
11. The method of claim 9, wherein the decoder-only neural transformer model includes a self-attention layer and a convolutional neural network.
12. The method of claim 11, wherein the self-attention layer is preceded by layer normalization and layer normalization is applied to the outputs of the self-attention layer.
13. The method of claim 9, wherein the decoder-only neural transformer model with attention utilizes token embeddings and positional embeddings representing a context of the line-of-code, wherein the token embeddings and the positional embeddings are learned from training the decoder neural transformer model with attention.
14. The method of claim 9, wherein monitoring each token input into the source code program further comprises:
parsing characters input into the line-of-code into a concrete syntax tree;
performing byte pair encoding to extract tokens from the concrete syntax tree; and
concatenating ordered sequences of tokens of length T.
15. A device, comprising:
at least one processor coupled to a memory device;
wherein the at least one processor is configured to:
extract one or more ordered sequences of tokens from a plurality of source code programs written in different programming languages, wherein an ordered sequence of tokens represents a context of a segment of source code from a select one of the plurality of source code programs; and
utilize the ordered sequences of tokens to train a decoder-only neural transformer model with attention to predict a next token to complete a partial sequence of tokens, wherein the partial sequence of tokens is used to produce a candidate sequence of tokens that complete a line-of-code in a target source code program, wherein the decoder-only neural transformer model with attention includes an attention layer and at least one convolutional neural network layer.
16. The method of claim 9, wherein the source code program is written in a programming language that differs from programming languages of the plurality of multi-lingual source code programs.
17. The device of claim 15, wherein the ordered sequence of tokens includes one or more subtokens.
18. The device of claim 15, wherein the ordered sequences of tokens are an unsupervised training dataset.
19. The device of claim 15, wherein the decoder-only neural transformer model with attention generates a matrix of token probabilities that are used to predict a next token to succeed in a predicted candidate sequence.
20. The device of claim 15, wherein the target source code program is written in a programming language that differs from the different programming languages.



