CN111562915A - Generation method and device of front-end code generation model - Google Patents

Generation method and device of front-end code generation model

Info

Publication number
CN111562915A
Authority
CN
China
Prior art keywords
code
model
character string
sample
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010544833.XA
Other languages
Chinese (zh)
Inventor
郑一鸣
陈启安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202010544833.XA
Publication of CN111562915A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/34 Graphical or visual programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and a device for generating a front-end code generation model. A specific implementation of the method comprises: acquiring a training sample set; acquiring an initial model; inputting a sample interface image into a visual model to obtain image features; determining a code character string sequence corresponding to the input sample interface image, inputting the code character string sequence into a word embedding layer of a language model to obtain word vectors, and inputting the word vectors into a feature extraction layer of the language model to obtain text features of the code character string sequence; and taking the image features and the text features as the input of a decoder, taking the sample front-end code corresponding to the input sample interface image as the expected output, and training the initial model to obtain the front-end code generation model. This embodiment improves the accuracy of the text features obtained by the language model, and thus the accuracy of the front-end code generated by the trained front-end code generation model.

Description

Generation method and device of front-end code generation model
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for generating a front-end code generation model.
Background
Most modern, user-facing software applications are centered on Graphical User Interfaces (GUIs), relying on attractive User Interfaces (UIs) and an intuitive User Experience (UX) to attract and retain users and to help them complete computing tasks efficiently.
Deep learning has become a popular research area in recent years, and a growing body of research is changing the public's daily life, for example face recognition, fingerprint unlocking, and recommendation systems. Among its many applications, automatic code generation has already produced a number of results, including methods that convert a GUI into programming code through deep learning. This problem is similar to image captioning (image caption), so designing a model that can directly convert a GUI into programming code, with reference to related research on image captioning, is a meaningful direction.
Existing data sets for training code generation models suffer from small data volumes and relatively simple interfaces, so the models trained on them struggle to cope with the complexity of real interfaces.
Existing methods mainly adopt an Encoder-Decoder framework, using the output feature map of a convolutional neural network as the semantic encoding of the image and feeding this encoding into a recurrent neural network to generate the description. However, a general convolutional neural network often has difficulty capturing image details such as color and position. Moreover, because the length of the intermediate semantic vector is fixed, the framework often handles long sequences poorly, and a general Decoder uses only a recurrent neural network, so the information it attends to is the same when generating different code, which does not match the characteristics of code generation.
Disclosure of Invention
An object of the embodiments of the present application is to provide an improved method and apparatus for generating a front-end code generation model, so as to solve the technical problems mentioned in the above background section.
In a first aspect, an embodiment of the present application provides a method for generating a front-end code generation model, where the method includes: acquiring a training sample set, wherein the training sample comprises a sample interface image and a sample front end code; acquiring an initial model, wherein the initial model comprises an encoder and a decoder, and the encoder comprises a visual model and a language model; inputting the sample interface image into a visual model to obtain image characteristics; determining a code character string sequence corresponding to an input sample interface image, inputting the code character string sequence into a word embedding layer of a language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence; and taking the image characteristics and the text characteristics as the input of a decoder, taking the sample front-end codes corresponding to the input sample interface images as expected output, training an initial model, and obtaining a front-end code generation model.
In some embodiments, taking the image features and the text features as input to a decoder comprises: merging the image features and the text features to obtain merged features; the merged features are taken as input to the decoder.
In some embodiments, the decoder decodes using an attention mechanism to obtain a code string included in the front-end code.
In some embodiments, the visual model determines image features using a convolutional neural network comprising SE-Net.
In some embodiments, the convolutional neural network replaces the pooling layer with convolutional layers having convolutional kernels equal to or greater than 2 x 2 and step sizes greater than 1.
In some embodiments, the language model and decoder encode and decode using a GRU network.
In some embodiments, the decoder employs the Beam Search algorithm to predict the code string comprised by the front-end code.
In some embodiments, obtaining a set of training samples comprises: generating at least two element data sets based on a preset element type and an element style included by each element type, wherein each element data set in the at least two element data sets corresponds to one element type; extracting element data from at least two element data sets and combining the element data to obtain a sample front end code; and generating a sample interface image based on the sample front end code.
In a second aspect, an embodiment of the present application provides a front-end code generation method, where the method includes: acquiring a target interface image and an initial code character string sequence; the following prediction steps are performed: inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front-end code of the target interface image based on the code character string output by the model; if it is determined that the resulting code string is not an end marker, the predicting step is continued based on the initial code string sequence after the code string was most recently added.
In a third aspect, an embodiment of the present application provides an apparatus for generating a front-end code generation model, where the apparatus includes: the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a training sample set, and the training sample comprises a sample interface image and a sample front end code; the second acquisition module is used for acquiring an initial model, wherein the initial model comprises an encoder and a decoder, and the encoder comprises a visual model and a language model; the first generation module is used for inputting the sample interface image into the visual model to obtain image characteristics; the second generation module is used for determining a code character string sequence corresponding to the input sample interface image, inputting the code character string sequence into a word embedding layer of the language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence; and the training module is used for taking the image characteristics and the text characteristics as the input of the decoder, taking the sample front-end codes corresponding to the input sample interface images as expected output, training the initial model and obtaining a front-end code generation model.
In a fourth aspect, an embodiment of the present application provides a front-end code generation apparatus, including: the third acquisition module is used for acquiring a target interface image and an initial code character string sequence; a prediction module for performing the prediction steps of: inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front-end code of the target interface image based on the code character string output by the model; and a determining module for continuing to execute the predicting step based on the initial code character string sequence after the code character string is added last time if the obtained code character string is determined not to be the end mark.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first or second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect or the second aspect.
According to the method and the device for generating the front-end code generation model and the method and the device for generating the front-end code, provided by the embodiment of the application, by introducing the word embedding layer into the language model in the encoder, the association among characters in the code can be better mined, so that the accuracy of text characteristics obtained by using the language model is higher, and the accuracy of the front-end code generated by the trained front-end code generation model is further higher.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating a front-end code generation model according to the present application;
FIG. 3 is an exemplary diagram of generating merged features according to a method of generating a front-end code generation model of the present application;
FIG. 4 is a schematic diagram of SE-Net according to a method of generation of a front-end code generation model of the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method of generating a front-end code generation model according to the present application;
FIG. 6 is an exemplary flow chart for generating sample front-end code in accordance with the present application;
FIG. 7 is a flow diagram for one embodiment of a front-end code generation method according to the present application;
FIG. 8 is a schematic block diagram of one embodiment of a front-end code generation model generation apparatus according to the present application;
FIG. 9 is a schematic block diagram of one embodiment of a front-end code generation apparatus according to the present application;
FIG. 10 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which the generation method of the front-end code generation model of the embodiments of the present application may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. The terminal device 101 may have various communication client applications installed thereon, such as a web browser application, a search-type application, a shopping-type application, an instant messaging tool, and the like.
The terminal device 101 may be various electronic devices including, but not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
The server 103 may be a server that provides various services, such as a background server that provides support for a user interface displayed on the terminal device 101. The background server may obtain a front-end code generation model by training using the training sample set, or receive an interface image uploaded by the terminal device 101 to generate a front-end code.
It should be noted that the method for generating the front-end code generation model provided in the embodiment of the present application may be executed by the terminal device 101 or the server 103, and accordingly, the generating device of the front-end code generation model may be disposed in the terminal device 101 or the server 103.
It should be understood that the number of data servers, networks, and host servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of generating a front-end code generation model according to the present application is shown. The method comprises the following steps:
step 201, a training sample set is obtained.
In this embodiment, an executing subject (e.g., a terminal device or a server shown in fig. 1) of the generation method of the front-end code generation model may obtain the training sample set from a local place or a remote place. Wherein each training sample in the set of training samples may include a sample interface image and a sample front end code. The sample interface image may be an image presented to the user that includes various elements (e.g., buttons, tables, grids, etc.). The sample interface image can be drawn manually, or crawled from a network by using a crawler technology, or automatically generated according to a set rule. The sample front-end code corresponding to the sample interface image can be manually written or automatically generated by using a set rule.
The sample front-end code can be run to generate a corresponding Graphical User Interface (GUI), and the graphical user interface is consistent with the corresponding interface image. The sample front-end code may be written in a variety of programming languages, such as a DSL (Domain-Specific Language), HTML (HyperText Markup Language), and so on. A DSL is typically used to define parameterization rules and states to generate the structure of a program; DSL-based models can create different syntax rules for common code statements (e.g., control flow, comments, and brackets). Compared with general-purpose programming languages, a DSL typically has a smaller syntax, which makes it more efficient for code generation tasks, so it is typically adopted for front-end code generation.
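As an illustration of why a small-vocabulary DSL is convenient for code generation, the following Python sketch expands a flat DSL token sequence into HTML. The token names and HTML templates are assumptions made for illustration only and are not the grammar defined by this application.

# Hypothetical DSL-to-HTML expansion, for illustration only; the tokens and
# templates below are assumed, not the application's actual DSL grammar.
DSL_TO_HTML = {
    "navbar": '<nav class="navbar"></nav>',
    "btn-green": '<button class="btn btn-success">OK</button>',
    "btn-red": '<button class="btn btn-danger">Cancel</button>',
}

def render(tokens):
    # expand a flat list of DSL tokens into HTML, one element per token
    return "\n".join(DSL_TO_HTML[t] for t in tokens)

print(render(["navbar", "btn-green", "btn-red"]))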
Step 202, an initial model is obtained.
In this embodiment, the execution agent may obtain the initial model locally or remotely. Wherein the initial model comprises an encoder and a decoder, the encoder comprising a visual model and a language model. The visual model is used for determining the image characteristics of the input interface image, the language model is used for determining the text characteristics of the input code character string sequence, and the decoder is used for decoding the image characteristics and the text characteristics to obtain the front-end code.
The visual model may include a Convolutional Neural Network (CNN) comprising convolutional layers, pooling layers, fully connected layers, and the like. The language model may include various sequence models, such as an RNN (Recurrent Neural Network), a GRU (Gated Recurrent Unit), or an LSTM (Long Short-Term Memory network).
And step 203, inputting the sample interface image into the visual model to obtain image characteristics.
In this embodiment, the execution subject may input the sample interface image into the visual model to obtain the image feature. In general, an image feature may be a vector of a particular dimension (e.g., 1024) that characterizes various features in the image (e.g., the shape, color, texture, etc. of the image).
Step 204, determining a code character string sequence corresponding to the input sample interface image, inputting the code character string sequence into a word embedding layer of a language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence.
In this embodiment, the execution subject may first determine a code character string sequence corresponding to the input sample interface image. The code character string sequence may be a sequence formed by part of the characters included in the sample front-end code corresponding to the input sample interface image. For example, the length of the code character string sequence may be set to 48. As an example, the sample front-end code is: (' ', ' ', …, <START>, str<2>, str<3>, …, <END>), where the start marker <START> is preceded by 47 padding spaces. At the first iteration, the input code character string sequence is: (' ', ' ', …, <START>). Each subsequent iteration slides the window back by one string.
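The sliding-window construction described above can be sketched in Python as follows. The window length of 48 and the 47 leading padding spaces follow the example above; the code token names are placeholders.

# Minimal sketch of the 48-token sliding window; token names are placeholders.
WINDOW = 48
code = ["<START>", "str<2>", "str<3>", "<END>"]   # sample front-end code tokens
padded = [" "] * (WINDOW - 1) + code              # 47 padding spaces before <START>

samples = []
for i in range(len(code) - 1):
    context = padded[i : i + WINDOW]              # current input sequence (48 tokens)
    target = code[i + 1]                          # next code string to predict
    samples.append((context, target))

print(samples[0][0][-1], "->", samples[0][1])     # <START> -> str<2>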
Then, the execution body may input the code string sequence into the word embedding layer of the language model to obtain word vectors.
The word embedding layer is used to convert the characters in the character string sequence into word vectors of a specific dimension, and the word vectors can be used to represent the syntactic structure of the character string sequence. For example, the word embedding layer may include a Word2Vec model; unsupervised training yields a parameter matrix of dimension m × n, where m is the size of the dictionary and n is the dimension of the word vector (e.g., 50). Each character is mapped to its corresponding id in the parameter matrix (i.e., the data in the id-th row) to obtain a 1 × 50 word vector.
This disclosure uses word embedding rather than a One-Hot representation. Compared with one-hot encoding, word embedding avoids the problems that one-hot word representations are overly independent of one another and cannot learn the syntactic structure.
Finally, the execution body can input the word vector into a feature extraction layer of the language model to obtain the text feature of the code character string sequence.
The feature extraction layer may include at least one layer, for example two GRU layers with 128 units each. The resulting text features may characterize the meaning of the string sequence.
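A minimal PyTorch sketch of this language-model branch is given below: a word embedding layer followed by two GRU layers. The 50-dimensional embedding and 128 GRU units follow the examples above; the vocabulary size and batch layout are assumptions.

import torch
import torch.nn as nn

class LanguageModel(nn.Module):
    # Word embedding layer followed by a two-layer GRU feature extractor.
    # vocab_size is an assumption; embed_dim=50 and hidden=128 follow the text.
    def __init__(self, vocab_size=90, embed_dim=50, hidden=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # m x n parameter matrix
        self.gru = nn.GRU(embed_dim, hidden, num_layers=2, batch_first=True)

    def forward(self, token_ids):                 # (batch, 48) integer ids
        vectors = self.embedding(token_ids)       # (batch, 48, 50) word vectors
        text_features, _ = self.gru(vectors)      # (batch, 48, 128) text features
        return text_features

ids = torch.randint(0, 90, (1, 48))               # one 48-token sequence
print(LanguageModel()(ids).shape)                 # torch.Size([1, 48, 128])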
Step 205, using the image features and the text features as input of a decoder, using the sample front-end codes corresponding to the input sample interface images as expected output, training an initial model, and obtaining a front-end code generation model.
In this embodiment, the execution subject may use the image feature and the text feature as input of the decoder, use the sample front end code corresponding to the input sample interface image as expected output, train the initial model, and obtain the front end code generation model.
In particular, the decoder is configured to predict the next string after the current string sequence. At each training iteration, feature data representing the next character is output and compared with the actual character string in the sample front-end code (i.e., the expected output); the parameters of the initial model are iteratively optimized by back-propagation and gradient descent, and training ends when a preset end condition is reached (e.g., the loss value of the loss function converges, or the number of training iterations reaches a preset number), yielding the front-end code generation model.
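A schematic PyTorch training step consistent with this description is sketched below. The encoder and decoder are assumed stand-ins for the models described above, and the cross-entropy loss and optimizer interface are assumptions beyond the back-propagation and gradient descent named in the text.

import torch.nn as nn

# `encoder(image, context_ids)` and `decoder(features)` are assumed placeholders
# for the visual/language models and the decoder sketched elsewhere.
def train_step(encoder, decoder, optimizer, image, context_ids, next_id):
    criterion = nn.CrossEntropyLoss()
    features = encoder(image, context_ids)     # image features + text features
    logits = decoder(features)                 # (batch, vocab_size) scores for next string
    loss = criterion(logits, next_id)          # compare with the expected output
    optimizer.zero_grad()
    loss.backward()                            # back-propagation
    optimizer.step()                           # gradient descent update
    return loss.item()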
In some optional implementations of this embodiment, when the image feature and the text feature are taken as input of the decoder, the following steps may be performed:
firstly, merging the image features and the text features to obtain merged features.
Specifically, as shown in fig. 3, the number 128 indicates the dimension of the feature of each character in the character string sequence (i.e., the Text Feature), and the number 1024 indicates the dimension of the Image Feature; the image feature is repeated multiple times so that its number matches the length of the character string sequence. After concatenation (Concatenate), the merged feature (Concatenated Feature) is obtained.
The merged features are then used as input to the decoder.
The implementation mode can enable the decoder to better predict the code characters by combining the text and the image by combining the image characteristics and the text characteristics and inputting the combined image characteristics into the decoder, thereby improving the accuracy of generating the front-end code.
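A minimal sketch of the concatenation in fig. 3 follows: the 1024-dimensional image feature is tiled along the sequence axis and concatenated with the 128-dimensional per-character text features. Tensor shapes are taken from the examples above; batch size 1 is an assumption.

import torch

seq_len = 48
image_feature = torch.randn(1, 1024)                    # from the visual model
text_features = torch.randn(1, seq_len, 128)            # from the language model

# repeat the image feature once per character, then concatenate on the
# feature dimension: (1, 48, 1024) + (1, 48, 128) -> (1, 48, 1152)
tiled = image_feature.unsqueeze(1).expand(-1, seq_len, -1)
merged = torch.cat([tiled, text_features], dim=-1)
print(merged.shape)                                      # torch.Size([1, 48, 1152])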
In some optional implementations of this embodiment, the decoder performs decoding using an attention mechanism to obtain a code string included in the front-end code.
Among them, the attention mechanism is widely used in the field of image description. The attention mechanism can break through the bottleneck caused by the fixed semantic vector on one hand, and can enable a decoder to have different weights for different inputs in the decoding process on the other hand. Specifically, the attention mechanism can be realized by the following formulas (1) to (4):
H_en = [P, Q]    (1)
Score(H_en, y_t) = Dot(H_en, y_t)    (2)
att_t = Softmax(W_a · Score(H_en, y_t))    (3)
o_t = Softmax([att_t · y_t, y_t])    (4)
For each time step t, the processed interface image and character string sequence are encoded to obtain an image feature P and a text feature Q respectively, and P and Q are merged to obtain the merged feature H_en; y_t is the output obtained from the decoder at the current time step. The dot product of H_en and y_t is used to measure their similarity, which serves as the basis for the attention weights. The final result is obtained with reference to y_t and the weighted context, through a Softmax layer.
By introducing the attention mechanism, this implementation allows the text features to express the complete information of the input character string sequence and reduces the possibility that information is omitted or diluted. In addition, because the length of the text features is fixed, the longer the input sequence, the weaker their expressive power; through weighting, the attention mechanism lets specific character segments in the sequence represent the semantics in a targeted way, improving the accuracy with which the text features express the semantics.
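One reading of formulas (1)-(4) is the dot-product attention sketched below: the scores are dot products between the decoder output y_t and each position of H_en, and the softmax output combines the weighted context with y_t. The shapes, the square W_a, and the final output projection are assumptions for illustration.

import torch
import torch.nn.functional as F

seq_len, dim, vocab = 48, 1152, 90                # shapes assumed for illustration
H_en = torch.randn(seq_len, dim)                  # merged feature [P, Q], eq. (1)
y_t = torch.randn(dim)                            # decoder output at time step t
W_a = torch.randn(seq_len, seq_len)               # learned weight, assumed square
W_out = torch.randn(vocab, 2 * dim)               # assumed output projection

score = H_en @ y_t                                # eq. (2): dot-product similarity
att_t = F.softmax(W_a @ score, dim=0)             # eq. (3): attention weights
context = att_t @ H_en                            # weighted context over H_en
o_t = F.softmax(W_out @ torch.cat([context, y_t]), dim=0)   # eq. (4), one reading
print(o_t.shape)                                  # torch.Size([90])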
In some optional implementations of the present embodiment, the visual model determines the image features using a convolutional neural network comprising SE-Net.
Generally, the convolution operation of a CNN can be understood as aggregating, over a local receptive field, information along both the spatial and the feature-channel (channel-wise) dimensions. Some work adds more information to the convolutional network from the spatial perspective, for example embedding multi-scale information in the Inception structure. SE-Net can be understood as adding extra information to the convolutional network from the feature-channel perspective to improve network performance.
FIG. 4 is a schematic diagram of SE-Net, where X is the input with C' feature channels, and F_tr denotes a series of convolution operations that produce an output U with C feature channels. The subsequent steps are the core of SE-Net, namely the Squeeze operation and the Excitation operation.
The input U is first subjected to a Squeeze operation.
z_c = F_sq(u_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j)    (5)
As shown in formula (5), z_c is the c-th element of the resulting descriptor and u_c denotes the c-th channel of the input U. F_sq compresses the H × W spatial features on each of the C feature channels of the input U (global pooling is used here) into a real number carrying global receptive-field spatial information, yielding a descriptor of size 1 × C.
After that, an Excitation operation is performed on the obtained descriptor z.
s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 · δ(W_1 · z))    (6)
As shown in equation (6), σ denotes the Sigmoid function and δ denotes the ReLU activation function; W_1 is a fully connected layer parameter to be learned whose output size is set to C/r, and W_2 is a fully connected layer parameter to be learned whose output size is set to C. F_ex is similar to the gating mechanism used in GRU and LSTM. First, z is scaled down to 1/r of its original size by a fully connected layer, passed through a ReLU activation function, then scaled back to the original size by another fully connected layer, and a weight s normalized to 0-1 is obtained through a Sigmoid. The scaling down and restoring is needed to better fit the non-linear relationship between the feature channels; r is typically 16.
Finally, a rescaling (Reweight) operation is performed on the input U according to the obtained weight s to obtain the final output o.
o = F_scale(s, U) = s · U    (7)
As shown in formula (7), F_scale is equivalent to weighting the spatial features of each feature channel by the normalized weight s.
By using SE-Net, this implementation adds extra information to the convolutional network from the feature-channel perspective, improving network performance and thus the accuracy of the front-end code generated by the model.
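A compact PyTorch sketch of the Squeeze-and-Excitation block in formulas (5)-(7) is given below; r = 16 follows the description, while the channel count and the use of Linear layers are implementation assumptions.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Squeeze: global average pooling (eq. 5); Excitation: two fully connected
    # layers with ReLU and Sigmoid (eq. 6); Reweight: channel-wise scaling (eq. 7).
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # W_1: C -> C/r
        self.fc2 = nn.Linear(channels // r, channels)   # W_2: C/r -> C

    def forward(self, u):                               # u: (batch, C, H, W)
        z = u.mean(dim=(2, 3))                          # squeeze over H x W
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))   # excitation weights
        return u * s.unsqueeze(-1).unsqueeze(-1)        # rescale each channel

print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])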
In some optional implementations of this embodiment, the convolutional neural network replaces the pooling layer with convolutional layers having convolutional kernels greater than or equal to 2 x 2 and step sizes greater than 1.
In general, a CNN may include max-pooling layers. The max-pooling layer is used to reduce the output dimensionality, which reduces the data size, speeds up computation, and prevents over-fitting to some extent. A pooling layer also provides a degree of translation invariance (invariance), but for the same reason it easily loses information such as color, position, and global feature information. In this disclosure, such color and position information is far more important than translation invariance. Therefore, the present disclosure replaces the max-pooling layer with a convolutional layer whose stride is greater than 1 (e.g., a stride of 2) and whose convolution kernel is greater than or equal to 2 × 2 (e.g., equal to 2 × 2); this reduces the data size while better retaining useful information, helping to improve the expressiveness of the generated image features.
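A one-line sketch of this replacement is shown below: a learnable 2 × 2, stride-2 convolution takes the place of nn.MaxPool2d(kernel_size=2, stride=2). The channel count of 64 is an assumption for illustration.

import torch
import torch.nn as nn

downsample = nn.Conv2d(64, 64, kernel_size=2, stride=2)   # replaces max pooling
print(downsample(torch.randn(1, 64, 32, 32)).shape)       # torch.Size([1, 64, 16, 16])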
In some optional implementations of this embodiment, the language model and decoder employ a GRU network for encoding and decoding.
A GRU does not use a cell state; it passes memory directly through the hidden state, so it has fewer trainable parameters, allowing the model to iterate faster and improving the efficiency of model training.
In some optional implementations of this embodiment, the decoder employs a Beam Search algorithm to predict the code string included in the front-end code.
Specifically, after the trained model is obtained, during inference an interface image and a DSL file containing only the <START> tag are first input into the model; one inference pass yields the probability distribution of the next character, from which one character (denoted o_1) is selected and written to the output file. Then (<START>, o_1) and the interface image are input into the model for the next round of inference, and this proceeds step by step until the obtained tag is <END> or a set termination condition (e.g., maximum length) is reached. How to choose which character to output from the predicted string probability distribution is worth considering; the goal is to find the best output sequence, as shown in the following equation:
y* = argmax_y Π_{t=1}^{T} P(y_t | X, y_1, …, y_{t-1})
by adopting the Beam Search, at least two characters (the width is more than or equal to 2) are selected from the probability distribution of the characters every time, so that more prediction possibilities can be reserved when the code characters are predicted, the situation that the integral prediction deviates from the actual situation due to the fact that one character string is predicted by mistake is avoided, and the accuracy of predicting the character string in the front-end code is improved.
According to the method provided by the embodiment of the application, the word embedding layer is introduced into the language model in the encoder, so that the association among characters in the code can be better mined, the accuracy of the text features obtained by using the language model is higher, and the accuracy of the front-end code generated by the front-end code generation model obtained by training is further higher.
With further reference to FIG. 5, a flow diagram of yet another embodiment of a method of generating a front-end code generation model in accordance with the present application is shown. On the basis of the corresponding embodiment of fig. 2, step 201 may include the following steps:
in step 2011, at least two element data sets are generated based on the preset element types and the element styles included in each element type.
In this embodiment, the execution body may generate at least two element data sets based on a preset element type and an element style included in each element type. Wherein each of the at least two element data sets corresponds to an element category. The element data in the element data set can represent an element of a specific style, and an element can be generated on the interface according to the element data.
By way of example, the element categories may include buttons, grids, input boxes, and so on, and the element styles for a button may include red, green, orange, blue, and default styles.
Specifically, various element data may be generated using a generation function written in advance. The generation function may automatically generate element data according to the number and style of the input element data. It should be noted that the implementation of such a generation function (for example, exhaustively generating all permutations) is a known technique and is not described again here.
Step 2012, extracting element data from at least two element data sets and combining them to obtain a sample front end code.
In this embodiment, the execution subject may extract element data from at least two element data sets and combine the extracted element data to obtain a sample front-end code. Wherein the combined sample front end code may generate an interface comprising various elements.
And 2013, generating a sample interface image based on the sample front end code.
In this embodiment, the execution subject may generate a sample interface image based on the sample front end code. The sample front-end code can run in a set running environment to generate a corresponding user interface, and the user interface can be subjected to screenshot to obtain a sample interface image.
By way of example, FIG. 6 shows an exemplary flow for generating sample front-end code, where nvbar is a navigation bar, row is a grid, and form is a table; max is the preset total amount of element data, num is the count of element data, and rand is a randomly generated value in the (0, 1) interval. FIG. 6 illustrates the generation process of the training sample set: first, each element data set is generated using the method described in step 2011. Then one nvbar is selected at random, and a form or row is randomly selected and combined with it to obtain one piece of DSL data; the DSL is converted into HTML, a GUI is generated using a simulated-browser technique, and a screenshot of the GUI is taken, yielding a data pair (i.e., a sample interface image and the corresponding sample front-end code DSL).
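A schematic version of the sampling loop in fig. 6 is sketched below. The element pools, the 0.5 branching probability, and the rendering comment are assumptions used only to illustrate the flow; they are not the application's actual element data.

import random

# Assumed element pools standing in for the element data sets of step 2011.
NAVBARS = ["nvbar-default", "nvbar-dark"]
ROWS = ["row btn-green btn-red", "row btn-blue"]
FORMS = ["form input submit"]

def make_sample(max_elements=5):
    tokens = [random.choice(NAVBARS)]             # one randomly chosen navigation bar
    for _ in range(max_elements):
        if random.random() < 0.5:                 # rand drawn from the (0, 1) interval
            tokens.append(random.choice(ROWS))
        else:
            tokens.append(random.choice(FORMS))
    dsl = " ".join(tokens)                        # sample front-end code (DSL)
    # In the application, the DSL is converted to HTML, rendered in a simulated
    # browser, and a screenshot of the GUI becomes the sample interface image.
    return dsl

print(make_sample())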
By generating the training sample set itself, the method provided by the embodiment corresponding to fig. 5 avoids the problems of data crawled with crawler technology, such as unclear element resolution, overly complex structure, and the presence of dynamic interfaces. The generated training sample set is therefore more reasonable and easier to train on, which improves the efficiency of model training and the code generation effect of the trained model.
With continued reference to FIG. 7, a flow 700 of one embodiment of a front-end code generation method according to the present application is shown. The method comprises the following steps:
step 701, acquiring a target interface image and an initial code character string sequence.
In this embodiment, the execution subject of the front-end code generation method (e.g., the terminal device or the server shown in fig. 1) may acquire the target interface image and the initial code string sequence locally or remotely. The target interface image is the image from which the front-end code is to be generated; it may be a manually drawn image or a screenshot. The initial code string sequence may be generated automatically; for example, it may be: (' ', …, <START>), where <START> is the start marker of the code, preceded by 47 spaces, and each later iteration slides the window back by one string.
After step 701, the following prediction steps (including steps 702-704) are performed.
Step 702, inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence.
In this embodiment, the execution body may input the target interface image and the initial code string sequence into a pre-trained front-end code generation model, so as to obtain a code string located after the code string sequence. The front-end code generation model is obtained by pre-training by using the method described in the embodiment corresponding to fig. 2. Regarding the structure of the front-end code generation model, reference may be made to the embodiment corresponding to fig. 2, which is not described herein again.
By way of example, assume that the initial code string sequence is (' ', …, <START>); the model then outputs the string str<2>.
Step 703, adding the obtained code string into the initial code string sequence.
In this embodiment, the execution body may append the obtained code string to the initial code string sequence. Continuing with the above example, str<2> is appended after <START>, and the new code string sequence is (' ', …, <START>, str<2>), in which <START> is now preceded by 46 spaces, so that each time a new code string is added the sequence still holds 48 strings.
Step 704, determine if the resulting code string is an end marker.
In this embodiment, the executing entity may determine whether the obtained code string is an end mark, if so, execute step 705, otherwise execute step 706. As an example, when the code string output by the model is an END flag < END >, it indicates that the prediction is ended.
Step 705, generating a front end code of the target interface image based on the code character string output by the model.
In this embodiment, the execution body may generate the front-end code of the target interface image based on the code strings output by the model. As an example, each time a code string is obtained it is appended to a preset DSL file, and the final DSL file is: (<START>, str<2>, str<3>, …, <END>).
And step 706, continuing to execute the prediction step based on the initial code character string sequence after the code character string is added for the last time.
In this embodiment, the execution body may continue to execute the prediction step, i.e., re-execute step 702, based on the initial code string sequence after the code string was added last time.
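A schematic Python loop for steps 701-706 is sketched below. The 48-token window follows the description; `model` is an assumed callable standing in for the trained front-end code generation model, and the maximum length is an assumed termination condition.

def generate_front_end_code(model, image, window=48, max_len=200):
    # `model(image, context)` is an assumed stand-in returning one code string.
    context = [" "] * (window - 1) + ["<START>"]   # initial code string sequence
    output = ["<START>"]
    for _ in range(max_len):                       # assumed termination condition
        next_token = model(image, context)         # step 702: predict next string
        output.append(next_token)                  # collect strings for step 705
        context = context[1:] + [next_token]       # step 703: slide window back one
        if next_token == "<END>":                  # step 704: end marker reached
            break
    return output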
In the method provided by the embodiment corresponding to fig. 7 of the present application, the interface image can be converted into the front-end code by using the front-end code generation model described in the embodiment corresponding to fig. 2, so that the structure of the front-end code generation model is utilized, the efficiency and accuracy of generating the front-end code are improved, and the reduction of the labor cost caused by manually adjusting the code is facilitated.
With further reference to fig. 8, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating a front-end code generation model, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 8, the front-end code generation model generation device 800 of the present embodiment includes: a first obtaining module 801, configured to obtain a training sample set, where a training sample includes a sample interface image and a sample front-end code; a second obtaining module 802, configured to obtain an initial model, where the initial model includes an encoder and a decoder, and the encoder includes a visual model and a language model; a first generation module 803, configured to input the sample interface image into the visual model, so as to obtain an image feature; the second generating module 804 is configured to determine a code character string sequence corresponding to the input sample interface image, input the code character string sequence into a word embedding layer of the language model to obtain a word vector, and input the word vector into a feature extraction layer of the language model to obtain a text feature of the code character string sequence; the training module 805 is configured to use the image features and the text features as input of a decoder, use a sample front-end code corresponding to the input sample interface image as an expected output, train an initial model, and obtain a front-end code generation model.
In this embodiment, the first obtaining module 801 may obtain the training sample set locally or remotely. Wherein each training sample in the set of training samples may include a sample interface image and a sample front end code. The sample interface image may be an image presented to the user that includes various elements (e.g., buttons, tables, grids, etc.). The sample interface image can be drawn manually, or crawled from a network by using a crawler technology, or automatically generated according to a set rule. The sample front-end code corresponding to the sample interface image can be manually written or automatically generated by using a set rule.
The sample front-end code can be run to generate a corresponding graphical user interface, and the graphical user interface is consistent with the corresponding interface image. The sample front-end code may be written in a variety of programming languages, such as a DSL, HTML, and so on.
In this embodiment, the second obtaining module 802 obtains the initial model locally or remotely. Wherein the initial model comprises an encoder and a decoder, the encoder comprising a visual model and a language model. The visual model is used for determining the image characteristics of the input interface image, the language model is used for determining the text characteristics of the input code character string sequence, and the decoder is used for decoding the image characteristics and the text characteristics to obtain the front-end code.
The visual model may include a convolutional neural network including convolutional layers, pooling layers, fully-connected layers, and the like. The language model may include various sequence models, such as RNN, GRU, LSTM, and the like.
In this embodiment, the first generation module 803 may input the sample interface image into the visual model, so as to obtain the image features. In general, an image feature may be a vector of a particular dimension (e.g., 1024) that characterizes various features in the image (e.g., the shape, color, texture, etc. of the image).
In this embodiment, the second generating module 804 may first determine a code character string sequence corresponding to the input sample interface image. The code character string sequence may be a sequence formed by part of the characters included in the sample front-end code corresponding to the input sample interface image. For example, the length of the code character string sequence may be set to 48. As an example, the sample front-end code is: (' ', ' ', …, <START>, str<2>, str<3>, …, <END>), where the start marker <START> is preceded by 47 padding spaces. At the first iteration, the input code character string sequence is: (' ', ' ', …, <START>). Each subsequent iteration slides the window back by one string.
Then, the second generating module 804 may input the code string sequence into the word embedding layer of the language model to obtain word vectors.
The word embedding layer is used to convert the characters in the character string sequence into word vectors of a specific dimension, and the word vectors can be used to represent the syntactic structure of the character string sequence. For example, the word embedding layer may include a Word2Vec model; unsupervised training yields a parameter matrix of dimension m × n, where m is the size of the dictionary and n is the dimension of the word vector (e.g., 50). Each character is mapped to its corresponding id in the parameter matrix (i.e., the data in the id-th row) to obtain a 1 × 50 word vector.
This disclosure uses word embedding rather than a One-Hot representation. Compared with one-hot encoding, word embedding avoids the problems that one-hot word representations are overly independent of one another and cannot learn the syntactic structure.
Finally, the second generating module 804 may input the word vector into a feature extraction layer of the language model to obtain a text feature of the code string sequence.
The feature extraction layer may include at least one layer, for example, two layers of GRU models including 128 units. The resulting text features may characterize the meaning of the string sequence.
In this embodiment, the training module 805 may train the initial model to obtain a front-end code generation model by using the image features and the text features as input of the decoder and using the sample front-end code corresponding to the input sample interface image as expected output.
In particular, the decoder is configured to predict the next string after the current string sequence. At each training iteration, feature data representing the next character is output and compared with the actual character string in the sample front-end code (i.e., the expected output); the parameters of the initial model are iteratively optimized by back-propagation and gradient descent, and training ends when a preset end condition is reached (e.g., the loss value of the loss function converges, or the number of training iterations reaches a preset number), yielding the front-end code generation model.
In some optional implementations of this embodiment, the training module may include: a merging unit (not shown in the figure) for merging the image feature and the text feature to obtain a merged feature; an input unit (not shown in the figure) for taking the merging feature as an input of the decoder.
In some optional implementations of this embodiment, the decoder performs decoding using an attention mechanism to obtain a code string included in the front-end code.
In some optional implementations of the present embodiment, the visual model determines the image features using a convolutional neural network comprising SE-Net.
In some optional implementations of this embodiment, the convolutional neural network replaces the pooling layer with convolutional layers having convolutional kernels greater than or equal to 2 x 2 and step sizes greater than 1.
In some optional implementations of this embodiment, the language model and decoder employ a GRU network for encoding and decoding.
In some optional implementations of this embodiment, the decoder employs a Beam Search algorithm to predict the code string included in the front-end code.
In some optional implementation manners of this embodiment, the first obtaining module may include: a third generating unit (not shown in the figure) configured to generate at least two element data sets based on a preset element type and an element style included in each element type, where each element data set of the at least two element data sets corresponds to one element type; a combining unit (not shown in the figure) for extracting element data from at least two element data sets and combining the extracted element data to obtain a sample front-end code; and a fourth generating unit (not shown in the figure) for generating a sample interface image based on the sample front end code.
According to the device provided by the embodiment of the application, the word embedding layer is introduced into the language model in the encoder, so that the association among characters in the code can be better mined, the accuracy of the text features obtained by using the language model is higher, and the accuracy of the front-end code generated by the front-end code generation model obtained by training is further higher.
With further reference to fig. 9, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of a front-end code generating apparatus, which corresponds to the method embodiment shown in fig. 7, and which can be applied in various electronic devices.
As shown in fig. 9, the front-end code generating apparatus 900 of the present embodiment includes: a third obtaining module 901, configured to obtain a target interface image and an initial code string sequence; a prediction module 902 for performing the following prediction steps: inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front-end code of the target interface image based on the code character string output by the model; a determining module 903 for continuing to perform the predicting step based on the initial code string sequence after the code string was added last time if it is determined that the obtained code string is not the end marker.
In this embodiment, the third obtaining module 901 may acquire the target interface image and the initial code string sequence locally or remotely. The target interface image is the image from which the front-end code is to be generated; it may be a manually drawn image or a screenshot. The initial code string sequence may be generated automatically; for example, it may be: (' ', …, <START>), where <START> is the start marker of the code, preceded by 47 spaces, and each later iteration slides the window back by one string.
In this embodiment, the prediction module 902 may input the target interface image and the initial code string sequence into a pre-trained front-end code generation model to obtain a code string located after the code string sequence. The front-end code generation model is obtained by pre-training by using the method described in the embodiment corresponding to fig. 2. Regarding the structure of the front-end code generation model, reference may be made to the embodiment corresponding to fig. 2, which is not described herein again.
By way of example, assume that the initial code string sequence is (' ', …, <START>); the model then outputs the string str<2>.
The resulting code string is then appended after the initial code string sequence. Continuing with the above example, str<2> is appended after <START>, and the new code string sequence is (' ', …, <START>, str<2>), in which <START> is now preceded by 46 spaces, so that each time a new code string is added the sequence still holds 48 strings.
Then, it can be determined whether the obtained code string is an end marker; if so, the front-end code of the target interface image is generated based on the code strings output by the model. As an example, each time a code string is obtained it is appended to a preset DSL file, and the final DSL file is: (<START>, str<2>, str<3>, …, <END>).
In this embodiment, the determining module 903 may continue to perform the predicting step based on the initial code string sequence after the code string was last added.
The device provided by the above embodiment of the present application can convert the interface image into the front end code by using the front end code generation model described in fig. 2 corresponding to the embodiment, thereby utilizing the structure of the front end code generation model, improving the efficiency and accuracy of generating the front end code, and contributing to reducing the labor cost caused by manually adjusting the code.
Referring now to FIG. 10, shown is a block diagram of a computer system 1000 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU)1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a liquid crystal display (LCD), a speaker, and the like; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is installed into the storage section 1008 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The above-described functions defined in the method of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 1001.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a first acquisition module, a second acquisition module, a first generation module, a second generation module, and a training module. The names of these modules do not, in some cases, constitute a limitation of the modules themselves; for example, the first acquisition module may also be described as a "module for acquiring a set of training samples".
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a training sample set, wherein the training sample comprises a sample interface image and a sample front end code; acquiring an initial model, wherein the initial model comprises an encoder and a decoder, and the encoder comprises a visual model and a language model; inputting the sample interface image into a visual model to obtain image characteristics; determining a code character string sequence corresponding to an input sample interface image, inputting the code character string sequence into a word embedding layer of a language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence; and taking the image characteristics and the text characteristics as the input of a decoder, taking the sample front-end codes corresponding to the input sample interface images as expected output, training an initial model, and obtaining a front-end code generation model.
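For illustration only, a minimal sketch of how such an encoder-decoder could be assembled in PyTorch is given below; the layer sizes, the small convolutional stack standing in for the visual model, and the use of GRU units for the language model and decoder are assumptions consistent with the description, not a definitive implementation of the disclosed model:

```python
import torch
import torch.nn as nn

class FrontEndCodeModel(nn.Module):
    """Sketch of the encoder (visual model + language model) and decoder."""

    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        # Visual model: a small CNN mapping the interface image to image features.
        self.visual_model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden_dim),
        )
        # Language model: word embedding layer followed by a GRU feature extractor.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.language_model = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Decoder: a GRU over the merged image/text features, then a vocabulary projection.
        self.decoder = nn.GRU(hidden_dim * 2, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        image_feat = self.visual_model(image)                            # (B, H)
        text_feat, _ = self.language_model(self.embedding(token_ids))    # (B, T, H)
        # Merge: repeat the image feature along the sequence and concatenate.
        image_feat = image_feat.unsqueeze(1).expand(-1, text_feat.size(1), -1)
        merged = torch.cat([image_feat, text_feat], dim=-1)              # (B, T, 2H)
        decoded, _ = self.decoder(merged)
        return self.output(decoded[:, -1])           # logits for the next code string
```

Training would then minimize a loss (for example cross-entropy) between these logits and the next code string of the sample front-end code, consistent with taking the sample front-end code as the expected output described above.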
Further, the one or more programs, when executed by the electronic device, may further cause the electronic device to: acquiring a target interface image and an initial code character string sequence; the following prediction steps are performed: inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front-end code of the target interface image based on the code character string output by the model; if it is determined that the resulting code string is not an end marker, the predicting step is continued based on the initial code string sequence after the code string was most recently added.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (13)

1. A method for generating a front-end code generation model, the method comprising:
acquiring a training sample set, wherein the training sample comprises a sample interface image and a sample front end code;
obtaining an initial model, wherein the initial model comprises an encoder and a decoder, and the encoder comprises a visual model and a language model;
inputting the sample interface image into the visual model to obtain image characteristics;
determining a code character string sequence corresponding to an input sample interface image, inputting the code character string sequence into a word embedding layer of the language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence;
and taking the image features and the text features as the input of the decoder, taking a sample front end code corresponding to the input sample interface image as expected output, training the initial model, and obtaining a front end code generation model.
2. The method of claim 1, wherein the taking the image feature and the text feature as inputs to the decoder comprises:
merging the image features and the text features to obtain merged features;
the merged feature is taken as input to the decoder.
3. The method of claim 2, wherein the decoder decodes using an attention mechanism to obtain the code string included in the front-end code.
4. The method of claim 1, wherein the visual model determines image features using a convolutional neural network comprising SE-Net.
5. The method of claim 4, wherein the convolutional neural network replaces the pooling layer with a convolutional layer having a convolutional kernel of 2 x 2 or greater and a step size of greater than 1.
6. The method of claim 1, wherein the language model and the decoder are encoded and decoded using a GRU network.
7. The method of claim 1, wherein the decoder employs a Beam Search algorithm to predict the code string included in the front-end code.
8. The method of claim 1, wherein obtaining the set of training samples comprises:
generating at least two element data sets based on a preset element type and an element style included by each element type, wherein each element data set of the at least two element data sets corresponds to one element type;
extracting element data from the at least two element data sets and combining the element data to obtain a sample front end code;
generating a sample interface image based on the sample front end code.
9. A front-end code generation method, the method comprising:
acquiring a target interface image and an initial code character string sequence;
the following prediction steps are performed:
inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front end code of the target interface image based on the code character string output by the model;
if it is determined that the resulting code string is not an end marker, continuing the predicting step based on the initial code string sequence after the code string was most recently added.
10. An apparatus for generating a front-end code generation model, the apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a training sample set, and the training sample comprises a sample interface image and a sample front end code;
a second obtaining module, configured to obtain an initial model, where the initial model includes an encoder and a decoder, and the encoder includes a visual model and a language model;
the first generation module is used for inputting the sample interface image into the visual model to obtain image characteristics;
the second generation module is used for determining a code character string sequence corresponding to an input sample interface image, inputting the code character string sequence into a word embedding layer of the language model to obtain a word vector, and inputting the word vector into a feature extraction layer of the language model to obtain text features of the code character string sequence;
and the training module is used for taking the image characteristics and the text characteristics as the input of the decoder, taking the sample front-end codes corresponding to the input sample interface images as expected output, training the initial model and obtaining a front-end code generation model.
11. A front-end code generation apparatus, the apparatus comprising:
the third acquisition module is used for acquiring a target interface image and an initial code character string sequence;
a prediction module for performing the prediction steps of:
inputting the target interface image and the initial code character string sequence into a pre-trained front-end code generation model to obtain a code character string behind the code character string sequence; adding the obtained code character string into the initial code character string sequence; determining whether the obtained code string is an end mark; if yes, generating a front end code of the target interface image based on the code character string output by the model;
and a determining module for continuing to execute the predicting step based on the initial code string sequence after the code string is added last time if it is determined that the obtained code string is not the end mark.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN202010544833.XA 2020-06-15 2020-06-15 Generation method and device of front-end code generation model Pending CN111562915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010544833.XA CN111562915A (en) 2020-06-15 2020-06-15 Generation method and device of front-end code generation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010544833.XA CN111562915A (en) 2020-06-15 2020-06-15 Generation method and device of front-end code generation model

Publications (1)

Publication Number Publication Date
CN111562915A true CN111562915A (en) 2020-08-21

Family

ID=72071234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010544833.XA Pending CN111562915A (en) 2020-06-15 2020-06-15 Generation method and device of front-end code generation model

Country Status (1)

Country Link
CN (1) CN111562915A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050033576A1 (en) * 2003-08-08 2005-02-10 International Business Machines Corporation Task specific code generation for speech recognition decoding
CN108304183A (en) * 2018-02-26 2018-07-20 北京车和家信息技术有限公司 A kind of user interface creating method, device and electronic equipment
CN110310305A (en) * 2019-05-28 2019-10-08 东南大学 A kind of method for tracking target and device based on BSSD detection and Kalman filtering
CN110705299A (en) * 2019-09-26 2020-01-17 北京明略软件系统有限公司 Entity and relation combined extraction method, model, electronic equipment and storage medium
CN110968299A (en) * 2019-11-20 2020-04-07 北京工业大学 Front-end engineering code generation method based on hand-drawn webpage image
CN111190600A (en) * 2019-12-31 2020-05-22 中国银行股份有限公司 GRU attention model-based method and system for automatically generating front-end code

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘树春 (Liu Shuchun): 《深度实践OCR 基于深度学习的文字识别》 [Deep Practice of OCR: Text Recognition Based on Deep Learning], 31 May 2020, 机械工业出版社 (China Machine Press) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667232A (en) * 2020-12-21 2021-04-16 深圳前海微众银行股份有限公司 Interface code generation method, device, equipment and storage medium
CN113110843A (en) * 2021-03-05 2021-07-13 卓尔智联(武汉)研究院有限公司 Contract generation model training method, contract generation method and electronic equipment
CN113110843B (en) * 2021-03-05 2023-04-11 卓尔智联(武汉)研究院有限公司 Contract generation model training method, contract generation method and electronic equipment
CN113867724A (en) * 2021-09-15 2021-12-31 中国船舶重工集团公司第七0九研究所 Method and system for automatically generating GUI (graphical user interface) code, server and medium
WO2023065638A1 (en) * 2021-10-22 2023-04-27 平安科技(深圳)有限公司 Data retrieval method and apparatus, and electronic device and storage medium
CN116700684A (en) * 2022-09-30 2023-09-05 荣耀终端有限公司 Code generation method and terminal
CN116700684B (en) * 2022-09-30 2024-04-12 荣耀终端有限公司 Code generation method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200821)