CN110968299A - Front-end engineering code generation method based on hand-drawn webpage image - Google Patents

Front-end engineering code generation method based on hand-drawn webpage image

Info

Publication number
CN110968299A
Authority
CN
China
Prior art keywords
model
vector
training
hand
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911138941.0A
Other languages
Chinese (zh)
Inventor
陈子豪
贺国平
杨佳现
刘哲
刘宇豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201911138941.0A
Publication of CN110968299A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/33 Intelligent editors
    • G06F8/35 Creation or generation of source code, model driven
    • G06F8/38 Creation or generation of source code for implementing user interfaces
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a front-end engineering code generation method based on hand-drawn webpage images. Its main content is as follows: (1) design a domain-specific language (DSL) that generalizes webpage layout; (2) preprocess ordinary webpages so that they approximate a hand-drawn style, which eases training; (3) build a visual model, a language model and a decoder to form the Draft2Code algorithm, train it on hand-drawn webpage images and the corresponding DSL code to obtain a recognition model, and use that model to recognize and convert hand-drawn webpages; (4) process the generated DSL code and convert it into front-end code conforming to Vue or React framework syntax. The invention reduces the UI design steps, and the generated code files can be developed further by front-end engineers, shortening code construction time and improving development efficiency.

Description

Front-end engineering code generation method based on hand-drawn webpage image
Technical Field
The invention belongs to the field of software efficiency engineering and relates to an automatic code generation algorithm, based on deep neural networks, for converting hand-drawn webpage sketch images into front-end engineering code.
Background
A typical design workflow has three phases: first, a product manager draws a sketch according to user requirements; then, a designer produces a user interface (UI) prototype from the sketch; finally, a front-end engineer implements the page from the design prototype. This lengthy development process often becomes a bottleneck for enterprises, especially for applications that do not demand polished front-end design, such as the business-facing (B-end) applications used inside many companies, which need no over-refined design work from designers; simplifying this step would greatly improve working efficiency. If a deep neural network could extract features directly from the product manager's sketch and convert them into HTML (HyperText Markup Language) code, the designer's refinement step could be skipped and the front-end engineer's initial coding time saved, thereby simplifying the design workflow.
Existing research on and implementations of automatic webpage generation have difficulty meeting current front-end development needs, for two main reasons. First, their input consists of screenshots or design drawings of existing webpages; when a page has no other website as a blueprint, or in small and medium B-end projects in which no designer participates, such input cannot be provided, and practical engineering requires hand-drawn webpage images as input. Second, their output is a native HTML page, whereas real development requires framework-based, componentized code.
Based on deep learning components such as CNNs and GRUs, and following the architecture of FIG. 1, the invention designs Draft2Code, an algorithm that automatically converts a hand-drawn webpage sketch into HTML page code from which a front-end engineer can directly start logic development. The algorithm is also combined with the most popular front-end frameworks, providing users with custom components and componentized output, which enhances the reusability of UI styles.
Disclosure of Invention
The invention designs Draft2Code, an automatic front-end code generation algorithm based on hand-drawn webpage images and deep learning. The invention involves the following two points:
(1) a model training algorithm designed according to the architecture of FIG. 1: a convolutional neural network extracts features from the input hand-drawn webpage image, a gated recurrent unit encodes the input source code, and the two outputs are combined and trained to obtain a model;
(2) a code generation algorithm designed according to the architecture of FIG. 2: a convolutional neural network extracts features from the input hand-drawn webpage image, the trained model generates DSL code, and front-end engineering code is generated from the mapping relations, realizing the code generation algorithm.
(1) Design of Draft2Code, an automatic front-end code generation algorithm based on hand-drawn webpage images
The algorithm architecture comprises three main parts:
1) a CNN-based computer vision model, which extracts features from the input page design image;
2) a GRU-based language model, which encodes the source-code feature sequence;
3) a GRU-based decoder, which combines the image features obtained in 1) with the encoding obtained in 2) and predicts the next feature in the sequence.
According to the overall system architecture, at each time t the image I is first input into the CNN-based visual model, which outputs an encoded vector p; at the same time the feature vector x_t is input into the first, GRU-based language model, which outputs an encoded vector q_t. The visual encoding vector p and the language encoding vector q_t are concatenated into the vector r_t and input into a GRU-based decoder, which decodes the representations previously learned by the visual model and the language model into the feature vector y_t; y_t is then assigned to x_{t+1} for use at the next time step. The system is summarized by the following formulas:
p = CNN(I)
q_t = GRU(x_t)
r_t = [p, q_t]
y_t = softmax(GRU'(r_t))
x_{t+1} = y_t
(2) establishing a visual model
CNNs are widely applied in the vision field. A CNN is a kind of multilayer perceptron whose local connectivity and weight sharing give it strong generalization ability, making it well suited to recognizing and detecting objects and graphics. In the visual model design, the invention adopts unsupervised CNN learning to convert the input image into a learned fixed-length vector as output.
The input image is resized to a 256×256 color picture; all activation functions are ReLU (rectified linear unit), and only valid convolutions are performed, without processing the boundary. The number of convolution kernels is set to 16 in the first layer, 32 in the second, 64 in the third, and 128 in the last. After the four convolution layers, the vector p is output for subsequent processing.
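For illustration only, a minimal Keras sketch of such a visual model is given below (the environment described later pins Keras 2.1.2). The four valid-padding convolution layers with 16/32/64/128 kernels and ReLU activations follow the description above; the 3×3 kernel size, the interleaved max pooling and the 1024-unit fully connected encoding are assumptions not fixed by the text.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def build_visual_model(input_shape=(256, 256, 3), encoding_dim=1024):
    # Four valid convolutions (no boundary padding) with 16/32/64/128
    # kernels, all ReLU-activated; pooling and encoding size are assumed.
    model = Sequential()
    model.add(Conv2D(16, (3, 3), padding='valid', activation='relu',
                     input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), padding='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), padding='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(128, (3, 3), padding='valid', activation='relu'))
    model.add(Flatten())
    model.add(Dense(encoding_dim, activation='relu'))
    model.add(Dropout(0.3))  # dropout rate 0.3 after the fully connected layer
    return model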
(3) Establishing language model
From the viewpoint of the target output, whether generating a .vue file for the Vue framework or a .jsx file for the React framework, the page layout must be described with HTML-based syntax. The invention instead adopts a lighter-weight DSL (domain-specific language) for training. Unlike HTML, which must accommodate a wide variety of layout requirements, such as the block-level element tag <div> and the inline element tag <span>, the DSL used here targets fixed layout forms only, so only the following 7 layout element types are set to represent block-level elements in different application scenarios:
row - layout elements placed in the horizontal direction
stack - layout elements placed in the vertical direction
header - a layout element placed at the top of the page, occupying the full page width and typically containing the page title or navigation links
footer - a layout element placed at the bottom of the page, occupying the full page width and typically containing navigation links or contact information
single - a card layout element that nearly fills an entire row of the page
double - a card layout element of which two can be placed in one row
quad - a card layout element of which four can be placed in one row
Similarly, for elements such as block-level text elements and buttons, 8 further types are set as a complement:
btn-active - a button in the activated state
btn-inactive - a button in the inactive state
btn-success - a success/confirm button
btn-warning - a warning operation button
btn-danger - a dangerous operation button
big-title - a main title
small-title - a subtitle
text - body text
This greatly reduces the language's complexity, achieving the goal of shrinking the search space and the vocabulary; an illustrative snippet follows.
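The description does not fix the DSL's concrete surface syntax; assuming a brace-delimited nesting purely for illustration, a page description in this DSL might look like the following, where every token is one of the 15 layout or element types listed above:

header { btn-active btn-inactive btn-inactive }
row { single { big-title text btn-success } }
row { double { small-title text } double { small-title text btn-warning } }
footer { text }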
In most markup languages, each element is represented by an opening tag, and when child elements are nested inside it, the parent element's closing tag must be placed so that a parser can understand the hierarchy. A parent element often contains multiple children, which raises the question of where to place the closing tag and amounts to handling long-range dependencies. Conventional RNN architectures suffer from exploding and vanishing gradients and cannot process long sequences, so the GRU, a variant of the LSTM consisting here of 2 stacked GRU layers of 128 cells each, is introduced to model the relationships in such long-sequence data.
The new memory (candidate state) h̃_t in the GRU learns memory information through recurrent connections, using the vector x_t input at time t and the output vector q_{t-1} produced at the previous step. Both are multiplied by weights and activated by the sigmoid function σ, i.e. the two gate values, the update gate weight z_t and the reset gate weight r_t, are obtained according to the formulas

z_t = σ(W_z · [q_{t-1}, x_t])
r_t = σ(W_r · [q_{t-1}, x_t])

After q_{t-1} is multiplied elementwise by the reset gate r_t, the final new memory is obtained from the formula

h̃_t = tanh(W · [r_t ⊙ q_{t-1}, x_t])

where ⊙ denotes elementwise multiplication, W_z is the weight matrix from the hidden layer to the update gate, W_r the weight matrix from the hidden layer to the reset gate, and W the weight matrix from the hidden layer to the candidate state h̃_t. Finally, the output vector q_t of the current step is obtained according to the formula

q_t = (1 - z_t) ⊙ q_{t-1} + z_t ⊙ h̃_t

The vector q_t output at each time t is passed on for subsequent processing.
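A minimal Keras sketch of this language model follows, assuming DSL tokens are fed as one-hot vectors over the sliding-window context of length 48 used below; the vocabulary size (15 DSL tokens plus <START> and <END>) is likewise an assumption:

from keras.models import Sequential
from keras.layers import GRU

def build_language_model(context_length=48, vocab_size=17):
    # Two stacked GRU layers of 128 cells each encode the token sequence.
    model = Sequential()
    model.add(GRU(128, return_sequences=True,
                  input_shape=(context_length, vocab_size)))
    model.add(GRU(128, return_sequences=True))
    return model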
(4) Building a decoder
The visual encoding vector p and the language encoding vector q_t at time t are concatenated into the vector r_t and input into a second, GRU-based decoder model, which consists of 2 stacked GRU layers of 512 cells each and decodes the representations learned by the visual model and the language model.
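Putting the three parts together, a hedged Keras sketch of the whole Draft2Code model is shown below; tiling the image encoding p over the 48 context steps with RepeatVector is an assumption about how the concatenation r_t = [p, q_t] is realized:

from keras.models import Model
from keras.layers import Input, RepeatVector, concatenate, GRU, Dense

def build_draft2code(visual_model, language_model,
                     context_length=48, vocab_size=17):
    # r_t = [p, q_t]: tile p across the context steps, concatenate with
    # q_t, decode with two 512-cell GRU layers, and predict the next
    # DSL token y_t with a softmax layer.
    image_input = Input(shape=(256, 256, 3))
    context_input = Input(shape=(context_length, vocab_size))
    p = RepeatVector(context_length)(visual_model(image_input))
    q = language_model(context_input)
    r = concatenate([p, q])
    decoded = GRU(512, return_sequences=True)(r)
    decoded = GRU(512, return_sequences=False)(decoded)
    y = Dense(vocab_size, activation='softmax')(decoded)
    return Model(inputs=[image_input, context_input], outputs=y)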
A training stage:
The model is trained with supervised learning. To better balance long-term dependency against computational cost, each DSL input file used for training is segmented with a sliding window of length 48 to obtain feature sequences. At each time step, the hand-drawn image I and the corresponding feature sequence x_t are input, and the predicted next feature y_t is output. The model uses the cross-entropy cost as its loss function, which compares the model's predicted next feature y_t with the actual next feature x_{t+1}.
As the context for training is updated through the sliding window at each instant, the same input image I will be reused for samples associated with the same page style; finally, two kinds of marks are set: < START > and < END >, which are used as place-occupying marks of DSL file prefix and suffix, respectively, so as to replace the specific content of the prefix and suffix in the subsequent compiling process;
Training minimizes the multi-class log loss by back-propagating the partial derivatives of the loss with respect to the network weights; the loss is computed as follows:

L(I, X) = -Σ_t x_{t+1} · log(y_t)
In the above formula, x_{t+1} is the input vector at the next time step and y_t is the output vector at the current time step. Training uses the RMSProp (Root Mean Square Propagation) algorithm, with the learning rate set to 1×10^-4 and the output gradient clipped to the range [-1.0, 1.0] to counter numerical instability. To prevent overfitting, dropout regularization is introduced: a dropout rate of 0.3 is set after the fully connected layer of the visual model, i.e. 30% of that layer's neurons are randomly dropped in each training pass, so the model depends less on particular local features and generalizes better. Training uses mini-batches of 64 image-sequence pairs. After training, a relation model between the image data and the associated feature sequences expressed in DSL code is established. A configuration sketch follows.
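With model as returned by the build_draft2code sketch above, the training configuration maps onto Keras roughly as follows; the epoch count and the images, contexts and next_tokens arrays are hypothetical placeholders for the prepared data set:

from keras.optimizers import RMSprop

optimizer = RMSprop(lr=1e-4, clipvalue=1.0)  # learning rate 1e-4, gradients clipped to [-1.0, 1.0]
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.fit([images, contexts], next_tokens,
          batch_size=64,  # mini-batches of 64 image-sequence pairs
          epochs=10)      # epoch count is not specified in the text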
Testing stage:
To generate DSL code, a hand-drawn webpage image I and a context sequence X of 48 features are input into the Draft2Code model described above. x_t ... x_{T-1} are initialized to null vectors and the last element of the sequence, x_T, is set to <START>. The predicted feature vector y_t is then used to update the next context feature sequence: x_t ... x_{T-1} are set to x_{t+1} ... x_T respectively, and x_T is set to y_t. This process repeats until the model generates the token <END>. Finally, the generated DSL feature sequence is compiled into the required target language with conventional compilation methods. The whole process is shown in FIG. 2; a decoding sketch follows.
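In the greedy-decoding sketch below, the token_index dictionary mapping each DSL token (including <START> and <END>) to its one-hot position, and the max_len safety cap, are hypothetical:

import numpy as np

def generate_dsl(model, image, token_index, context_length=48, max_len=150):
    # Context starts as null vectors with <START> in the last slot and is
    # shifted left one position per step as each predicted token is
    # appended, until <END> is produced.
    index_token = {v: k for k, v in token_index.items()}
    context = np.zeros((1, context_length, len(token_index)))
    context[0, -1, token_index['<START>']] = 1
    tokens = []
    for _ in range(max_len):
        probs = model.predict([image[np.newaxis], context])[0]
        next_id = int(np.argmax(probs))
        if index_token[next_id] == '<END>':
            break
        tokens.append(index_token[next_id])
        context[0, :-1] = context[0, 1:]  # x_t ... x_{T-1} := x_{t+1} ... x_T
        context[0, -1] = 0
        context[0, -1, next_id] = 1       # x_T := y_t
    return tokens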
To meet the code conventions of different frameworks, the invention writes several mapping relations between the DSL and front-end code and stores them in json-format files, whose content replaces the generated DSL so as to meet development needs. For all replacement content, three replacement marks are proposed. Braces ({}) replace child-element content: if a <div> element contains a <button> element, the button element's content replaces the braces inside the div mapping. Square brackets ([]) replace randomly generated text: characters in the hand-drawn sketch are not parsed, so some text is generated at random and placed into text-centric elements such as titles. Parentheses (()) replace attributes inside the element tag, mainly event bindings: for example, for click-event binding in Vue, the @click attribute content in the tag is replaced; the invention counts the buttons, generates empty placeholder handler functions in order, and binds them to each button's attribute in turn. A sketch of this replacement step follows.
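In the sketch below, the (token, children) tree produced by a DSL parser (not shown), the vue-mapping.json file name and the handler naming scheme are all assumptions for illustration:

import json
import random
import string
from itertools import count

def random_text(n=8):
    # Stand-in for the randomly generated text that fills the [] mark.
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(n))

def render(node, mapping, btn_ids):
    # node is a (token, children) pair; mapping maps each DSL token to a
    # framework snippet containing the {} / [] / () replacement marks.
    token, children = node
    snippet = mapping[token]
    if '{}' in snippet:  # child-element content
        snippet = snippet.replace(
            '{}', ''.join(render(c, mapping, btn_ids) for c in children))
    if '[]' in snippet:  # randomly generated text
        snippet = snippet.replace('[]', random_text())
    if '()' in snippet:  # event binding, e.g. @click in Vue
        snippet = snippet.replace('()', '@click="handler%d"' % next(btn_ids))
    return snippet

# Usage sketch:
# with open('vue-mapping.json') as f:
#     mapping = json.load(f)
# html = render(dsl_tree, mapping, count(1))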
By converting a hand-drawn webpage design image into engineered front-end code, the invention meets the need for automatic webpage code generation when a front-end project has no webpage screenshot or professional design drawing to reference; at the same time it outputs componentized, standard code conforming to the Vue and React frameworks, which facilitates secondary development by engineers and markedly improves working efficiency. The Draft2Code model system reaches a BLEU (bilingual evaluation understudy) score of 7.7 and can accurately generate the webpage corresponding to a hand-drawn sketch.
The core technologies of this patent include:
(1) a DSL with simplified HTML-like syntax, introduced to optimize the training process and reduce, to a certain extent, the amount of data required for training;
(2) Draft2Code, an automatic front-end code generation algorithm for hand-drawn webpage images, which accurately outputs code conforming to front-end engineering practice and facilitates further development.
Drawings
FIG. 1 shows the architecture of the model training algorithm of the invention.
FIG. 2 shows the architecture of the code generation algorithm of the invention.
Detailed Description
(1) The invention can run on a computer with a Windows/Linux/MacOS operating system. The software environment requires Keras 2.1.2, tensorflow 1.4.0, nltk 3.2.5, opencv-python 3.3.0.10, numpy 1.13.1, h5py 2.7.1, matplotlib 2.0.2, Pillow 4.3.0, tqdm 4.17.1, and scipy 1.0.0.
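Pinned as a requirements file, this environment corresponds to:

Keras==2.1.2
tensorflow==1.4.0
nltk==3.2.5
opencv-python==3.3.0.10
numpy==1.13.1
h5py==2.7.1
matplotlib==2.0.2
Pillow==4.3.0
tqdm==4.17.1
scipy==1.0.0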
(2) The training data set must make the webpage images look hand-drawn, so the training set undergoes two steps. First, the CSS style sheets of the webpages in the data set are modified: the border shape and thickness of every element are changed, originally rectangular buttons and <div> elements are given rounded corners with appropriate shadow effects added, and fonts are changed to handwriting-like fonts; image-level effects such as slanted lines, shifts and rotations are then added to simulate the variability of real hand drawing. Second, the information of each image is modified, for example with grayscale conversion and contour detection. A preprocessing sketch follows.
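A sketch of the image-side half of this preprocessing is shown below; the edge thresholds and the rotation and shift ranges are illustrative assumptions:

import cv2
import numpy as np

def handdrawn_style(img):
    # Grayscale conversion and contour-like edge detection as described,
    # plus a small random rotation and shift to mimic real hand drawing.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    rows, cols = edges.shape
    angle = np.random.uniform(-3, 3)           # slight rotation
    dx, dy = np.random.randint(-5, 6, size=2)  # slight shift
    m = cv2.getRotationMatrix2D((cols / 2.0, rows / 2.0), angle, 1.0)
    m[:, 2] += (dx, dy)
    return cv2.warpAffine(edges, m, (cols, rows), borderValue=255)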
(3) The invention trains the Draft2Code system with 1700 pairs of hand-drawn webpage images and GUI data, splitting the data set into a training set and a validation set at a ratio of 8:2.
(4) The input of the front-end engineering code generation algorithm is a hand-drawn webpage image; the output is a Vue/React-based front-end code file matching the corresponding webpage layout.

Claims (3)

1. A front-end engineering code generation method based on hand-drawn webpage images, characterized in that an automatic front-end code generation algorithm based on hand-drawn webpage images, Draft2Code, is designed, comprising the following parts:
(1) establishing a visual model
in the visual model design, unsupervised CNN learning is adopted to convert the input image into a learned fixed-length vector as output;
the input image is resized to a 256×256 color picture, all activation functions are ReLU, and only valid convolutions are performed without processing the boundary; the number of convolution kernels is set to 16 in the first layer, 32 in the second, 64 in the third, and 128 in the last;
after the four convolution layers, the vector p is output for subsequent processing;
(2) establishing language model
the GRU, a variant of the LSTM, is introduced to model the relationships in long-sequence data; the model consists of 2 stacked GRU layers of 128 cells each;
the new memory h-in the GRU is learned memory information by using recursive concatenation, using the vector x input at time ttAnd the output vector q generated in the previous stept-1Activated by sigmoid function by weight multiplication, i.e. according to the formula zt=σ(Wz·[qt-1,xt]) And rt=σ(Wr·[qt-1,xt]) Get two gate values, update the gate weight ztAnd reset gate weight rt(ii) a Sigma is an activation function sigmoid; at qt-1After multiplication with the weight and resetting gate rtMultiplication followed by a formula
Figure FDA0002280348440000011
Obtain the final new memory
Figure FDA0002280348440000012
Wherein WzFor weight matrix from hidden layer to refresh gate, WrA weight matrix from the hidden layer to the reset gate, and W from the hidden layer to the candidate state
Figure FDA0002280348440000013
The weight matrix of (2); finally according to the formula
Figure FDA0002280348440000014
Figure FDA0002280348440000015
Obtaining the output vector q of the current stept
the vector q_t output at each time t is passed on for subsequent processing;
(3) building a decoder
the visual encoding vector p and the language encoding vector q_t at time t are concatenated into the vector r_t and input into a second, GRU-based decoder model, which consists of 2 stacked GRU layers of 512 cells each and decodes the representations learned by the visual model and the language model;
(4) training phase
the model is trained with supervised learning; to better balance long-term dependency against computational cost, each DSL input file used for training is segmented with a sliding window of length 48 to obtain feature sequences; at each time step, the hand-drawn image I and the corresponding feature sequence x_t are input and the predicted next feature y_t is output; the model uses the cross-entropy cost function as its loss function, which compares the predicted next feature y_t with the actual next feature x_{t+1};
because the training context is updated through the sliding window at every step, the same input image I is reused for all samples associated with the same page style; finally, two special tokens are set, <START> and <END>, which serve as placeholder marks for the DSL file prefix and suffix respectively, to be replaced with the concrete prefix and suffix content during later compilation;
training minimizes the multi-class log loss by back-propagating the partial derivatives of the loss with respect to the network weights; the loss is computed as follows:

L(I, X) = -Σ_t x_{t+1} · log(y_t)
in the above formula, x_{t+1} is the input vector at the next time step and y_t is the output vector at the current time step;
training uses the RMSProp (Root Mean Square Propagation) algorithm, with the learning rate set to 1×10^-4 and the output gradient clipped to the range [-1.0, 1.0] to counter numerical instability; to prevent overfitting, dropout regularization is introduced, with a dropout rate of 0.3 set after the fully connected layer of the visual model, i.e. 30% of that layer's neurons are randomly dropped in each training pass, so that the model depends less on particular local features and generalizes better;
training uses mini-batches of 64 image-sequence pairs;
after training, a relation model between the image data and the associated feature sequences represented by DSL code is established;
(5) testing phase
to generate DSL code, a hand-drawn webpage image I and a context sequence X of 48 features are input into the Draft2Code model; x_t ... x_{T-1} are initialized to null vectors and the last feature vector of the sequence, x_T, is set to <START>; the predicted feature vector y_t is then used to update the next feature sequence, that is, x_t ... x_{T-1} are set to x_{t+1} ... x_T respectively and x_T is set to y_t; this process repeats until the model generates the token <END>; finally, the generated DSL feature sequence is compiled into the required target language with conventional compilation methods.
2. The method of claim 1, wherein: several mapping relations between the DSL and front-end code are written and stored in json-format files, whose content replaces the generated DSL; for all replacement content, three replacement marks are proposed: braces ({}) replace child-element content, so that if a <div> element contains a <button> element, the button element's content replaces the braces inside the div mapping; square brackets ([]) replace randomly generated text, since characters in the hand-drawn sketch are not parsed, so some text is generated at random and placed into text-centric elements; parentheses (()) replace attributes within the element tag.
3. The method of claim 1, wherein:
the overall layout elements have only the following 7 types, set to represent block-level elements in different application scenarios:
row - layout elements placed in the horizontal direction
stack - layout elements placed in the vertical direction
header - a layout element placed at the top of the page, occupying the full page width and typically containing the page title or navigation links
footer - a layout element placed at the bottom of the page, occupying the full page width and typically containing navigation links or contact information
single - a card layout element that nearly fills an entire row of the page
double - a card layout element of which two can be placed in one row
quad - a card layout element of which four can be placed in one row
for block-level text elements and button elements, 8 types are set as a complement:
btn-active - a button in the activated state
btn-inactive - a button in the inactive state
btn-success - a success/confirm button
btn-warning - a warning operation button
btn-danger - a dangerous operation button
big-title - a main title
small-title - a subtitle
text - body text.
CN201911138941.0A 2019-11-20 2019-11-20 Front-end engineering code generation method based on hand-drawn webpage image Pending CN110968299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911138941.0A CN110968299A (en) 2019-11-20 2019-11-20 Front-end engineering code generation method based on hand-drawn webpage image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911138941.0A CN110968299A (en) 2019-11-20 2019-11-20 Front-end engineering code generation method based on hand-drawn webpage image

Publications (1)

Publication Number Publication Date
CN110968299A true CN110968299A (en) 2020-04-07

Family

ID=70030936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911138941.0A Pending CN110968299A (en) 2019-11-20 2019-11-20 Front-end engineering code generation method based on hand-drawn webpage image

Country Status (1)

Country Link
CN (1) CN110968299A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074245A (en) * 2016-03-24 2018-12-21 微软技术许可有限责任公司 Vision diagram is converted into code
US20190250891A1 (en) * 2018-02-12 2019-08-15 Oracle International Corporation Automated code generation
CN109683871A (en) * 2018-11-01 2019-04-26 中山大学 Code automatically generating device and method based on image object detection method
CN109656554A (en) * 2018-11-27 2019-04-19 天津字节跳动科技有限公司 User interface creating method and device
CN110018827A (en) * 2019-04-03 2019-07-16 拉扎斯网络科技(上海)有限公司 Method, apparatus, electronic equipment and the readable storage medium storing program for executing of automatic code generating
CN110046226A (en) * 2019-04-17 2019-07-23 桂林电子科技大学 A kind of Image Description Methods based on distribution term vector CNN-RNN network
US20190317739A1 (en) * 2019-06-27 2019-10-17 Intel Corporation Methods and apparatus to automatically generate code for graphical user interfaces

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU19: "CNN (Convolutional Neural Network)", Retrieved from the Internet <URL:https://www.jianshu.com/p/9c011388e382> *
微笑SUN: "GRU Networks in Deep Learning" (深度学习之GRU网络), Retrieved from the Internet <URL:https://www.cnblogs.com/jiangxinyang/p/9376021.html> *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11221833B1 (en) * 2020-03-18 2022-01-11 Amazon Technologies, Inc. Automated object detection for user interface generation
CN111562915A (en) * 2020-06-15 2020-08-21 厦门大学 Generation method and device of front-end code generation model
CN112036147A (en) * 2020-08-28 2020-12-04 平安科技(深圳)有限公司 Method and device for converting picture into webpage, computer equipment and storage medium
CN112036147B (en) * 2020-08-28 2024-01-30 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for converting picture into webpage
CN112379878A (en) * 2020-10-21 2021-02-19 扬州大学 Multi-label learning-based UI element Web code generation method
CN112379878B (en) * 2020-10-21 2024-03-26 扬州大学 Web code generation method of UI element based on multi-label learning
CN113010741A (en) * 2021-03-30 2021-06-22 南京大学 Sketch-based mobile application model query method
CN113010741B (en) * 2021-03-30 2023-09-05 南京大学 Mobile application model query method based on sketch
CN113485706A (en) * 2021-07-05 2021-10-08 中国工商银行股份有限公司 DSL-based multi-technology stack front-end code generation method and device
CN113867724A (en) * 2021-09-15 2021-12-31 中国船舶重工集团公司第七0九研究所 Method and system for automatically generating GUI (graphical user interface) code, server and medium
CN113986251A (en) * 2021-12-29 2022-01-28 中奥智能工业研究院(南京)有限公司 GUI prototype graph code conversion method based on convolution and cyclic neural network
CN115981615A (en) * 2023-03-20 2023-04-18 中科航迈数控软件(深圳)有限公司 G code generation method fusing language model and knowledge graph and related equipment

Similar Documents

Publication Publication Date Title
CN110968299A (en) Front-end engineering code generation method based on hand-drawn webpage image
Yuan et al. A relation-specific attention network for joint entity and relation extraction
CN111753081B (en) System and method for text classification based on deep SKIP-GRAM network
Deng et al. Image-to-markup generation with coarse-to-fine attention
CN110765966B (en) One-stage automatic recognition and translation method for handwritten characters
US20180060727A1 (en) Recurrent encoder and decoder
CN109657226B (en) Multi-linkage attention reading understanding model, system and method
AU2018217281A1 (en) Using deep learning techniques to determine the contextual reading order in a form document
CN107992211B (en) CNN-LSTM-based Chinese character misspelling and mispronounced character correction method
CN111767718B (en) Chinese grammar error correction method based on weakened grammar error feature representation
CN110807335B (en) Translation method, device, equipment and storage medium based on machine learning
CN113535953B (en) Meta learning-based few-sample classification method
Mabona et al. Neural generative rhetorical structure parsing
CN111046661A (en) Reading understanding method based on graph convolution network
CN113449801B (en) Image character behavior description generation method based on multi-level image context coding and decoding
KR102143745B1 (en) Method and system for error correction of korean using vector based on syllable
CN115221846A (en) Data processing method and related equipment
CN112560456A (en) Generation type abstract generation method and system based on improved neural network
JP2022128441A (en) Augmenting textual data for sentence classification using weakly-supervised multi-reward reinforcement learning
CN113806747B (en) Trojan horse picture detection method and system and computer readable storage medium
CN116227603A (en) Event reasoning task processing method, device and medium
CN113592045B (en) Model adaptive text recognition method and system from printed form to handwritten form
CN111274793A (en) Text processing method and device and computing equipment
Zhang et al. A rapid combined model for automatic generating web UI codes
CN114529908A (en) Offline handwritten chemical reaction type image recognition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination