CN109656554B - User interface generation method and device - Google Patents

User interface generation method and device

Info

Publication number
CN109656554B
Authority
CN
China
Prior art keywords
training, interface, neural network, mark, hand
Prior art date
Legal status
Active
Application number
CN201811425428.5A
Other languages
Chinese (zh)
Other versions
CN109656554A (en)
Inventor
Yu Liang (俞亮)
Current Assignee
Tianjin ByteDance Technology Co Ltd
Original Assignee
Tianjin ByteDance Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianjin ByteDance Technology Co Ltd filed Critical Tianjin ByteDance Technology Co Ltd
Priority to CN201811425428.5A priority Critical patent/CN109656554B/en
Publication of CN109656554A publication Critical patent/CN109656554A/en
Application granted granted Critical
Publication of CN109656554B publication Critical patent/CN109656554B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks

Abstract

The disclosure provides a user interface generation method and a user interface generation apparatus. The method includes: acquiring image feature data of a hand-drawn interface corresponding to a user interface to be generated, and acquiring feature data of a predetermined initial token; determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and generating the user interface from the token sequence. A token sequence that can be converted into interface code is thus obtained from the image feature data of the hand-drawn interface, the feature data of the initial token, and the pre-trained first recurrent neural network model. A developer can obtain interface code automatically, and thereby complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.

Description

User interface generation method and device
Technical Field
The present disclosure relates to the field of mobile internet technologies, and in particular, to a method and an apparatus for generating a user interface.
Background
With the development of the mobile internet, applications have grown explosively. The user interface of an application serves as the medium through which the system and the user interact and exchange information, and its usability, flexibility, complexity, and reliability directly influence how sticky the application is for users.
In the related art, a developer generates the user interface of an application by writing a large amount of interface code. When the user interface needs to be updated, developers must likewise spend considerable time and effort working out how to modify the interface code. Generating a user interface by writing interface code therefore demands excellent programming skill, consumes a great deal of a developer's time and energy, and hurts application development efficiency. How to help developers easily develop application user interfaces has thus become an urgent technical problem.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
A first object of the present disclosure is to provide a user interface generation method.
A second object of the present disclosure is to provide a user interface generating apparatus.
A third object of the present disclosure is to provide an electronic device.
A fourth object of the present disclosure is to provide a computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present disclosure provides a user interface generating method, including:
acquiring image feature data of a hand-drawn interface corresponding to a user interface to be generated, and acquiring feature data of a predetermined initial token;
determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model;
and generating the user interface according to the token sequence.
The user interface generation method of the embodiment of the disclosure acquires image feature data of a hand-drawn interface corresponding to the user interface to be generated and feature data of a predetermined initial token; determines a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and generates the user interface according to the token sequence. A token sequence that can be converted into interface code is thus obtained, so a developer can obtain interface code automatically, and complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.
To achieve the above object, an embodiment of a second aspect of the present disclosure provides a user interface generating apparatus, including:
an acquisition module, configured to acquire image feature data of a hand-drawn interface corresponding to a user interface to be generated and acquire feature data of a predetermined initial token;
a determining module, configured to determine a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model;
and a generating module, configured to convert the token sequence into interface code and generate the user interface according to the interface code.
The user interface generation apparatus of the embodiment of the disclosure acquires image feature data of a hand-drawn interface corresponding to the user interface to be generated and feature data of a predetermined initial token; determines a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and generates the user interface according to the token sequence. A token sequence that can be converted into interface code is thus obtained, so a developer can obtain interface code automatically, and complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.
To achieve the above object, an embodiment of a third aspect of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the user interface generation method described above.
To achieve the above object, an embodiment of a fourth aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the user interface generation method described above.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a user interface generation method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating a further method for generating a user interface according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating a further method for generating a user interface according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a user interface generating apparatus according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
A user interface generation method and a user interface generation apparatus according to the embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of a user interface generation method according to an embodiment of the present disclosure. The method of this embodiment is executed by a user interface generation apparatus, which may be implemented in hardware and/or software.
As shown in fig. 1, the user interface generating method includes the steps of:
s101, acquiring image characteristic data of a hand-drawn interface corresponding to a user interface to be generated, and acquiring characteristic data of a predetermined initial mark.
In this embodiment, the user interface is the interface displayed to the user when the application runs, and the hand-drawn interface is a sketch of that user interface. The hand-drawn interface may be obtained by photographing a sketch drawn on paper by a developer, or it may be an electronic sketch drawn by the developer with drawing software, but it is not limited thereto.
In one possible implementation, "obtaining image feature data of a hand-drawn interface corresponding to a user interface to be generated" is implemented as follows: receiving the hand-drawn interface corresponding to the user interface to be generated, and performing matrixing processing on it to obtain an image matrix of the hand-drawn interface; and inputting the image matrix into a convolutional neural network model to obtain the image feature data of the hand-drawn interface, where the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
In this embodiment, after the hand-drawn interface is received, it is matrixed to obtain the corresponding image matrix. The image matrix can be understood as the digital image data of the hand-drawn interface: the rows of the matrix correspond to the height of the image (in pixels), the columns correspond to the width of the image (in pixels), the elements correspond to the pixels of the image, and the value of each element is the gray value of the corresponding pixel.
To make the image matrix represent the hand-drawn interface more faithfully, the hand-drawn interface undergoes image preprocessing before matrixing, including but not limited to noise removal and image binarization.
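The disclosure does not prescribe a particular preprocessing pipeline; the following is a minimal Python sketch of the matrixing and preprocessing described above (the function name, the median filter, and the threshold value are illustrative assumptions):

```python
import numpy as np
from PIL import Image, ImageFilter

def sketch_to_matrix(path: str, threshold: int = 128) -> np.ndarray:
    """Load a photographed sketch and return its image matrix: rows are
    the image height, columns the width, elements the pixel gray values."""
    img = Image.open(path).convert("L")            # grayscale
    img = img.filter(ImageFilter.MedianFilter(3))  # simple noise removal
    matrix = np.asarray(img, dtype=np.uint8)
    # Binarization: map each pixel to ink (0) or background (255).
    return np.where(matrix < threshold, 0, 255).astype(np.uint8)
```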
In this embodiment, after the image matrix representing the hand-drawn interface is obtained, it is input into a pre-trained convolutional neural network model, which extracts the image feature data of the hand-drawn interface.
Because of the superiority of the convolutional neural network (CNN) in image processing, this embodiment uses a convolutional neural network model to extract the image feature data of the hand-drawn interface, improving both the accuracy and the efficiency of the extraction.
The convolutional neural network model in this embodiment is obtained by training a convolutional neural network on the training hand-drawn interfaces; for how to train a convolutional neural network, refer to the related art, which is not repeated here. A training hand-drawn interface is a training sample for the convolutional neural network model; it may be obtained by photographing a sketch drawn by a developer on paper, or it may be an electronic sketch drawn with drawing software, but it is not limited thereto. Understandably, the larger the total number of training hand-drawn interfaces, which the developer sets according to the actual situation, the higher the accuracy of the convolutional neural network model.
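The disclosure does not fix a network architecture for the feature extractor; a minimal PyTorch sketch of a CNN that encodes the image matrix into a feature vector might look as follows (all layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class InterfaceCNN(nn.Module):
    """Encodes a hand-drawn interface image into a feature vector."""

    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(64 * 8 * 8, feature_dim)

    def forward(self, image_matrix: torch.Tensor) -> torch.Tensor:
        # image_matrix: (batch, 1, H, W), gray values scaled to [0, 1]
        return self.fc(self.conv(image_matrix).flatten(1))
```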
The token (Token) is briefly introduced here: a token is a term from compiler theory and can be understood as the smallest unit that makes up source code. In the compilation stage, the process in which the lexical analyzer reads the character stream of the source code and outputs tokens is called tokenization. Further description of tokens can be found in the related art and is not repeated here.
Taking interface code for a text label as an example, the whole interface code can be parsed into the following tokens: <PAD>, <START>, <Text>, x, y, width, height, content, </Text>, <END>, <PAD>, where <PAD> represents a blank and serves only as a placeholder, <START> represents the start of the interface code, <Text> represents a text label, x and y are the abscissa and ordinate of the label in the coordinate system of the application's user interface, width and height are the width and height of the label, content represents the text content, </Text> represents the end of the label, and <END> represents the end of the interface code.
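As a rough illustration, lexical analysis of such interface code into a token sequence can be sketched in a few lines of Python (the markup syntax and the regular expression follow the hypothetical text-label example above, not a format prescribed by the disclosure):

```python
import re

# Matches tags like <Text> or </Text>, quoted content, and bare values.
TOKEN_PATTERN = re.compile(r"</?\w+>|\"[^\"]*\"|[\w.]+")

def tokenize(interface_code: str) -> list:
    """Lexical analysis: split interface code into a token sequence."""
    return ["<START>"] + TOKEN_PATTERN.findall(interface_code) + ["<END>"]

tokens = tokenize('<Text> 10 20 100 30 "Hello" </Text>')
# ['<START>', '<Text>', '10', '20', '100', '30', '"Hello"', '</Text>', '<END>']
```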
In this embodiment, the first recurrent neural network model is used to output the token sequence of the hand-drawn interface. To start the output of this token sequence, an initial token needs to be determined in advance. The initial token may be a system default or may be set by the developer, but it is not limited thereto; for example, the initial token is START.
In this embodiment, after the predetermined initial token is obtained, the feature data of the initial token is extracted.
In one possible implementation, "acquiring the feature data of the predetermined initial token" is implemented as: inputting the predetermined initial token into a second recurrent neural network model to obtain the feature data of the initial token.
This embodiment uses a pre-trained second recurrent neural network model to extract the feature data of tokens, improving the precision of the token feature data.
In one possible implementation, training the second recurrent neural network model is implemented as follows: acquiring the interface code corresponding to each training hand-drawn interface, and performing lexical analysis on the corresponding interface code to obtain the training tokens of each training hand-drawn interface; and training a second recurrent neural network with the training tokens to obtain the second recurrent neural network model.
Specifically, the interface code of each training hand-drawn interface is prepared in advance, and lexical analysis of that interface code yields the training tokens of each training hand-drawn interface. The training tokens are the training samples for the second recurrent neural network: each training sample is input into the second recurrent neural network, and its parameters are adjusted to obtain the second recurrent neural network model.
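The architecture of the second recurrent neural network is likewise left open by the disclosure; a minimal PyTorch sketch of a model mapping tokens to feature vectors might be (vocabulary handling and sizes are assumptions):

```python
import torch
import torch.nn as nn

class TokenRNN(nn.Module):
    """Maps tokens (as vocabulary indices) to a feature vector, standing
    in for the 'second recurrent neural network model' described above."""

    def __init__(self, vocab_size: int, feature_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, feature_dim)
        self.gru = nn.GRU(feature_dim, feature_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) indices of the tokens output so far
        _, hidden = self.gru(self.embed(token_ids))
        return hidden[-1]  # (batch, feature_dim): feature of the latest token
```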
S102, determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model.
Because a recurrent neural network (RNN) achieves high accuracy when processing time-series data and is the preferred network for such data, this embodiment uses a pre-trained first recurrent neural network model to output the token sequence of the hand-drawn interface, where the token sequence includes N tokens sorted by output order and N is an integer greater than 1.
In this embodiment, after the image feature data of the hand-drawn interface and the feature data of the initial token have been extracted, the first recurrent neural network model can begin outputting individual tokens, which are sorted by output order to obtain the token sequence.
Taking START as the initial token: the image feature data and the feature data of START are input into the first recurrent neural network model, which outputs the 1st token; the image feature data and the feature data of the 1st token are input, and the model outputs the 2nd token; by analogy, the image feature data and the feature data of the previously output token are input into the first recurrent neural network model to obtain the next token, until the Nth token is obtained. The 1st token, the 2nd token, ..., and the Nth token are then sorted by output order to obtain the token sequence.
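Under the assumption that the three models sketched above are available, this decoding loop can be written out roughly as follows (all names and the two-argument interface of the first model are illustrative, and greedy argmax decoding is one possible choice):

```python
import torch

def generate_token_sequence(image, cnn, token_rnn, first_rnn,
                            stoi, itos, max_len=200):
    """Autoregressive decoding: feed the image features together with
    the features of the previously output tokens into the first RNN,
    which outputs the next token, until the end token appears."""
    image_features = cnn(image)
    tokens = ["<START>"]                       # the 0th (initial) token
    while len(tokens) < max_len:
        ids = torch.tensor([[stoi[t] for t in tokens]])
        prev_features = token_rnn(ids)         # feature data of token i-1
        logits = first_rnn(image_features, prev_features)
        next_token = itos[logits.argmax(dim=-1).item()]
        tokens.append(next_token)
        if next_token == "<END>":              # end token: sequence complete
            break
    return tokens[1:]                          # the token sequence
```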
S103, generating the user interface according to the token sequence.
Specifically, the token sequence can be understood as tokenized interface code that conforms to the lexical rules, and the corresponding interface code can be obtained by parsing the token sequence. After the interface code is obtained, it is compiled in a mobile application development framework to generate the corresponding user interface.
To compile the interface code in the mobile application development framework, the interface code is embedded into a page template provided by the framework, and the framework compiles the page template with the embedded interface code to generate the corresponding user interface.
Taking Flutter as the mobile application development framework: Flutter is Google's mobile UI framework and can quickly build high-quality native user interfaces on iOS and Android.
The Flutter page template used in this embodiment is reproduced in the figures of the original publication.
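As a rough illustration of the embedding step only (the actual template appears in the figures; the template skeleton and placeholder below are assumptions), the generated interface code might be spliced into a page template like this:

```python
# A hypothetical Flutter page-template skeleton held as a Python string;
# {INTERFACE_CODE} marks where the generated interface code is embedded.
PAGE_TEMPLATE = """
import 'package:flutter/material.dart';

void main() => runApp(MaterialApp(home: Scaffold(
  body: {INTERFACE_CODE},
)));
"""

def embed_interface_code(interface_code: str) -> str:
    """Embed generated interface code into the page template, which the
    framework then compiles to produce the user interface."""
    return PAGE_TEMPLATE.replace("{INTERFACE_CODE}", interface_code)
```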
the user interface generation method comprises the steps of obtaining image characteristic data of a hand-drawn interface corresponding to a user interface to be generated and obtaining characteristic data of a predetermined initial mark; determining a marking sequence of the hand-drawn interface according to the image characteristic data, the initially marked characteristic data and a pre-trained first cyclic neural network model; a user interface is generated from the sequence of tokens. Therefore, the marking sequence of the hand-drawn interface which can be converted into the interface code can be obtained according to the image characteristic data of the hand-drawn interface, the initially marked characteristic data and the pre-trained first circulation neural network model, the interface code can be automatically obtained by a developer only by drawing one hand-drawn interface, the development of the user interface is completed, the interface code does not need to be manually written, the developer can be greatly helped to easily develop the user interface of the application program, and the development efficiency of the application program is improved.
Further, referring to fig. 2, on the basis of the embodiment shown in fig. 1, the token sequence includes N tokens sorted by output order, where N is an integer greater than 1, and step S102 is implemented as follows:
and S1021, inputting the image characteristic data and the characteristic data of the (i-1) th mark into a pre-trained first cyclic neural network model aiming at the ith mark, and outputting the ith mark.
Specifically, i is 1, 2, 3 … … N, i is taken from 1 to N, and the 0 th label is the initial label. In the present embodiment, the image feature data and the feature data of the initial marker are input into the first recurrent neural network model, and the 1 st marker is output. Inputting the image characteristic data and the characteristic data of the 1 st mark into the first recurrent neural network model, and outputting the 2 nd mark. And by analogy, the image feature data and the feature data of the (N-1) th mark are input into the first recurrent neural network model, and the Nth mark is output.
In this embodiment, the second recurrent neural network model may be employed to extract the feature data of the i-1 th marker to improve the accuracy of the marked feature data. Specifically, the (i-1) th mark is input into the second recurrent neural network model, and the characteristic data of the (i-1) th mark is obtained.
S1022, judging whether the ith token is an end token; if not, executing step S1023, and if so, executing step S1024.
In this embodiment, the end token may be a system default or may be set by the developer, but it is not limited thereto; for example, the end token is END. When the token output by the first recurrent neural network model is the end token, that token is the last one in the token sequence; when it is not the end token, the model continues to output the next token of the sequence. Judging whether the output token is the end token ensures that a complete token sequence is obtained, so that complete interface code can subsequently be derived from it, facilitating development of the user interface.
S1023, incrementing i by one.
For example, if the 1st output token is not END, i is updated to 2; if the 2nd output token is not END, i is updated to 3; and by analogy, i is updated to i+1 until the ith token is END.
S1024, sorting the 1st token through the ith token in output order to obtain the token sequence.
For example, when the ith token is END, the N tokens from the 1st token to the ith token are sorted to obtain the token sequence.
With this user interface generation method, a token sequence of the hand-drawn interface that can be converted into interface code is obtained from the image feature data of the hand-drawn interface, the feature data of the initial token, and the pre-trained first recurrent neural network model, so a developer can obtain interface code automatically, and complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.
Further, referring to fig. 3, on the basis of the embodiment shown in fig. 1, the method may further include:
S104, acquiring training samples.
S105, training a first recurrent neural network according to each training sample to obtain the first recurrent neural network model.
The first recurrent neural network model in this embodiment is used to obtain the token sequence of the hand-drawn interface; therefore, each acquired training sample needs to include a training hand-drawn interface and a corresponding training token sequence. The training hand-drawn interface is used as the input data of the first recurrent neural network and the training token sequence as its output data, and the first recurrent neural network is trained to obtain the first recurrent neural network model. Understandably, the larger the total number of training samples, which the developer sets according to the actual situation, the higher the accuracy of the first recurrent neural network model.
In practice, the interface code corresponding to each training hand-drawn interface is prepared in advance and lexically analyzed to obtain a training token sequence comprising M training tokens, where M is an integer greater than 1; that is, the training token sequence consists of M training tokens sorted in lexical-analysis order.
Because the training token sequence corresponding to each training sample includes M training tokens sorted in lexical-analysis order, M-1 training passes are needed when the first recurrent neural network is trained with each training sample.
Accordingly, step S105 is implemented as: training the first recurrent neural network M-1 times according to each training sample.
For the mth training pass, where m = 1, 2, 3, ..., M-1 (that is, m runs from 1 to M-1), the following steps are performed:
and S51, acquiring the feature data of the training hand-drawing interface and the feature data of the mth training mark.
In this embodiment, the convolutional neural network model may be used to obtain the feature data of the training freehand interface, and the second cyclic neural network model may be used to obtain the feature data of the mth training mark, but not limited thereto.
And S52, training the first recurrent neural network according to the feature data of the training freehand interface and the feature data of the mth training mark to obtain a first recurrent neural network model.
Specifically, the characteristic data of the training freehand interface is used as input data of the first recurrent neural network, the characteristic data of the mth training mark is used as output data of the first recurrent neural network, and the first recurrent neural network is trained to obtain a first recurrent neural network model.
S53, taking the (m+1)th training token as the desired output token of the current pass, and taking the token output by the first recurrent neural network model as the actual output token of the current pass.
S54, determining the difference between the actual output token and the desired output token.
In this embodiment, to avoid learning slowdown in the first recurrent neural network and to facilitate its training, a cross-entropy cost function may be used to measure the difference between the actual output token and the desired output token, but it is not limited thereto. More description of the cross-entropy cost function can be found in the related art and is not repeated here.
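For reference, with $a$ the model's predicted probability distribution over tokens and $y$ the one-hot encoding of the desired output token, the categorical cross-entropy cost over $n$ training examples takes the standard form (the notation is assumed here, not fixed by the disclosure):

$$C = -\frac{1}{n}\sum_{x}\sum_{j} y_j \ln a_j$$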
S55, adjusting the parameters of the first recurrent neural network model according to the difference.
In this embodiment, to optimize the first recurrent neural network model, its parameters may be adjusted according to the difference using the back-propagation (BP) algorithm, but it is not limited thereto. Further description of the back-propagation algorithm can be found in the related art and is not repeated here.
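Putting S51 through S55 together, one teacher-forcing training step might look like the following sketch, carrying over the assumed model interfaces from the earlier examples (names and tensor shapes are illustrative):

```python
import torch
import torch.nn as nn

def train_step(image, token_ids, cnn, token_rnn, first_rnn, optimizer):
    """One training pass over a sample: for each prefix of the training
    token sequence, predict the next token from the image features and
    the features of the tokens so far, measure the difference with
    cross-entropy, and back-propagate to adjust parameters (S51-S55)."""
    criterion = nn.CrossEntropyLoss()
    image_features = cnn(image)
    total_loss = 0.0
    for m in range(token_ids.size(1) - 1):          # M-1 passes
        prev_features = token_rnn(token_ids[:, : m + 1])
        logits = first_rnn(image_features, prev_features)
        desired = token_ids[:, m + 1]               # (m+1)th training token
        total_loss = total_loss + criterion(logits, desired)
    optimizer.zero_grad()
    total_loss.backward()                           # back propagation (BP)
    optimizer.step()                                # adjust parameters
    return float(total_loss)
```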
With the user interface generation method of this embodiment, the first recurrent neural network model is trained in advance, and a token sequence of the hand-drawn interface that can be converted into interface code is obtained from the image feature data of the hand-drawn interface, the feature data of the initial token, and the pre-trained first recurrent neural network model. A developer can therefore obtain interface code automatically, and complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.
Fig. 4 is a schematic structural diagram of a user interface generating apparatus according to an embodiment of the present disclosure. This embodiment provides a user interface generation apparatus, which is the execution body of the user interface generation method above and may be implemented in hardware and/or software. As shown in fig. 4, the user interface generating apparatus includes an acquisition module 11, a determining module 12, and a generating module 13.
The acquisition module 11 is configured to acquire image feature data of a hand-drawn interface corresponding to a user interface to be generated, and to acquire feature data of a predetermined initial token;
the determining module 12 is configured to determine a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model;
and the generating module 13 is configured to generate the user interface according to the token sequence.
Further, the token sequence includes N tokens sorted by output order, where N is an integer greater than 1, and the determining module 12 is specifically configured to:
for the ith token, where i = 1, 2, 3, ..., N and the 0th token is the initial token:
input the image feature data and the feature data of the (i-1)th token into the pre-trained first recurrent neural network model, and output the ith token;
judge whether the ith token is an end token, and if not, increment i by one;
and if so, sort the 1st token through the ith token in output order to obtain the token sequence.
Further, the apparatus further includes a first training module;
the first training module is configured to acquire training samples, where each training sample includes a training hand-drawn interface and a corresponding training token sequence;
and to train a first recurrent neural network according to each training sample to obtain the first recurrent neural network model.
Further, the training token sequence corresponding to each training sample includes M ordered training tokens, M being an integer greater than 1, and the first training module is specifically configured to:
train the first recurrent neural network M-1 times according to each training sample; the mth pass, where m = 1, 2, 3, ..., M-1, includes:
acquiring the feature data of the training hand-drawn interface and the feature data of the mth training token;
training the first recurrent neural network according to the feature data of the training hand-drawn interface and the feature data of the mth training token to obtain the first recurrent neural network model;
taking the (m+1)th training token as the desired output token of the current pass, and taking the token output by the first recurrent neural network model as the actual output token of the current pass;
determining the difference between the actual output token and the desired output token;
and adjusting the parameters of the first recurrent neural network model according to the difference.
Further, the acquisition module 11 includes a first acquisition unit 111;
the first acquisition unit 111 is configured to receive the hand-drawn interface corresponding to the user interface to be generated, and to perform matrixing processing on the hand-drawn interface to obtain an image matrix of the hand-drawn interface;
and to input the image matrix into a convolutional neural network model to obtain the image feature data of the hand-drawn interface, where the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
Further, the acquisition module 11 includes a second acquisition unit 112;
the second acquisition unit 112 is configured to input a predetermined initial token into the second recurrent neural network model to obtain the feature data of the initial token.
Further, the apparatus further includes a second training unit;
the second training unit is configured to acquire the interface codes corresponding to the training hand-drawn interfaces, and to perform lexical analysis on the corresponding interface codes to obtain the training tokens of each training hand-drawn interface;
and to train a second recurrent neural network with the training tokens to obtain the second recurrent neural network model.
It should be noted that the foregoing explanation on the embodiment of the user interface generation method is also applicable to the user interface generation apparatus of this embodiment, and details are not described here.
The user interface generation apparatus provided by the disclosure acquires image feature data of a hand-drawn interface corresponding to the user interface to be generated and feature data of a predetermined initial token; determines a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and generates the user interface from the token sequence. A token sequence that can be converted into interface code is thus obtained, so a developer can obtain interface code automatically, and complete development of the user interface, simply by drawing a single hand-drawn interface, without writing any interface code by hand. This greatly helps developers build application user interfaces easily and improves application development efficiency.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device includes:
a memory 1001, a processor 1002, and a computer program stored in the memory 1001 and executable on the processor 1002.
The processor 1002, when executing the program, implements the user interface generation method provided in the above-described embodiments.
Further, the electronic device further includes:
a communication interface 1003 for communicating between the memory 1001 and the processor 1002.
A memory 1001 for storing computer programs that may be run on the processor 1002.
Memory 1001 may include high-speed RAM memory and may also include non-volatile memory (e.g., at least one disk memory).
The processor 1002 is configured to implement the user interface generating method according to the foregoing embodiment when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through an internal interface.
The processor 1002 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the user interface generation method as described above.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (9)

1. A method for generating a user interface, comprising:
acquiring image feature data of a hand-drawn interface corresponding to a user interface to be generated, and acquiring feature data of a predetermined initial token;
determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model;
generating the user interface according to the token sequence;
wherein the acquiring of the feature data of the predetermined initial token comprises:
inputting the predetermined initial token into a second recurrent neural network model to obtain the feature data of the initial token.
2. The method according to claim 1, wherein the token sequence includes N tokens sorted in output order, N being an integer greater than 1, and determining the token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and the pre-trained first recurrent neural network model comprises:
inputting the image feature data and the feature data of the (i-1)th token into the pre-trained first recurrent neural network model and outputting the ith token, where i = 1, 2, 3, ..., N and the 0th token is the initial token;
judging whether the ith token is an end token, and if not, incrementing i by one;
and if so, sorting the 1st token through the ith token in output order to obtain the token sequence.
3. The method of claim 1, further comprising:
acquiring training samples, wherein each training sample comprises a training hand-drawn interface and a corresponding training token sequence;
and training a first recurrent neural network according to each training sample to obtain the first recurrent neural network model.
4. The method of claim 3, wherein the training token sequence corresponding to each training sample comprises M ordered training tokens, M being an integer greater than 1;
the training of the first recurrent neural network according to each training sample to obtain the first recurrent neural network model comprises:
training the first recurrent neural network M-1 times according to each training sample, wherein the mth training pass, with m = 1, 2, 3, ..., M-1, comprises:
acquiring feature data of the training hand-drawn interface and feature data of the mth training token;
training the first recurrent neural network according to the feature data of the training hand-drawn interface and the feature data of the mth training token to obtain the first recurrent neural network model;
taking the (m+1)th training token as a desired output token of the current pass, and taking the token output by the first recurrent neural network model as an actual output token of the current pass;
determining a difference between the actual output token and the desired output token;
and adjusting parameters of the first recurrent neural network model according to the difference.
5. The method according to claim 1, wherein the acquiring of the image feature data of the hand-drawn interface corresponding to the user interface to be generated comprises:
receiving a hand-drawn interface corresponding to the user interface to be generated, and performing matrixing processing on the hand-drawn interface to obtain an image matrix of the hand-drawn interface;
and inputting the image matrix into a convolutional neural network model to obtain the image feature data of the hand-drawn interface, wherein the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
6. The method of claim 1, further comprising:
acquiring interface codes corresponding to training hand-drawn interfaces, and performing lexical analysis on the corresponding interface codes to obtain training tokens of each training hand-drawn interface;
and training a second recurrent neural network with the training tokens to obtain the second recurrent neural network model.
7. A user interface generating apparatus, comprising:
an acquisition module, configured to acquire image feature data of a hand-drawn interface corresponding to a user interface to be generated and acquire feature data of a predetermined initial token;
a determining module, configured to determine a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model;
and a generating module, configured to generate the user interface according to the token sequence;
wherein, when acquiring the feature data of the predetermined initial token, the acquisition module is specifically configured to:
input a predetermined initial token into a second recurrent neural network model to obtain the feature data of the initial token.
8. An electronic device, comprising:
a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the user interface generation method according to any one of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the user interface generation method of any one of claims 1 to 6.
Application CN201811425428.5A, priority date 2018-11-27, filed 2018-11-27: User interface generation method and device. Status: Active. Granted as CN109656554B (en).

Priority Applications (1)

Application CN201811425428.5A (granted as CN109656554B); priority date 2018-11-27; filing date 2018-11-27; title: User interface generation method and device.

Applications Claiming Priority (1)

Application CN201811425428.5A (granted as CN109656554B); priority date 2018-11-27; filing date 2018-11-27; title: User interface generation method and device.

Publications (2)

CN109656554A (en): published 2019-04-19.
CN109656554B: granted 2022-04-15.

Family

ID=66111548

Family Applications (1)

Application CN201811425428.5A (Active; granted as CN109656554B); title: User interface generation method and device.

Country Status (1)

Country Link
CN (1) CN109656554B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262825B (en) * 2019-06-24 2023-12-05 北京字节跳动网络技术有限公司 Thermal updating method, thermal updating device, electronic equipment and readable storage medium
CN110502236B (en) * 2019-08-07 2022-10-25 山东师范大学 Front-end code generation method, system and equipment based on multi-scale feature decoding
CN111176960B (en) * 2019-10-22 2022-02-18 腾讯科技(深圳)有限公司 User operation behavior tracking method, device, equipment and storage medium
CN110968299A (en) * 2019-11-20 2020-04-07 北京工业大学 Front-end engineering code generation method based on hand-drawn webpage image
CN112527296A (en) * 2020-12-21 2021-03-19 Oppo广东移动通信有限公司 User interface customizing method and device, electronic equipment and storage medium
CN113377356B (en) * 2021-06-11 2022-11-15 四川大学 Method, device, equipment and medium for generating user interface prototype code

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930159A (en) * 2016-04-20 2016-09-07 中山大学 Image-based interface code generation method and system
CN108288078A (en) * 2017-12-07 2018-07-17 腾讯科技(深圳)有限公司 Character identifying method, device and medium in a kind of image
CN108664473A (en) * 2018-05-11 2018-10-16 平安科技(深圳)有限公司 Recognition methods, electronic device and the readable storage medium storing program for executing of text key message

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9977661B2 (en) * 2013-06-28 2018-05-22 Tencent Technology (Shenzhen) Company Limited Method and system for generating a user interface

Also Published As

Publication number Publication date
CN109656554A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN109656554B (en) User interface generation method and device
US10133965B2 (en) Method for text recognition and computer program product
CN109753968B (en) Method, device, equipment and medium for generating character recognition model
US20200082218A1 (en) Optical character recognition using end-to-end deep learning
CN110795938B (en) Text sequence word segmentation method, device and storage medium
CN109740752B (en) Deep model training method and device, electronic equipment and storage medium
CN107273883B (en) Decision tree model training method, and method and device for determining data attributes in OCR (optical character recognition) result
CN106067019A (en) The method and device of Text region is carried out for image
CN109919214B (en) Training method and training device for neural network model
CN113191131A (en) Form template establishing method for text recognition, text recognition method and system
US20210374490A1 (en) Method and apparatus of processing image, device and medium
CN110991303A (en) Method and device for positioning text in image and electronic equipment
CN114548102A (en) Method and device for labeling sequence of entity text and computer readable storage medium
CN112241629A (en) Pinyin annotation text generation method and device combining RPA and AI
JP2012234512A (en) Method for text segmentation, computer program product and system
CN110188327B (en) Method and device for removing spoken language of text
CN112069805A (en) Text labeling method, device, equipment and storage medium combining RPA and AI
CN110135583B (en) Method and device for generating label information and electronic equipment
CN116860747A (en) Training sample generation method and device, electronic equipment and storage medium
CN116661786A (en) Design page generation method and device
CN116701637A (en) Zero sample text classification method, system and medium based on CLIP
CN110941947A (en) Document editing method and device, computer storage medium and terminal
CN114358011A (en) Named entity extraction method and device and electronic equipment
JP2014078168A (en) Character recognition apparatus and program
CN110750960A (en) Configuration file analysis method, storage medium, electronic device and system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant