CN112579080A - Method and device for generating user interface code - Google Patents

Method and device for generating user interface code

Info

Publication number
CN112579080A
Authority
CN
China
Prior art keywords
layer
user interface
identification
image
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910936001.XA
Other languages
Chinese (zh)
Inventor
王戈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910936001.XA priority Critical patent/CN112579080A/en
Publication of CN112579080A publication Critical patent/CN112579080A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for generating user interface code, and relates to the field of computer technology. One embodiment of the method comprises: extracting each layer and the nesting relation of the layers from a visual draft of a user interface; identifying each layer through an image identification technology to obtain an identification result of each layer; and generating code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer. The embodiment can solve the technical problems of a low recognition rate and difficulty of horizontal extension.

Description

Method and device for generating user interface code
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a user interface code.
Background
At present, a method for generating code from a visual draft of a graphical user interface generally comprises two steps:
In the first step, a model is trained using pre-prepared training data. The training data are images of various labelled page-element modules (e.g., input box, drop-down box, login form) together with the corresponding module types (e.g., item list, tab switching, banner carousel).
In the second step, the visual draft is first converted into a picture in jpg or png format, the whole picture is then divided into several independent sub-pictures using OpenCV, each sub-picture is identified with the trained model and matched to its module type, and finally the code corresponding to each module type is spliced into the complete code.
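As a minimal illustration of the region-based slicing in the second step, the sketch below uses OpenCV's Python bindings on a hypothetical rendered page image named page.png; it only demonstrates contour-based cutting into sub-pictures and is not the exact segmentation used by any particular tool.

import cv2

# Load the rendered visual draft (hypothetical file name) and convert to grayscale.
image = cv2.imread("page.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize against a light background and find external contours; each contour
# approximates one region of the page.
_, binary = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Crop each region into an independent sub-picture for later classification.
sub_pictures = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 100:  # skip tiny noise regions
        sub_pictures.append(image[y:y + h, x:x + w])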
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
1) During recognition, the picture is cut into regions using OpenCV, so the complete information carried by each layer of the visual draft cannot be used effectively, and the recognition rate is low.
2) The identified module type can only correspond to code in one specific development language, which is not easy to extend horizontally. For example, the same visual draft may require developing an Android page and an iOS page at the same time, which requires adapting the two systems separately.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for generating user interface code, so as to solve the technical problems of a low recognition rate and difficulty of horizontal extension.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of generating a user interface code, including:
extracting each layer and the nesting relation of each layer from a visual draft of a user interface;
identifying each layer through an image identification technology to obtain an identification result of each layer;
and generating code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer.
Optionally, identifying each layer by using an image identification technology to obtain an identification result of each layer, where the identifying step includes:
training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer identification model;
and identifying each image layer through the image layer identification model to obtain the identification result of each image layer.
Optionally, the identification result includes each page element in the layer and a position of each page element in the layer.
Optionally, the method further comprises:
converting the code described by the domain-specific language into the code described by the target language through an adaptation layer of the target language.
In addition, according to another aspect of an embodiment of the present invention, there is provided an apparatus for generating a user interface code, including:
the extraction module is used for extracting each layer and the nesting relation of each layer from a visual draft of a user interface;
the identification module is used for identifying each layer through an image identification technology to obtain an identification result of each layer;
and the generating module is used for generating code described in the domain-specific language according to the nesting relation of the layers and the recognition result of each layer.
Optionally, the identification module is further configured to:
training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer identification model;
and identifying each image layer through the image layer identification model to obtain the identification result of each image layer.
Optionally, the identification result includes each page element in the layer and a position of each page element in the layer.
Optionally, the apparatus further comprises:
a translation module, configured to convert the code described in the domain-specific language into code described in a target language through an adaptation layer of the target language.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments described above.
According to another aspect of the embodiments of the present invention, there is also provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above embodiments.
One embodiment of the above invention has the following advantages or benefits: because the technical means of extracting each layer and the nesting relation of the layers from the visual draft of the user interface and generating code described in a domain-specific language according to the nesting relation of the layers and the recognition result of each layer is adopted, the technical problems of a low recognition rate and difficulty of horizontal extension in the prior art are solved. By extracting each layer and the nesting relation of the layers from the visual draft, the embodiment of the invention makes effective use of the layer information, which improves the recognition accuracy for the visual draft; moreover, because the user interface is described in a domain-specific language, the code can be adapted to different platforms and is therefore easy to extend horizontally.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a main flow of a method of generating user interface code according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the main flow of a method of generating user interface code according to one reference embodiment of the present invention;
FIG. 3 is a schematic diagram of the main flow of a method of generating user interface code according to another reference embodiment of the present invention;
FIG. 4 is a schematic diagram of the main modules of an apparatus for generating user interface code according to an embodiment of the present invention;
FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 6 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a main flow of a method of generating user interface code according to an embodiment of the present invention. As an embodiment of the present invention, as shown in fig. 1, the method for generating a user interface code may include:
step 101, extracting each layer and the nesting relation of each layer from a visual draft of a user interface.
Because the visual draft comprises a plurality of layers, in order to fully acquire the complete information of each layer in the visual draft, each layer and the nesting relation of the layers are first extracted from the visual draft of the user interface. Optionally, the visual draft may be drawn with graphics software such as Photoshop or CorelDRAW, from which each layer and the nesting relationship of the layers can be extracted.
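One possible way to realize this extraction for Photoshop files is the open-source psd-tools library; the sketch below is an assumption about tooling rather than part of the patent, and it recursively collects each layer together with its nesting relationship. The file name visual_draft.psd is hypothetical.

from psd_tools import PSDImage

def extract_layers(node):
    """Recursively collect layers and their nesting relationship from a PSD node."""
    tree = []
    for layer in node:
        tree.append({
            "name": layer.name,
            "bbox": layer.bbox,  # (left, top, right, bottom) within the draft
            "children": extract_layers(layer) if layer.is_group() else [],
        })
    return tree

psd = PSDImage.open("visual_draft.psd")
layer_tree = extract_layers(psd)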
And 102, identifying each layer through an image identification technology to obtain an identification result of each layer.
In this step, each layer extracted in step 101 may be identified by an image identification technique to obtain an identification result for each layer, thereby avoiding the identification errors caused by losing layer information when the visual draft is converted into a flat picture. Optionally, the layers may be identified through machine learning so as to improve identification accuracy.
Optionally, the identification result includes each page element in the layer and the position of each page element in the layer. The page elements may be input boxes, drop-down boxes, login forms, text, and the like. In this step, the page elements in each layer and their positions within the layer are identified by the image recognition technique.
Optionally, identifying each layer by an image identification technology to obtain the identification result of each layer includes: training a neural network based on the images in a training set and the labels corresponding to the images to obtain a layer identification model; and identifying each layer through the layer identification model to obtain the identification result of each layer. The training set consists of images labelled in advance, where the labels may be input box, drop-down box, login form, text, and the like; the neural network is trained on this training set to obtain the layer identification model, and each layer is then accurately identified through the layer identification model to obtain its identification result.
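The optional training step can be sketched as follows. This is a self-contained illustration in PyTorch that uses randomly generated placeholder tensors in place of a real labelled training set; the network architecture, the four element labels, and the hyperparameters are assumptions, not the patent's concrete model.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder training set: 64x64 RGB layer crops with four assumed labels
# (0 input box, 1 drop-down box, 2 login form, 3 text). A real training set
# would consist of the pre-labelled layer images described above.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 4, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small convolutional classifier standing in for the layer identification model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()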
And 103, generating code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer.
In order to improve the adaptability of the user interface code and make it easy to extend horizontally, the embodiment of the present invention generates code described in a domain-specific language (DSL) according to the nesting relationship of the layers obtained in step 101 and the recognition result of each layer obtained in step 102.
For example, the DSL code may take the form shown in Figure BDA0002221597060000061 of the original publication (reproduced there only as an image).
therefore, the embodiment of the invention adopts the domain-specific language to describe the semantic level of the page information, and is not related to the specific development language. When a new user interface development technology needs to be supported later, only an adaptation layer for converting the DSL into the language needs to be developed.
According to the various embodiments described above, the technical means of extracting the layers and their nesting relationship from the visual draft of the user interface and generating code described in a domain-specific language according to that nesting relationship and the recognition result of each layer solves the technical problems of a low recognition rate and difficulty of horizontal extension in the prior art. By extracting each layer and the nesting relation of the layers from the visual draft, the embodiment of the invention makes effective use of the layer information, which improves the recognition accuracy for the visual draft; moreover, describing the user interface code in a domain-specific language improves its adaptability and makes it easy to extend horizontally.
In addition, a DSL layer is introduced between the identification result and the platform code, so that code for different platforms can conveniently be generated by extension.
Fig. 2 is a schematic diagram of the main flow of a method of generating user interface code according to one reference embodiment of the present invention.
Step 201, extracting each layer and the nesting relation of each layer from the visual draft of the user interface.
Because the visual draft comprises a plurality of layers, in order to fully acquire the complete information of each layer in the visual draft, each layer and the nesting relation of each layer are firstly extracted from the visual draft of the user interface.
Step 202, identifying each layer through an image identification technology to obtain an identification result of each layer.
Each layer extracted in step 201 can be identified by an image identification technology to obtain the identification result of each layer, which avoids identification errors caused by losing layer information when the visual draft is converted into a flat picture. Optionally, the identification result includes each page element in the layer and the position of each page element in the layer. The page elements may be input boxes, drop-down boxes, login forms, text, and the like.
And 203, generating code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer.
In order to improve the adaptability of the user interface code and make it easy to extend horizontally, the embodiment of the present invention generates code described in DSL according to the nesting relationship of the layers obtained in step 201 and the recognition result of each layer obtained in step 202.
Step 204, developing an adaptation layer of the target language.
Optionally, the adaptation layer comprises a Web adaptation layer or a React Native adaptation layer.
Step 205, converting the code described by the domain specific language into the code described by the target language through the adaptation layer of the target language.
Based on the DSL code generated in step 203, the adaptation layer "translates" the DSL code into code that the corresponding platform can use directly. For example, HTML and CSS that a browser can render directly can be generated through the Web adaptation layer, and tags supported by the React Native framework, such as View, Image, and Text, can be generated through the React Native adaptation layer.
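As an illustration of such an adaptation layer, the sketch below walks the hypothetical DSL tree shown earlier and emits plain HTML with inline absolute positioning; a React Native adaptation layer would map the same node types to View, TextInput, and Text instead. The tag mapping and style handling are assumptions, not the patent's adapter.

# Map hypothetical DSL element types to HTML tags; a React Native adapter
# would map the same types to View, TextInput, Text, and so on.
TAG_MAP = {"container": "div", "input": "input", "button": "button"}

def dsl_to_html(node):
    tag = TAG_MAP.get(node["type"], "div")
    layout = node.get("layout", {})
    style = (f"position:absolute;left:{layout.get('x', 0)}px;"
             f"top:{layout.get('y', 0)}px;"
             f"width:{layout.get('width', 0)}px;"
             f"height:{layout.get('height', 0)}px")
    text = node.get("props", {}).get("text", "")
    children = "".join(dsl_to_html(child) for child in node.get("children", []))
    return f'<{tag} style="{style}">{text}{children}</{tag}>'

# Example: html = dsl_to_html(login_form_dsl), reusing the illustrative DSL tree above.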
The semantic structural relationship is the same no matter which platform is targeted; by describing the user interface code with a DSL, the embodiment of the invention improves adaptability and makes horizontal extension easy. When a new user interface development technology needs to be supported later, only an adaptation layer that converts the DSL into that language needs to be developed.
In addition, the specific implementation of this reference embodiment has already been described in detail in the method for generating user interface code above, and therefore the repeated content is not described again.
Fig. 3 is a schematic diagram of the main flow of a method of generating user interface code according to another reference embodiment of the present invention.
Step 301, training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer recognition model.
The training set consists of images labelled in advance, where the labels may be input boxes, drop-down boxes, login forms, text, and the like; the neural network is trained on this training set to obtain the layer recognition model.
Step 302, extracting each layer and the nesting relation of each layer from the visual draft of the user interface.
Because the visual draft comprises a plurality of layers, in order to fully acquire the complete information of each layer in the visual draft, each layer and the nesting relation of each layer are extracted from the visual draft of the user interface.
Step 303, identifying each layer through the layer identification model to obtain an identification result of each layer.
Each layer is accurately identified through the layer identification model trained in step 301, so as to obtain the identification result of each layer.
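A minimal sketch of this identification step, continuing the assumptions of the earlier training sketch: the trained classifier is applied to each extracted layer image, and the predicted element type is recorded together with the layer's position taken from the layer tree. The label names and input size are carried over from that sketch and are not the patent's concrete settings.

import torch

ELEMENT_LABELS = ["input box", "drop-down box", "login form", "text"]  # assumed label set

def identify_layer(model, layer_image, layer_bbox):
    """Classify one extracted layer image (a 3x64x64 tensor) and attach its position."""
    model.eval()
    with torch.no_grad():
        logits = model(layer_image.unsqueeze(0))  # add a batch dimension
        predicted = int(logits.argmax(dim=1))
    return {"element": ELEMENT_LABELS[predicted], "position": layer_bbox}

# Example: identify_layer(model, torch.randn(3, 64, 64), (24, 24, 351, 68))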
And 304, generating code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer.
In order to improve the adaptability of the user interface code and make it easy to extend horizontally, the embodiment of the present invention generates code described in DSL according to the nesting relationship of the layers obtained in step 302 and the recognition result of each layer obtained in step 303.
In addition, the specific implementation of this reference embodiment has already been described in detail in the method for generating user interface code above, and therefore the repeated description is omitted.
Fig. 4 is a schematic diagram of the main modules of an apparatus for generating user interface code according to an embodiment of the present invention. As shown in fig. 4, the apparatus 400 for generating user interface code includes an extraction module 401, an identification module 402, and a generation module 403. The extraction module 401 is configured to extract each layer and the nesting relationship of the layers from a visual draft of a user interface; the identification module 402 is configured to identify each layer through an image identification technology to obtain an identification result of each layer; and the generation module 403 is configured to generate code described in a domain-specific language according to the nesting relationship of the layers and the recognition result of each layer.
Optionally, the identifying module 402 is further configured to:
training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer identification model;
and identifying each image layer through the image layer identification model to obtain the identification result of each image layer.
Optionally, the identification result includes each page element in the layer and a position of each page element in the layer.
Optionally, the apparatus further comprises:
a translation module, configured to convert the code described in the domain-specific language into code described in a target language through an adaptation layer of the target language.
According to the various embodiments described above, the technical means of extracting the layers and their nesting relationship from the visual draft of the user interface and generating code described in a domain-specific language according to that nesting relationship and the recognition result of each layer solves the technical problems of a low recognition rate and difficulty of horizontal extension in the prior art. By extracting each layer and the nesting relation of the layers from the visual draft, the embodiment of the invention makes effective use of the layer information, which improves the recognition accuracy for the visual draft; moreover, describing the user interface code in a domain-specific language improves its adaptability and makes it easy to extend horizontally.
It should be noted that the implementation details of the apparatus for generating user interface code according to the present invention have already been described in detail in the method for generating user interface code above, and therefore they are not repeated here.
Fig. 5 illustrates an exemplary system architecture 500 of a method of generating user interface code or an apparatus for generating user interface code to which embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 501, 502, 503 to interact with the server 505 over the network 504 to receive or send messages, etc. The terminal devices 501, 502, 503 may have various communication client applications installed thereon, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 501, 502, 503. The background management server may analyze and otherwise process the received data such as the item information query request, and feed back a processing result (for example, target push information, item information — just an example) to the terminal device.
It should be noted that the method for generating the user interface code provided by the embodiment of the present invention is generally executed by the server 505, and accordingly, the apparatus for generating the user interface code is generally disposed in the server 505. The method for generating the user interface code provided by the embodiment of the present invention may also be executed by the terminal devices 501, 502, 503, and accordingly, the apparatus for generating the user interface code may be disposed in the terminal devices 501, 502, 503.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it is installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When executed by the Central Processing Unit (CPU) 601, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an extraction module, an identification module, and a generation module, where the names of the modules do not, in some cases, constitute a limitation on the modules themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: extract each layer and the nesting relation of the layers from a visual draft of a user interface; identify each layer through an image identification technology to obtain an identification result of each layer; and generate code described in a domain-specific language according to the nesting relation of the layers and the identification result of each layer.
According to the technical solution of the embodiment of the invention, because the technical means of extracting the layers and their nesting relation from the visual draft of the user interface and generating code described in a domain-specific language according to that nesting relation and the recognition result of each layer is adopted, the technical problems of a low recognition rate and difficulty of horizontal extension in the prior art are solved. By extracting each layer and the nesting relation of the layers from the visual draft, the embodiment of the invention makes effective use of the layer information, which improves the recognition accuracy for the visual draft; moreover, because the user interface is described in a domain-specific language, the code can be adapted to different platforms and is therefore easy to extend horizontally.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of generating user interface code, comprising:
extracting each layer and the nesting relation of each layer from a visual draft of a user interface;
identifying each layer through an image identification technology to obtain an identification result of each layer;
and generating code described in a domain-specific language according to the nesting relation of each layer and the identification result of each layer.
2. The method according to claim 1, wherein identifying the layers by using an image identification technique to obtain the identification result of each layer comprises:
training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer identification model;
and identifying each image layer through the image layer identification model to obtain the identification result of each image layer.
3. The method according to claim 1, wherein the recognition result comprises each page element in the layer and a position of each page element in the layer.
4. The method of claim 1, further comprising:
converting the code described by the domain-specific language into the code described by the target language through an adaptation layer of the target language.
5. An apparatus for generating user interface code, comprising:
the extraction module is used for extracting each layer and the nesting relation of each layer from a visual draft of a user interface;
the identification module is used for identifying each layer through an image identification technology to obtain an identification result of each layer;
and the generating module is used for generating code described in the domain-specific language according to the nesting relation of each layer and the recognition result of each layer.
6. The apparatus of claim 5, wherein the identification module is further configured to:
training a neural network based on the images in the training set and the labels corresponding to the images to obtain a layer identification model;
and identifying each image layer through the image layer identification model to obtain the identification result of each image layer.
7. The apparatus according to claim 5, wherein the recognition result comprises each page element in a layer and a position of each page element in a layer.
8. The apparatus of claim 5, further comprising:
and the translation module is used for converting the codes described by the domain-specific language into the codes described by the target language through an adaptation layer of the target language.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN201910936001.XA 2019-09-29 2019-09-29 Method and device for generating user interface code Pending CN112579080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910936001.XA CN112579080A (en) 2019-09-29 2019-09-29 Method and device for generating user interface code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910936001.XA CN112579080A (en) 2019-09-29 2019-09-29 Method and device for generating user interface code

Publications (1)

Publication Number Publication Date
CN112579080A true CN112579080A (en) 2021-03-30

Family

ID=75110844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936001.XA Pending CN112579080A (en) 2019-09-29 2019-09-29 Method and device for generating user interface code

Country Status (1)

Country Link
CN (1) CN112579080A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115291876A (en) * 2022-09-29 2022-11-04 安徽商信政通信息技术股份有限公司 Form design tool construction method, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112685565B (en) Text classification method based on multi-mode information fusion and related equipment thereof
US10635735B2 (en) Method and apparatus for displaying information
US11151177B2 (en) Search method and apparatus based on artificial intelligence
US20190164250A1 (en) Method and apparatus for adding digital watermark to video
US11758088B2 (en) Method and apparatus for aligning paragraph and video
CN108628830B (en) Semantic recognition method and device
US11055373B2 (en) Method and apparatus for generating information
CN108108342B (en) Structured text generation method, search method and device
CN106919711B (en) Method and device for labeling information based on artificial intelligence
CN108280200B (en) Method and device for pushing information
CN113382083B (en) Webpage screenshot method and device
CN109446442B (en) Method and apparatus for processing information
CN111104479A (en) Data labeling method and device
CN113377653B (en) Method and device for generating test cases
CN109413056B (en) Method and apparatus for processing information
CN110765973A (en) Account type identification method and device
WO2020078050A1 (en) Comment information processing method and apparatus, and server, terminal and readable medium
CN108038172B (en) Search method and device based on artificial intelligence
CN107885872B (en) Method and device for generating information
CN107330087B (en) Page file generation method and device
CN110705271B (en) System and method for providing natural language processing service
CN111400581B (en) System, method and apparatus for labeling samples
CN111027333B (en) Chapter translation method and apparatus
CN112579080A (en) Method and device for generating user interface code
CN109947526B (en) Method and apparatus for outputting information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination