CN113296769A - Data processing method, visual draft processing method, system and electronic equipment - Google Patents


Info

Publication number
CN113296769A
Authority
CN
China
Prior art keywords: picture, information, description information, graph, component
Legal status: Pending
Application number
CN202011296388.6A
Other languages
Chinese (zh)
Inventor
常艳芳
周婷婷
Current Assignee: Alibaba Group Holding Ltd
Original Assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202011296388.6A
Publication of CN113296769A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/30: Creation or generation of source code
    • G06F 8/38: Creation or generation of source code for implementing user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a data processing method, a visual-draft processing method and system, and an electronic device. The method comprises the following steps: determining layout information of a picture according to the picture file of the picture; processing the picture based on the layout information to determine the partial graphs in the picture that have a component function; and obtaining, according to the layout information and the partial graphs with a component function, first description information for generating program code. With the technical solution provided by the application, the first description information can be generated automatically and with high accuracy; in addition, the first description information can be used to obtain the program code of the picture, providing a technical basis for an online automatic code-generation service.

Description

Data processing method, visual draft processing method, system and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, a visual manuscript processing method, a system, and an electronic device.
Background
Automatically generating the program code of a visual picture is an effective way to improve system and program development efficiency and shorten the development period. At present, automatic code generation typically takes a visual picture designed by a visual designer as the input source and identifies User Interface (UI) components by target detection. Because this approach must identify both the type of each component and its position in the visual picture, its accuracy is low and it cannot be applied at scale.
Disclosure of Invention
The present application provides a data processing method, a visual-manuscript processing method, a system, and an electronic device that solve, or at least partially solve, the above problems.
In one embodiment of the present application, a data processing method is provided. The method comprises the following steps:
determining layout information of the picture according to the picture file of the picture;
processing the picture based on the layout information to determine a partial graph with component functions in the picture;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
In an embodiment of the present application, a data processing system is provided. The system comprises:
the client is used for sending the picture file of the picture to the server;
the server is used for determining the layout information of the picture according to the picture file of the picture; processing the picture based on the layout information to determine a partial graph with component functions in the picture; and obtaining first description information for generating program codes based on the layout information and the local graph with the component functions.
In another embodiment of the present application, a method for processing a visual manuscript is provided. The method comprises the following steps:
determining layout information of the visual manuscript according to the visual manuscript file;
processing a page diagram corresponding to the visual draft based on the layout information to determine a local diagram with component functions in the page diagram;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
In another embodiment of the present application, a system for processing a visual manuscript is provided. The system comprises:
the client is used for sending the visual draft file to the server;
the server is used for determining the layout information of the visual manuscript according to the visual manuscript file; processing a page diagram corresponding to the visual draft based on the layout information to obtain a local diagram with component functions in the page diagram; and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
In one embodiment of the present application, an electronic device is provided. The electronic device includes: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled to the memory, is configured to execute the program stored in the memory, so as to:
determining layout information of the picture according to the picture file of the picture;
processing the picture based on the layout information to determine a partial graph with component functions in the picture;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
In another embodiment of the present application, an electronic device is provided. The electronic device includes: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled to the memory, is configured to execute the program stored in the memory, so as to:
determining layout information of the visual manuscript according to the visual manuscript file;
processing a page diagram corresponding to the visual draft based on the layout information to determine a local diagram with component functions in the page diagram;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
According to the technical solutions provided by the embodiments of the present application, the layout information of a picture (or of the page graph corresponding to a visual draft) is determined from the picture file (or the visual-draft file); the picture (or page graph) is then processed based on that layout information to obtain the partial graphs in it that have a component function; and first description information for generating program code is obtained from the layout information and those partial graphs. Because the partial graphs with a component function are obtained by processing the picture (or page graph) based on the layout information, their position information can be obtained accurately; as long as the partial graphs with a component function are identified correctly, the generated program code is accurate, and the image recognition accuracy of current image recognition technology can be effectively guaranteed. Therefore, compared with existing schemes that must identify both the component and its position, the scheme provided by the embodiments of the present application is more accurate and suitable for large-scale use.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below illustrate only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a page image corresponding to a visual manuscript, which is subjected to image recognition according to an embodiment of the present application;
FIG. 2 is a block diagram of a data processing system according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4a is a schematic diagram of layout information of a picture in a picture file and at least one partial picture obtained according to the layout information according to an embodiment of the present application;
FIG. 4b is a schematic diagram illustrating a method for determining a classification to which the graph content belongs in the at least one local graph according to an embodiment of the present application;
FIG. 5 is a diagram illustrating two partial views of a picture belonging to the same component class and having a nested relationship therebetween according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a visual editing page provided in an embodiment of the present application;
FIG. 7 is a schematic view of a visual editing page provided in accordance with another embodiment of the present application;
fig. 8 is a schematic flowchart of a method for processing a visual manuscript according to another embodiment of the present application;
fig. 9 is a schematic diagram of a flow from design draft to code output according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Before explaining the schemes provided by the embodiments of the present application, related terms referred to in the present application will be briefly described.
Visual picture: the visual manuscript is generated based on a visual manuscript file formed by a designer by using design software to design a specific application scene, for example, the visual picture may be a page diagram for displaying relevant information of goods such as household appliances, clothing, cosmetics, and the like.
Element (b): the smallest unit part that constitutes a visual picture is not resegmentable. Such as text, icons, pictures, shapes, etc.
Assembly of: and (4) componentizing the results of the materials with different granularities on the user interface, such as search boxes, buttons, timers, coupons, video time display and other components.
A container node: the node corresponding to the container capable of accommodating and displaying one or more components in the layout architecture of the page.
Layout information: the hierarchical structure information of each component in the visual picture in the program code.
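The glossary terms above can be modeled as a small node hierarchy. The following is an illustrative sketch only; the class and field names are assumptions for explanation and are not prescribed by the patent, and Python is used purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in the layout tree of a visual picture."""
    name: str
    # Absolute position and size in the picture, in pixels.
    x: int = 0
    y: int = 0
    width: int = 0
    height: int = 0

@dataclass
class Element(Node):
    """Smallest, non-resegmentable unit: text, icon, picture, shape."""
    kind: str = "text"

@dataclass
class Component(Node):
    """Componentized material, e.g. a search box, button, or timer."""
    component_type: str = "button"

@dataclass
class ContainerNode(Node):
    """A container that holds and displays one or more components."""
    children: List[Node] = field(default_factory=list)

# Layout information: the hierarchy of components inside containers.
root = ContainerNode("View.bd", 0, 0, 750, 1334, children=[
    Component("video-button", 20, 30, 48, 48, component_type="video"),
    Element("title", 20, 90, 200, 32, kind="text"),
])
```

In this reading, the layout information of a picture is simply the tree rooted at a container node such as `root`.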
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims, and above-described figures of the present application, a number of operations are included that occur in a particular order, which operations may be performed out of order or in parallel as they occur herein. The sequence numbers of the operations, e.g., 101, 102, etc., are used merely to distinguish between the various operations, and do not represent any order of execution per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different. In the present application, the term "or/and" is only one kind of association relationship describing the associated object, and means that three relationships may exist, for example: a or/and B, which means that A can exist independently, A and B can exist simultaneously, and B can exist independently; the "/" character in this application generally indicates that the objects associated with each other are in an "or" relationship. In addition, the embodiments described below are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the traditional page development process, a visual designer designs the visual picture corresponding to a page with a design tool, and a programmer then obtains the program code that restores that visual picture by writing the code manually according to the designer's picture.
With the rapid development of computer technology, some methods for automatically generating page code have appeared in the prior art. For example, the page graph corresponding to a visual manuscript can be recognized directly by an image recognition technology (such as a machine learning model, e.g. the trained target detection model of fig. 1), and program code restoring that page graph can be generated automatically from the recognition result. However, the program code generated by this image-recognition approach is composed almost entirely of elements of the smallest granularity, such as images and texts, which rarely meets the componentization requirements of code development, and misrecognition occurs easily. For example, the video component at the rice-cooker position 31 shown in fig. 1 may be recognized directly as an image element, so that in the generated program code the corresponding code segment is expressed with rax-image, whereas in actual development the code segment for position 31 should be expressed with rax-video. In view of these problems, the prior art further proposes using deep learning to identify the components in the page graph corresponding to the visual manuscript and then automatically generating the restoring program code, but this method is prone to position deviation, false identification, and the like. For example, as shown in fig. 1, when a trained target detection model is used to identify the video button component 32 in picture 100, the output recognition result may look like picture 110 or picture 120: the video button component 32 in picture 110 (i.e. 100) or picture 120 (i.e. 100) is identified accurately, but with a position deviation; alternatively, the text component corresponding to "free 24 s available" in picture 120 may be misrecognized as the video button component 32, and so on. Moreover, training the target detection model requires a large number of input samples, each being a whole visual-draft picture in which the components the model should identify must be marked manually. Whenever a new component class is added, every sample must be marked again, so the marking cost is high. In conclusion, the existing automatic code-generation methods suffer from low accuracy and high cost and are difficult to apply in an industrial production environment.
In the process of implementing the technical solutions provided by the embodiments of the present application, the inventors found that, in addition to the page graph used by prior-art automatic code generation, the visual-draft file also contains layer description information for each layer of the page graph, such as position information, size information, and CSS attributes. Therefore, the layer description information corresponding to the page graph can be extracted from the visual-draft file to obtain the layout information of the page graph; the page graph is cropped according to that layout information to obtain at least one partial graph corresponding to a component in the page graph; and then, by recognizing the at least one partial graph, the program code corresponding to the page graph can be obtained from the recognition results and the layout information. Executing that program code restores the corresponding page graph. In this process of automatically generating the program code, the component identification problem in the page graph is converted directly into a component classification problem: because the position information of each component is accurate, the generated program code is accurate as long as the category of the partial graph corresponding to the component is accurate. In addition, when training an image classification model for recognizing and classifying the partial graphs, the required samples can be generated automatically by a program rather than by manual marking, which helps reduce cost. Detailed implementations are described in the embodiments below.
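One way the programmatic sample generation mentioned above could work is to synthesize labeled crops from known component templates, so every sample carries its own class label and no manual marking is needed. The sketch below is a hedged illustration under assumed names; the "image" is a placeholder tuple standing in for a rendered bitmap, since the patent does not specify a renderer.

```python
import random

# Hypothetical component classes; each would render to a small image
# and carries its own class label, so no manual annotation is needed.
COMPONENT_CLASSES = ["button", "search-box", "video", "coupon", "timer"]

def make_sample(rng):
    """Synthesize one (image-placeholder, label) training pair.

    A real pipeline would draw the component; here the 'image' is a
    stand-in tuple of (class, width, height)."""
    label = rng.choice(COMPONENT_CLASSES)
    w, h = rng.randint(40, 300), rng.randint(20, 120)
    image = (label, w, h)  # placeholder for a rendered bitmap
    return image, label

def make_dataset(n, seed=0):
    """Generate n labeled samples deterministically from a seed."""
    rng = random.Random(seed)
    return [make_sample(rng) for _ in range(n)]

dataset = make_dataset(1000)
labels = {label for _, label in dataset}
```

Because the labels come from the generator itself, adding a new component class only means adding a template, not re-marking existing samples.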
Before introducing the method embodiments provided by the present application, a hardware architecture on which the technical solution provided by the present application can be based is explained.
In an implementation scheme, the method provided by the embodiments of the present application can be implemented based on the system architecture shown in fig. 2. The data processing system shown in fig. 2 includes a server 001 and a client 002. The server may be a physical server, a virtual server, a cloud, and the like, which is not specifically limited in this embodiment. The client may be any device such as a smartphone, a notebook computer, a smart wearable device, or a desktop computer. Wherein:
the client 002 is used for sending the picture file of the picture to the server;
the server 001 is used for determining layout information of the picture according to the picture file of the picture; processing the picture based on the layout information to determine a partial graph with component functions in the picture; and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
In this embodiment, the client 002 and the server 001 may be connected via a wireless or wired network. If the client 002 and the server 001 are connected through a mobile network, the network standard of the mobile network may be any one of 4G (LTE), 4G+ (LTE+), WiMax, 5G, and the like.
In this embodiment, the server 001 may generate the program code corresponding to the picture according to the first description information. The client 002 can receive the program code sent by the server 001 and display it for the user to view. Alternatively, the client 002 may request the first description information from the server 001 and display a visual layout page for the picture according to the acquired first description information, through which the user can participate in correcting errors in the first description information. The data produced when the user participates in error correction may be saved as auxiliary data. This saved auxiliary data can be used to improve the data processing accuracy of the server 001, more specifically, to improve the accuracy of generating the first description information or of generating the program code. For example, when the user corrects the image recognition result of a partial graph through the visual layout page, the recognition results before and after correction can be used respectively as a negative sample and a positive sample of that partial graph to train the image recognition model, thereby improving its recognition accuracy.
The specific work flows of the components, such as the server and the client, in the data processing system and the signaling interaction therebetween provided in this embodiment will be further described in the following embodiments, and will not be described herein again.
Of course, the method provided by the embodiments of the present application may also be performed by a stand-alone device, such as a client device with sufficient computing capability; in other words, the execution subject of the method provided by the following embodiments may be the client device.
Fig. 3 is a schematic flowchart illustrating a data processing method according to an embodiment of the present application. The execution subject of the method provided by this embodiment may be the server in the system embodiment described above. As shown in fig. 3, the method comprises the following steps:
101. determining layout information of the picture according to the picture file of the picture;
102. processing the picture based on the layout information to determine a partial graph with component functions in the picture;
103. and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
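The three steps above can be sketched as a pipeline. This is a minimal illustration under assumed names and data shapes (the patent fixes neither): the "picture file" is pre-parsed layer data, and the image classifier of step 102 is stubbed by a role tag rather than a trained model.

```python
def determine_layout(picture_file: dict) -> dict:
    """Step 101: derive layout information from the picture file.
    Here the 'picture file' is already-parsed layer description data."""
    return {"nodes": picture_file.get("layers", [])}

def find_component_partials(picture: str, layout: dict) -> list:
    """Step 102: crop the picture by layout nodes and keep the crops
    that have a component function. A real system would run an image
    classifier on each crop; this stub keeps nodes tagged 'component'."""
    return [n for n in layout["nodes"] if n.get("role") == "component"]

def build_description(layout: dict, partials: list) -> dict:
    """Step 103: combine the layout and the recognized partial graphs
    into first description information usable for code generation."""
    return {"layout": layout, "components": partials}

picture_file = {"layers": [
    {"name": "bg", "role": "container"},
    {"name": "play", "role": "component"},
]}
layout = determine_layout(picture_file)
partials = find_component_partials("page.png", layout)
description = build_description(layout, partials)
```

The point of the split is that step 103 only needs classifications, not detected positions; positions come from the layout of step 101.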
In the foregoing 101, the picture file of the picture may be created by a designer with design software for a specific application scenario and then exported from that software. For example, after a designer completes the design of an application's UI page graph with design software, the picture file of the UI page graph can be obtained through that software. The design software may include, but is not limited to, Sketch, Adobe Illustrator, Photoshop, and Adobe XD.
The picture file usually includes a plurality of layers that form the picture and the layer description information of each of those layers, such as position information, size information, and CSS attributes. Therefore, by parsing the picture file, the layer description information of the picture can be extracted so as to obtain the layout information of the picture. The data format of the layer description information may be, but is not limited to, the JSON (JavaScript Object Notation) data format or the HTML data format. In practice, however, designers and programmers focus on very different things: the designer cares about whether the picture file achieves the desired visual effect, while the programmer cares about the reasonableness of the layer structure and its nesting. For example, during design, designers sometimes add layers that have no influence on layout or vision, or splice several small layers together to achieve a desired visual effect. The rice cooker 321 shown in fig. 1, for instance, is composed of two primitive elements a1 and a2 located on different layers; yet from the programmer's perspective, the graph formed by splicing elements a1 and a2 (i.e., the rice cooker 321) should be treated as a whole, in which case the layers need to be merged. Therefore, to improve the structural reasonableness and simplicity of the layout information of the picture, reprocessing operations such as layer merging and deletion of unnecessary layers need to be performed on the layer description information before it is used to obtain the layout information.
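The layer-merging case described above (spliced decorative layers like a1/a2 forming one graphic) can be sketched as a greedy merge of overlapping bounding boxes. This is an illustrative assumption about one possible merge criterion; the patent does not specify the exact rule.

```python
def bbox_union(a, b):
    """Union of two (x, y, w, h) boxes."""
    x, y = min(a[0], b[0]), min(a[1], b[1])
    right = max(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1] + a[3], b[1] + b[3])
    return (x, y, right - x, bottom - y)

def overlaps(a, b):
    """True when two (x, y, w, h) boxes intersect."""
    return not (a[0] + a[2] <= b[0] or b[0] + b[2] <= a[0] or
                a[1] + a[3] <= b[1] or b[1] + b[3] <= a[1])

def merge_spliced_layers(layers):
    """Greedily merge overlapping layers (like a1/a2 forming one
    rice-cooker graphic) into single merged layers."""
    merged = []
    for box in layers:
        for i, m in enumerate(merged):
            if overlaps(box, m):
                merged[i] = bbox_union(box, m)
                break
        else:
            merged.append(box)
    return merged

# a1 and a2 overlap and become one layer; the third layer stays separate.
result = merge_spliced_layers([(0, 0, 50, 50), (40, 10, 50, 50), (200, 0, 30, 30)])
```

A production rule would likely also consider layer types and z-order, not just geometric overlap.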
That is, in an implementable technical solution, the step 101 "determining layout information of a picture according to a picture file of the picture" may specifically include:
1011. extracting layer description information of the picture from the picture file;
1012. and carrying out layer reprocessing treatment on the layer description information to obtain the layout information of the picture.
In 1011 above, the picture file may be parsed and the layer description information of the picture extracted from it. Specifically, the picture file data may be read automatically by a corresponding parsing program and the layer description information extracted; the layer description information may include, but is not limited to, position information, size information, CSS (Cascading Style Sheets) attributes, and so on. Of course, a parsing tool, such as a software development kit (SDK), may also be used to parse the picture file and extract the layer description information of the picture.
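Extraction of the fields named above (position, size, CSS attributes) from JSON-format layer data can be sketched as follows. The JSON shape here is hypothetical; real design-file exports (e.g. from Sketch) are far richer, and only the fields the text names are shown.

```python
import json

# Hypothetical layer description as exported from a design file.
raw = """
{
  "layers": [
    {"name": "title", "frame": {"x": 20, "y": 16, "width": 300, "height": 40},
     "css": {"font-size": "28px", "color": "#333333"}},
    {"name": "cover", "frame": {"x": 0, "y": 72, "width": 750, "height": 420},
     "css": {"background-color": "#ffffff"}}
  ]
}
"""

def extract_layer_descriptions(text):
    """Pull (position, size, CSS attributes) per layer out of the file."""
    doc = json.loads(text)
    out = []
    for layer in doc["layers"]:
        frame = layer["frame"]
        out.append({
            "name": layer["name"],
            "position": (frame["x"], frame["y"]),
            "size": (frame["width"], frame["height"]),
            "css": layer.get("css", {}),
        })
    return out

descriptions = extract_layer_descriptions(raw)
```

An SDK-based parser would produce the same kind of normalized records, just from a binary or proprietary format instead of raw JSON.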
It should be noted here that the layer description information extracted from the picture file is based on top, bottom, left, right positioning (also called absolute positioning), and a layout of the picture in this positioning mode can be obtained by using the layer description information directly. However, such a layout is not extensible, reads poorly, and is unmaintainable for developers. Therefore, after the layer description information is extracted from the picture file, further processing is performed on it, such as adjusting the layer layout structure and CSS attributes, to obtain layer description information conforming to the layer protocol specification and thereby layout information conforming to that specification; specific implementations are described in the related passages below.
The 1012 "obtaining the layout information of the picture by performing layer reprocessing on the layer description information" may specifically be implemented by the following steps:
a01, identifying the layer contained in the picture based on the layer description information;
a02, performing layout processing on the image layers contained in the picture by using a layout algorithm to obtain layout information conforming to the protocol specification of the image layers.
In specific implementation, a layout algorithm may be used to perform layout processing on the layers included in the picture, the layout processing including at least one of the following: loop identification, reasonable positioning, positioning-mode conversion, deletion of redundant nesting, reasonable group nesting, and the like. In addition, adaptive processing can be performed on the elements corresponding to the layers, for example for the extensibility of the elements (adaptive node positions and expandable sizes of texts, images, components, and the like), the alignment relationships among elements, and maximum-width and height fault tolerance. For the specifics of applying a layout algorithm to the layers or adaptively processing the elements themselves, reference may be made to the prior art, and details are not repeated here. The layer protocol specification may be a D2C (Design 2 Code) UI layer protocol specification, and the resulting layout information conforming to the layer protocol specification may be the component tree 20 shown in fig. 4a.
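One of the clean-ups named above, deletion of redundant nesting, can be sketched as collapsing container nodes that wrap exactly one child and contribute no styling of their own. The tree shape and the "no CSS means redundant" criterion are assumptions for illustration.

```python
def delete_redundant_nesting(node):
    """Collapse container nodes that wrap exactly one child and add
    no styling of their own; recurses bottom-up through the tree."""
    children = [delete_redundant_nesting(c) for c in node.get("children", [])]
    node = dict(node, children=children)  # copy with cleaned children
    if (node.get("type") == "container" and len(children) == 1
            and not node.get("css")):
        return children[0]  # the wrapper is redundant; promote the child
    return node

# A container wrapping a container wrapping a text node: both wrappers
# are redundant under this rule, so only the text node survives.
tree = {"type": "container", "children": [
    {"type": "container", "children": [
        {"type": "text", "children": [], "value": "hello"}
    ]}
]}
flattened = delete_redundant_nesting(tree)
```

Flatter trees produce shorter, more readable generated code, which is exactly why this pass exists in the layout processing.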
After the layout information (for example, the layout information may be represented as the component tree shown in fig. 4 a) meeting the layer protocol specification is obtained by the above method, possible components may be cut out based on the layout information with the container node as the granularity. That is, in an implementation solution, the processing on the picture in the step 102 may include a cropping process, an image recognition, and the like. Specifically, the step 102 "processing the picture based on the layout information to determine the partial graph with the component function in the picture" may include:
1021. cropping the picture according to the layout information to obtain at least one local graph;
1022. performing image recognition on the at least one local graph to obtain the local graph with a component function.
More specifically, the step 1021 may include:
10211. determining the clipping region according to the layout information;
10212. cropping the picture according to the clipping region to obtain the cropped local graph.
In 10211 above, the position information of the container nodes is known from the layout information, so the clipping region can be determined based on the position information of a container node. That is, step 10211 "determining the clipping region according to the layout information" can be implemented by the following steps:
a11. determining nodes according to the layout information;
a12. acquiring the position information of the nodes;
a13. determining the clipping region based on the position information of the nodes.
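Steps a11 to a13 can be sketched as a walk over the layout tree that emits one clipping region per container node (or per node of any type, matching the element-node variant discussed below). The dict-based tree shape is an assumption for illustration.

```python
def collect_clip_regions(node, container_only=True, out=None):
    """Walk the layout tree; emit one clipping region (x, y, w, h) per
    container node, or per node of any type when container_only=False."""
    if out is None:
        out = []
    if node["type"] == "container" or not container_only:
        out.append((node["x"], node["y"], node["w"], node["h"]))
    for child in node.get("children", []):
        collect_clip_regions(child, container_only, out)
    return out

tree = {"type": "container", "x": 0, "y": 0, "w": 750, "h": 400, "children": [
    {"type": "container", "x": 10, "y": 10, "w": 200, "h": 100, "children": []},
    {"type": "image", "x": 220, "y": 10, "w": 100, "h": 100, "children": []},
]}
regions = collect_clip_regions(tree)  # container nodes only
```

With `container_only=False` the image node's region is emitted as well, mirroring the choice between container-node granularity and element-node granularity.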
For ease of understanding the above steps, the following description refers to the example shown in fig. 4a. As the layout information of the nodes of the component tree 20 shown in fig. 4a illustrates, the nodes in the component tree 20 are of two types: container nodes (e.g., View nodes) and component nodes (e.g., Text nodes, Image nodes). Container nodes can also have a nested (or parent-child) relationship; for example, in fig. 4a the container node View.outer 22 is nested in the container node View.bd 21; that is, View.outer 22 is a child node of View.bd 21, and View.bd 21 is the parent node of View.outer 22. When determining nodes from the layout information, only the container nodes may be treated as the nodes to be determined, and the clipping regions are then determined from the container nodes' position information: for example, based on the position information of View node 21, region 31 can be determined as a clipping region; based on the position information of View node 22, region 32 can be determined as a clipping region; and, by analogy, the other clipping regions in picture 30 can be determined, such as regions 33, 34, 35, 36, 37, and 38.
Of course, both the container nodes and the component nodes (also referred to as element nodes) may be regarded as nodes to be determined. In this case, when the clipping regions are determined according to the position information of the nodes, in addition to determining the clipping regions corresponding to the container nodes according to the position information of the container nodes, the clipping regions corresponding to the element nodes may also be determined according to the position information of the element nodes. For example, the region 321 may be determined as a clipping region according to the position information of the element node 221; the region 311 may be determined as a clipping region according to the position information of the element node 211; and by analogy, the clipping regions corresponding to the other element nodes can be determined (not shown in the figure).
In the above 10212, a cropping operation is performed on the picture according to the clipping regions determined in step 10211, so that the corresponding partial graphs can be cut out. For example, the picture 30 (i.e., the picture 10) is cropped according to the clipping regions determined by the container nodes in fig. 4a, so that the partial graphs included in the partial graph set 40 can be obtained; if the picture 30 is cropped according to the clipping regions determined by both the container nodes and the element nodes, then in addition to the partial graphs included in the partial graph set 40, the partial graphs corresponding to the clipping regions determined based on the position information of the element nodes, such as the partial graph corresponding to the clipping region 321, can also be obtained. It should be noted that the picture may be cropped with the container nodes in the layout information as the granularity; the partial graphs obtained by cropping the picture according to the clipping regions determined by the position information of the container nodes correspond to the parts of the picture that may need to be componentized.
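The cropping operation of step 10212 can be sketched as below. A real system would crop pixel data with an imaging library; to keep the sketch self-contained, the "picture" here is a plain 2D grid of values, which is an assumption of this illustration.

```python
# Hypothetical sketch of step 10212: cut a partial graph out of a picture
# given a clipping region (x, y, width, height). The picture is modeled as
# a list of rows so the sketch needs no imaging library.

def crop(picture, region):
    """Return the sub-grid of `picture` covered by the clipping region."""
    x, y, w, h = region
    return [row[x:x + w] for row in picture[y:y + h]]

# a 6x4 "picture" whose pixels record their own (row, column) position
picture = [[(r, c) for c in range(6)] for r in range(4)]
partial = crop(picture, (1, 2, 3, 2))  # x=1, y=2, width=3, height=2
print(partial)  # [[(2, 1), (2, 2), (2, 3)], [(3, 1), (3, 2), (3, 3)]]
```

Applying `crop` once per region returned by the clipping-region step yields the set of partial graphs to be classified.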
In the above, after obtaining at least one local graph, the category to which the graph content in the local graph belongs needs to be identified, so as to determine the category to which the local graph belongs (e.g., a video time display component type, a countdown component type, a scroll bar component type, etc.). Specifically, when determining the classification to which the image content in the at least one local image belongs, an image classification model may be trained based on a small number of high-quality samples, then the at least one local image is input into the trained image classification model, and feature extraction and analysis are performed on the at least one local image by using the trained image classification model, so as to obtain the classification to which the image content in the at least one local image belongs.
That is, 1022 "performing image recognition on at least one partial graph to obtain a partial graph with component functions" may specifically include:
10221. acquiring an image classification model;
10222. taking the at least one local graph as the input of the image classification model, and executing the image classification model to obtain an output result containing the classification of the graph content in the at least one local graph;
10223. and determining the partial graph with the graph content belonging to the component classification according to the output result so as to obtain the partial graph with the component function.
In specific implementation, the image classification model can be obtained by training on labeled samples, where a labeled sample can be understood as a high-quality sample. Such high-quality samples can be obtained by having professional annotators label them with an associated labeling tool (e.g., Labelme, Sloth, Vatic); they can also be obtained through a crowdsourcing task service platform, with a corresponding evaluation means then used to screen out the samples that meet the quality requirement. After the image classification model is obtained, the at least one partial graph can be input into the image classification model, and the image classification model assigns a probability value to each partial graph; the larger the probability value of a classification, the more likely the image classification model considers the graph content in the partial graph to belong to that classification. The probability value is compared with a preset threshold to obtain the final classification result. For example, suppose the preset threshold is set to 0.7; a classification whose probability value is greater than 0.7 is taken as the final classification result. After the partial graphs in the partial graph set 40 obtained in fig. 4b are input into the trained image classification model, the probability values of the categories to which the partial graphs belong are obtained as shown in table 50. As can be seen from table 50, only the probability values of the partial graph 31 and the partial graph 32 are greater than 0.7, so the final classification result is: the partial graph 31 and the partial graph 32 are video time display components (i.e., VideoBtn components).
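The thresholding logic of steps 10221-10223 can be sketched as follows. The model here is a stub returning canned probabilities in the spirit of the table 50 example; the identifiers and probability values are assumptions made for illustration.

```python
# Hypothetical sketch of steps 10221-10223: run partial graphs through an
# image classification model and keep only those whose best class
# probability exceeds the 0.7 threshold from the example.

THRESHOLD = 0.7

def classify_stub(partial_graph_id):
    """Stand-in for the trained image classification model."""
    probs = {
        31: {"VideoBtn": 0.92, "CountDown": 0.03},
        32: {"VideoBtn": 0.88, "CountDown": 0.05},
        33: {"VideoBtn": 0.40, "Others": 0.55},
    }
    return probs[partial_graph_id]

def component_partials(graph_ids):
    """Return {partial graph id: class} for graphs recognized as components."""
    results = {}
    for gid in graph_ids:
        label, p = max(classify_stub(gid).items(), key=lambda kv: kv[1])
        if p > THRESHOLD:
            results[gid] = label
    return results

print(component_partials([31, 32, 33]))  # {31: 'VideoBtn', 32: 'VideoBtn'}
```

Partial graph 33 is dropped because its best probability (0.55) is below the threshold, matching the behavior described for table 50.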
Here, it should be noted that: after the classification of the graph content in the at least one partial graph is obtained by using the image classification model, the classification result corresponding to the at least one partial graph can be used as training sample data of the image classification model, so as to improve the precision of the image classification model. Of course, in order to ensure the accuracy of the classification result of the at least one partial graph, the image classification result of a partial graph may be corrected, so that the image classification results both before and after correction are used as training sample data of the image classification model to improve its precision; the corresponding correction processing operation will be described below. Therefore, in the process of training the image classification model, only a small amount of high-quality sample data needs to be provided at the beginning of training, and in the subsequent training process, training sample data can be generated automatically by the program, so as to improve the precision of the image classification model.
In addition, the present embodiment can be supplemented with the following contents: in this embodiment, component classification is performed, so a corresponding image classification model (as shown in fig. 4 b) can be obtained, and the image classification model is used to classify and identify the partial graphs. In the example shown in fig. 4b, on an e-commerce type application page, the component categories generally include, for example, a videobtn (video time display) category, a countdown (countdown) category, a sliderbar (scroll bar) category, and an others category. Since the image classification model can classify these four component categories, in the example shown in fig. 4b, an image classification model trained on samples of the above categories can be selected to identify the component categories. In other application scenarios, where the page component categories may not be the above categories, a model different from this image classification model needs to be used; the scheme provided by this embodiment may therefore provide a plurality of image classification models for different application scenarios. An image classification model can be selected by the user, or can be selected automatically (e.g., according to task attributes) by the execution subject (e.g., a server) of the method of this embodiment, which is not particularly limited in this embodiment.
Besides providing various trained and callable image classification models, the scheme of the embodiment can also provide basic models, such as a deep learning model, a convolutional neural network model and the like; and providing a corresponding interface for the user so that the user can upload training data, scene related data and the like through the client, and then training the basic model selected by the user by using the training data, the scene related data and the like uploaded by the user to obtain the model which accords with the scene required by the user.
The basic model and the multiple image classification models which can be selected and called by the user can be deployed on the server side in an off-line mode, so that functions of applying, selecting and using on line (such as through a webpage or corresponding application APP) are provided for the user. Besides these models, the solution of this embodiment also provides a service for automatically generating program codes corresponding to pictures (or page design drawings) for users on the network side, and users can select image classification models or train required models themselves when using this service. Different service items selected by the user can adopt different charging schemes; for example, the user selects the model required for self-training, then completes the automatic generation service of the program code corresponding to the picture by using the model trained by the user, and the charging scheme is the sum of the cost corresponding to the code generation service and the cost of the training model. During specific implementation, a corresponding charging strategy can be established for each service item provided for the user, and during actual charging, charging can be carried out according to the service item category, the data processing amount and the like selected by the user and the established charging strategy in advance.
In this embodiment, the step 103 of obtaining the first description information for generating the program code according to the layout information and the partial diagram with the component function may specifically include:
1031. locating a target node corresponding to the local graph with the component function in the layout information;
1032. adding the content of the local graph corresponding to the target node at the target node to obtain second description information; wherein, the content of the local graph is determined based on the component classification to which the local graph belongs;
1033. and performing conversion processing conforming to the code semantics on the second description information to obtain the first description information.
In a specific implementation, among the image recognition results of the at least one partial graph, some partial graphs may be recognized as component classifications, while others may not. In this case, the node in the layout information corresponding to a partial graph whose image recognition result is a component classification may be determined as a target node to which content needs to be added.
In the foregoing steps 1031 and 1032, the target node may be found from the layout information by calling the corresponding identification function, and then the local graph content corresponding to the target node is added to the corresponding target node, so as to obtain the second description information. The content of the local graph may be determined based on the component classification to which the local graph belongs, or the content of the local graph is the component classification category to which the graph content in the local graph belongs.
For example, with continued reference to fig. 4a and 4b, according to the image recognition result of the at least one partial graph of the picture 10, it is determined that the partial graph 31 and the partial graph 32 are video time display components (i.e., VideoBtn components). Based on this image recognition result, an identification function may be called and executed to find, from the layout information, the View 21 node corresponding to the partial graph 31 and the View 22 node corresponding to the partial graph 32, and the corresponding image recognition results are respectively added to the smart attributes of the View 21 node and the View 22 node. In the layout information, the View 21 node corresponding to the partial graph 31 contains a component name parameter (componentName: "View"), a unique identification name (i.e., id) and a class name (i.e., className), and does not contain a "smart" attribute; after the image recognition result (such as the component classification category) is added, the content of the View 21 node corresponding to the partial graph 31 includes, in addition to the above information, the content of the "smart" related attribute.
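The steps 1031-1032 can be sketched as below: locate the target node matching a recognized partial graph and attach its component classification as a "smart" attribute, producing the second description information. The JSON shape and the "layerProtocol" nesting are assumptions for the sketch.

```python
# Hypothetical sketch of steps 1031-1032: find the target node by id in the
# layout information and record the image recognition result on it.
# Field names ("id", "smart", "layerProtocol") are illustrative assumptions.

def attach_recognition(node, target_id, classification):
    """Attach a component classification to the node whose id matches."""
    if node.get("id") == target_id:
        node["smart"] = {"layerProtocol": {"component": {"type": classification}}}
        return True
    return any(attach_recognition(c, target_id, classification)
               for c in node.get("children", []))

layout = {"componentName": "View", "id": "bd21", "className": "bd",
          "children": [{"componentName": "View", "id": "outer22",
                        "children": []}]}
attach_recognition(layout, "outer22", "VideoBtn")
print(layout["children"][0]["smart"])
# {'layerProtocol': {'component': {'type': 'VideoBtn'}}}
```

Before the call the inner node holds only componentName and id, mirroring the View 21 example; afterwards it additionally carries the "smart" content.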
In practical applications, the situation shown in fig. 4b may occur, in which two partial graphs belong to the same component class. For example, the View 21 node and the View 22 node, corresponding respectively to the partial graph 31 and the partial graph 32 in the picture 10, are both identified as video time display components (i.e., the VideoBtn class), and the View 21 node corresponding to the partial graph 31 and the View 22 node corresponding to the partial graph 32 have a nested relationship. Therefore, in the process of semantically converting the contents corresponding to the View 21 node and the View 22 node, the node to be converted needs to be accurately determined in combination with the nesting relationship between the two nodes, so that the content corresponding to the node to be converted can be semantically converted. That is, in an implementable technical solution, the step 1033 "perform conversion processing conforming to the code semantics on the second description information to obtain the first description information" may be implemented by specifically adopting the following steps:
a21, traversing the second description information, and searching at least two nodes which have nesting relation and belong to the same classification in the second description information;
a22, if the second description information contains at least two nodes which have nesting relations and belong to the same classification, determining that the node in the inner layer of the nesting relations in the at least two nodes is the node to be converted;
a23, performing semantic conversion processing on the content corresponding to the node to be converted in the second description information to obtain the first description information.
In the above steps a21 and a22, some filtering rules may be added to the recognition function, for example, when nodes of a plurality of nested inclusion relationships are recognized as the same classification node, only the node at the innermost layer is taken as the node to be converted.
For example, taking the identification results corresponding to the partial graph 31 and the partial graph 32 as an example, as can be seen from the above example, the View 21 node corresponding to the partial graph 31 and the View 22 node corresponding to the partial graph 32 have a nested relationship, and both the View 21 node and the View 22 node belong to the video time display component category (the VideoBtn category). The code program of the identification function can be configured with the relevant filtering rules, so that the code program using the identification function can determine that the innermost View 22 node is the video time display component node (i.e., the VideoBtn node) that needs to be converted.
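The filtering rule of steps a21-a22 (keep only the innermost node when nested nodes share a classification) can be sketched as follows, mirroring the View 21 / View 22 example. Representing the classification as a plain "smart" string is a simplifying assumption.

```python
# Hypothetical sketch of steps a21-a22: among nested nodes of the same
# classification, return only the innermost ones as nodes to be converted.

def innermost_of_class(node, cls):
    """Return ids of nodes classified as `cls` that have no descendant
    of the same class, i.e. the innermost node of each nested chain."""
    is_match = node.get("smart") == cls
    hits = []
    for child in node.get("children", []):
        hits.extend(innermost_of_class(child, cls))
    if is_match and not hits:
        return [node["id"]]
    return hits  # descendants matched, so the outer node is filtered out

tree = {"id": "View21", "smart": "VideoBtn",
        "children": [{"id": "View22", "smart": "VideoBtn", "children": []}]}
print(innermost_of_class(tree, "VideoBtn"))  # ['View22']
```

As in the example, View 21 is dropped because its descendant View 22 carries the same VideoBtn classification.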
The above-mentioned a23 "performing semantic conversion processing on the content corresponding to the node to be converted in the second description information" includes at least one of the following:
changing the name parameter in the content corresponding to the node to be converted into the classification name of the node to be converted;
and adding corresponding attribute values in the contents corresponding to the nodes needing to be converted according to the classes to which the nodes needing to be converted belong.
For example, following the examples listed in steps a21 and a22, after the code program executing the identification function determines that the View 22 node in the second description information is a node that needs to be converted, an expression function may be called on the content corresponding to the View 22 node to replace the name parameter componentName: "View" with the classification name of the View 22 node, namely "VideoBtn". The expression function is used to process the nodes that need to be converted, such as by name replacement; the component name is associated with the component according to its category (VideoBtn) and the label entered when the component was registered, that is, when a component is registered for component identification, its category needs to be entered at the same time. In addition, the expression function can also extract time information, such as "00:35", and add it to the content corresponding to the View 22 node as an attribute value of the View 22 node.
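The semantic conversion of step a23 can be sketched as below: the name parameter is replaced by the classification name, and recognized content (the "00:35" time text) is promoted to an attribute. The field names, the "props" container, and the time-extraction regex are all assumptions of this sketch, not the embodiment's actual expression function.

```python
# Hypothetical sketch of step a23: rename the node after its classification
# and lift a recognized hh:mm time string out of a Text child into an
# attribute of the converted node.
import re

def expression_convert(node, classification):
    node["componentName"] = classification  # e.g. "View" -> "VideoBtn"
    for child in node.get("children", []):
        m = re.search(r"\d{1,2}:\d{2}", child.get("text", ""))
        if m:  # promote the time text, e.g. "00:35", to an attribute
            node.setdefault("props", {})["time"] = m.group(0)
    return node

node = {"componentName": "View", "id": "outer22",
        "children": [{"componentName": "Text", "text": "00:35",
                      "children": []}]}
print(expression_convert(node, "VideoBtn")["props"])  # {'time': '00:35'}
```

A fuller version would also delete the now-redundant child nodes, as described for the conversion processing below.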
The above-mentioned a23 "performing conversion processing conforming to the code semantics on the second description information to obtain the first description information", may further include:
judging whether a child node exists under the node needing to be converted or not according to the second description information;
and deleting the content corresponding to the child node under the node to be converted in the second description information under the condition that the child node under the node to be converted also exists.
Further, the method provided by this embodiment may further include the following steps:
104. and obtaining a program code corresponding to the picture according to the first description information.
In a specific implementation, after the first description information is obtained, the first description information may be input into a DSL converter, and the DSL converter generates different types of program codes corresponding to the first description information, such as React, Vue, and the like, so as to restore the picture. For example, referring to fig. 5, by inputting the first description information of the partial graph 31 and the partial graph 32 of the picture 10 into the DSL converter, the program code segments corresponding to the partial graph 31 and the partial graph 32 can be obtained.
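The DSL conversion of step 104 can be sketched as a tree-to-code rendering. A real converter emits complete React/Vue modules; this minimal sketch only shows the mapping from the first-description JSON to a JSX-like string, and the field names are assumptions.

```python
# Hypothetical sketch of step 104: render first-description JSON into a
# JSX-like string, one tag per node, children indented beneath parents.

def to_jsx(node, indent=0):
    pad = "  " * indent
    name = node["componentName"]
    children = node.get("children", [])
    if not children:
        return f"{pad}<{name} />"
    inner = "\n".join(to_jsx(c, indent + 1) for c in children)
    return f"{pad}<{name}>\n{inner}\n{pad}</{name}>"

tree = {"componentName": "View",
        "children": [{"componentName": "VideoBtn", "children": []}]}
print(to_jsx(tree))
# <View>
#   <VideoBtn />
# </View>
```

Because the semantic step already renamed converted nodes (e.g. to "VideoBtn"), the emitted tags directly reference the recognized components.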
In the method provided by this embodiment, the layout information of the picture is determined according to the picture file of the picture, the picture is processed based on the layout information to obtain at least one partial graph, and image recognition is performed on the at least one partial graph, so that first description information is generated according to the layout information and the image recognition result (specifically, the component classification result) of the at least one partial graph, and the program code corresponding to the picture is obtained according to the first description information. The picture can be restored by executing the program code. In the whole program code generating process, since the position information of the partial graphs is accurate, the generated program code is accurate as long as the recognition results of the partial graphs are accurate. In addition, the scheme of this embodiment is simple and helps improve the accuracy of automatic code generation; when training the model used for recognizing the partial graphs, the required samples can be generated entirely automatically by the program, no longer relying on manual annotation, which helps reduce cost.
Furthermore, the scheme provided by the embodiment also provides a participation interface for the user, so that the user can conveniently participate in one or more links in the whole process, and the accuracy of the whole scheme is improved. Specifically, the present embodiment may further include the following steps:
105a, responding to a visualization display request sent by a client for the first description information, and returning a visualization arrangement page related to the picture generated according to the first description information to the client;
105b, receiving operation information on the visualization arrangement page sent by the client;
105c, determining auxiliary data for improving the generating accuracy of the first description information according to the operation information.
In the above steps 105a and 105b, the user may send a visualization display request to the execution main body of this embodiment for the first description information through an interaction manner (such as a hand touch, a mouse, and a keyboard) provided by the client, and after receiving the visualization request, the execution main body of this embodiment returns a visualization layout page related to the picture generated according to the first description information to the client, where the visualization layout page provides an operation interface for the user to perform operations such as logic layout, data layout, interactive layout, and visual layout for the picture. The logic arrangement is to arrange nesting relations among components or elements in the picture contained in the first description information, the data arrangement is to correct errors of positions, sizes, names, component identification classifications (i.e. the above-mentioned image identification results) of the components or elements in the first description information, and the interactive arrangement and the visual arrangement are to add new components or elements to the picture.
For example, taking the visualization layout page shown in fig. 6 as an example, through the visualization layout page, the user may add a new component or element to the picture 61, for example, through a drag operation, a button component 610 may be added to the picture 61, and in addition, the position, size, CSS style, name, etc. of the component or element of the picture may be modified through operation modules such as style, attribute, event, data, etc. in the visualization layout page. As shown in fig. 6, the user may also perform error correction and the like on the classification recognition result (i.e., the image recognition result mentioned above) of one or more components through the visualization layout page.
The step 105c of determining the auxiliary data for improving the accuracy of generating the first description information according to the operation information may specifically include:
the operation information comprises information for correcting an image recognition result of a local graph, and the image recognition result before correction and the image recognition result after correction are both used as training sample data of an image recognition model and used for improving the precision of the image recognition model;
the image recognition model is used for carrying out image recognition on the local graph.
For example, as shown in fig. 6, assuming that the recognition result of the partial graph 611 of the picture 61 is the image type, the type of the partial graph 611 may be changed by triggering the type conversion 612 control in the attribute list, changing the type to which the partial graph 611 belongs to the video component type, so as to correct the image recognition result of the partial graph 611. After the image recognition result of the partial graph 611 is corrected, both the image recognition result before correction and the image recognition result after correction can be used as training sample data of the image classification model (i.e., the image recognition model) in fig. 4b, so as to improve the precision of the image classification model. Therefore, in the process of improving the image classification model, training sample data can be generated entirely automatically by the program, which can reduce the cost of manual annotation.
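The sample-collection behavior described for the fig. 6 correction can be sketched as below. The record layout ("image"/"label"/"source") is an assumption; the point is only that both the pre-correction and post-correction labels are retained as training data.

```python
# Hypothetical sketch of steps 105a-105c: when a user corrects a
# recognition result, store both the model's original label and the user's
# corrected label as training samples for the image classification model.

def record_correction(samples, image_id, before, after):
    samples.append({"image": image_id, "label": before, "source": "model"})
    samples.append({"image": image_id, "label": after, "source": "user"})
    return samples

# e.g. partial graph 611: recognized as Image, corrected to VideoBtn
samples = record_correction([], "partial_611", "Image", "VideoBtn")
print([s["label"] for s in samples])  # ['Image', 'VideoBtn']
```

Accumulating such records over many corrections yields retraining data without any dedicated manual annotation pass.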
The above steps provide an error-correction interface for the user, using the user's corrections to improve the accuracy of the overall scheme. In addition to the error-correction interface, this embodiment also provides an interface for the user to directly modify the first description information. That is, the method provided in this embodiment may further include:
106. and modifying the first description information according to the operation information so as to obtain the program code according to the modified first description information.
Further, the solution provided by this embodiment may also provide an interface for a user to modify the final program code, that is, the method provided by this embodiment further includes:
107a, responding to a request sent by a client for acquiring the program code, and returning the program code to the client to be displayed on an interface of the client;
107b, receiving a modification to the program code by the client feedback;
and 107c, saving the modified program codes.
For example, referring to fig. 7, after the user requests the program code corresponding to the picture 61 by triggering the control 613, the program code E1 of the picture 61 is displayed on the client interface; the user can modify the program code through an interaction manner provided by the client, and the modified program code can then be saved by triggering the "save cmd + s" control 614.
The method provided by the embodiment of the present application will be described below with reference to a specific application scenario, that is, a design draft designed by a UI designer in a UI interface scenario. Fig. 8 is a flowchart illustrating a visual manuscript processing method according to another embodiment of the present application. As shown in fig. 8, the visual manuscript processing method includes:
201. determining layout information of the visual manuscript according to the visual manuscript file;
202. processing a page diagram corresponding to the visual draft based on the layout information to determine a local diagram with component functions in the page diagram;
203. and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
According to the technical scheme provided by the embodiment, the layout information of the visual manuscript is determined according to the visual manuscript file, the page map corresponding to the visual manuscript is processed based on the layout information, so that the local map with the component function in the page map is obtained, and the first description information is generated based on the layout information and the local map with the component function in the page map. In the whole process, only the identification result of the local graph needs to be ensured to be accurate, and the scheme is simple.
The step 202 of processing the page map corresponding to the visual draft based on the layout information to determine the partial map having the component function in the page map may specifically include:
2021. According to the layout information, cutting the page map to obtain at least one local map;
2022. and performing image recognition on at least one local graph to obtain a local graph with component functions.
The clipping process performed on the page map in 2021 may specifically include:
20211. determining a cutting area according to the layout information;
20212. and cutting the page map according to the cutting area to obtain a cut local map.
The step 2022 "performing image recognition on at least one partial graph to obtain a partial graph with component functions" may specifically include:
20221. acquiring an image classification model;
20222. taking the at least one local graph as the input of the image classification model, and executing the image classification model to obtain an output result containing the classification of the graph content in the at least one local graph;
20223. and determining the partial graph with the graph content belonging to the component classification according to the output result so as to obtain the partial graph with the component function.
The step 203 of obtaining the first description information for generating the program code according to the layout information and the partial diagram with the component function may specifically include:
2031. locating a target node corresponding to the local graph with the component function in the layout information;
2032. adding the content of the local graph corresponding to the target node at the target node to obtain second description information; wherein, the content of the local graph is determined based on the component classification to which the local graph belongs;
2033. and performing conversion processing conforming to the code semantics on the second description information to obtain the first description information.
Further, the method of this embodiment may further include the following steps:
204. and obtaining a program code corresponding to the page map corresponding to the visual draft according to the first description information.
Further, the method provided by this embodiment further includes the following steps:
205. responding to a visualization display request sent by a client for the first description information, and returning a visualization arrangement page which is generated according to the first description information and is related to the page map to the client;
206. receiving operation information on the visual arrangement page sent by the client;
207. and determining auxiliary data for improving the generation accuracy of the first description information according to the operation information.
Here, it should be noted that: for the content of each step in the data processing method provided in the embodiment of the present application that is not described in detail in the foregoing embodiments, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated herein. In addition, the method provided by the embodiment of the present application may further include, besides the above steps, some or all of the other steps in the above embodiments; for details, refer to the corresponding content in the above embodiments, with the picture in the above embodiments replaced by the page map of the visual manuscript in the present embodiment.
More specifically, in connection with the embodiment shown in fig. 9, a method for processing a visual manuscript may include three steps:
S1, Design to JSON process
Namely, the layer JSON description information of the picture is derived from the UI design draft (i.e., the visual draft above).
S2, JSON to JSON Process
Specifically, after the layer JSON description information of the picture is subjected to layer parsing, layer correction, layer identification and other processing, the JSON description information conforming to the code structure (i.e., the second description information mentioned above) is obtained through a layout algorithm. Then, the picture is cropped, and component identification is performed on the cropped partial graphs; after content is added to the JSON description information conforming to the code structure based on the component identification results, the JSON description information conforming to the code semantics (namely the first description information mentioned above) can be obtained through semantic processing.
S3 JSON to Code process
The JSON description information conforming to the code semantics obtained in step S2 is input to the DSL, and different kinds of codes, such as React, Vue, Rax, H5, MiniApp, and the like, can be generated.
The JSON-to-JSON process of step S2 is the core of the solution of this embodiment, and may specifically include:
S21, obtaining the layer JSON description information of the picture from the design draft through layer parsing;
S22, after layer rectification and layer identification, the layer JSON description information reaches a layout algorithm layer, where JSON description information conforming to the code structure is generated; this is referred to below as JSON description information with a hierarchical structure.
S23, after the JSON description information with the hierarchical structure enters a component identification layer, the design draft picture is cropped at container-node granularity according to that description information to obtain one or more cropped graphs (i.e., the local graphs mentioned above); then each cropped graph is predicted with an image recognition model (more specifically, an image classification model) to obtain the cropped graphs having component functions (i.e., the local graphs with component functions mentioned above); finally, the component classification information of those cropped graphs is attached to the JSON description information with the hierarchical structure to obtain JSON description information with component information.
S24, inputting the JSON description information with the component information obtained from the previous layer into a semantic layer, where parameters such as the names of nodes requiring semantic conversion are modified, and/or corresponding attribute information is added, and/or some nodes are deleted, finally yielding the JSON description information conforming to the code semantics.
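Step S23 can be sketched roughly as follows. The node schema (`type`, `rect`, `children`), the label set, and the size-based stand-in classifier are all assumptions for illustration; a real pipeline would crop the design-draft pixels and run a trained image classification model on each crop.

```python
COMPONENT_CLASSES = {"button", "searchbar", "tabbar"}  # assumed label set

def container_nodes(node, out=None):
    """Walk the hierarchical JSON description and collect container nodes."""
    if out is None:
        out = []
    if node.get("type") == "container":
        out.append(node)
    for child in node.get("children", []):
        container_nodes(child, out)
    return out

def classify(rect):
    # Stand-in for the image classification model: a real system would
    # crop the picture to `rect` and run a trained classifier on the pixels.
    x, y, w, h = rect
    return "button" if w < 200 else "background"

def attach_component_info(layout_root):
    """Crop at container-node granularity, classify each crop, and attach
    the component classification to nodes recognized as components."""
    for node in container_nodes(layout_root):
        label = classify(tuple(node["rect"]))
        if label in COMPONENT_CLASSES:
            node["component"] = label
    return layout_root
```

Only nodes whose crops classify into a component class receive a `component` annotation; everything else (e.g. background regions) is left untouched, mirroring how only "cropped graphs having component functions" carry classification info forward.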
An exemplary embodiment of the present application further provides a visual manuscript processing system, which has the same structure as that shown in fig. 2. Specifically, the visual manuscript processing system comprises:
the client is used for sending the visual draft file to the server;
the server is used for determining the layout information of the visual manuscript according to the visual manuscript file; processing a page diagram corresponding to the visual draft based on the layout information to obtain a local diagram with component functions in the page diagram; and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
The server may be a physical server, a virtual server, a cloud service platform, or the like, which is not specifically limited in this embodiment; the client may be any device such as a smartphone, a notebook computer, a smart wearable device or a desktop computer.
The client and server in the visual draft processing system may have the same structures as those of the client and server corresponding to fig. 2. In addition, the execution principle and the interaction process of each component unit, such as the client and the server, in the embodiment of the visual manuscript processing system may refer to the description of the embodiment corresponding to fig. 8, and are not described herein again.
Fig. 10 is a schematic structural diagram of a data processing apparatus according to an exemplary embodiment of the present application. The apparatus includes: a first determining module 301, a processing module 302 and an obtaining module 303; wherein:
a first determining module 301, configured to determine layout information of a picture according to a picture file of the picture;
a processing module 302, configured to process the picture based on the layout information to determine a partial graph having component functions in the picture;
an obtaining module 303, configured to obtain first description information used for generating a program code according to the layout information and the local graph with the component function.
In the technical solution provided by this embodiment, the layout information of the picture is determined according to the picture file of the picture, the picture is processed based on the layout information to obtain the local graph with the component function, and the first description information is then generated according to the layout information and the local graph with the component function. In the whole process, as long as the identification accuracy of the local graph with the component function is ensured, the generated program code is accurate to a corresponding extent. The identification of the local graph with the component function can be implemented with a corresponding model, which can be obtained by training; the samples required for training can be generated automatically by a program without relying on manual labeling, which reduces cost.
Further, when the processing module 302 processes the picture based on the layout information to determine a partial graph having a component function in the picture, specifically, the processing module is configured to: according to the layout information, the picture is cut to obtain at least one local graph; and performing image recognition on at least one local graph to obtain a local graph with component functions.
The processing module 302, when performing cropping processing on the picture based on the layout information to obtain at least one partial graph, is specifically configured to: determining a cutting area according to the layout information; and cutting the picture according to the cutting area to obtain a cut local picture.
Further, when determining the cutting area according to the layout information, the processing module 302 is specifically configured to: determine nodes according to the layout information; acquire position information of the nodes; and determine the cutting area based on the position information of the nodes.
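The node-position-to-cutting-area step can be illustrated with a minimal sketch; the absolute `x`/`y`/`width`/`height` fields are an assumed node schema, and clamping to the canvas is one reasonable way to keep the crop rectangle valid.

```python
def cutting_area(node, canvas_w, canvas_h):
    """Derive a crop rectangle (x, y, w, h) from a node's position
    information, clamped to the picture's canvas bounds."""
    x = max(0, node["x"])
    y = max(0, node["y"])
    w = min(node["width"], canvas_w - x)
    h = min(node["height"], canvas_h - y)
    return (x, y, w, h)
```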
Further, when performing image recognition on the at least one local graph, the processing module 302 is specifically configured to: acquiring an image classification model; taking the at least one local graph as the input of the image classification model, and executing the image classification model to obtain an output result containing the classification of the graph content in the at least one local graph; and determining the partial graph with the graph content belonging to the component classification according to the output result so as to obtain the partial graph with the component function.
Further, when the obtaining module 303 obtains the first description information for generating the program code according to the layout information and the partial diagram with the component function, specifically:
locating a target node corresponding to the local graph with the component function in the layout information;
adding the content of the local graph corresponding to the target node at the target node to obtain second description information; wherein, the content of the local graph is determined based on the component classification to which the local graph belongs;
and performing conversion processing conforming to the code semantics on the second description information to obtain the first description information.
Further, when the obtaining module 303 performs conversion processing conforming to the code semantics on the second description information to obtain the first description information, it is specifically configured to:
traversing the second description information, and searching at least two nodes which have nesting relation and belong to the same component classification in the second description information; if the second description information contains at least two nodes which have nesting relations and belong to the same component classification, determining the node in the inner layer of the nesting relation in the at least two nodes as the node to be converted; and performing semantic conversion processing on the content corresponding to the node to be converted in the second description information.
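The traversal described above might look roughly like the following. The `component`, `name` and `attrs` fields, and the searchbar attribute added during conversion, are illustrative assumptions about the description-information schema.

```python
def find_nodes_to_convert(node, ancestors=None, found=None):
    """Find nodes whose component classification matches that of an
    ancestor: the inner node of each nesting pair is the one to convert."""
    if ancestors is None:
        ancestors, found = [], []
    cls = node.get("component")
    if cls and cls in ancestors:
        found.append(node)  # inner member of a same-class nesting pair
    for child in node.get("children", []):
        find_nodes_to_convert(child, ancestors + ([cls] if cls else []), found)
    return found

def convert(node):
    """Semantic conversion of one node: rename it after its component
    classification and add a class-specific attribute (assumed example)."""
    node["name"] = node["component"]
    if node["component"] == "searchbar":
        node.setdefault("attrs", {})["placeholder"] = ""
    return node
```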
Further, when performing semantic conversion processing on the content corresponding to the node to be converted in the second description information, the obtaining module 303 is specifically configured to: changing the name parameter in the content corresponding to the node to be converted into the component classification name of the node to be converted; and/or adding corresponding attribute values in the content corresponding to the nodes needing to be converted according to the component classification to which the nodes needing to be converted belong.
Still further, when the obtaining module 303 performs conversion processing conforming to the code semantics on the second description information to obtain the first description information, it is further configured to:
judging whether a child node exists under the node needing to be converted or not according to the second description information;
and deleting the content corresponding to the child node under the node to be converted in the second description information under the condition that the child node under the node to be converted also exists.
Further, when determining the layout information of the picture according to the picture file of the picture, the first determining module 301 is specifically configured to: extracting layer description information of the picture from the picture file; and carrying out layer reprocessing treatment on the layer description information to obtain the layout information of the picture.
Still further, when the first determining module 301 performs layer reprocessing on the layer description information to obtain the layout information of the picture, it is specifically configured to: identifying the image layer contained in the picture based on the image layer description information; and carrying out layout processing on the layers contained in the picture by using a layout algorithm so as to obtain layout information conforming to the layer protocol specification.
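One very simple pass of the kind a layout algorithm layer might perform — grouping absolutely positioned layers into row containers by vertical overlap, so a flat layer list gains hierarchy — can be sketched as follows. The flat-layer schema and the overlap rule are assumptions; real layout algorithms are considerably more involved.

```python
def group_rows(layers):
    """Group flat, absolutely positioned layers into row containers:
    layers whose vertical extents overlap land in the same row."""
    rows = []
    for layer in sorted(layers, key=lambda l: l["y"]):
        for row in rows:
            if layer["y"] < row["bottom"]:  # overlaps this row's band
                row["children"].append(layer)
                row["bottom"] = max(row["bottom"], layer["y"] + layer["height"])
                break
        else:  # no overlap with any existing row -> start a new one
            rows.append({"type": "row",
                         "bottom": layer["y"] + layer["height"],
                         "children": [layer]})
    return rows
```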
Further, the obtaining module 303 is further configured to: and obtaining a program code corresponding to the picture according to the first description information.
Further, the apparatus provided in this embodiment further includes:
the first response module is used for responding to a visualization display request sent by a client aiming at first description information, and returning a visualization arrangement page which is generated according to the first description information and is related to the picture to the client;
the first receiving module is used for receiving the operation information on the visual arrangement page sent by the client;
and the second determining module is used for determining auxiliary data for improving the generating accuracy of the first description information according to the operation information.
Further, when determining, according to the operation information, auxiliary data for improving the generation accuracy of the first description information, the second determining module is specifically configured such that: the operation information includes information for correcting an image recognition result of a local graph, and the image recognition result before correction and the image recognition result after correction are both used as training sample data of the image recognition model, for improving the precision of the image recognition model; the image recognition model is used for performing image recognition on the local graph.
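One possible reading of how the pre- and post-correction recognition results both become training samples is sketched below; the record layout and the positive/negative interpretation are assumptions for illustration, not specified by this application.

```python
def correction_to_samples(crop_id, predicted, corrected):
    """Turn a user's correction on the visual arrangement page into
    training samples: the corrected label is a positive example, and a
    differing old prediction is recorded as a negative for that crop."""
    samples = [{"crop": crop_id, "label": corrected, "positive": True}]
    if predicted != corrected:
        samples.append({"crop": crop_id, "label": predicted, "positive": False})
    return samples
```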
Further, the apparatus provided in this embodiment further includes:
and the modification module is used for modifying the first description information according to the operation information so as to obtain the program code according to the modified first description information.
Further, the apparatus provided in this embodiment further includes:
the second response module is used for responding to a request for acquiring the program code sent by the client and returning the program code to the client so as to display the program code on an interface of the client;
a second receiving module, configured to receive a modification to the program code fed back by the client;
and the storage module is used for storing the modified program code.
Here, it should be noted that: the data processing apparatus provided in this embodiment may implement the technical solution described in the data processing method embodiment shown in fig. 3, and the specific implementation principle of each module or unit may refer to the corresponding content in the data processing method embodiment shown in fig. 3, which is not described herein again.
The application further provides a processing device of the visual manuscript. The structure of the processing apparatus for the visual manuscript is similar to that of fig. 10. Specifically, the processing device of the visual manuscript comprises: the device comprises a first determining module, a processing module and an obtaining module;
the first determining module is used for determining the layout information of the visual manuscript according to the visual manuscript file;
the processing module is used for processing the page map corresponding to the visual draft based on the layout information so as to determine a local map with component functions in the page map;
and the obtaining module is used for obtaining first description information used for generating the program code according to the layout information and the local graph with the component function.
In the technical solution provided by this embodiment, the layout information of the visual manuscript is determined according to the visual manuscript file, the page map corresponding to the visual manuscript is processed based on the layout information to obtain the local map with the component function in the page map, and the first description information is generated based on the layout information and that local map. In the whole process, it is only necessary to ensure that the recognition result of the local map with the component function is accurate, so the solution is simple; moreover, when training the model used for recognizing the local map, the required samples can be generated entirely automatically by a program, no longer relying on manual labeling, which helps reduce cost.
Further, when the processing module processes the page map corresponding to the visual manuscript based on the layout information to determine the local map having the component function in the page map, the processing module is specifically configured to: according to the layout information, cutting the page map to obtain at least one local map; and performing image recognition on at least one local graph to obtain a local graph with component functions.
Further, when the processing module performs image recognition on at least one local graph to obtain a local graph with a component function, the processing module is specifically configured to: acquiring an image classification model; taking the at least one local graph as the input of the image classification model, and executing the image classification model to obtain an output result containing the classification of the graph content in the at least one local graph; and determining the partial graph with the graph content belonging to the component classification according to the output result so as to obtain the partial graph with the component function.
Further, when the obtaining module obtains the first description information for generating the program code according to the layout information and the partial diagram with the component function, the obtaining module is specifically configured to: locating a target node corresponding to the local graph with the component function in the layout information; adding the content of the local graph corresponding to the target node at the target node to obtain second description information; wherein, the content of the local graph is determined based on the component classification to which the local graph belongs; and performing conversion processing conforming to the code semantics on the second description information to obtain the first description information.
Further, the obtaining module is further configured to: and obtaining a program code corresponding to the page map corresponding to the visual draft according to the first description information.
Further, the apparatus provided in this embodiment further includes:
the response module is used for responding to a visualization display request sent by a client aiming at the first description information, and returning a visualization arrangement page which is generated according to the first description information and is related to the page map to the client;
the receiving module is used for receiving the operation information on the visual arrangement page sent by the client;
and the second determining module is used for determining auxiliary data for improving the generating accuracy of the first description information according to the operation information.
Here, it should be noted that: the processing apparatus for a visual manuscript provided in this embodiment may implement the technical solution described in the embodiment of the processing method for a visual manuscript shown in fig. 8, and the specific implementation principle of each module or unit may refer to the corresponding content in that method embodiment, which is not described here again.
Fig. 11 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 11, the electronic device includes: a memory 601 and a processor 602. The memory 601 may be configured to store various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device. The memory 601 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The processor 602, coupled to the memory 601, is configured to execute the program stored in the memory 601, so as to:
determining layout information of the picture according to the picture file of the picture;
processing the picture based on the layout information to determine a partial graph with component functions in the picture;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
When the processor 602 executes the program in the memory 601, other functions may be implemented in addition to the above functions, which may be specifically referred to in the description of the foregoing embodiments.
Further, as shown in fig. 11, the electronic apparatus further includes: communication components 603, power components 604, and a display 605. Only some of the components are schematically shown in fig. 11, and it is not meant that the electronic device includes only the components shown in fig. 11.
An embodiment of the present application further provides another electronic device, which has a structure similar to that of fig. 11. Specifically, the electronic device includes: a memory and a processor. The memory is used for storing programs. The processor, coupled with the memory, is configured to execute the program stored in the memory to:
determining layout information of the visual manuscript according to the visual manuscript file;
processing a page diagram corresponding to the visual draft based on the layout information to determine a local diagram with component functions in the page diagram;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
When the processor executes the program in the memory, the processor may implement other functions in addition to the above functions, which may be specifically referred to the description of the foregoing embodiments.
Further, as shown in fig. 11, the electronic device further includes: communication components, power components, a display and other components. Only some of the components are schematically shown in fig. 11, which does not mean that the electronic device includes only the components shown in fig. 11.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps or functions of the data processing method provided in the foregoing embodiments when executed by a computer.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (26)

1. A data processing method, comprising:
determining layout information of the picture according to the picture file of the picture;
processing the picture based on the layout information to determine a partial graph with component functions in the picture;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
2. The method of claim 1, wherein processing the picture to determine a partial graph with component functionality in the picture based on the layout information comprises:
according to the layout information, the picture is cut to obtain at least one local graph;
and performing image recognition on at least one local graph to obtain a local graph with component functions.
3. The method according to claim 2, wherein performing cropping processing on the picture based on the layout information to obtain at least one partial graph comprises:
determining a cutting area according to the layout information;
and cutting the picture according to the cutting area to obtain a cut local picture.
4. The method of claim 3, wherein determining a clipping region according to the layout information comprises:
determining nodes according to the layout information;
acquiring position information of a node;
and determining the cutting area based on the position information of the node.
5. The method of claim 2, wherein performing image recognition on the at least one partial graph to obtain a partial graph with component functions comprises:
acquiring an image classification model;
taking the at least one local graph as the input of the image classification model, and executing the image classification model to obtain an output result containing the classification of the graph content in the at least one local graph;
and determining the partial graph with the graph content belonging to the component classification according to the output result so as to obtain the partial graph with the component function.
6. The method according to any one of claims 1 to 5, wherein obtaining first description information for generating program code according to the layout information and the partial graph with component functions comprises:
locating a target node corresponding to the local graph with the component function in the layout information;
adding the content of the local graph corresponding to the target node at the target node to obtain second description information; wherein, the content of the local graph is determined based on the component classification to which the local graph belongs;
and performing conversion processing conforming to the code semantics on the second description information to obtain the first description information.
7. The method according to claim 6, wherein performing a conversion process conforming to a code semantic on the second description information to obtain the first description information comprises:
traversing the second description information, and searching at least two nodes which have nesting relation and belong to the same component classification in the second description information;
if the second description information contains at least two nodes which have nesting relations and belong to the same component classification, determining the node in the inner layer of the nesting relation in the at least two nodes as the node to be converted;
and performing semantic conversion processing on the content corresponding to the node to be converted in the second description information.
8. The method according to claim 7, wherein performing semantic conversion processing on the content corresponding to the node to be converted in the second description information includes at least one of:
changing the name parameter in the content corresponding to the node to be converted into the component classification name of the node to be converted;
and adding corresponding attribute values in the contents corresponding to the nodes needing to be converted according to the component classification to which the nodes needing to be converted belong.
9. The method according to claim 7, wherein performing a conversion process conforming to a code semantic on the second description information to obtain the first description information further comprises:
judging whether a child node exists under the node needing to be converted or not according to the second description information;
and deleting the content corresponding to the child node under the node to be converted in the second description information under the condition that the child node under the node to be converted also exists.
10. The method according to any one of claims 1 to 5, wherein determining layout information of a picture from a picture file of the picture comprises:
extracting layer description information of the picture from the picture file;
and carrying out layer reprocessing treatment on the layer description information to obtain the layout information of the picture.
11. The method according to claim 10, wherein performing layer reprocessing processing on the layer description information to obtain layout information of the picture comprises:
identifying the image layer contained in the picture based on the image layer description information;
and carrying out layout processing on the layers contained in the picture by using a layout algorithm so as to obtain layout information conforming to the layer protocol specification.
12. The method of any one of claims 1 to 5, further comprising:
and obtaining a program code corresponding to the picture according to the first description information.
13. The method of any one of claims 1 to 5, further comprising:
responding to a visualization display request sent by a client for first description information, and returning a visualization arrangement page related to the picture generated according to the first description information to the client;
receiving operation information on the visual arrangement page sent by the client;
and determining auxiliary data for improving the generation accuracy of the first description information according to the operation information.
14. The method of claim 13, wherein determining auxiliary data for improving the accuracy of generating the first description information according to the operation information comprises:
the operation information comprises information for correcting an image recognition result of a local graph, and the image recognition result before correction and the image recognition result after correction are both used as training sample data of an image recognition model and used for improving the precision of the image recognition model;
the image recognition model is used for carrying out image recognition on the local graph.
15. The method of claim 13, further comprising:
and modifying the first description information according to the operation information so as to obtain the program code according to the modified first description information.
16. The method of claim 14, further comprising:
responding to a request sent by a client for acquiring the program code, and returning the program code to the client to be displayed on an interface of the client;
receiving a modification to the program code fed back by the client;
and saving the modified program code.
17. A data processing system, comprising:
the client is used for sending the picture file of the picture to the server;
the server is used for obtaining the layout information of the pictures according to the picture files of the pictures; processing the picture based on the layout information to determine a partial graph with component functions in the picture; and obtaining first description information for generating program codes based on the layout information and the local graph with the component functions.
18. A method for processing a visual manuscript is characterized by comprising the following steps:
determining layout information of the visual manuscript according to the visual manuscript file;
processing a page diagram corresponding to the visual draft based on the layout information to determine a local diagram with component functions in the page diagram;
and obtaining first description information for generating a program code according to the layout information and the local graph with the component function.
19. The method of claim 18, wherein processing a visual draft corresponding page map to determine a partial map of the page map with component functionality based on the layout information comprises:
according to the layout information, cutting the page map to obtain at least one local map;
and performing image recognition on at least one local graph to obtain a local graph with component functions.
20. The method of claim 19, wherein performing image recognition on the at least one partial image to obtain the partial image with the component function comprises:
acquiring an image classification model;
executing the image classification model with the at least one partial image as input to obtain an output result containing a classification of the image content of the at least one partial image; and
determining, from the output result, the partial image whose image content belongs to a component classification, so as to obtain the partial image with the component function.
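The filtering described in claim 20 could be sketched as running each partial image through a classifier and keeping those whose predicted class falls in a set of component classes. The classifier below is a stand-in stub (it reads a pre-attached label) rather than a trained image classification model, and the class names are invented for illustration.

```python
# Sketch of claim 20: classify each partial image and keep only those
# whose predicted class is a component class. The "model" here is a
# stand-in stub, not a real trained classifier.

COMPONENT_CLASSES = {"button", "search_bar", "tab_bar"}  # illustrative

def classify(partial):
    # Hypothetical stand-in for model inference: this sketch pretends the
    # content label travels with the partial image.
    return partial["label"]

def partials_with_component_function(partials):
    """Return the partial images whose class belongs to a component class."""
    results = [(p, classify(p)) for p in partials]   # model output result
    return [p for p, cls in results if cls in COMPONENT_CLASSES]

partials = [{"id": 1, "label": "button"},
            {"id": 2, "label": "background"},
            {"id": 3, "label": "search_bar"}]
components = partials_with_component_function(partials)
```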
21. The method of any one of claims 18 to 20, wherein obtaining, from the layout information and the partial image with the component function, the first description information for generating program code comprises:
locating, in the layout information, a target node corresponding to the partial image with the component function;
adding, at the target node, content of the partial image corresponding to the target node to obtain second description information, wherein the content of the partial image is determined based on the component classification to which the partial image belongs; and
converting the second description information into a form conforming to code semantics to obtain the first description information.
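A minimal sketch of claim 21's three steps, assuming the layout information is a tree of dicts: locate the target node by id, attach content derived from the component classification (second description information), then map the annotated tree onto code-like semantics such as tag names (first description information). Every field name and mapping table here is an illustrative assumption.

```python
# Sketch of claim 21: locate the target node, annotate it to obtain the
# second description information, then convert that into code-semantic
# first description information. All schemas are illustrative.

def locate_target_node(layout, node_id):
    """Depth-first search for the node whose id matches node_id."""
    if layout["id"] == node_id:
        return layout
    for child in layout.get("children", []):
        found = locate_target_node(child, node_id)
        if found:
            return found
    return None

# Content determined by the component classification (hypothetical).
CLASS_TO_CONTENT = {"button": {"component": "Button", "clickable": True}}
# Code-semantic mapping from component names to tags (hypothetical).
CLASS_TO_TAG = {"Button": "button"}

def build_first_description(layout, node_id, component_class):
    target = locate_target_node(layout, node_id)
    target.update(CLASS_TO_CONTENT[component_class])   # second description

    def to_code_semantics(node):                       # first description
        return {"tag": CLASS_TO_TAG.get(node.get("component"), "div"),
                "children": [to_code_semantics(c)
                             for c in node.get("children", [])]}
    return to_code_semantics(layout)

layout = {"id": "root", "children": [{"id": "n1", "children": []}]}
desc = build_first_description(layout, "n1", "button")
```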
22. The method of any one of claims 18 to 20, further comprising:
obtaining, from the first description information, program code corresponding to the page image of the visual draft.
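For claim 22, code generation could be sketched as rendering the first description information, here assumed to be a simple tag tree, into a fragment of HTML. The description schema is an assumption of this sketch, not something the claim specifies.

```python
# Sketch of claim 22: render first description information (assumed here
# to be a tag tree) into program code, in this case an HTML fragment.

def to_program_code(desc, indent=0):
    """Recursively emit one element per description node."""
    pad = "  " * indent
    if not desc["children"]:
        return f"{pad}<{desc['tag']}></{desc['tag']}>"
    inner = "\n".join(to_program_code(c, indent + 1)
                      for c in desc["children"])
    return f"{pad}<{desc['tag']}>\n{inner}\n{pad}</{desc['tag']}>"

desc = {"tag": "div", "children": [{"tag": "button", "children": []}]}
code = to_program_code(desc)
```

A real implementation would target whatever front-end framework the generated page uses; the recursive description-to-code walk is the common core.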
23. The method of any one of claims 18 to 20, further comprising:
in response to a visualization display request for the first description information sent by a client, returning to the client a visual arrangement page that is generated from the first description information and related to the page image;
receiving operation information on the visual arrangement page sent by the client; and
determining, from the operation information, auxiliary data for improving the generation accuracy of the first description information.
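One way claim 23's last step could look: filter the operation information from the visual arrangement page down to corrections (e.g. the user re-labelling a partial image) and keep them as auxiliary records that can later improve generation accuracy. The operation record shape and the "relabel" operation type are assumptions for this sketch.

```python
# Sketch of claim 23: turn operation information from the visual
# arrangement page into auxiliary data. The record shape is an
# illustrative assumption.

def to_auxiliary_data(operations):
    """Keep only class corrections; each becomes a (partial id,
    corrected class) record usable as feedback for later generation."""
    return [{"partial_id": op["target"], "corrected_class": op["value"]}
            for op in operations if op["type"] == "relabel"]

ops = [{"type": "relabel", "target": "n1", "value": "search_bar"},
       {"type": "move", "target": "n2", "value": (10, 20)}]
aux = to_auxiliary_data(ops)
```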
24. A visual draft processing system, comprising:
a client configured to send a visual draft file to a server; and
the server, configured to determine layout information of the visual draft from the visual draft file; process a page image corresponding to the visual draft based on the layout information to determine a partial image with a component function in the page image; and obtain, from the layout information and the partial image with the component function, first description information for generating program code.
25. An electronic device, comprising: a memory and a processor, wherein
the memory is configured to store a program; and
the processor, coupled to the memory, is configured to execute the program stored in the memory to:
determine layout information of a picture from a picture file of the picture;
process the picture based on the layout information to determine a partial image with a component function in the picture; and
obtain, from the layout information and the partial image with the component function, first description information for generating program code.
26. An electronic device, comprising: a memory and a processor, wherein
the memory is configured to store a program; and
the processor, coupled to the memory, is configured to execute the program stored in the memory to:
determine layout information of a visual draft from a visual draft file;
process a page image corresponding to the visual draft based on the layout information to determine a partial image with a component function in the page image; and
obtain, from the layout information and the partial image with the component function, first description information for generating program code.
CN202011296388.6A 2020-11-18 2020-11-18 Data processing method, visual draft processing method, system and electronic equipment Pending CN113296769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011296388.6A CN113296769A (en) 2020-11-18 2020-11-18 Data processing method, visual draft processing method, system and electronic equipment


Publications (1)

Publication Number Publication Date
CN113296769A true CN113296769A (en) 2021-08-24

Family

ID=77318389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011296388.6A Pending CN113296769A (en) 2020-11-18 2020-11-18 Data processing method, visual draft processing method, system and electronic equipment

Country Status (1)

Country Link
CN (1) CN113296769A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023093850A1 (en) * 2021-11-26 2023-06-01 北京沃东天骏信息技术有限公司 Component identification method and apparatus, electronic device, and storage medium
CN116860324A (en) * 2023-09-01 2023-10-10 深圳代码兄弟技术有限公司 Development data processing method, development data processing apparatus, and readable storage medium
CN116860324B (en) * 2023-09-01 2023-12-05 深圳代码兄弟技术有限公司 Development data processing method, development data processing apparatus, and readable storage medium

Similar Documents

Publication Publication Date Title
US10489126B2 (en) Automated code generation
Staar et al. Corpus conversion service: A machine learning platform to ingest documents at scale
US20230177821A1 (en) Document image understanding
US10223344B2 (en) Recognition and population of form fields in an electronic document
US20240126826A1 (en) System and method for integrating user feedback into website building system services
CN107766349B (en) Method, device, equipment and client for generating text
US11100279B2 (en) Classifying input fields and groups of input fields of a webpage
CN111240669B (en) Interface generation method and device, electronic equipment and computer storage medium
CN113377356B (en) Method, device, equipment and medium for generating user interface prototype code
CN113296769A (en) Data processing method, visual draft processing method, system and electronic equipment
CN113254815A (en) Document processing method, page processing method and equipment
JPWO2018235326A1 (en) Computer program, font switching device and font switching method
US20200364034A1 (en) System and Method for Automated Code Development and Construction
CN111142871A (en) Front-end page development system, method, equipment and medium
Malik et al. Reimagining application user interface (UI) design using deep learning methods: Challenges and opportunities
CN115373658A (en) Method and device for automatically generating front-end code based on Web picture
CN115546815A (en) Table identification method, device, equipment and storage medium
CN113805886A (en) Page creating method, device and system, computer device and storage medium
CN113742559A (en) Keyword detection method and device, electronic equipment and storage medium
CN113535970A (en) Information processing method and apparatus, electronic device, and computer-readable storage medium
CN112445469B (en) Code generation method, system, computer equipment and storage medium
JP2018132838A (en) Information processing device and program for information processing device
CN117762389A (en) Code generation method, device, electronic equipment and storage medium
CN117131850A (en) Form style conversion method, device, equipment and medium based on generation of antagonistic neural network
CN117391044A (en) Form design style migration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination