CN112306490A - Layer export method, device, equipment and storage medium - Google Patents

Publication number: CN112306490A
Authority: CN (China)
Application number: CN202011332334.0A
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: visual element, element nodes, node, visual, layer
Inventor: 陈欣怡 (Chen Xinyi)
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202011332334.0A
Publication of CN112306490A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/30: Creation or generation of source code
    • G06F 8/38: Creation or generation of source code for implementing user interfaces

Abstract

The application discloses a layer export method, device, equipment and storage medium, belonging to the field of UI design. The method comprises the following steps: acquiring a node tree of a visual draft file of a user interface, the node tree comprising visual element nodes that correspond to the layers of the visual elements making up the user interface; determining the visual element nodes in the node tree that do not support code construction; merging, among the visual element nodes that do not support code construction, those belonging to the same level to obtain merged visual element nodes; and exporting the layers corresponding to the merged visual element nodes as a first map-cutting layer. In this map-cutting (image-slicing) process no manual operation is needed: the computer device only has to process the node tree of the visual draft file to export the map-cutting layers. This avoids the errors and inefficiency of manual operation and provides a fully automatic layer-export solution.

Description

Layer export method, device, equipment and storage medium
Technical Field
The present application relates to the field of UI design, and in particular, to a layer derivation method, apparatus, device, and storage medium.
Background
Designers can use vector design tools, such as Sketch (a vector design tool from Bohemian Coding) or Photoshop (a vector design tool from Adobe), to design the UI (User Interface) of an application.
After a designer finishes the visual draft file of the UI with a vector design tool, the file is handed to front-end developers as the basis for development. A visual draft document contains two kinds of visual elements: a first visual element that can be constructed with code and a second visual element that cannot. A first visual element, such as a button, is implemented by a front-end developer in program code. For a second visual element, such as an advertising poster image, a designer or developer exports the layer corresponding to the element and declares a layer node with its Uniform Resource Identifier (URI) in the code of the UI; that is, the layer corresponding to the second visual element is added to the code as a layer node.
The process of exporting the layers of the second visual elements is referred to as map cutting (image slicing). Map cutting is mainly done by hand, by designers or developers relying on personal experience, which is error-prone and inefficient.
Disclosure of Invention
The application provides a layer export method, device, equipment and storage medium, offering a technical solution that exports cut images automatically, without manual operation by the user. The technical solution is as follows:
according to an aspect of the present application, there is provided a layer derivation method, including:
acquiring a node tree of a visual draft file of a user interface, wherein the node tree comprises visual element nodes, and the visual element nodes correspond to image layers of visual elements forming the user interface;
determining visual element nodes which do not support code construction in the node tree;
merging visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes;
and deriving the image layer corresponding to the merged visual element node into a first map-cutting image layer.
According to another aspect of the present application, there is provided an apparatus for deriving a layer, the apparatus including:
an acquisition module, configured to acquire a node tree of a visual draft file of a user interface, wherein the node tree comprises visual element nodes, and the visual element nodes correspond to the layers of the visual elements that make up the user interface;
a determining module, configured to determine a visual element node in the node tree that does not support code construction;
the merging module is used for merging visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes;
and the derivation module is used for deriving the image layer corresponding to the merged visual element node into a first map-cutting image layer.
In an alternative design, the merging module is configured to:
merging visual element nodes which meet merging conditions in the visual element nodes which do not support code construction to obtain merged visual element nodes;
wherein the merging condition is used for judging whether two visual element nodes belong to the same level.
In an alternative design, the merging condition includes at least one of:
the image layers corresponding to the two visual element nodes belong to the same slice group;
the color similarity between the image layers corresponding to the two visual element nodes reaches a first threshold value;
the intersection degree of the image layers corresponding to the two visual element nodes is greater than a second threshold value;
two visual element nodes belong to the same directory level in the node tree.
In an alternative design, the visual element nodes that are merged further satisfy the following condition:
and the area of the image layer corresponding to the visual element node is smaller than a third threshold value.
In an alternative design, the determining module is configured to:
and filtering the visual element nodes supporting code construction in the node tree to obtain the visual element nodes which do not support code construction in the node tree.
In an alternative design, the determining module is configured to:
filtering the visual element nodes which meet the filtering condition in the node tree to obtain the visual element nodes which do not support code construction in the node tree; the filtration conditions include at least one of:
belonging to a style-free character node;
belong to a straight line node;
a graph node belonging to a specified shape.
In an alternative design, the apparatus further comprises:
an adding module, configured to add a target identifier to the merged visual element node in the node tree;
the export module is configured to:
and deriving the layer corresponding to the visual element node with the target identifier in the node tree as the first map-cutting layer.
In an alternative design, the derivation module is to:
and export, as a second map-cutting layer, the layer corresponding to the visual element nodes that were not merged among the visual element nodes that do not support code construction.
According to another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the layer derivation method as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the layer derivation method as described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the layer derivation method provided in the various alternative implementations of the above aspects.
The technical solutions provided in this application bring at least the following beneficial effects:
the method comprises the steps of determining visual element nodes which do not support code construction in a node tree, combining the visual element nodes belonging to the same level, and then deriving layers corresponding to the combined visual element nodes to obtain a map cutting layer, so that the map cutting is realized. In the process of realizing graph cutting, manual operation is not needed, and the graph cutting layer can be exported only by processing the node tree of the visual draft file through computer equipment. The problems of easy error and low efficiency caused by manual operation can be avoided, and the technical scheme of full-automatic drawing derivation is provided.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of layer derivation provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a layer derivation method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another layer derivation method according to an embodiment of the present application;
fig. 4 is a schematic diagram of visual element nodes belonging to the same slice group according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating determining whether a color similarity between two image layers reaches a first threshold according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a determination of whether an intersection degree between two layers reaches a second threshold according to an embodiment of the present application;
FIG. 7 is a diagram of visual element nodes belonging to the same directory hierarchy according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an apparatus for deriving a layer according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another layer deriving apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the principle of layer export provided in an embodiment of the present application. As shown in fig. 1, in step 101, a computer device obtains the node tree of a visual draft file. The visual draft file is used to develop a user interface, and the node tree includes visual element nodes corresponding to the layers of the visual elements that make up the user interface in the visual draft file. In step 102, the computer device removes from the node tree the visual element nodes that support code construction, thereby obtaining the visual element nodes that do not support code construction. The computer device obtains an updated node tree by traversing all visual element nodes in the node tree and removing those that meet a filtering condition; the updated node tree contains only the visual element nodes that do not support code construction. A filtering condition identifies a visual element node whose layer holds a visual element that can be implemented in code; for example, the filtering conditions include that the visual element node belongs to at least one of style-free text nodes, straight-line nodes, and graphic nodes of a specified shape. The specified shapes are circles and rectangles that are not deformed. In step 103, the computer device merges the visual element nodes belonging to the same level. Belonging to the same level means that the layers corresponding to the visual element nodes belong to the same map-cutting layer; the computer device can export a cut image from the layers belonging to the same map-cutting layer.
The computer device judges, for any two of the visual element nodes that do not support code construction, whether they meet a merging condition, and merges two nodes that do into the node of one map-cutting layer, thereby obtaining an updated node tree that includes the nodes of at least one map-cutting layer. The merging condition is that at least one of the following holds: the layers corresponding to the two visual element nodes belong to the same slice group; the color similarity of the layers corresponding to the two visual element nodes reaches a first threshold; the degree of intersection of the layers corresponding to the two visual element nodes is greater than a second threshold; or the two visual element nodes belong to the same directory level in the node tree. Optionally, a merged visual element node also satisfies the condition that the area of its corresponding layer is smaller than a third threshold. In step 104, the computer device exports the map-cutting layers according to the node tree after the visual element nodes have been removed and merged. Optionally, when merging visual element nodes, the computer device may add a target identifier to the merged visual element nodes and then export the layers corresponding to the visual element nodes that carry the target identifier as map-cutting layers, thereby implementing the map cutting.
When this method is adopted for map cutting, no manual operation is needed: the computer device only has to process the node tree of the visual draft file according to the filtering conditions and merging conditions to export the map-cutting layers. This avoids the errors and inefficiency of manual operation and provides a fully automatic layer-export solution.
Fig. 2 is a schematic flowchart of a layer derivation method according to an embodiment of the present application. The method may be used for a computer device or a client on a computer device. As shown in fig. 2, the method includes:
step 201: a node tree of a visual draft file of a user interface is obtained.
The user interface includes the system interface of the computer device and interfaces in applications installed on the computer device, such as social clients, financial clients, game clients, and music clients. The visual draft file corresponds to a user interface, and a developer can implement the user interface in program code according to the visual draft file. The formats of visual draft files include ".sketch", ".PSD", and ".xd": Sketch outputs ".sketch" files, Photoshop outputs ".PSD" files, and Adobe XD (a vector design tool from Adobe) outputs ".xd" files.
In the visual draft file, visual elements are drawn in layers. The visual element nodes correspond to layers of visual elements that make up the user interface. The visual element nodes comprise at least one of layer group nodes (comprising at least one visual element node), picture nodes (the visual elements are pictures), shape nodes (the visual elements are shapes) and text nodes (the visual elements are texts). The node tree comprises visual element nodes, and the node tree can reflect the structural relationship among the visual element nodes, namely the structural relationship among the image layers corresponding to the visual element nodes. The node tree further includes attributes, types, styles, and the like of the image layers corresponding to each visual element node.
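The node-tree structure described above can be sketched as follows. All type and field names are illustrative assumptions, not the real schema of Sketch, Photoshop or Adobe XD:

```typescript
// Sketch of a node tree; the type and field names are assumptions.
type NodeType = "group" | "image" | "shape" | "text";

interface VisualElementNode {
  type: NodeType;                // kind of layer this node represents
  name: string;                  // layer name shown in the design tool
  width: number;                 // layer width, in design-file units
  height: number;                // layer height, in design-file units
  children: VisualElementNode[]; // nested layers (non-empty for groups)
}

// Depth-first count of nodes of one type, e.g. to inventory picture layers.
function countNodes(root: VisualElementNode, type: NodeType): number {
  const own = root.type === type ? 1 : 0;
  return own + root.children.reduce((sum, c) => sum + countNodes(c, type), 0);
}

const tree: VisualElementNode = {
  type: "group", name: "page", width: 375, height: 812,
  children: [
    { type: "image", name: "banner", width: 375, height: 200, children: [] },
    { type: "text", name: "title", width: 120, height: 24, children: [] },
  ],
};
```

The tree mirrors the structural relationship between layers: a layer-group node contains the nodes of the layers drawn inside it.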
Step 202: and determining the visual element nodes which do not support code construction in the node tree.
A visual element node that does not support code construction is a node whose corresponding layer holds a visual element that a developer cannot implement in program code, for example a page background image, a virtual avatar, or text in a special font.
Optionally, the computer device filters out the visual element nodes in the node tree that support code construction, so as to obtain the visual element nodes that do not. Visual element nodes that support code construction include style-free text nodes, straight-line nodes, and graphic nodes of a specified shape; the specified shapes are circles and rectangles that are not deformed.
Step 203: and merging the visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes.
Belonging to the same level means that the layers corresponding to the visual element nodes belong to the same map-cutting layer; the computer device can export a cut image from the layers belonging to the same map-cutting layer.
The computer equipment judges whether any two visual element nodes in the visual element nodes which do not support code construction meet the merging condition or not, and merges the two visual element nodes which meet the merging condition to serve as the nodes of a map cutting layer. Thereby obtaining an updated node tree comprising nodes of at least one graph cut layer.
Optionally, the merging conditions include: the layers corresponding to the two visual element nodes belong to the same slice group, the color similarity of the layers corresponding to the two visual element nodes reaches a first threshold, the intersection degree of the layers corresponding to the two visual element nodes is greater than a second threshold, and the two visual element nodes belong to at least one of the same directory hierarchy in the node tree. Optionally, the merged visual element node further conforms that the area of the layer corresponding to the visual element node is smaller than a third threshold.
Step 204: and exporting the image layer corresponding to the merged visual element node as a first map-cutting image layer.
By merging the visual element nodes, the computer device updates the node tree. When the computer device exports the first map-cutting layer according to the updated node tree, the merged visual element node instructs the computer device to merge the layers corresponding to its constituent visual element nodes and export them as a single map-cutting layer, thereby implementing the map cutting.
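A minimal sketch of this export bookkeeping, under stated assumptions: the "target identifier" of the description is modeled as a hypothetical `exportable` flag, and the exporter simply collects the names of flagged layers (the patent does not name these fields):

```typescript
// Hypothetical node shape; "exportable" stands in for the target identifier
// that the description says is added to merged visual element nodes.
interface LayerNode {
  name: string;
  exportable?: boolean;
  children: LayerNode[];
}

// Walk the updated node tree and collect every layer flagged for export
// as a map-cutting layer.
function collectExports(node: LayerNode, out: string[] = []): string[] {
  if (node.exportable) out.push(node.name);
  for (const child of node.children) collectExports(child, out);
  return out;
}

const root: LayerNode = {
  name: "page",
  children: [
    { name: "merged-banner", exportable: true, children: [] }, // merged node
    { name: "button", children: [] },                          // code-built
  ],
};
```

A real implementation would hand the collected layers to the design tool's export component rather than return names.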
Illustratively, for a visual draft file in the ".sketch" format, the computer device can export the first map-cutting layer through components of the vector design tool used to design the visual draft file.
To sum up, the method provided in the embodiment of the present application determines the visual element nodes that do not support code construction in the node tree, merges the visual element nodes belonging to the same level, and then derives the layer corresponding to the merged visual element node to obtain the map-cutting layer, thereby implementing the map cutting. In the process of realizing graph cutting, manual operation is not needed, and the graph cutting layer can be exported only by processing the node tree of the visual draft file through computer equipment. The problems of easy error and low efficiency caused by manual operation can be avoided, and the technical scheme of full-automatic drawing derivation is provided.
Fig. 3 is a schematic flowchart of another layer derivation method according to an embodiment of the present application. The method may be used for a computer device or a client on a computer device. As shown in fig. 3, the method includes:
step 301: a node tree of a visual draft file of a user interface is obtained.
The user interface includes a system interface of the computer device, and an interface in an Application (Application) installed on the computer device. The visual draft file corresponds to a user interface, and a developer can realize the user interface through programming codes according to the visual draft file. The visual draft file is uploaded by a developer or designer in a computer device. The format of the visual draft file includes ". sketch", ". PSD", and ". xd".
The node tree includes visual element nodes corresponding to the layers of the visual elements that make up the user interface. The visual element nodes comprise at least one of layer-group nodes, picture nodes, shape nodes, and text nodes. The node tree can reflect the structural relationship between the layers of the visual elements, and further includes the attributes, types, styles, and the like of the layer corresponding to each visual element node.
For a text node, the attribute is the content of the text, and the style includes the character size, font, line height, color, whether the text is bold, and the like.
For a picture node, the attribute is the path of the picture, and the style includes the width and height of the picture, the background, whether it has rounded corners, whether it has a border, and the like.
For a graphic node, the attribute is an identifier indicating the shape and a path, and the style includes the width and height of the graphic, the background, whether it has rounded corners, whether it has a border, and the like.
The computer device can determine the node tree of the visual draft file from the obtained file. For example, when the format of the visual draft file is ".sketch", the computer device can obtain a JavaScript Object Notation (JSON) file containing the node tree by decompressing the visual draft file. In the JSON file, the identifier of a layer-group node is layer, the identifier of a picture node is image, the identifiers of shape nodes are rectangle and oval, and the identifier of a text node is text. For visual draft files in the ".PSD" and ".xd" formats, the computer device can export the node tree through a parser corresponding to the format, such as the psd.js plugin.
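As a rough illustration of walking such a decompressed JSON tree, the sketch below tallies node identifiers. The `_class` and `layers` key names mirror Sketch's JSON layout but should be treated as assumptions here:

```typescript
// Tally the node identifiers found in a parsed visual-draft JSON tree.
// The "_class" and "layers" key names are assumptions about the format.
function tally(node: any, counts: Record<string, number> = {}): Record<string, number> {
  const id: string = node._class ?? "unknown";
  counts[id] = (counts[id] ?? 0) + 1;
  for (const child of node.layers ?? []) tally(child, counts);
  return counts;
}

// A toy tree using the identifiers named in the text:
// layer (group), text, rectangle, oval.
const doc = {
  _class: "layer",
  layers: [
    { _class: "text" },
    { _class: "rectangle" },
    { _class: "oval" },
  ],
};
```

Such a tally is a quick sanity check that the decompressed JSON really exposes the layer structure before filtering begins.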
Step 302: and determining the visual element nodes which do not support code construction in the node tree.
A visual element node that does not support code construction is a node whose corresponding layer holds a visual element that a developer cannot implement in program code, for example a page background image, a virtual avatar, or text in a special font. Optionally, the computer device obtains these nodes by filtering out of the node tree the visual element nodes that do support code construction. Specifically, the computer device filters the visual element nodes in the node tree that meet a filtering condition, thereby obtaining the visual element nodes that do not support code construction. The filtering conditions evaluate whether a visual element node supports code construction and include at least one of:
belonging to a style-free text node;
belonging to a straight-line node;
belonging to a graphic node of a specified shape.
A style-free text node is a visual element node whose layer contains text with no special style; a special style is a character style that cannot be realized in code, such as a special font or art text. A straight-line node is a visual element node whose layer contains a straight line. A graphic node of a specified shape is a visual element node whose layer contains a visual element of a specified shape; the specified shapes are circles and rectangles that are not deformed.
Illustratively, the computer device determines whether a visual element node in the node tree belongs to a style-free text node by:
[The code listing here appears only as an image in the source (Figure BDA0002796179860000091).]
The checks on node.style.borders, node.style.shadows and node.style.background filter out of the node tree the text nodes that have no border, no shadow, no background and no background color (background type "color"), so as to obtain the visual element nodes that belong to style-free text nodes.
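Since the listing survives only as an image, the following TypeScript sketch reconstructs a check matching the description (no border, no shadow, no background color). Every type and field name is an assumption, not the patent's actual code:

```typescript
// Assumed style object for a text layer; not the patent's actual types.
interface TextStyle {
  borders: unknown[];
  shadows: unknown[];
  background?: { type: string };
}

// A text node is style-free, and therefore code-buildable, when it has
// no border, no shadow, and no solid background color.
function isPlainText(style: TextStyle): boolean {
  const hasBgColor = style.background !== undefined && style.background.type === "color";
  return style.borders.length === 0 && style.shadows.length === 0 && !hasBgColor;
}
```

Nodes passing this check are removed from the tree, since a developer can reproduce such text directly in code.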
The computer device determines whether a visual element node in the node tree belongs to a straight line node by:
private static isLine(node: QObject): boolean {
    return node.width > 50 && node.height <= 2;
}
The conditions "node.width > 50" and "node.height <= 2" filter out of the node tree the visual element nodes whose layers are wider than 50 and no taller than 2, so as to obtain the visual element nodes that belong to straight-line nodes.
The computer device determines whether a visual element node in the node tree belongs to a graph node of a specified shape by:
[The code listing here appears only as images in the source (Figures BDA0002796179860000092 and BDA0002796179860000101).]
The branches commented "// not a quadrilateral", "// a quadrilateral but with modified corners" and "// has a gradient border" correspond to the code that filters out of the node tree the rectangular and circular visual element nodes that are quadrilaterals with unmodified corners and no gradient border, resulting in the visual element nodes that belong to graphic nodes of a specified shape.
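Because this listing also survives only as images, here is a hedged reconstruction of a check consistent with the description: only undeformed quadrilaterals and circles, with unmodified corners and no gradient border, count as code-buildable shapes. The field names are assumptions:

```typescript
// Assumed shape metadata for a graphic layer; not the patent's actual types.
interface ShapeNode {
  shape: "rectangle" | "oval" | "path";
  points: number;             // vertex count of the outline
  cornersModified: boolean;   // true if any corner was hand-edited
  hasGradientBorder: boolean; // true if the border uses a gradient
}

// A graphic node of a "specified shape" per the description.
function isBasicShape(n: ShapeNode): boolean {
  if (n.hasGradientBorder || n.cornersModified) return false; // deformed
  if (n.shape === "oval") return true;                        // circle/oval
  return n.shape === "rectangle" && n.points === 4;           // quadrilateral
}
```

Shapes passing this check are treated as code-buildable and removed; everything else stays in the tree for map cutting.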
The computer device then filters the visual element nodes for which the filtering-condition judgments have been completed:
[The code listing here appears only as images in the source (Figures BDA0002796179860000102 and BDA0002796179860000111).]
The sections commented "// style-free text node", "// straight line" and "// regular graphic node" remove from the node tree the visual element nodes judged to belong to style-free text nodes, straight-line nodes or graphic nodes of a specified shape, leaving a node tree that includes only the visual element nodes that do not support code construction.
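The overall removal pass can be sketched as a tree filter. Here the per-node judgments are abstracted into an assumed `codeBuildable` flag; whatever survives the filter must be exported as images:

```typescript
// Assumed node shape; "codeBuildable" would be the combined result of the
// style-free-text, straight-line and basic-shape judgments.
interface FilterNode {
  kind: "group" | "text" | "line" | "shape" | "image";
  codeBuildable: boolean;
  children: FilterNode[];
}

// Remove every code-buildable node from the tree; returns null when the
// node itself is code-buildable.
function filterTree(node: FilterNode): FilterNode | null {
  if (node.codeBuildable) return null;
  const children = node.children
    .map(filterTree)
    .filter((c): c is FilterNode => c !== null);
  return { ...node, children };
}

const sample: FilterNode = {
  kind: "group", codeBuildable: false,
  children: [
    { kind: "text", codeBuildable: true, children: [] },   // plain text: drop
    { kind: "image", codeBuildable: false, children: [] }, // picture: keep
  ],
};
```

The pruned tree is exactly the "updated node tree" that the merging step then operates on.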
It should be noted that the computer device determines the visual element nodes in the node tree that do not support code construction by running a Node.js plugin. The computer device can apply all of the filtering conditions or only some of them. In addition, a developer can modify the filtering conditions as needed, for example by adding new ones, to improve the accuracy with which the computer device identifies the visual element nodes that do not support code construction.
Step 303: and merging the visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes.
The same layer means that layers corresponding to the visual element nodes belong to the same map-cutting layer, and the computer equipment can derive the map-cutting according to the layers of the visual element nodes belonging to the same map-cutting layer. Optionally, the computer device merges visual element nodes meeting a merging condition in the visual element nodes that do not support code construction to obtain merged visual element nodes. The computer device takes the merged visual element node as a node of a map-cutting layer, so that an updated node tree comprising at least one node of the map-cutting layer can be obtained. The merging condition is used for judging whether the two visual element nodes belong to the same level or not. Optionally, the merging condition includes at least one of:
the layers corresponding to the two visual element nodes belong to the same slice group;
the color similarity between the layers corresponding to the two visual element nodes reaches a first threshold;
the degree of intersection of the layers corresponding to the two visual element nodes is greater than a second threshold;
two visual element nodes belong to the same directory hierarchy in the node tree.
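The four conditions above can be sketched as a single predicate over pairs of layers: any one condition suffices. The field names and both threshold values are illustrative assumptions, not values from the patent:

```typescript
// Assumed per-layer metadata relevant to the merging conditions.
interface LayerInfo {
  sliceGroup: string | null;                            // designer's label
  color: [number, number, number];                      // representative RGB
  rect: { x: number; y: number; w: number; h: number }; // bounding box
  dirPath: string;                                      // directory level
}

const COLOR_THRESHOLD = 900;  // first threshold (squared-distance form), assumed
const OVERLAP_THRESHOLD = 0;  // second threshold (overlap area), assumed

// Squared RGB distance; smaller means more similar.
function colorDist2(a: LayerInfo, b: LayerInfo): number {
  const [r1, g1, b1] = a.color;
  const [r2, g2, b2] = b.color;
  return (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2;
}

// Area of the intersection of the two bounding boxes.
function overlapArea(a: LayerInfo, b: LayerInfo): number {
  const w = Math.min(a.rect.x + a.rect.w, b.rect.x + b.rect.w) - Math.max(a.rect.x, b.rect.x);
  const h = Math.min(a.rect.y + a.rect.h, b.rect.y + b.rect.h) - Math.max(a.rect.y, b.rect.y);
  return Math.max(w, 0) * Math.max(h, 0);
}

// Mergeable when any of the four conditions holds.
function shouldMerge(a: LayerInfo, b: LayerInfo): boolean {
  return (
    (a.sliceGroup !== null && a.sliceGroup === b.sliceGroup) ||
    colorDist2(a, b) <= COLOR_THRESHOLD ||
    overlapArea(a, b) > OVERLAP_THRESHOLD ||
    a.dirPath === b.dirPath
  );
}

const bannerLeft: LayerInfo = {
  sliceGroup: "banner", color: [250, 80, 60],
  rect: { x: 0, y: 0, w: 100, h: 40 }, dirPath: "page/banner",
};
const bannerRight: LayerInfo = {
  sliceGroup: "banner", color: [10, 10, 10],
  rect: { x: 200, y: 0, w: 100, h: 40 }, dirPath: "page/other",
};
const avatar: LayerInfo = {
  sliceGroup: null, color: [10, 10, 10],
  rect: { x: 500, y: 500, w: 40, h: 40 }, dirPath: "page/profile",
};
```

In this sketch the two banner halves merge because they share a slice group, while the unrelated avatar fails all four conditions.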
Regarding the judgment of whether the layers corresponding to two visual element nodes belong to the same slice group:
When designing a visual draft file, a designer usually labels each layer in the file, and the label reflects the slice group to which the layer belongs. The labeling rule is entered into the computer device by a developer; according to it, the computer device can determine which of the visual element nodes that do not support code construction belong to the same slice group, and merge them.
Illustratively, fig. 4 is a schematic diagram of visual element nodes belonging to the same slice group provided by an embodiment of the present application. As shown in fig. 4, a user interface 401 of the vector design tool used to design the visual draft file includes a layer preview area 402 and a node tree preview area 403 corresponding to the layers. The first layer 404 corresponds to a first visual element 405, and the second layer 405 corresponds to a second visual element 406. The designer labels the first layer 404 and the second layer 405 as belonging to the same slice group. When merging visual element nodes, the computer device therefore merges the visual element nodes corresponding to the first layer 404 and the second layer 405.
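The patent does not specify the labeling rule itself, only that a developer supplies one. As a minimal sketch, assume the rule is a name suffix of the form `@slice:<groupName>`; both the suffix convention and the `name` field are illustrative assumptions, not part of the publication:

```javascript
// Hypothetical labeling rule: the designer appends "@slice:<groupName>"
// to a layer name, e.g. "play-bg@slice:play". The exact rule is set by
// the developer; this suffix convention is an assumption for illustration.
function parseSliceGroup(layerName) {
  const match = /@slice:(\S+)\s*$/.exec(layerName);
  return match ? match[1] : null;
}

// Collect the visual element nodes whose layers carry the same label,
// so each labeled group can later be merged into one node.
function groupBySlice(nodes) {
  const groups = new Map();
  for (const node of nodes) {
    const group = parseSliceGroup(node.name);
    if (group === null) continue; // unlabeled layers are not merged here
    if (!groups.has(group)) groups.set(group, []);
    groups.get(group).push(node);
  }
  return groups;
}
```

Nodes without a label fall through to the other merging conditions rather than being grouped here.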
On judging whether the color similarity between the layers corresponding to two visual element nodes reaches a first threshold:
Two visual element nodes whose layers have the same or very similar colors can generally be exported as the same map-cutting layer; for example, the several layers that make up a solid-color icon all share the same visual element color. The computer device determines the color similarity between the layers corresponding to two visual element nodes and merges the two nodes when the similarity reaches a first threshold. Optionally, the computer device measures the color similarity as the sum of the squared differences between the color values of the two layers; because this measure is a distance, the similarity is considered to reach the first threshold when the sum is less than or equal to the threshold. The first threshold is set by a developer: the larger the first threshold, the more likely two visual element nodes are to be merged; the smaller it is, the less likely. Illustratively, if the RGB color value of the first layer is (R1, G1, B1) and that of the second layer is (R2, G2, B2), the computer device determines the color similarity between the two layers as (R1 − R2)² + (G1 − G2)² + (B1 − B2)². Optionally, when the visual element color of a layer is a gradient, the computer device takes the average of the gradient's color values as the layer's color value. For example, for a gradient from (0, 0, 0) to (240, 240, 240), the computer device determines R = (0 + 240)/2 = 120, G = (0 + 240)/2 = 120, B = (0 + 240)/2 = 120. Optionally, if the layer corresponding to a visual element node is a picture and has no color attribute, the computer device decides whether to merge two visual element nodes by comparing the color complexity of the two layers.
Color complexity refers to the richness of the colors contained in a layer's visual elements. The computer device determines the similarity of color complexity from the number of colors and the color values in the two layers: the closer the color counts and the more color values the layers share, the more likely the computer device is to merge the two visual element nodes.
For example, fig. 5 is a schematic diagram, provided by an embodiment of the present application, for judging whether the color similarity between two layers reaches the first threshold. As shown in fig. 5, the visual elements of the three layers in the first layer group 501 form an animation icon 502, and these three layers share the same color; the visual elements of the three layers in the second layer group 503 form a music icon 504, and these three layers likewise share the same color. When merging visual element nodes, the computer device merges the visual element nodes corresponding to the three layers in the first layer group 501, and separately merges the visual element nodes corresponding to the three layers in the second layer group 503.
On judging whether the degree of intersection of the layers corresponding to two visual element nodes is greater than a second threshold:
Two visual element nodes whose layers intersect can generally be exported as the same map-cutting layer: the larger the area of intersection of the two layers, the more likely the computer device is to merge the corresponding visual element nodes, and when the degree of intersection of the two layers is greater than a second threshold, the computer device may merge the two nodes. The degree of intersection is determined from either the distance between the two layers or the area of their overlap. When the two layers do not overlap, the closer they are, the larger the degree of intersection; when they do overlap, the larger the overlapping area, the larger the degree of intersection. The second threshold is set by a developer, and separate values may be set for the two cases (no overlap versus overlap). That is, when the two layers do not overlap and the distance between them is smaller than the second threshold measuring distance, the computer device determines that their degree of intersection is greater than the second threshold; when the two layers overlap and the overlapping area is larger than the second threshold measuring intersection area, the computer device likewise determines that their degree of intersection is greater than the second threshold. For example, fig. 6 is a schematic diagram, provided by an embodiment of the present application, for judging whether the degree of intersection between two layers reaches the second threshold. As shown in fig. 6 (a), the dashed box outside the first layer 601 represents the second threshold; the second layer 602 intersects the region of the dashed box, so the degree of intersection between the first layer 601 and the second layer 602 is greater than the second threshold. As shown in fig. 6 (b), the second layer 602 does not intersect the region of the dashed box, so the degree of intersection between the first layer 601 and the second layer 602 is smaller than the second threshold.
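The two-case test above can be sketched for axis-aligned layer bounds; the `{ x, y, w, h }` representation and the two named threshold values are illustrative assumptions:

```javascript
// Overlapping area of two axis-aligned bounds { x, y, w, h }; 0 if disjoint.
function intersectionArea(a, b) {
  const w = Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x);
  const h = Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y);
  return w > 0 && h > 0 ? w * h : 0;
}

// Shortest gap between two non-overlapping bounds (0 along touching axes).
function gapDistance(a, b) {
  const dx = Math.max(0, Math.max(a.x, b.x) - Math.min(a.x + a.w, b.x + b.w));
  const dy = Math.max(0, Math.max(a.y, b.y) - Math.min(a.y + a.h, b.y + b.h));
  return Math.hypot(dx, dy);
}

// Two second-threshold values, one per case, as the text allows:
// overlap area must exceed areaThreshold, or the gap must be under gapThreshold.
function degreeOfIntersectionExceeded(a, b, { areaThreshold, gapThreshold }) {
  const area = intersectionArea(a, b);
  if (area > 0) return area > areaThreshold;
  return gapDistance(a, b) < gapThreshold;
}
```

Overlapping layers are judged by area, disjoint layers by distance, mirroring the two cases in the text.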
On judging whether two visual element nodes belong to the same directory hierarchy in the node tree:
When a designer creates a visual draft file, the layers to be merged are usually placed in the same directory hierarchy. By merging two visual element nodes that belong to the same directory hierarchy in the node tree, the computer device can automatically merge the visual element nodes that need merging. Two visual element nodes belong to the same directory hierarchy when they share the same parent node in the node tree. In addition, for two visual element nodes that do not belong to the same directory hierarchy, the computer device may find their smallest common parent node in the node tree and determine the level difference between the two nodes: the smaller the level difference, the more likely the computer device is to merge them. For example, if the common parent node of a first visual element node and a second visual element node is a third visual element node, the level difference between the first and third nodes is one level (the third node is the parent of the first), and the level difference between the second and third nodes is four levels, then the level difference between the first and second visual element nodes is three levels.
Illustratively, fig. 7 is a schematic diagram of visual element nodes belonging to the same directory hierarchy provided in an embodiment of the present application. As shown in fig. 7, the visual elements of the six layers in the layer group 701 collectively form a background image 702. Since the six layers in the layer group 701 belong to the same directory hierarchy, the computer device merges the visual element nodes corresponding to these six layers.
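The level-difference rule in the worked example above can be sketched with parent pointers; the node shape (`{ parent }`) is an assumption, but the arithmetic (one level vs. four levels giving a difference of three) follows the text:

```javascript
// Depth of a node counted from the tree root (root has parent === null).
function depth(node) {
  let d = 0;
  for (let p = node.parent; p !== null; p = p.parent) d++;
  return d;
}

// Smallest common parent node of two nodes, walking ancestor chains.
function smallestCommonParent(a, b) {
  const seen = new Set();
  for (let p = a; p !== null; p = p.parent) seen.add(p);
  for (let p = b; p !== null; p = p.parent) if (seen.has(p)) return p;
  return null;
}

// Level difference: how far apart the two nodes sit below their
// smallest common parent. 0 means the same directory hierarchy depth.
function levelDifference(a, b) {
  const lca = smallestCommonParent(a, b);
  const da = depth(a) - depth(lca);
  const db = depth(b) - depth(lca);
  return Math.abs(da - db);
}
```

Nodes with the same parent have a level difference of 0, matching the "same directory hierarchy" case.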
Optionally, the merged visual element node further satisfies the following condition:
the area of the layer corresponding to the visual element node is smaller than the third threshold.
For the visual element nodes that need to be merged, the areas of the corresponding layers are usually limited. The third threshold is determined by the developer according to the sizes of the visual elements in the design file, for example, 35 × 35.
It should be noted that the computer device may check whether two visual element nodes satisfy the merging conditions in the order given above or in another order. While doing so, the computer device also checks whether the area of the layer corresponding to each visual element node is smaller than the third threshold, so as to decide comprehensively whether to merge the two nodes. In addition, the computer device can score the two visual element nodes against each merging condition, for example giving a higher score to two nodes whose layer colors are more similar, and can also score the areas of the layers corresponding to the nodes, for example giving a higher score to smaller layers. The scores are combined into a merge score for the two visual element nodes, and when the merge score exceeds a score threshold, the computer device decides to merge them. During merging, the computer device can further merge an already merged visual element node with other visual element nodes, until no two visual element nodes in the node tree can be merged any further. A developer can also modify the merging conditions according to the actual situation, for example by adding new ones, to improve the accuracy with which the computer device identifies the visual element nodes in the node tree that need to be merged.
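The scoring variant described above can be sketched as a weighted sum; the patent gives no weights or score formula, so every weight, field name, and the reciprocal color term below are illustrative assumptions:

```javascript
// Weighted merge score over the merging conditions plus layer area.
// All weights and limits are illustrative, not taken from the patent.
function mergeScore(a, b, opts) {
  let score = 0;
  // Same slice group contributes a fixed weight.
  if (a.sliceGroup && a.sliceGroup === b.sliceGroup) score += opts.groupWeight;
  // Closer layer colors score higher (opts.colorDist returns the
  // squared-RGB distance between the two layers' colors).
  score += opts.colorWeight / (1 + opts.colorDist(a, b));
  // Same directory hierarchy contributes a fixed weight.
  if (a.parent === b.parent) score += opts.hierarchyWeight;
  // Smaller layers score higher; layers at or above the area limit score 0.
  const area = Math.max(a.area, b.area);
  if (area < opts.areaLimit) score += opts.areaWeight * (1 - area / opts.areaLimit);
  return score;
}

// Merge the pair when the combined score exceeds the score threshold.
function shouldMerge(a, b, opts) {
  return mergeScore(a, b, opts) > opts.scoreThreshold;
}
```

In practice the weights would be tuned by the developer, just as the individual thresholds are.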
Optionally, the computer device implements the merging of visual element nodes by running a Node plug-in; this plug-in and the Node plug-in used to determine the visual element nodes in the node tree that do not support code construction may be the same plug-in or different plug-ins.
Step 304: export the layer corresponding to the merged visual element node as a first map-cutting layer.
By merging the visual element nodes, the computer device updates the node tree. When the computer device exports the first map-cutting layer according to the updated node tree, the merged visual element node instructs the computer device to merge the layers corresponding to it and export them as the same map-cutting layer, thereby realizing the map cutting. Optionally, the computer device may add a target identifier to each merged visual element node in the node tree, and export the layers corresponding to the visual element nodes carrying the target identifier as the first map-cutting layer. The target identifier instructs the computer device to merge the layers corresponding to a visual element node and export them as one map-cutting layer; it is determined according to the rule used by the export plug-in to recognize map-cutting layers.
Illustratively, the computer device exports the layer corresponding to the merged visual element node with the following code:
(The code listing is published as an image in the original patent document and is not reproduced here.)
The code is run by a plug-in in the vector design tool that generates the visual draft file in the ".sketch" format. Through this code, the computer device can export a group of layers sharing the same identifier as one map-cutting layer: the layers with the same identifier are superimposed, and the superimposed result is output as one picture (for example, in PNG format), thereby realizing the map cutting.
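Since the patent's actual listing is published only as an image, the grouping step it describes can only be sketched under assumptions: the `exportId` field standing in for the shared identifier is hypothetical, and the flattening and PNG output would be done through the design tool's plug-in API rather than here:

```javascript
// Collect the layers that carry the same export identifier, so that each
// group can be superimposed and written out as one map-cutting picture.
// "exportId" is an assumed field name; the real identifier rule comes
// from the export plug-in.
function groupByExportId(layers) {
  const groups = new Map();
  for (const layer of layers) {
    if (!layer.exportId) continue; // code-built layers carry no identifier
    if (!groups.has(layer.exportId)) groups.set(layer.exportId, []);
    groups.get(layer.exportId).push(layer);
  }
  return [...groups.values()];
}
```

Each returned group corresponds to one map-cutting layer; the plug-in would then flatten the group and export it as a single image.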
Step 305: export the layer corresponding to each unmerged visual element node among the visual element nodes that do not support code construction as a second map-cutting layer.
For the visual element nodes that do not support code construction and were not merged, the computer device can also export each corresponding layer as its own map-cutting layer, thereby realizing map cutting for the layers of the unmerged visual element nodes. Optionally, the computer device may further check whether the area of the layer corresponding to such an unmerged visual element node is smaller than the third threshold, and export the layer as a map-cutting layer only when it is.
To sum up, the method provided in the embodiments of the present application determines the visual element nodes in the node tree that do not support code construction, merges those belonging to the same level, and then exports the layers corresponding to the merged visual element nodes as map-cutting layers, thereby realizing the map cutting. No manual operation is needed in this process; the map-cutting layers can be exported simply by having a computer device process the node tree of the visual draft file. This avoids the errors and low efficiency of manual operation and provides a fully automatic map-export solution.
In addition, determining the visual element nodes that support code construction through the filtering conditions improves the efficiency of identifying them, and filtering them out prevents their layers from being exported as map-cutting layers. Determining through the merging conditions which visual element nodes in the node tree belong to the same level allows those nodes to be merged and the layers of the merged nodes to be exported as map-cutting layers. The layers that need to be merged are thus merged, which improves the map-cutting efficiency and, in turn, the development efficiency of developers.
It should be noted that, the order of the steps of the method provided in the embodiments of the present application may be appropriately adjusted, and the steps may also be increased or decreased according to the circumstances, and any method that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application, and therefore, the detailed description thereof is omitted.
Fig. 8 is a schematic structural diagram of an apparatus for deriving a layer according to an embodiment of the present application. The apparatus may be for a computer device or a client on a computer device. As shown in fig. 8, the apparatus 80 includes:
an obtaining module 801, configured to obtain a node tree of a visual draft file of a user interface, where the node tree includes visual element nodes, and the visual element nodes correspond to layers of visual elements forming the user interface.
A determining module 802, configured to determine a visual element node in the node tree that does not support code building.
The merging module 803 is configured to merge visual element nodes belonging to the same level in the visual element nodes that do not support code construction, so as to obtain merged visual element nodes.
The deriving module 804 is configured to derive a map layer corresponding to the merged visual element node as a first map-cutting map layer.
In an alternative design, merge module 803 is configured to:
and merging the visual element nodes which meet the merging condition in the visual element nodes which do not support the code construction to obtain the merged visual element nodes. The merging condition is used for judging whether the two visual element nodes belong to the same level or not. In an alternative design, the combining conditions include at least one of:
the image layers corresponding to the two visual element nodes belong to the same slice group;
the color similarity between the image layers corresponding to the two visual element nodes reaches a first threshold value;
the intersection degree of the image layers corresponding to the two visual element nodes is greater than a second threshold value;
two visual element nodes belong to the same directory level in the node tree.
In an alternative design, the merged visual element nodes further conform to: and the area of the image layer corresponding to the visual element node is smaller than a third threshold value.
In an alternative design, determining module 802 is configured to:
and filtering the visual element nodes supporting code construction in the node tree to obtain the visual element nodes which do not support code construction in the node tree.
In an alternative design, determining module 802 is configured to:
and filtering the visual element nodes which meet the filtering condition in the node tree to obtain the visual element nodes which do not support the code construction in the node tree. The filtration conditions include at least one of:
belonging to a style-free character node;
belong to a straight line node;
a graph node belonging to a specified shape.
In an alternative design, as shown in fig. 9, the apparatus 80 further comprises:
an adding module 805, configured to add a target identifier to the merged visual element node in the node tree.
The deriving module 804 is configured to derive a map layer corresponding to the visual element node with the target identifier in the node tree as a first map layer.
In an optional design, the deriving module 804 is configured to derive, as a second map-cutting layer, a layer corresponding to a visual element node that is not merged in the visual element nodes that do not support code building.
It should be noted that: the layer deriving device provided in the foregoing embodiment is only illustrated by dividing each functional module, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the layer deriving device and the layer deriving method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Embodiments of the present application further provide a computer device, including: the image layer derivation method comprises a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to realize the image layer derivation method provided by the method embodiments.
Optionally, the computer device is a server. Illustratively, fig. 10 is a schematic structural diagram of a server provided in an embodiment of the present application.
The server 1000 includes a Central Processing Unit (CPU) 1001, a system Memory 1004 including a Random Access Memory (RAM) 1002 and a Read-Only Memory (ROM) 1003, and a system bus 1005 connecting the system Memory 1004 and the CPU 1001. The server 1000 also includes a basic Input/Output system (I/O system) 1006 to facilitate information transfer between the various devices within the server, and a mass storage device 1007 for storing an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1008 and input device 1009 are connected to the central processing unit 1001 through an input-output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include an input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 1010 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable storage media provide non-volatile storage for the server 1000. That is, the mass storage device 1007 may include a computer-readable storage medium (not shown) such as a hard disk or a Compact Disc-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable storage media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable storage instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory devices, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1004 and mass storage device 1007 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 1001, the one or more programs containing instructions for implementing the method embodiments described above, and the central processing unit 1001 executes the one or more programs to implement the methods provided by the various method embodiments described above.
The server 1000 may also operate as a remote computer connected to a network, such as the Internet, according to various embodiments of the present application. That is, the server 1000 may be connected to the network 1012 through a network interface unit 1011 connected to the system bus 1005, or the network interface unit 1011 may be used to connect to another type of network or a remote computer system (not shown).
The memory also includes one or more programs, which are stored in the memory, and the one or more programs include instructions for performing the steps performed by the server in the methods provided by the embodiments of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, where at least one program code is stored in the computer-readable storage medium, and when the program code is loaded and executed by a processor of a computer device, the layer derivation method provided in the above method embodiments is implemented.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the layer derivation method provided by the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer readable storage medium, and the above readable storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an example of the present application and should not be taken as limiting, and any modifications, equivalent switches, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method for deriving layers, the method comprising:
acquiring a node tree of a visual draft file of a user interface, wherein the node tree comprises visual element nodes, and the visual element nodes correspond to image layers of visual elements forming the user interface;
determining visual element nodes which do not support code construction in the node tree;
merging visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes;
and deriving the image layer corresponding to the merged visual element node into a first map-cutting image layer.
2. The method according to claim 1, wherein merging visual element nodes belonging to the same hierarchy among visual element nodes that do not support code building to obtain merged visual element nodes comprises:
merging visual element nodes which meet merging conditions in the visual element nodes which do not support code construction to obtain merged visual element nodes;
wherein the merging condition is used for judging whether two visual element nodes belong to the same level.
3. The method of claim 2, wherein the combining condition comprises at least one of:
the image layers corresponding to the two visual element nodes belong to the same slice group;
the color similarity between the image layers corresponding to the two visual element nodes reaches a first threshold value;
the intersection degree of the image layers corresponding to the two visual element nodes is greater than a second threshold value;
two visual element nodes belong to the same directory level in the node tree.
4. The method according to any of claims 1 to 3, wherein the merged visual element nodes further satisfy the following condition:
and the area of the image layer corresponding to the visual element node is smaller than a third threshold value.
5. The method of any of claims 1 to 3, wherein the determining the visual element nodes in the node tree that do not support code building comprises:
and filtering the visual element nodes supporting code construction in the node tree to obtain the visual element nodes which do not support code construction in the node tree.
6. The method of claim 5, wherein the filtering the visual element nodes supporting code building in the node tree to obtain the visual element nodes not supporting code building in the node tree comprises:
filtering the visual element nodes which meet the filtering condition in the node tree to obtain the visual element nodes which do not support code construction in the node tree; the filtration conditions include at least one of:
belonging to a style-free character node;
belong to a straight line node;
a graph node belonging to a specified shape.
7. The method of any of claims 1 to 3, further comprising:
adding a target identifier to the merged visual element node in the node tree;
the deriving the layer corresponding to the merged visual element node as a first map layer includes:
and deriving the layer corresponding to the visual element node with the target identifier in the node tree as the first map-cutting layer.
8. The method of any of claims 1 to 3, further comprising:
and exporting the layer corresponding to the visual element node which is not merged in the visual element nodes which do not support code construction as a second map cutting layer.
9. An apparatus for deriving a layer, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a node tree of a visual draft file of a user interface, the node tree comprises visual element nodes, and the visual element nodes correspond to image layers of visual elements forming the user interface;
a determining module, configured to determine a visual element node in the node tree that does not support code construction;
the merging module is used for merging visual element nodes belonging to the same level in the visual element nodes which do not support code construction to obtain merged visual element nodes;
and the derivation module is used for deriving the image layer corresponding to the merged visual element node into a first map-cutting image layer.
10. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the layer derivation method according to any one of claims 1 to 8.
11. A computer-readable storage medium, wherein at least one program code is stored in the computer-readable storage medium, and the program code is loaded and executed by a processor to implement the layer derivation method according to any one of claims 1 to 8.
CN202011332334.0A 2020-11-24 2020-11-24 Layer export method, device, equipment and storage medium Pending CN112306490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011332334.0A CN112306490A (en) 2020-11-24 2020-11-24 Layer export method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011332334.0A CN112306490A (en) 2020-11-24 2020-11-24 Layer export method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112306490A true CN112306490A (en) 2021-02-02

Family

ID=74335729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011332334.0A Pending CN112306490A (en) 2020-11-24 2020-11-24 Layer export method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112306490A (en)


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code — Ref country code: HK; Ref legal event code: DE; Ref document number: 40038245; Country of ref document: HK
SE01 Entry into force of request for substantive examination