CN117009685A - Page rendering method and device, electronic equipment and storage medium

Info

Publication number: CN117009685A
Application number: CN202210455351.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: information, layer, target, layer information, compiled
Inventors: 贾丽鹏, 宗宇
Applicant and current assignee: Tencent Technology Shenzhen Co Ltd
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F 16/9574: Browsing optimisation of access to content, e.g. by caching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a page rendering method, a page rendering device, an electronic device and a storage medium, wherein the page rendering method includes the following steps: acquiring visual information to be compiled of a target page; determining the layer types of a plurality of layer information to be compiled; executing a target compiling operation corresponding to the layer type on the layer information to be compiled to obtain compiled target layer information; performing visual attribute extraction processing based on the target layer information to obtain target visual attribute information; and rendering the target page based on the target visual attribute information. According to the embodiments of the disclosure, parsing processing such as compiling and attribute extraction of the visual information to be compiled can be performed offline, the parsing time in the page rendering process is reduced, parsing of complex visual files is supported, and the reduction degree (that is, the fidelity with which the rendered page reproduces the design) of the target page is improved.

Description

Page rendering method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of data processing, and in particular relates to a page rendering method, a page rendering device, electronic equipment and a storage medium.
Background
With the development of internet technology, an application program can display advertisement content and drive user behavior conversion through a landing page. In internet marketing, a landing page is the web page to which a potential user is redirected after clicking on an advertisement or performing a search with a search engine; specifically, it is the first page that the user enters through channels such as clicking on advertising materials or links.
The landing page can be obtained by parsing and rendering a visual file. The visual files output by current mainstream image processing software carry structural information that can be described with a domain-specific language (Domain Specific Language, DSL). A user can query this structure according to the corresponding operation specification and operate on the visual file with an online parsing tool provided by the landing page production side, so that the required visual file structure is parsed and the visual elements are extracted; the landing page is then rendered based on the parsed visual file structure and the extracted visual elements. However, in the current landing page rendering process, the visual file must be parsed online; when the number of users is large, parsing requests are queued and the waiting time is usually more than half an hour, and when the network is unstable, parsing may fail entirely, which delays the landing page rendering progress. In addition, existing compiling methods suffer from large differences between the compiled result and the visual file, and the compiled content has to be modified repeatedly before it can be put into use, which affects the user experience.
Disclosure of Invention
In view of the above technical problems, the present disclosure provides a method, an apparatus, an electronic device, and a storage medium for rendering a page.
According to an aspect of the embodiments of the disclosure, a page rendering method is provided. The method is applied to image processing software and includes: acquiring visual information to be compiled of a target page, the visual information to be compiled including a plurality of layer information to be compiled;
determining the layer types of the plurality of layer information to be compiled;
performing target compiling operation corresponding to the layer type on the layer information to be compiled to obtain compiled target layer information;
based on the target layer information, performing visual attribute extraction processing to obtain target visual attribute information;
and rendering the target page based on the target visual attribute information.
According to another aspect of the embodiments of the present disclosure, there is provided a page rendering apparatus including:
the visual information to be compiled acquisition module is used for acquiring visual information to be compiled of a target page, wherein the visual information to be compiled comprises a plurality of layer information to be compiled;
the layer type determining module is used for determining layer types of the plurality of layer information to be compiled;
the compiling module is used for executing target compiling operation corresponding to the layer type on the layer information to be compiled to obtain compiled target layer information;
The attribute extraction module is used for carrying out visual attribute extraction processing based on the target layer information to obtain target visual attribute information;
and the rendering module is used for rendering the target page based on the target visual attribute information.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the page rendering method described above.
According to another aspect of the embodiments of the disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the above-described page rendering method.
According to another aspect of the disclosed embodiments, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the above-described page rendering method.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the visual information to be compiled is parsed by the script virtual machine in the image processing software, so the parsing can be performed offline, network limitations are avoided, the waiting time is greatly reduced, and the timeliness of parsing the visual information to be compiled is improved. In the parsing process, a target compiling operation corresponding to the layer type is performed on each layer information to be compiled, in combination with the layer types of the plurality of layer information to be compiled, to obtain compiled target layer information; this greatly improves the reduction degree of the target page rendered from the compiled target layer information and makes it possible to parse complex visual files. Visual attribute extraction processing is further performed to obtain target visual attribute information, and the target page is rendered based on the target visual attribute information, which improves the parsing efficiency in the page rendering process and the reduction degree of the target page.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an application environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of page rendering, according to an example embodiment;
FIG. 3 is a flowchart illustrating a method after obtaining visual information of a target page to be compiled, in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method for performing visual attribute extraction processing to obtain target visual attribute information based on target layer information, according to an example embodiment;
FIG. 5 is a flowchart of another method for extracting visual attribute information based on target layer information to obtain target visual attribute information according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating a method of rendering a target page based on target visual attribute information, according to an example embodiment;
FIG. 7 is a flowchart illustrating a method of presenting a preview page and updating target visual attribute information in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating an operator interface in a page rendering process, according to an example embodiment;
FIG. 9 is a diagram illustrating a comparison of the effects of a target page and a visual file, according to an example embodiment;
fig. 10 is a block diagram illustrating a structure of a page rendering apparatus according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment according to an exemplary embodiment, and as shown in fig. 1, the application environment may be image processing software, and in particular, the image processing software may be Adobe Photoshop. The above page rendering method may be performed by a script virtual machine in image processing software. The script virtual machine can comprise a first script virtual machine and a second script virtual machine; as host software, the image processing software may provide a first script virtual machine (JavaScript Engine) in the canvas (Application) and a second script virtual machine (V8 JavaScript Engine) in the extensible platform (CEP Extension Runtime). The first script virtual machine in the canvas can be used for operating image processing software, and the operation of visual information to be compiled is realized by calling a built-in interface, for example, the operations of moving a layer, deleting the layer, rasterizing the image and the like can be executed; a second script virtual machine in the extensible platform may be used for page previews, data conversion, and generation of target pages.
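To make the division of labor between the two engines concrete, the following TypeScript sketch shows how a CEP panel (running in the second, V8-based script virtual machine) could ask the host-side engine to run a parsing routine and receive the result as JSON. It assumes the standard Adobe CEP `CSInterface.evalScript` bridge; the host-side function name `parseVisualInfo` is a hypothetical placeholder, not an API defined by the patent or by Photoshop.

```typescript
// Minimal sketch of panel-to-host communication in a CEP extension.
// `CSInterface` is provided by Adobe's CSInterface.js at runtime; it is
// declared here only so the sketch type-checks. `parseVisualInfo` is a
// hypothetical host-side (first script virtual machine) function.
declare class CSInterface {
  evalScript(script: string, callback?: (result: string) => void): void;
}

const cs = new CSInterface();

// Ask the host-side engine to compile the layers of the active document and
// return the extracted visual attribute information as a JSON string.
function requestParse(parseByGroup: boolean): Promise<unknown> {
  return new Promise((resolve, reject) => {
    cs.evalScript(`parseVisualInfo(${parseByGroup})`, (result: string) => {
      try {
        // the second script virtual machine consumes the result for preview/rendering
        resolve(JSON.parse(result));
      } catch (e) {
        reject(e);
      }
    });
  });
}

requestParse(true).then((attrs) => console.log("target visual attributes", attrs));
```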
Referring to fig. 2, fig. 2 is a flowchart illustrating a page rendering method, which may be applied to the above-described script virtual machine, according to an exemplary embodiment, the page rendering method including the following steps, as shown in fig. 2.
In step S201, visual information to be compiled of a target page is acquired.
In a specific embodiment, the target page may refer to a page to be generated; specifically, the target page may be a landing page to be generated. The visual information to be compiled may refer to information to be compiled including a plurality of page elements; the format of the visual information to be compiled may be a specific format of the image processing software, for example, when the image processing software is Adobe Photoshop, the format of the visual information to be compiled may be a bitmap file format. The visual information to be compiled may include a plurality of layer information to be compiled, and any layer information to be compiled may characterize visual information of the target page corresponding to any layer of the plurality of layers.
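For concreteness, the sketches accompanying the following steps use a small hypothetical TypeScript model of the layer information tree; the type and field names below are illustrative assumptions, not structures defined by the patent or by any particular image processing software.

```typescript
// Hypothetical in-memory model of "visual information to be compiled":
// a tree whose nodes are layer information entries. Field names are illustrative.
type LayerType =
  | "text" | "richText" | "image" | "invisible"
  | "emptyGroup" | "mask" | "mixedLayer" | "mixedGroup" | "group";

interface Rect {
  left: number;
  top: number;
  width: number;
  height: number;
}

interface LayerNode {
  id: string;
  name: string;
  type: LayerType;
  visible: boolean;
  isBackground: boolean;
  locked: boolean;
  bounds: Rect;
  children: LayerNode[];     // non-empty only for layer groups
}

// The root stands for the whole document/artboard; its direct children are
// the top-level layers and layer groups.
interface VisualInfo {
  root: LayerNode;
  background: Rect;          // boundary of the background layer
}

// Example: an empty 750 x 1334 artboard
const example: VisualInfo = {
  background: { left: 0, top: 0, width: 750, height: 1334 },
  root: {
    id: "root", name: "artboard 1", type: "group", visible: true,
    isBackground: false, locked: false,
    bounds: { left: 0, top: 0, width: 750, height: 1334 },
    children: [],
  },
};
console.log(example.root.name);
```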
In a specific embodiment, the visual information to be compiled of the target page may be designed and generated by image processing software. The image processing software may transmit visual information to be compiled to the first script virtual machine.
In a specific embodiment, the plurality of layer information to be compiled in the visual information to be compiled may include the layer information to be compiled corresponding to each of a plurality of artboards (drawing boards), and the layer information to be compiled corresponding to the artboard to be compiled among the plurality of artboards may be determined based on an artboard selection instruction triggered by the user.
In a specific embodiment, to prevent erroneous deletion or movement, the image processing software may default to lock the background layer, as shown in fig. 3, and after the step S201, the method may further include:
s301, background locking detection is carried out on the plurality of layers of information to be compiled, and a detection result is obtained.
S302, unlocking the layer information of which the detection result indicates background locking, and obtaining unlocked visual information to be compiled.
In a specific embodiment, the visual information to be compiled may be visual information to be compiled after unlocking.
In the above embodiment, some background layers in the visual information to be compiled may be locked. By performing background locking detection on the plurality of layer information to be compiled and unlocking the layer information whose detection result indicates background locking, the unlocked visual information to be compiled is obtained, so that compiling failures caused by a locked background layer can be avoided.
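A minimal sketch of the background-lock detection and unlocking described above, written against the hypothetical layer model introduced earlier; in the host engine the flag assignment would be replaced by the image processing software's own unlock operation.

```typescript
// Trimmed version of the hypothetical layer model from the earlier sketch.
interface LayerNode {
  name: string;
  isBackground: boolean;
  locked: boolean;
  children: LayerNode[];
}

// S301/S302: detect background-locked layers and unlock them so that later
// compiling steps (merge, move, delete) are not rejected.
function unlockBackgroundLayers(root: LayerNode): LayerNode[] {
  const unlocked: LayerNode[] = [];
  const walk = (node: LayerNode): void => {
    if (node.isBackground && node.locked) {
      node.locked = false;      // stand-in for the host software's unlock call
      unlocked.push(node);
    }
    node.children.forEach(walk);
  };
  walk(root);
  return unlocked;              // detection result: which layers were unlocked
}
```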
In step S202, a layer type of a plurality of layer information to be compiled is determined.
In a particular embodiment, the layer types may include a non-text type, a text type, an invisible type, an empty layer group type, a mask type, an image type, a mixed layer type, a mixed layer group type, and the like. The text type may include a plain text type and/or a rich text type.
In a specific embodiment, the step S202 may include:
and carrying out layer type identification on the plurality of layer information to be compiled in the visual information to be compiled to obtain layer types corresponding to the plurality of layer information to be compiled.
In a specific embodiment, the layer attribute information of each layer information to be compiled may be obtained, and the layer type corresponding to the layer information to be compiled may be identified in combination with the layer attribute information. Specifically, when the layer attribute information indicates that the layer information to be compiled is text information, it may be determined that the layer type of the layer information to be compiled is a text type; when the layer attribute information indicates that the layer information to be compiled is not text information, it may be determined that the layer type is a non-text type; when the layer attribute information indicates that the layer information to be compiled is plain text information or rich text information, it may be determined that the layer type is a plain text type or a rich text type, respectively. When the layer attribute information indicates that the layer information to be compiled is invisible layer information, it may be determined that the layer type is an invisible type. When the layer attribute information indicates that the layer information to be compiled is layer group information and the group is empty, it may be determined that the layer type is an empty layer group type. When the layer attribute information indicates that the layer information to be compiled is mask layer information, it may be determined that the layer type is a mask type. When the layer attribute information indicates that the layer information to be compiled is image layer information, it may be determined that the layer type is an image type, where the image layer may include a vector layer, a bitmap layer, a 3D layer, a filter layer, a smart object layer or a gradient fill layer. When the layer attribute information indicates that a blending mode is applied to the layer information to be compiled, it may be determined that the layer type is a mixed layer type; when the layer attribute information indicates that a blending mode is applied to the layer information to be compiled and the layer information to be compiled is layer group information, it may be determined that the layer type is a mixed layer group type.
In a specific embodiment, if the background layer locking exists in the plurality of layer information to be compiled, the background locking detection can be performed on the plurality of layer information to be compiled, and the unlocking processing can be performed on the layer information, the detection result of which indicates that the background is locked, so as to obtain unlocked visual information to be compiled. Specifically, after the unlocked visual information to be compiled is obtained, layer type identification can be performed on the multiple layer information to be compiled in the unlocked visual information to be compiled, so that layer types corresponding to the multiple layer information to be compiled can be obtained.
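The classification logic described above can be sketched as a single dispatch over the layer attribute information; the `RawLayerAttrs` fields and the precedence of the checks below are illustrative assumptions, not prescribed by the patent.

```typescript
// Hypothetical raw attribute record for a single layer, before classification.
interface RawLayerAttrs {
  isText: boolean;
  isRichText: boolean;        // only meaningful when isText is true
  visible: boolean;
  isGroup: boolean;
  childCount: number;         // only meaningful when isGroup is true
  isMask: boolean;
  hasBlendMode: boolean;      // a blending mode is applied to the layer
  isImage: boolean;           // vector, bitmap, 3D, smart object, fill, ...
}

type LayerType =
  | "plainText" | "richText" | "invisible" | "emptyGroup"
  | "mask" | "image" | "mixedLayer" | "mixedGroup" | "group" | "other";

// Sketch of layer type identification; the exact check order is an assumption.
function classifyLayer(a: RawLayerAttrs): LayerType {
  if (!a.visible) return "invisible";
  if (a.isGroup && a.childCount === 0) return "emptyGroup";
  if (a.hasBlendMode) return a.isGroup ? "mixedGroup" : "mixedLayer";
  if (a.isMask) return "mask";
  if (a.isText) return a.isRichText ? "richText" : "plainText";
  if (a.isImage) return "image";
  if (a.isGroup) return "group";
  return "other";
}
```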
In step S203, a target compiling operation corresponding to the layer type is performed on the layer information to be compiled, so as to obtain compiled target layer information.
In a specific embodiment, in the process of parsing the layer information to be compiled, because of the complex structure of the layer information to be compiled, the layer information to be compiled may be converted into target layer information of a hierarchical structure by performing a target compiling operation corresponding to a layer type.
In a specific embodiment, the target compiling operation may refer to an operation performed on a layer corresponding to layer information to be compiled. The target compiling operation may include merging of layer information, deleting of layer information, modifying of hierarchical relationship of layer information, and the like.
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
and deleting the layer information of which the layer type comprises an invisible type or an empty layer group type in the layer information to be compiled.
In a particular embodiment, the target compilation operation may be a delete operation in the event that a layer type of the layer information includes an invisible type or an empty layer group type.
In the above embodiment, by deleting the invisible type layer information, exporting irrelevant elements from invisible layers can be avoided, which further improves the reduction degree of the target page.
In a specific embodiment, the visual information to be compiled may be tree structure information; correspondingly, the visual information to be compiled may be tree structure information that takes the plurality of layer information to be compiled as nodes and the hierarchical relationship between the layers to which the layer information to be compiled belongs as the node hierarchical relationship. The performing, on the layer information to be compiled, a target compiling operation corresponding to the layer type may further include:
and determining the layer information to be processed of the masked layer information mask from the plurality of layer information to be compiled.
And carrying out merging processing on the mask layer information and the layer information to be processed to obtain first merged layer information, and moving the first merged layer information into the root node of the tree structure information.
In a specific embodiment, the mask layer information may refer to the layer information whose layer type is the mask type among the plurality of layer information to be compiled. Specifically, by traversing the tree structure information, the position of the mask layer information in the current layer group can be recorded as the starting position; then the position of the first layer in the current layer group that is not marked as a mask, found in top-to-bottom order of the hierarchical relationship, is taken as the final position; and the layer information between the starting position and the final position is used as the mask layer information.
In a specific embodiment, the layer information to be processed may refer to layer information in which a partial region is masked by masking layer information.
In a specific embodiment, in a case where the layer type of the layer information is a mask type, the target compiling operation may include a layer information merging operation and a modifying operation of a hierarchical relationship of the layer information.
In the above embodiment, by combining the mask layer information and the layer information to be processed in advance, the mask effect of the mask layer information is applied to the layer information to be processed, so that the loss of the mask effect caused by the loss of the mask layer information in the export process is avoided, and the reduction degree of the target page can be improved.
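A sketch of the traversal described above, under the assumption that mask layers can be recognized by a `maskMarked` flag and that a host-provided merge command is available; it merges each mask range together with the first non-mask layer below it and hoists the result into the root node.

```typescript
// Trimmed hypothetical layer model; `maskMarked` flags layers that belong to
// the mask (an illustrative assumption, not a real API field).
interface LayerNode {
  name: string;
  maskMarked: boolean;
  children: LayerNode[];
}

// Within one layer group, find each run of mask layers plus the first
// non-mask layer below it (the masked, "to be processed" layer), merge the
// range into one layer ("first merged layer information") and move it into
// the root node. `mergeLayers` stands in for the host software's merge command.
function mergeMaskRanges(
  root: LayerNode,
  group: LayerNode,
  mergeLayers: (layers: LayerNode[]) => LayerNode
): void {
  const hoisted: LayerNode[] = [];
  let start = 0;
  while (start < group.children.length) {
    if (!group.children[start].maskMarked) { start++; continue; }
    // starting position: the first mask layer; final position: the first
    // layer below the run that is not marked as a mask (if there is one)
    let end = start;
    while (end + 1 < group.children.length && group.children[end + 1].maskMarked) end++;
    if (end + 1 < group.children.length) end++;
    const range = group.children.slice(start, end + 1);
    group.children.splice(start, end - start + 1);
    hoisted.push(mergeLayers(range));
    // the next unprocessed child now sits at index `start`; do not advance
  }
  root.children.push(...hoisted);   // move the merged layers into the root node
}
```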
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
and merging the layer information with the multiple layer types of the image types and continuous layers in the layer information to be compiled to obtain second merged layer information, and moving the second merged layer information into the root node.
In a specific embodiment, the second merging layer information may refer to layer information obtained by merging layer information with a plurality of layer types being image types and having continuous layers in the plurality of layer information to be compiled. The plurality of layer types are picture types and hierarchically continuous layer information may refer to layer information of a plurality of hierarchically continuous picture types within a single layer group.
Specifically, the position of the first layer information with the image type in the current layer group can be recorded as the initial position through traversing the tree structure information, then the layer information with the image type being the non-image type is found from the current layer group according to the sequence from top to bottom in the hierarchical relation, the upper layer image information of the layer information with the non-image type is taken as the final position, the layer information between the initial position and the final position is the layer information with the image type and the continuous hierarchy, the layer information between the initial position and the final position is merged, the second merged layer information can be obtained, and the second merged layer information is moved into the root node.
In the above embodiment, by merging the hierarchically continuous image-type layers within a layer group, content of the same type becomes more integral without losing editability, which makes it convenient for users such as advertisers to perform secondary editing on the merged image.
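The merging of hierarchically continuous image-type layers can be sketched as follows, again over the hypothetical layer model and a host-provided merge command.

```typescript
// Trimmed hypothetical layer model (see the earlier sketch).
interface LayerNode {
  name: string;
  type: "image" | "text" | "group" | "other";
  children: LayerNode[];
}

// Within one layer group, merge each run of hierarchically continuous
// image-type layers into a single layer ("second merged layer information")
// and move it into the root node; a run of length one is moved unchanged.
// `mergeLayers` stands in for the host software's merge command.
function mergeContinuousImageLayers(
  root: LayerNode,
  group: LayerNode,
  mergeLayers: (layers: LayerNode[]) => LayerNode
): void {
  const hoisted: LayerNode[] = [];
  let i = 0;
  while (i < group.children.length) {
    if (group.children[i].type !== "image") { i++; continue; }
    // starting position: first image layer of the run; final position: the
    // image layer immediately above the next non-image layer
    let j = i;
    while (j + 1 < group.children.length && group.children[j + 1].type === "image") j++;
    const run = group.children.slice(i, j + 1);
    group.children.splice(i, j - i + 1);
    hoisted.push(run.length > 1 ? mergeLayers(run) : run[0]);
    // the next unprocessed child now sits at index i; do not advance i
  }
  root.children.push(...hoisted);   // move the merged layers into the root node
}
```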
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
and moving single layer information with the image type as the layer type in the plurality of layer information to be compiled into the root node.
In a specific embodiment, single layer information whose layer type is the image type may refer to image-type layer information whose adjacent layer information is all of non-image types.
Specifically, single layer information whose layer type is the image type among the plurality of layer information to be compiled can be found by traversing the tree structure information.
In a specific embodiment, in a case where the layer type of the layer information is an image type, the target compiling operation may be a modifying operation of the hierarchical relationship of the layer information, or may include a modifying operation of the layer information merging operation and the hierarchical relationship of the layer information.
In a specific embodiment, the user may select a parsing manner (including parsing by group and parsing by layer) according to the desired parsing granularity. When the user selects parsing by group, the steps of merging the hierarchically continuous image-type layer information among the plurality of layer information to be compiled to obtain the second merged layer information, moving the second merged layer information into the root node, and moving the single image-type layer information among the plurality of layer information to be compiled into the root node are performed. When the user selects parsing by layer, the action of moving the image-type layer information into the root node is taken as the target compiling operation for the image-type layer information. It can be understood that, in the parse-by-layer manner, hierarchically continuous layer information is not merged but is moved directly into the root node, and the layer relationships among the layer information can be preserved; this provides material parsed at the finest granularity, makes it convenient for the user to perform secondary editing at the finest granularity, and better meets the requirements of different editing scenarios.
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
and combining the mixed layer information and lower layer information of the mixed layer information to obtain third combined layer information, and moving the third combined layer information into the root node.
In a specific embodiment, the mixed layer information may refer to the layer information whose layer type is the mixed layer type among the plurality of layer information to be compiled. The lower layer information of the mixed layer information may refer to the layer information located one level below the mixed layer information in the layer hierarchy.
In a specific embodiment, in a case where the layer type of the layer information is a hybrid layer type, the target compiling operation may include a layer information merging operation and a modification operation of a hierarchical relationship of the layer information.
Specifically, the mixed layer information and the lower layer information of the mixed layer information are combined to obtain the third combined layer information, so that the design effect loss of the mixed layer can be avoided, and the reduction degree of the target page is further improved.
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
And under the condition that the layer type is the mixed layer group type, moving the layer information in the mixed layer group into a root node, and deleting the mixed layer group.
In a specific embodiment, the mixed layer group may refer to a layer group whose layer type is a mixed layer.
In a specific embodiment, in a case where the layer type of the layer information is a hybrid layer group, the target compiling operation may include a modifying operation of a hierarchical relationship of the layer information and a deleting operation of the layer information.
Specifically, the mixed layer group may include at least one layer information; by moving all layer information in the mixed layer group into the root node and deleting the mixed layer group, the influence of merging the layer information in the mixed layer group into a single bitmap in the export process on the secondary editing can be avoided.
In a specific embodiment, the performing, by the layer information to be compiled, the target compiling operation corresponding to the layer type may include:
deleting the layer information of which the layer type comprises an invisible type or an empty layer group type in the layer information to be compiled;
determining, from the plurality of layer information to be compiled, the layer information to be processed that is masked by mask layer information; merging the mask layer information and the layer information to be processed to obtain first merged layer information, and moving the first merged layer information into the root node of the tree structure information;
merging the layer information, among the plurality of layer information to be compiled, whose layer types are the image type and whose layers are hierarchically continuous to obtain second merged layer information, and moving the second merged layer information into the root node;
moving single layer information whose layer type is the image type among the plurality of layer information to be compiled into the root node;
combining the mixed layer information and lower layer information of the mixed layer information to obtain third combined layer information, and moving the third combined layer information into a root node; the mixed layer information is layer information of which the layer type is a mixed layer in the layer information to be compiled;
and under the condition that the layer type is the mixed layer group type, moving the layer information in the mixed layer group into a root node, and deleting the mixed layer group.
In a specific embodiment, the target layer information may be layer information obtained after performing at least one target compiling operation on the layer information to be compiled.
In the above embodiment, by moving the first merged layer information into the root node of the tree structure information, moving the second merged layer information into the root node, moving the single image-type layer information among the plurality of layer information to be compiled into the root node, moving the third merged layer information into the root node, and moving the layer information in the mixed layer group into the root node, the visual information to be compiled can be flattened into a flatter hierarchical structure, which is more conducive to describing the page structure with the code information of the target page generated later.
In step S204, visual attribute extraction processing is performed based on the target layer information, to obtain target visual attribute information.
In a specific embodiment, the target visual attribute information may refer to generic descriptive structural information of the layer layout of the target page. The target visual attribute information may characterize the size attributes, position attributes, category attributes and/or content corresponding to the plurality of visual elements in the visual information to be compiled. The target visual attribute information may include the attribute information corresponding to each of the plurality of layer information in the target layer information; specifically, the attribute information may be a JSON (JavaScript Object Notation) array. The attribute information corresponding to each layer information may include the size information, position information and category information of the element corresponding to the layer information, and the identification information of the corresponding image.
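One possible shape for an entry of the target visual attribute information is sketched below; the field names and sample values are illustrative assumptions, chosen only to show how size, position, category, content and image identification could be carried in a JSON array.

```typescript
// Hypothetical shape of one entry in the target visual attribute information;
// field names and the sample values are illustrative only.
interface VisualAttribute {
  id: string;                       // layer identifier, matches an exported image id
  category: "image" | "text" | "richText";
  left: number;                     // position attributes, in px
  top: number;
  width: number;                    // size attributes, in px
  height: number;
  content?: string;                 // text content, for text layers
  fontSize?: number;                // text layers only
  lineHeight?: number;
  color?: string;
  imageId?: string;                 // identifier of the exported target image
}

const targetVisualAttributes: VisualAttribute[] = [
  { id: "layer-1", category: "image", left: 0, top: 0, width: 750, height: 420, imageId: "layer-1.png" },
  { id: "layer-2", category: "text", left: 40, top: 460, width: 670, height: 48,
    content: "Example headline", fontSize: 32, lineHeight: 48, color: "#333333" },
];

console.log(JSON.stringify(targetVisualAttributes, null, 2));
```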
In a specific embodiment, as shown in fig. 4, the step S204 may include:
s401, rasterizing the non-text layer information to obtain first layer information after rasterizing.
In a specific embodiment, the non-text layer information may refer to layer information in which a layer type in the target layer information is a non-text type. The first layer information may include a plurality of rasterized non-text type layer information.
Specifically, the multiple bitmap information can be obtained by rasterizing multiple non-text type layer information in the target layer information respectively, and the multiple bitmap information is used as the first layer information.
In a specific embodiment, before rasterization, for layer information in the non-text layer information whose size information is smaller than preset size information, layer information whose element keeps its actual size but whose layer size information (such as layer width and layer height) equals the preset size information can be obtained through redrawing, and rasterization is then performed. This prevents the layer from being too small to select in an editor, which would affect secondary editing.
In the above embodiment, by rasterizing the multiple non-text-type layer information in the target layer information, the loss of layer effects caused by an editor not supporting some special layer types of the image processing software can be avoided, the special effects in the layer information can be preserved to the greatest extent, and the reduction degree of the target page can be further improved.
S402, determining out-of-range layer information from the first layer information and the text layer information.
In a specific embodiment, the visual information to be compiled may include background layer information, and the out-of-bounds layer information may refer to layer information beyond a background layer boundary. The out-of-range layer information may be a plurality of layer information exceeding a background layer boundary from among the first layer information and the text layer information.
Specifically, the position information of each layer information in the first layer information and the text layer information can be obtained, and the layer information can be determined to be non-boundary crossing layer information under the condition that the position information of the layer information belongs to the boundary range of the background layer; and under the condition that the position information of the layer information exceeds the boundary range of the background layer, determining that the layer information is out-of-range layer information.
S403, cutting the target area in the out-of-range layer information to obtain second layer information.
In a specific embodiment, the target region may refer to a region beyond the boundary of the background layer in the out-of-range layer information.
Specifically, the position of the target area of the out-of-range layer information can be determined according to the position information and the size information of the out-of-range layer information and the position information of the background layer boundary; and cutting off the target area in the out-of-range layer information based on the position of the target area of the out-of-range layer information, and reserving the part except the target area in the out-of-range layer information as second layer information.
In the above embodiment, by clipping the target area in the out-of-range layer information to obtain the second layer information, the impact on secondary editing caused by the editor being unable to select the out-of-range layer information can be avoided.
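The out-of-range test and the clipping of the target area reduce to a rectangle intersection, as sketched below with illustrative field names.

```typescript
interface Rect { left: number; top: number; width: number; height: number; }

// S402/S403: a layer is out of range if its bounds are not fully inside the
// background layer's bounds; the clipped rectangle keeps only the part that
// overlaps the background (the region beyond the boundary is the "target
// area" that gets cut away).
function clipToBackground(layer: Rect, background: Rect): Rect | null {
  const left = Math.max(layer.left, background.left);
  const top = Math.max(layer.top, background.top);
  const right = Math.min(layer.left + layer.width, background.left + background.width);
  const bottom = Math.min(layer.top + layer.height, background.top + background.height);
  if (right <= left || bottom <= top) return null;     // no visible part remains
  return { left, top, width: right - left, height: bottom - top };
}

function isOutOfRange(layer: Rect, background: Rect): boolean {
  const clipped = clipToBackground(layer, background);
  return clipped === null
    || clipped.left !== layer.left || clipped.top !== layer.top
    || clipped.width !== layer.width || clipped.height !== layer.height;
}

// Example: a banner hanging 30 px past the right edge of a 750 px-wide page
const bg: Rect = { left: 0, top: 0, width: 750, height: 1334 };
console.log(isOutOfRange({ left: 700, top: 100, width: 80, height: 80 }, bg));   // true
console.log(clipToBackground({ left: 700, top: 100, width: 80, height: 80 }, bg));
// -> { left: 700, top: 100, width: 50, height: 80 }
```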
S404, visual attribute extraction processing is carried out on the non-boundary crossing layer information and the second layer information, and target visual attribute information is obtained.
In a specific embodiment, the non-boundary crossing layer information may refer to layer information other than the boundary crossing layer information in the first layer information and the text layer information.
Specifically, for layer information whose layer type is the plain text type, the text content information, position information, size information and color information in the plain-text-type layer information can be extracted as the attribute information corresponding to the layer information, where the size information may include attributes such as font size and line height; for layer information whose layer type is the image type, the image position information, size information and image identification information in the image-type layer information can be extracted as the attribute information corresponding to the layer information.
The distribution structure of the plurality of attribute information in the target visual attribute information corresponds to the distribution structure of the plurality of layer information in the target layer information. It may be appreciated that the attribute information in the target visual attribute information is extracted from the layer information in the target layer information, that is, the plurality of layer information in the target layer information may correspond to the plurality of attribute information in the target visual attribute information one by one.
In a specific embodiment, as shown in fig. 5, for rich text layer information with a layer type being a rich text type, the step S204 may include:
s501, carrying out word-by-word analysis processing on the rich text layer information to obtain attribute information corresponding to each of a plurality of rich text words in the rich text layer information.
In a specific embodiment, the rich text in the visual information to be compiled is different from the normal text, and may include a plurality of text styles, so that a rich text effect may be displayed, and further, the design feel of the page may be improved. The rich text layer information may refer to layer information whose layer type is a rich text type. In a unit of single text, the rich text layer information can comprise a plurality of rich text characters, and the attribute information of each rich text character can comprise attribute information such as font size, color, line height, font type, text content, layer identification and the like.
In a specific embodiment, the attribute information corresponding to each of the plurality of rich text words ordered according to the word order of the rich text content can be obtained by performing word-by-word analysis processing on the plurality of rich text words in the rich text layer information. Specifically, attribute information such as font size, color, line height, font type, text content, layer identification and the like corresponding to the rich text can be obtained by carrying out attribute extraction processing on the single rich text.
S502, combining the attribute information corresponding to each of the plurality of rich text characters to obtain target visual attribute information corresponding to rich text layer information.
In a specific embodiment, a single layer may include multiple rich text characters whose attributes are identical except for the text content; the attribute information of such characters may be merged, and the merged attribute information is used as the target visual attribute information corresponding to the rich text layer information. Specifically, characters whose attributes are identical except for the text content can be merged to obtain merged attribute information. For rich text layer information with differing character attributes, the attribute information corresponding to the plurality of rich text characters can be merged in the character order of the rich text content to obtain the target visual attribute information corresponding to the rich text layer information.
In the embodiment, the rich text type layer is analyzed, so that the effect of the rich text of the target page can be kept, and the reduction degree of the target page is improved; by combining the attribute information with the same partial attribute in the plurality of rich text characters, the length of the description information corresponding to the attribute information can be reduced, and the occupied memory of the target visual attribute information can be further reduced.
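The merging of per-character (word-by-word) attribute information into runs can be sketched as follows; the `CharAttrs` fields are illustrative assumptions, and consecutive characters are merged only when every attribute other than the text content matches.

```typescript
// Hypothetical per-character attribute record produced by the word-by-word
// (character-by-character) analysis; field names are illustrative.
interface CharAttrs {
  char: string;
  fontSize: number;
  color: string;
  lineHeight: number;
  fontFamily: string;
  layerId: string;
}

interface RichTextRun extends Omit<CharAttrs, "char"> {
  text: string;                 // merged text content of the run
}

// Merge consecutive characters whose attributes are identical except for the
// text content into one run, preserving the character order; this shortens
// the description and the memory footprint of the attribute information.
function mergeRichTextRuns(chars: CharAttrs[]): RichTextRun[] {
  const runs: RichTextRun[] = [];
  for (const c of chars) {
    const last = runs[runs.length - 1];
    const sameStyle = last !== undefined
      && last.fontSize === c.fontSize
      && last.color === c.color
      && last.lineHeight === c.lineHeight
      && last.fontFamily === c.fontFamily
      && last.layerId === c.layerId;
    if (sameStyle) {
      last.text += c.char;
    } else {
      const { char, ...style } = c;
      runs.push({ ...style, text: char });
    }
  }
  return runs;
}
```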
In step S205, a target page is rendered based on the target visual attribute information.
In a specific embodiment, as shown in fig. 6, the method further includes:
s601, image export processing is carried out on target layer information, and a plurality of target images are obtained.
In a specific embodiment, the plurality of target images may be images derived from a plurality of layer information in the target layer information one by one. Each target image may be a file in picture format, for example, the target image may be a file in png format.
In a specific embodiment, image export processing is performed on each layer information in the target layer information, so that the target image corresponding to each layer information can be obtained, and the plurality of images corresponding to the plurality of layer information in the target layer information are used as the plurality of target images. For layer information of a text type, the target image obtained by the image export processing may be an image formed by all the characters in the layer information, including all of their character effects.
In a specific embodiment, the step S205 may include:
s602, generating target analysis data based on the target visual attribute information and a plurality of target images.
In a specific embodiment, the target analysis data may be used to render the target page. The target analysis data may include the target visual attribute information and the plurality of target images, where the target visual attribute information may include a plurality of attribute information corresponding to the plurality of layer information in the target layer information, and the identification information in the plurality of attribute information corresponds one-to-one to the identification information of the plurality of target images.
Specifically, the corresponding relation between a plurality of target images and a plurality of attribute information in the target visual attribute information can be obtained according to the corresponding relation between the target images and the layer information in the target layer information and the corresponding relation between the target visual attribute information and the layer information in the target layer information; the identification information in the attribute information of the target visual attribute information is used as the identification information of the target image corresponding to the attribute information, so that the identification information of each of a plurality of target images can be obtained.
S603, rendering the target page based on the target analysis data.
In practical application, the second script virtual machine can generate code information of a target page through target analysis data; and sending the code information of the target page to a page rendering engine so that the page rendering engine performs page rendering according to the code information of the target page to obtain the target page. Wherein code information of the target page may be used for rendering the generated target page, and the code information of the target page may be HTML (Hyper Text Markup Language ) code information. The page rendering engine may be a browser.
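As a rough illustration of this step, the sketch below turns attribute entries into HTML code information with absolutely positioned elements; the markup scheme is an assumption for illustration (for instance, text content is inserted without escaping), not the patent's concrete code generation.

```typescript
// Hypothetical generation of the target page's HTML from the parsed data:
// each attribute entry becomes an absolutely positioned element. The fields
// are a trimmed version of the illustrative VisualAttribute shape sketched earlier.
interface VisualAttribute {
  category: "image" | "text";
  left: number; top: number; width: number; height: number;
  content?: string; fontSize?: number; color?: string;
  imageId?: string;
}

function renderTargetPageHtml(attrs: VisualAttribute[]): string {
  const body = attrs.map((a) => {
    const box = `position:absolute;left:${a.left}px;top:${a.top}px;` +
                `width:${a.width}px;height:${a.height}px;`;
    if (a.category === "image") {
      return `<img src="${a.imageId}" style="${box}" alt="" />`;
    }
    // sketch only: real code generation would escape the text content
    const text = `font-size:${a.fontSize ?? 16}px;color:${a.color ?? "#000"};`;
    return `<div style="${box}${text}">${a.content ?? ""}</div>`;
  }).join("\n    ");
  return `<!DOCTYPE html>
<html>
  <head><meta charset="utf-8" /><title>target page</title></head>
  <body style="position:relative;margin:0">
    ${body}
  </body>
</html>`;
}

console.log(renderTargetPageHtml([
  { category: "image", left: 0, top: 0, width: 750, height: 420, imageId: "layer-1.png" },
  { category: "text", left: 40, top: 460, width: 670, height: 48,
    content: "Example headline", fontSize: 32, color: "#333333" },
]));
```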
In a specific embodiment, as shown in fig. 7, the method further includes:
s701, displaying target visual attribute information and a preview page corresponding to the target visual attribute information based on a plurality of target images.
In practical applications, the preview page can be generated by a second script virtual machine in the extensible platform based on the plurality of target images and the target visual attribute information.
S702, updating target visual attribute information in response to an updating instruction based on the target visual attribute information, and updating the preview page based on the updated target visual attribute information.
In practical application, the user can update the position information and the size information in the target visual attribute information to realize secondary editing, so that the reduction degree of the target page is further improved.
In a specific embodiment, the layer of the text type in the preview page may be directly generated based on the corresponding attribute information in the target visual attribute information, for example, may be based on category information, size information, and location information; or may be generated based on corresponding attribute information in the target visual attribute information and the corresponding target image. The method specifically adopted may be determined based on a selection instruction of the user. It can be understood that, for the text types possibly containing multiple special effects in the rich text types, the special effects on the text can be reserved to the greatest extent through corresponding attribute information in the target visual attribute information and corresponding target image generation, so as to ensure the reduction degree of the target page.
In a specific embodiment, the step S602 may include:
and responding to a rendering confirmation instruction of the target page, and generating target analysis data based on the updated target visual attribute information and the target images.
In practical application, after receiving a rendering instruction of a target page, the second script virtual machine may generate target analysis data based on the updated target visual attribute information and the multiple target images, and generate code information of the target page according to the target analysis data.
In this embodiment, the visual information to be compiled is parsed by the script virtual machine in the image processing software, so the parsing can be performed offline, network limitations are avoided, the waiting time is greatly reduced, and the timeliness of parsing the visual information to be compiled is improved. In the parsing process, a target compiling operation corresponding to the layer type is performed on each layer information to be compiled, in combination with the layer types of the plurality of layer information to be compiled, to obtain compiled target layer information; this greatly improves the reduction degree of the target page rendered from the compiled target layer information and makes it possible to parse complex visual files. Visual attribute extraction processing is further performed to obtain target visual attribute information, and the target page is rendered based on the target visual attribute information, which improves the parsing efficiency in the page rendering process and the reduction degree of the target page.
Fig. 8 is a schematic diagram of an operation interface in a page rendering process according to an exemplary embodiment, and fig. 9 is a graph of comparing effects of a target page and a visual file according to an exemplary embodiment. As shown in fig. 8 (a) to 8 (g), the rendering process of the target page is as follows:
before analysis, the user needs to download and install related plug-ins comprising the first script virtual machine and the second script virtual machine, so that the first script virtual machine is pre-installed in canvas of image processing software operated by the user, and the second script virtual machine is pre-installed in the extensible platform.
Optionally, as shown in fig. 8 (a), the user may select a desired parsing type as needed, and a parsing instruction is generated after the parsing type is confirmed. Referring to fig. 8 (b), after receiving the parsing instruction, the first script virtual machine in the image processing software may call up the visual information to be compiled from the image processing software and parse the tree structure information in the visual information to be compiled. Before layer-by-layer compiling, background locking detection can be performed on the plurality of layer information to be compiled in the visual information to be compiled, and the layer information whose detection result indicates background locking is unlocked, so as to obtain the unlocked visual information to be compiled. After the unlocked visual information to be compiled is obtained, layer type identification can be performed on the plurality of layer information to be compiled in the unlocked visual information to be compiled, so that the layer types corresponding to the plurality of layer information to be compiled are obtained. After the layer types corresponding to the layer information to be compiled are obtained, the layer information to be compiled is compiled layer by layer, starting from the root node, based on the hierarchical relationship of the layer information to be compiled.
In the layer-by-layer compiling process, the target compiling operation corresponding to the layer type can be executed on the layer information to be compiled to obtain the compiled target layer information. Specifically, for layer information of the image type and the text type, a level threshold may be preset; when the current level of the layer information is greater than or equal to the preset level threshold, the layer information is moved into the root node, where the current level of the layer information is the level corresponding to the layer group in which the layer information is located; when the current level of the layer information is smaller than the preset level threshold, the layer information is kept in its original layer group. Whether a layer is moved into the root node can be determined by comparing the current level of the layer information with the preset level threshold, so as to control the structural framework of the target visual attribute information and meet the user's requirement for target visual attribute information with a non-hierarchical structure. For layer information whose layer type is a layer group: if the layer group is empty, the layer group is deleted; if the layer group is not empty, the layer group is entered, the layer information in the layer group is compiled layer by layer, and the lower levels of the layer group are compiled until all layer information in the layer group has been compiled. The target layer information is obtained when all the layer information to be compiled has been compiled. It can be appreciated that, through layer-by-layer compiling, the target layer information can inherit the hierarchical relationship among the plurality of layer information to be compiled.
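The level-threshold decision described above can be sketched as follows; the `level` field and the placement rule are illustrative assumptions about how the compared depth might be represented.

```typescript
// Trimmed hypothetical layer model; `level` is the depth of the layer group
// that contains the layer (an assumption made for illustration).
interface LayerNode {
  name: string;
  type: "image" | "text" | "group" | "other";
  level: number;
  children: LayerNode[];
}

// Decide, during layer-by-layer compiling, whether an image/text layer is
// hoisted into the root node or kept in its original group, by comparing its
// current level with a preset level threshold.
function placeByLevelThreshold(
  root: LayerNode,
  node: LayerNode,
  parent: LayerNode,
  levelThreshold: number
): void {
  if (node.type !== "image" && node.type !== "text") return;
  if (node.level >= levelThreshold) {
    const idx = parent.children.indexOf(node);
    if (idx >= 0) parent.children.splice(idx, 1);
    root.children.push(node);          // move into the root node
  }
  // otherwise the layer stays in its original layer group
}
```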
After compiling is completed, the non-text layer information in the target layer information is rasterized to obtain the rasterized first layer information; out-of-range layer information is determined from the first layer information and the text layer information, the target area in the out-of-range layer information is clipped to obtain the second layer information, and visual attribute extraction is performed on the non-out-of-range layer information and the second layer information to obtain the target visual attribute information. For rich text layer information whose layer type is the rich text type, the attribute information corresponding to each of the plurality of rich text characters in the rich text layer information can be obtained by performing word-by-word analysis processing on the rich text layer information, and the target visual attribute information corresponding to the rich text layer information can be obtained by merging the attribute information corresponding to the plurality of rich text characters. Image export processing is performed on the target layer information, and each layer information in the target layer information is quickly exported in a bitmap format to obtain a plurality of target images.
Optionally, as shown in fig. 8 (c), the second script virtual machine may display the target visual attribute information and the preview page corresponding to the target visual attribute information based on the plurality of target images. As shown in fig. 8 (d), the user may modify the target visual attribute information and trigger an update instruction; the second script virtual machine may update the target visual attribute information in response to the update instruction and update the preview page based on the updated target visual attribute information. After confirming the page effect displayed by the preview page, the user can trigger a rendering confirmation instruction for the target page.
Optionally, as shown in fig. 8 (e), a parsing abnormality caused by an oversized visual file may occur during parsing, and the user may retry after optimizing the size of the material. Referring to fig. 8 (f), after the user adjusts the material size to meet the requirement, the second script virtual machine may generate target analysis data based on the updated target visual attribute information and the plurality of target images. Referring to fig. 8 (g) and fig. 9, the second script virtual machine may generate the code information of the target page according to the target analysis data and send the code information of the target page to the page rendering engine, so that the page rendering engine performs page rendering according to the code information of the target page to obtain the target page.
Fig. 10 is a block diagram illustrating a structure of a page rendering apparatus according to an exemplary embodiment. Referring to fig. 10, the apparatus includes:
the to-be-compiled visual information acquisition module 1010, configured to acquire visual information to be compiled of a target page, where the visual information to be compiled includes a plurality of layer information to be compiled;
a layer type determining module 1020, configured to determine a layer type of the plurality of layer information to be compiled;
the compiling module 1030 is configured to execute a target compiling operation corresponding to a layer type on the layer information to be compiled, to obtain compiled target layer information;
the attribute extraction module 1040 is configured to perform visual attribute extraction processing based on the target layer information, so as to obtain target visual attribute information;
the rendering module 1050 is configured to render the target page based on the target visual attribute information.
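For illustration only, the relationship among the above modules can be summarized by the following TypeScript interface sketch; the interface and method names are assumptions introduced for readability and do not limit the apparatus.

    // Illustrative sketch only: the interfaces below merely mirror the five modules of
    // the page rendering apparatus shown in fig. 10; all names are hypothetical.
    interface LayerInfo { id: string; layerType: string; }
    interface VisualAttributeInfo { attributes: Record<string, unknown>[]; }

    interface PageRenderingApparatus {
      // module 1010: acquire the visual information to be compiled of the target page
      acquireVisualInfoToCompile(pageId: string): LayerInfo[];
      // module 1020: determine the layer type of each piece of layer information
      determineLayerTypes(layers: LayerInfo[]): Map<LayerInfo, string>;
      // module 1030: execute the target compiling operation corresponding to the layer type
      compile(layers: LayerInfo[]): LayerInfo[];
      // module 1040: perform visual attribute extraction on the target layer information
      extractVisualAttributes(layers: LayerInfo[]): VisualAttributeInfo;
      // module 1050: render the target page based on the target visual attribute information
      render(attributes: VisualAttributeInfo): void;
    }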
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be repeated here.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement a page rendering method as in the embodiments of the present disclosure.
In an exemplary embodiment, a computer-readable storage medium is also provided, storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the page rendering method in the embodiments of the present disclosure.
In an exemplary embodiment, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the page rendering method in the embodiments of the present disclosure.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the procedures of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A method of page rendering, the method being applied to a script virtual machine in image processing software, the method comprising:
acquiring visual information to be compiled of a target page, wherein the visual information to be compiled comprises a plurality of layer information to be compiled;
determining the layer types of the plurality of layer information to be compiled;
performing target compiling operation corresponding to the layer type on the layer information to be compiled to obtain compiled target layer information;
performing visual attribute extraction processing based on the target layer information to obtain target visual attribute information;
and rendering the target page based on the target visual attribute information.
2. The method of claim 1, wherein the target layer information comprises non-text layer information of which the layer type is a non-text type and text layer information of which the layer type is a text type;
the visual attribute extraction processing is performed based on the target layer information to obtain target visual attribute information, including:
rasterizing the non-text layer information to obtain first layer information after rasterizing;
determining out-of-range layer information from the first layer information and the text layer information, wherein the out-of-range layer information is layer information exceeding a background layer boundary;
cutting a target area in the out-of-range layer information to obtain second layer information, wherein the target area is an area exceeding the background layer boundary in the out-of-range layer information;
and performing visual attribute extraction processing on the non-boundary crossing layer information and the second layer information to obtain the target visual attribute information, wherein the non-boundary crossing layer information is layer information except the boundary crossing layer information in the first layer information and the text layer information.
3. The method according to claim 1, wherein the visual information to be compiled is tree structure information in which the plurality of layer information to be compiled serve as nodes and the hierarchical relationship among the layers to which the plurality of layer information to be compiled belong serves as the node hierarchical relationship;
the executing the target compiling operation corresponding to the layer type on the layer information to be compiled comprises at least one of the following:
deleting, from the layer information to be compiled, layer information whose layer type is an invisible type or an empty layer group type;
determining, from the plurality of layer information to be compiled, layer information to be processed that is masked by mask layer information, wherein the mask layer information is layer information whose layer type is a mask type among the plurality of layer information to be compiled; combining the mask layer information and the layer information to be processed to obtain first combined layer information, and moving the first combined layer information into a root node of the tree structure information;
combining a plurality of consecutive pieces of layer information whose layer type is the image type in the layer information to be compiled to obtain second combined layer information, and moving the second combined layer information into the root node;
moving single layer information whose layer type is the image type in the layer information to be compiled into the root node;
combining mixed layer information and lower-level layer information of the mixed layer information to obtain third combined layer information, and moving the third combined layer information into the root node, wherein the mixed layer information is layer information whose layer type is a mixed layer type in the layer information to be compiled;
and under the condition that the layer type is a mixed layer group type, moving the layer information in the mixed layer group into the root node, and deleting the mixed layer group.
4. A method according to any of claims 1-3, wherein the target layer information comprises rich text layer information having a layer type that is a rich text type;
the visual attribute extraction processing is performed based on the target layer information to obtain target visual attribute information, including:
performing word-by-word analysis processing on the rich text layer information to obtain attribute information corresponding to each of a plurality of rich text words in the rich text layer information;
and combining the attribute information corresponding to each of the plurality of rich text characters to obtain target visual attribute information corresponding to the rich text layer information.
5. The method according to any one of claims 1 to 3, wherein after the acquiring the visual information to be compiled of the target page, the method further comprises:
performing background locking detection on the plurality of layer information to be compiled to obtain a detection result;
performing unlocking processing on layer information whose detection result indicates background locking to obtain unlocked visual information to be compiled;
the determining the layer type of the plurality of layer information to be compiled includes:
and carrying out layer type identification on the plurality of layer information to be compiled in the unlocked visual information to be compiled to obtain the layer types corresponding to the plurality of layer information to be compiled.
6. A method according to any one of claims 1-3, characterized in that the method further comprises:
performing image export processing on the target layer information to obtain a plurality of target images;
the rendering the target page based on the target visual attribute information comprises the following steps:
generating target parsing data based on the target visual attribute information and the plurality of target images;
rendering the target page based on the target parsing data.
7. The method of claim 6, wherein the method further comprises:
displaying, based on the plurality of target images, the target visual attribute information and a preview page corresponding to the target visual attribute information;
updating the target visual attribute information in response to an update instruction for the target visual attribute information, and updating the preview page based on the updated target visual attribute information;
the generating the target parsing data based on the target visual attribute information and the plurality of target images comprises:
and in response to a rendering confirmation instruction for the target page, generating the target parsing data based on the updated target visual attribute information and the plurality of target images.
8. A page rendering apparatus, the apparatus comprising:
a to-be-compiled visual information acquisition module, used for acquiring visual information to be compiled of a target page, wherein the visual information to be compiled comprises a plurality of layer information to be compiled;
the layer type determining module is used for determining layer types of the plurality of layer information to be compiled;
the compiling module is used for executing target compiling operation corresponding to the layer type on the layer information to be compiled to obtain compiled target layer information;
the attribute extraction module is used for carrying out visual attribute extraction processing based on the target layer information to obtain target visual attribute information;
and the rendering module is used for rendering the target page based on the target visual attribute information.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions to implement the page rendering method of any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the page rendering method of any of claims 1 to 7.
11. A computer program product comprising computer instructions which, when executed by a processor, implement the page rendering method of any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455351.6A CN117009685A (en) 2022-04-27 2022-04-27 Page rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117009685A 2023-11-07

Family

ID=88574876

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination