CN116521165A - Cross-end rendering method and system and electronic equipment

Info

Publication number: CN116521165A
Application number: CN202310425201.5A
Authority: CN (China)
Prior art keywords: rendering, image, data, layer, canvas
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 黄鹏飞
Current Assignee: Hangzhou Alibaba Overseas Internet Industry Co., Ltd.
Original Assignee: Alibaba (China) Co., Ltd.
Application filed by Alibaba (China) Co., Ltd.; priority to CN202310425201.5A

Classifications

    • G06F 8/38: Creation or generation of source code for implementing user interfaces
    • G06F 16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction


Abstract

An embodiment of the present application provides a cross-end rendering method, a cross-end rendering system, and an electronic device. After receiving image data, the method determines the rendering resources, rendering environment, and rendering tools for rendering the image data, and then performs the rendering work on the image data according to those resources, that environment, and those tools, using an underlying rendering framework corresponding to the one at the opposite end. Because the local end and the opposite end use corresponding underlying rendering frameworks to render the image data, the two ends obtain rendered images with consistent effects. In other words, the difference between the images finally rendered at the two ends is negligible, which reduces the image differences and maintenance cost of double-ended isomorphic rendering.

Description

Cross-end rendering method and system and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to a cross-end rendering method. The application also relates to a cross-end rendering system and an electronic device.
Background
At present, large-scale image composition of electronic pictures is widely used in the field of electronic commerce. Image composition modifies the original image information of each layer in an electronic picture, and then composites and renders the modified layers in sequence to obtain a new target picture.
In the prior art, when an electronic picture is rendered at two ends, the images rendered by the two ends often differ, and the cost of maintaining image rendering at both ends increases.
Therefore, how to reduce the image differences and maintenance cost of the double-ended isomorphic rendering process is a technical problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a cross-end rendering method for reducing the image differences and maintenance cost of double-ended isomorphic rendering. The embodiments also relate to a cross-end rendering system and an electronic device.
An embodiment of the present application provides a cross-end rendering method, comprising the following steps: receiving image data; determining the rendering resources, rendering environment, and rendering tools for rendering the image data; and performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools, using an underlying rendering framework corresponding to the opposite end.
Optionally, the method further comprises: parsing the image data to obtain parsed image protocol data; and sending the parsed image protocol data to the opposite end.
Optionally, determining the rendering resources, rendering environment, and rendering tools for rendering the image data includes: determining, by an application layer, the rendering resources for rendering the image data; determining, by a canvas layer, the rendering environment for rendering the image data; and determining, by an abstraction layer, the rendering tools for rendering the image data.
Optionally, the corresponding underlying rendering framework is a CanvasKit.wasm environment, where the CanvasKit.wasm environment includes Skia rendering logic and API application interfaces.
Optionally, parsing the image data to obtain parsed image protocol data includes: the protocol layer parses the image data according to predefined data attributes and parsing rules to obtain the parsed image protocol data.
Optionally, performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools using an underlying rendering framework corresponding to the opposite end includes: using the underlying rendering framework corresponding to the opposite end, performing composition processing on the multiple layers corresponding to the data in the image protocol data, based on the rendering resources, rendering environment, and rendering tools, to obtain a target image.
Optionally, performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools using an underlying rendering framework corresponding to the opposite end includes: using the underlying rendering framework corresponding to the opposite end, performing a rendering operation on the image protocol data, based on the rendering resources, rendering environment, and rendering tools, to obtain rendered multi-layer image rendering data; and performing an on-screen rendering operation of the multi-layer image rendering data on the local-end node.
Optionally, performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools using an underlying rendering framework corresponding to the opposite end includes: using the underlying rendering framework corresponding to the opposite end, performing composition processing on the multiple layers corresponding to the data in the image protocol data, based on the rendering resources, rendering environment, and rendering tools, to obtain a first intermediate image; obtaining update information of the first intermediate image, and applying protocol processing to the update information to obtain updated image protocol data; sending the updated image protocol data to the opposite end; and obtaining the parsed updated image protocol data produced after the opposite end parses the updated image protocol data again.
Optionally, the method further comprises: the protocol layer sending the parsed image protocol data to the application layer of the local end; or the protocol layer receiving rendering data exported by the application layer of the local end, packaging the exported rendering data according to predefined data attributes, and providing the packaged rendering data to the communication sending port.
Optionally, determining, by the canvas layer, the rendering environment for rendering the image data includes: the canvas layer determining static canvas rendering logic and providing a static canvas rendering environment according to the image protocol data parsed from the image data.
Optionally, the method further comprises: on the basis of the static canvas rendering environment, determining dynamic canvas rendering logic and providing a dynamic visual canvas rendering environment according to the image protocol data; and, based on a trigger action monitored on the static canvas, adjusting the position of the canvas layer corresponding to the trigger action according to the movement track of the trigger action.
An embodiment of the present application further provides a cross-end rendering system, comprising a first end and a second end. The first end and the second end each comprise a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit. The protocol layer unit is used for parsing data between the first end and the second end, and for defining the data attributes, parsing rules, and extension rules that realize cross-end interaction. The application layer unit is used for initializing the rendering environment, loading the required resources, and exporting the rendered data. The canvas layer unit is used for defining canvas rendering logic and providing a visual canvas rendering environment. The abstraction layer unit is used for defining the various functions and methods required for rendering. The first end and the second end further comprise rendering layer units based on the same underlying rendering framework, used for performing the rendering work.
An embodiment of the present application further provides an electronic device comprising a processor and a memory; the memory stores a computer program, and the processor executes the above method after running the computer program.
An embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a processor, performs the above method.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application provides a cross-end rendering method, which comprises the following steps: receiving image data; determining a rendering resource, a rendering environment, and a rendering tool for rendering the image data; and performing rendering work on the image data based on the rendering resources, the rendering environment and the rendering tool by adopting an underlying rendering frame corresponding to the opposite end.
After receiving the image data, the method determines the rendering resources, the rendering environment and the rendering tool for rendering the image data, adopts the bottom layer rendering frame corresponding to the opposite end, and executes the rendering work on the image data according to the rendering resources, the rendering environment and the rendering tool. The local end and the opposite end adopt corresponding bottom layer rendering frames, and render the image data based on the same rendering code, so that the two ends obtain the rendering images with consistent effects. In other words, the difference between the two ends of the finally obtained rendering image is negligible, and the image difference and the maintenance cost in the double-end synchronous rendering process are reduced.
Drawings
Fig. 1 is a first application scenario diagram of a cross-end rendering method provided in an embodiment of the present application.
Fig. 2 is a second application scenario diagram of the cross-end rendering method provided in an embodiment of the present application.
Fig. 3 is a flowchart of a cross-end rendering method according to a first embodiment of the present application.
Fig. 4 is a logic framework diagram of a cross-end rendering system according to a second embodiment of the present application.
Fig. 5 is a schematic diagram of a cross-end rendering device according to a third embodiment of the present application.
Fig. 6 is a schematic diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be embodied in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. Terms such as "a", "an", and "the" used in this application and in the appended claims do not limit quantity or order, but serve to distinguish items of the same type from each other.
The embodiment of the application provides a cross-end rendering method for reducing image difference and maintenance cost in a double-end isomorphic rendering process. The embodiment of the application also relates to a cross-end rendering system and electronic equipment.
First, concepts involved in the embodiments of the present application will be described:
isomorphism rendering: the method refers to that based on the same code, rendering operations are executed on different ends (such as a client and a server) to render data, so as to obtain the same rendering effect, and the method is also called cross-end isomorphic rendering. In this embodiment, taking a client and a server as examples, isomorphic rendering is implemented by using an image rendering framework implemented by JavaScript scripting language.
Wasm (WebAssembly): is a virtual instruction set architecture, and the whole architecture comprises ISA definition of a core, binary coding, definition and execution of program semantics, and application programming interfaces (WebAssemble APIs) facing different embedded environments. Wasm may run on the client and may coexist with JavaScript.
Json: (JavaScript Object Notation, JS object profile) is a lightweight data exchange format. It stores and presents data in a text format that is completely independent of the programming language based on a subset of ECMAScript (European Computer Manufacturers Association, js specification by the european computer institute). The compact and clear hierarchical structure makes JSON an ideal data exchange language. Is easy to read and write by people, is easy to analyze and generate by machines, and effectively improves the network transmission efficiency.
In order to facilitate understanding of the cross-end rendering method and the cross-end rendering system provided by the embodiment of the present application, before the embodiment of the present application is described, the background of the embodiment of the present application is described.
As noted above, large-scale image composition of electronic pictures is widely used in the field of electronic commerce. Image composition modifies the original image information of each layer in an electronic picture, and then composites and renders the modified layers in sequence to obtain a new target picture.
In the prior art, when an electronic picture is rendered at two ends, the images rendered by the two ends often differ, and the cost of maintaining image rendering at both ends increases.
For example, the two ends may be a client and a server. Image rendering at the client generally uses a layer-rendering framework based on the browser's native canvas, such as fabric.js, while image rendering at the server is generally implemented with an open-source canvas-like library such as node-canvas. Because the client and the server use different rendering frameworks, the rendered images exhibit differences that a user can easily perceive; moreover, maintaining two rendering frameworks increases cost in the later maintenance stage.
Therefore, how to reduce the image differences and maintenance cost of the double-ended isomorphic rendering process is a technical problem to be solved.
With the problems of the prior art in mind, the application scenario of the cross-end rendering method provided in the embodiments of the present application is described in detail below. The cross-end rendering method can be applied to scenarios involving large-scale image composition of electronic pictures in the field of electronic commerce.
For example, a piece of image data is obtained and parsed at the browser end or the server end, and the parsed image protocol data is then sent to the opposite end. Both ends then perform image rendering according to the parsed image protocol data, and the rendered image finally displayed at the browser-end node is consistent with the image rendered and composed at the server end, so that no difference is perceptible from the user's visual point of view, reflecting the consistency of cross-end isomorphic rendering. This is because the cross-end rendering method provided in the present application uses the same rendering logic at both ends of the cross-end rendering system; specifically, the rendering layers of both ends use the same underlying rendering framework, namely CanvasKit.wasm (which makes Skia's rendering capability directly usable in the browser, for example to render Sketch files; Sketch is a vector drawing application for designers). In addition, the protocol layer units, application layer units, canvas layer units, and abstraction layer units of the first end and the second end use the same rendering logic to provide their respective functions in the rendering process.
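To make the shared rendering layer concrete, the following minimal sketch shows how the same CanvasKit.wasm framework could be initialized at both ends. It is a hypothetical illustration based on the public canvaskit-wasm npm package rather than code from the patent; the canvas element id and the wasm file path are assumptions.

```javascript
// Minimal sketch: loading one CanvasKit.wasm build at both ends (assumed
// to use the public "canvaskit-wasm" package; not the patent's own code).
const CanvasKitInit = require('canvaskit-wasm/bin/canvaskit.js');

async function initRenderingLayer() {
  const CanvasKit = await CanvasKitInit({
    // locateFile tells the loader where the .wasm binary lives; the path
    // differs per deployment, but the resulting API is identical at both ends.
    locateFile: (file) => 'node_modules/canvaskit-wasm/bin/' + file,
  });
  // Browser end: draw to an on-screen <canvas id="render-node"> element.
  // Server end: draw to a CPU-backed off-screen surface of the picture size.
  const surface = (typeof document !== 'undefined')
    ? CanvasKit.MakeCanvasSurface('render-node')
    : CanvasKit.MakeSurface(800, 800);
  return { CanvasKit, surface };
}
```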
For another example, a piece of image protocol data is obtained, the original text information of an image is replaced with the translated text corresponding to that original text, and the translated text is rendered into the image to obtain a target image; this is a composition process. For large-scale electronic pictures, in order to obtain versions of each electronic picture with various different translated texts and to improve the composition efficiency across many pictures, the embodiments of the present application use the cross-end rendering method to handle the image-composition process.
Please refer to fig. 1, which is a first application scenario diagram of a cross-end rendering method provided in an embodiment of the present application.
FIG. 1 depicts several application scenarios for image rendering using the cross-end rendering method. In fig. 1, the picture information of a commodity is first converted into Json protocol data. Specifically, the picture information is extracted through algorithm engineering; it includes the URL (Uniform Resource Locator) of the picture, its size information, the text information contained in the picture, the regions in which the text is distributed, and so on. The URL of the picture represents the standard network address of the picture resource on the Internet.
Extracting the picture information through algorithm engineering means, for example, customizing the attributes of the picture information, obtaining classification information within it (for example, a size chart or a text-information chart), and identifying the text regions of the picture and the text within those regions through OCR (Optical Character Recognition) technology.
After the picture information is obtained, the text information of the picture is of primary interest; it is translated to obtain the corresponding translated text. The original text information of the picture and the translated text corresponding to it are then formatted to obtain the Json protocol data of the picture.
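As a concrete illustration, such Json protocol data might look like the sketch below. The field names and values are hypothetical; the patent defines the data attributes and parsing rules abstractly.

```javascript
// Hypothetical shape of the picture's Json protocol data (illustrative
// field names only; the patent defines the attributes abstractly).
const imageProtocolData = {
  image: {
    url: 'https://example.com/goods/12345.png', // picture URL
    width: 800,                                 // size information
    height: 800,
  },
  layers: [
    {
      type: 'text',
      region: { x: 40, y: 620, w: 720, h: 60 }, // where the text sits
      sourceText: '限时促销',                    // original text (from OCR)
      translatedText: 'Limited-time sale',      // its translation
      font: { size: 36, color: '#FF3300' },
      background: '#FFFFFF',
    },
  ],
};
```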
The Json protocol data can be used to obtain the target picture containing the translated text in at least the following three application modes.
In the first mode, the Json protocol data serves as the input parameters of the browser end (also called the client). The browser end parses the Json protocol data to obtain the URL of the image, its size information, the text information contained in the image, and the translated text corresponding to that text.
The browser end then modifies the translated text, editing it or correcting misplaced information, and sends the modified information to the server end as the input parameters for server-side composition. At the same time, the modified information is provided to the application layer unit of the browser end, which supplies the rendering resources required to render the translated text of the image.
After obtaining the input parameters, the server end composes the translated text with the image; specifically, it uses the cross-end rendering system provided by the embodiments of the present application to perform the composition and obtain a target picture whose text is the translated text corresponding to the text of the original picture.
Through their protocol layer units, application layer units, canvas layer units, abstraction layer units, and rendering layer units, the browser end and the server end each use the same rendering logic to render the translated text into the virtual image. As a result, the difference between the rendered image information presented at the browser-end node and the target image composed by the server end is negligible.
In the second mode, the Json protocol data serves as the input parameters of the server end, which parses the picture's Json protocol data using the same processing logic as the browser end. Composition is then performed according to the parsed data to obtain the target picture.
In the third mode, the Json protocol data serves as the input parameters of the server end. The server end parses the Json protocol data and performs composition according to the translated text obtained from parsing, yielding an intermediate picture. Update or supplementary information is then derived by comparing the original picture with the intermediate picture, converted into intermediate Json protocol data, and sent to the browser end. The browser end edits the intermediate Json protocol data and sends the edited data back to the server end, which performs composition according to the intermediate picture and the modified intermediate Json protocol data to obtain the target picture.
In all three application modes, the browser end and the server end use the same rendering logic framework, so that between the composed target picture and the original picture only the text regions differ, and the difference in the remaining parts is negligible. The text of the target picture is the translated text corresponding to the text of the original picture. Moreover, the rendered image information displayed at the browser end is consistent with the target image composed at the server end; the two are visually indistinguishable to the user.
Please refer to fig. 2, which is a second application scenario diagram of the cross-end rendering method provided in the embodiment of the present application.
In fig. 2, the cross-end rendering method uses the cross-end rendering system provided by the present application to render one piece of image protocol data at the browser end and the server end simultaneously, so that the rendered image information displayed at the browser end and the target image composed at the server end are visually indistinguishable to the user. The logical framework of the cross-end rendering system is described in detail below.
The cross-end rendering system comprises a browser end and a server end, each of which comprises a protocol layer, an application layer, a canvas layer, and an abstraction layer.
The protocol layer is used for parsing data between the browser end and the server end, and for defining the data attributes, parsing rules, and extension rules that realize cross-end interaction.
In fig. 2, the protocol layer includes Json protocol, data structure, field definition, parsing rules, and extension rules.
The protocol layer of either end (browser or server) is specifically used for: receiving the image protocol data sent by the protocol layer of the opposite end, parsing it according to the defined data attributes and parsing rules, and providing the parse result to the application layer of its own end; it is also used for packaging the rendering data exported by the application layer according to the defined data attributes and providing the packaged data to the communication sending port.
The application layer is used for initializing the rendering environment, loading the required resources, and exporting the rendered data. Specifically, according to a received parse result of the image protocol data, it determines the rendering resources required and loads them. The browser and the server load rendering resources using their respective loaders and software development kits: the application layer of the browser end uses a Web-SDK (a browser-side SDK; SDK: Software Development Kit), while the application layer of the server end uses a Node-SDK (Node: a server-side JavaScript runtime).
A software development kit is a collection of development tools with which a software engineer builds application software for a particular software package, software framework, hardware platform, operating system, or the like.
JavaScript (abbreviated JS) is a lightweight, interpreted or just-in-time-compiled programming language with first-class functions.
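The Web-SDK/Node-SDK split can be pictured with the hypothetical loader below: one application-layer interface, with the runtime supplying the transport. The function and field names are assumptions for illustration only.

```javascript
// Hypothetical application-layer resource loading shared by the Web-SDK and
// the Node-SDK. Node 18+ ships a global fetch, so the same code can run at
// both ends; older Node versions would substitute an HTTP client here.
async function loadRenderingResources(protocolData) {
  async function fetchBytes(url) {
    const resp = await fetch(url);
    if (!resp.ok) throw new Error('failed to load resource: ' + url);
    return new Uint8Array(await resp.arrayBuffer());
  }
  // Both ends need the original picture plus a font able to display the
  // translated text; "fontUrl" is an illustrative field, not from the patent.
  const [imageBytes, fontBytes] = await Promise.all([
    fetchBytes(protocolData.image.url),
    fetchBytes(protocolData.fontUrl),
  ]);
  return { imageBytes, fontBytes };
}
```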
The canvas layer is used to define canvas rendering logic and provide a visual canvas rendering environment.
In FIG. 2, the canvas layer includes a static canvas (StaticCanvas) and an interactable canvas (InteractCanvas). The static canvas involves at least one of the following: layers, filters, and brushes. The interactable canvas involves at least one of the following: events, controllers, and animations.
The canvas layer at the browser end can render both the static canvas and the interactable canvas. When rendering the static canvas, the canvas layer defines the static canvas rendering logic and provides the static canvas rendering environment, and, according to the rendering methods provided by the application layer unit of the browser end, extracts from the abstraction layer unit the rendering tools required to render each image layer.
When the canvas layer of the browser end renders the interactable canvas, it defines dynamic canvas rendering logic and provides a dynamic visual canvas rendering environment on the basis of the static canvas. Based on trigger events monitored on the static canvas, it dynamically adjusts the position of the canvas layer corresponding to the trigger action according to the action's movement track, and displays the adjusted layer position at the browser-end node.
The abstraction layer is used to define the various functions and methods required for rendering. Specifically, it abstracts the various graphic rendering tools, defining the graphic definitions and logic base classes of the rendering tools required by the canvas layer. The graphic definitions include at least one of: image, text, rectangle, line, and point. The logic base classes include at least one of: object, underlying canvas, geometry, blending, and utility.
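One way to picture these graphic definitions and logic base classes is the hypothetical sketch below; the class names mirror the categories listed above but are not taken from the patent.

```javascript
// Hypothetical abstraction-layer sketch: graphic definitions sharing one
// logic base class, so both ends draw through the same tool set.
class RenderObject {
  constructor(x, y) { this.x = x; this.y = y; }
  // Every drawable implements draw(canvas, CanvasKit); subclasses override it.
  draw(canvas, CanvasKit) { throw new Error('draw() not implemented'); }
}

class TextObject extends RenderObject {
  constructor(x, y, text, font, paint) {
    super(x, y);
    this.text = text; this.font = font; this.paint = paint;
  }
  draw(canvas) {
    // CanvasKit's SkCanvas.drawText takes (string, x, y, paint, font).
    canvas.drawText(this.text, this.x, this.y, this.paint, this.font);
  }
}

class LineObject extends RenderObject {
  constructor(x, y, x2, y2, paint) {
    super(x, y);
    this.x2 = x2; this.y2 = y2; this.paint = paint;
  }
  draw(canvas) {
    canvas.drawLine(this.x, this.y, this.x2, this.y2, this.paint);
  }
}
```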
The browser side and the server side further comprise rendering layers based on the same underlying rendering framework, and the rendering layers are used for executing rendering work.
The rendering layer performs the rendering operations on each layer of graphics in the canvas layer's virtual canvas. The rendering layer of the browser end renders the rendered data onto the browser-end node, while the rendering layer of the server end exports the rendered data as a picture, producing a rendered picture file.
The rendering layer consists of an underlying dependency framework and APIs. The underlying dependency framework is a logical rendering framework with Skia (C++) as its bottom layer, built into CanvasKit.wasm, a framework that can run at both the browser end and the server end.
Skia is an open-source C++ 2D vector graphics library covering fonts, coordinate transformations, bitmaps, and more; it is roughly a lightweight equivalent of Cairo (another vector graphics library). Skia cooperates with OpenGL/ES and specific hardware features to enhance display effects. OpenGL/ES (OpenGL for Embedded Systems) is a subset of the OpenGL three-dimensional graphics application programming interface designed for embedded devices such as mobile phones, tablets, and game consoles.
The above is a description of the basic framework of the cross-end rendering system shown in fig. 2. The following describes the process of composing an original image using the cross-end rendering method provided by the present application. The image data of the original image serves as the input; it may be called the image data of the first image, and the image obtained after replacing the original text of the first image with its corresponding translated text during rendering is called the second image. During composition, each layer of the first image is processed: the translated text belonging to each layer is rendered into that layer, and the rendered layers are then composed.
The first step: the protocol layer of the browser end obtains a piece of image data, which may be Json protocol data, and from it obtains the URL of the first image described in the Json protocol data, the regions contained in the first image, the original text of each region, the size information of each region, and the translated text corresponding to the original text. The translated text to be rendered is determined from the Json protocol data of the first image. The browser end parses the Json protocol data of the first image; in particular, if the Json protocol data needs to be edited or modified, the edited or modified Json protocol data of the first image is sent to the protocol layer of the server end.
The second step: the application layer of the browser end obtains the Json protocol data of the first image from the protocol layer, loads the first image at the browser end using the browser's downloader, and obtains the rendering resources for rendering the translated text according to the translated text that needs to be rendered in the first image. The rendering process is then started with the rendering resources and the translated text.
The third step: the canvas layer of the browser end constructs a virtual bottom canvas according to the size information of the first image, and determines the number of layers contained in the first image according to the Json protocol data held by the protocol layer. Each layer is constructed in sequence, and while constructing each layer, the positions containing text are rendered into the corresponding layer using the translated text. Specifically, the first image is divided into several regions containing text; in each region, the background color and text color of the original picture are extracted, and the translation corresponding to the region's text is laid out into the virtual canvas according to that text's size and layout information. Proceeding in this way, the translated text corresponding to the text of all regions of the first image is added to the corresponding layers of the first image. This is the static canvas construction process: the browser end displays the static canvas on its node and provides the static canvas information to the server end, and the server end performs composition according to that information to obtain the target composed image.
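The layer-by-layer construction in this third step might look like the following sketch, which draws the base picture and then lays each region's translated text over its extracted background color. The region fields and the one-line text layout are illustrative simplifications.

```javascript
// Hypothetical static-canvas composition: the base picture first, then the
// translated text of every region, drawn in layer order.
function composeStaticCanvas(CanvasKit, surface, imageBytes, layers, font) {
  const canvas = surface.getCanvas();
  canvas.clear(CanvasKit.WHITE);

  // Bottom layer: decode the original picture from its encoded bytes.
  const baseImage = CanvasKit.MakeImageFromEncoded(imageBytes);
  canvas.drawImage(baseImage, 0, 0);

  for (const layer of layers) {
    const { region, translatedText, background } = layer; // assumed fields
    // Cover the original text with the extracted background color...
    const bgPaint = new CanvasKit.Paint();
    bgPaint.setColor(CanvasKit.parseColorString(background));
    canvas.drawRect(
      CanvasKit.XYWHRect(region.x, region.y, region.w, region.h), bgPaint);
    // ...then lay the translation out inside the same region.
    const textPaint = new CanvasKit.Paint();
    textPaint.setColor(CanvasKit.BLACK);
    canvas.drawText(translatedText, region.x, region.y + region.h,
                    textPaint, font);
    bgPaint.delete(); textPaint.delete(); // release wasm-side memory
  }
  surface.flush();
}
```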
In addition, the canvas layer of the browser end can also construct an interactable canvas, which is specifically as follows:
the static canvas is positioned on a canvas node at the browser end, and the triggering action and the influence range of the action in the static canvas are monitored on the basis of the static canvas. And traversing the positions of the layers on the canvas and the cross relation among the layers to determine which layer is triggered by the mouse.
The pressing operation is described as an example, and includes a mouse pressing operation, a mouse dragging operation, and a mouse releasing operation. After the mouse pressing operation is monitored, determining a layer corresponding to the mouse pressing operation according to the coordinate position, wherein the mouse is displaced due to the mouse pressing operation and the mouse dragging operation in the process, and the position of the layer pressed by the mouse moves along with the displacement of the mouse. And determining the movement of the layer according to the movement track of the mouse, and calculating the final position of the layer when the mouse stops moving. When the loosening operation of the mouse is monitored, the mouse stops moving, so that the final position of the layer can be determined according to the moving position of the mouse. The above process is the interactive canvas constructed by the canvas layer of the browser end.
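The press-drag-release interaction can be pictured with this hypothetical event-handling sketch; the rectangular hit test and the layer fields are illustrative simplifications of the traversal described above.

```javascript
// Hypothetical interactable-canvas logic: hit-test the layers on mousedown,
// move the hit layer with the drag, and fix its position on mouseup.
function attachDragHandlers(canvasEl, layers, redraw) {
  let dragged = null, lastX = 0, lastY = 0;

  canvasEl.addEventListener('mousedown', (e) => {
    // Traverse from the top-most layer down to find which layer is pressed.
    dragged = [...layers].reverse().find((l) =>
      e.offsetX >= l.x && e.offsetX <= l.x + l.w &&
      e.offsetY >= l.y && e.offsetY <= l.y + l.h) || null;
    lastX = e.offsetX; lastY = e.offsetY;
  });

  canvasEl.addEventListener('mousemove', (e) => {
    if (!dragged) return;
    // The pressed layer follows the mouse's movement track.
    dragged.x += e.offsetX - lastX;
    dragged.y += e.offsetY - lastY;
    lastX = e.offsetX; lastY = e.offsetY;
    redraw();
  });

  // On release the mouse stops moving, so the layer's final position stands.
  canvasEl.addEventListener('mouseup', () => { dragged = null; });
}
```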
The canvas layers of the browser end and the server end use the same static rendering logic and provide the same visual static canvas rendering environment. The canvas layer of the browser end can additionally construct an interactable canvas, which the canvas layer of the server end cannot.
The fourth step: the abstraction layer at the browser end provides the basic tools for rendering each layer of the canvas layer.
The fifth step: the rendering layers of the browser end and the server end use corresponding underlying rendering frameworks and perform the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools determined from the image protocol data.
In this step, the rendering layers of the browser and the server use corresponding underlying rendering frameworks and apply the rendering resources, rendering environment, and rendering tools determined in the previous steps to the image protocol data. "Corresponding underlying rendering frameworks" covers the case where the local end and the opposite end use the same underlying rendering framework, as well as the case where the frameworks used by the two ends share some functional modules or layers while differing in other operational modules.
For example, the corresponding underlying rendering framework used by the browser end and the server end is a CanvasKit.wasm environment that includes Skia rendering logic and API application interfaces.
In the CanvasKit.wasm environment, the image information rendered at the browser end and the target image rendered at the server end are guaranteed to be consistent; the difference between them is indistinguishable from the user's point of view.
The above is the rendering process of the cross-end rendering method at the browser end: the browser end uses the cross-end rendering system provided by the present application to render the translated text into each layer, composes and renders the layers, and finally displays the composed and rendered data on the browser-end node. Each layer at the server end uses the same logic as the browser end: according to the Json protocol data of the image, the original text is replaced by the translated text, so that the text contained in the rendered second image is the translation corresponding to the original text of the first image, while the display of all other content is unchanged.
In addition, the cross-end rendering method and system can also be applied to other scenarios. For example, a piece of image protocol data may carry the image information of an original image together with the image information of a rendered image, where the rendered image is obtained from the original by transforming colors and fonts, modifying font sizes, and the like.
An embodiment of the present application provides a cross-end rendering method, comprising the following steps: receiving image data; determining the rendering resources, rendering environment, and rendering tools for rendering the image data; and performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools, using an underlying rendering framework corresponding to the opposite end.
After receiving the image data, the method determines the rendering resources, rendering environment, and rendering tools for rendering the image data, and performs the rendering work accordingly, using the underlying rendering framework corresponding to the opposite end. The local end and the opposite end use corresponding underlying rendering frameworks and render the image data based on the same rendering code, so the two ends obtain rendered images with consistent effects. In other words, the difference between the images finally rendered at the two ends is negligible, reducing the image differences and maintenance cost of double-ended isomorphic rendering.
An embodiment of the present application provides a cross-end rendering system, comprising a first end and a second end. The first end and the second end each comprise a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit. The protocol layer unit is used for parsing data between the first end and the second end, and for defining the data attributes, parsing rules, and extension rules that realize cross-end interaction. The application layer unit is used for initializing the rendering environment, loading the required resources, and exporting the rendered data. The canvas layer unit is used for defining canvas rendering logic and providing a visual canvas rendering environment. The abstraction layer unit is used for defining the various functions and methods required for rendering. The first end and the second end further comprise rendering layer units based on the same underlying rendering framework, used for performing the rendering work.
The system thus comprises, at each end, a rendering layer unit based on the same underlying rendering framework together with a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit. Images with consistent effects are rendered because the rendering layer units share the same underlying rendering framework, while the protocol, application, canvas, and abstraction layer units use the same logic code at the first end and the second end, realizing double-ended isomorphic rendering. Specifically, the protocol layer unit parses the data and transfers the parsed data interactively between the first end and the second end. The application layer unit initializes the rendering environment from the protocol data provided by the protocol layer unit, loads the rendering resources required for rendering, and exports the rendered data. The canvas layer unit defines the canvas rendering logic and provides the visual canvas rendering environment. The abstraction layer unit abstracts the various functions and methods required for rendering. Since these units each use the same logic for every step of image rendering, the difference between the images finally rendered at the two ends is negligible, reducing the image differences and maintenance cost of double-ended isomorphic rendering.
First embodiment
Fig. 3 is a flowchart of a cross-end rendering method according to a first embodiment of the present application. The cross-end rendering method provided in this embodiment is described in detail below with reference to fig. 3. The specific description process of the cross-end rendering method provided in the first embodiment may refer to the description of the scene embodiment. The cross-end rendering method shown in fig. 3 includes steps S301-S303.
As shown in fig. 3, in step S301, image data is received.
This step receives the image data; the receiving end may be a browser end (also called a client) or a server end. The received image data includes the data of the image itself, such as its text data and picture data, and may also include the rendered information desired after the image is rendered. As shown in fig. 1, the image itself contains commodity information, so the image is rendered again, and the rendered image contains the translated text corresponding to the commodity information.
Thus, the image data acquired here contains both the information of the image itself and the target rendering information.
After the image data is obtained in this step, it needs to be parsed, specifically, for example, formatted, to obtain the Json protocol data of the image.
Therefore, after the image data is received, step S301 further includes: parsing the image data to obtain parsed image protocol data; and sending the parsed image protocol data to the opposite end.
As shown in fig. 1, after receiving the image data, the browser parses it; a specific parsing manner may be as follows.
Parsing the image data to obtain parsed image protocol data includes: the protocol layer parses the image data according to predefined data attributes and parsing rules to obtain the parsed image protocol data.
For example, the image data is formatted into Json form to obtain the Json-format data of the image, and the browser end then edits or modifies that Json-format data to obtain the parsed Json data of the image, which is sent to the server end. The browser end and the server end then each determine the rendering resources, rendering environment, and rendering tools from the same image protocol data, and use the same underlying rendering framework to render the image protocol data, obtaining the rendered image information, or to compose it, obtaining the target image.
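A hypothetical sketch of this protocol-layer parse step is shown below; the required attribute set and the clamping rule stand in for the patent's predefined data attributes and parsing rules, which it does not spell out.

```javascript
// Hypothetical protocol-layer parsing: check the predefined data attributes,
// then normalize the payload so both ends see identical protocol data.
const REQUIRED_ATTRIBUTES = ['image', 'layers']; // assumed attribute set

function parseImageProtocolData(jsonText) {
  const data = JSON.parse(jsonText);
  for (const attr of REQUIRED_ATTRIBUTES) {
    if (!(attr in data)) {
      throw new Error('protocol violation: missing attribute "' + attr + '"');
    }
  }
  // Example parsing rule: clamp every text region to the picture bounds so
  // that both ends lay the translated text out identically.
  const { width, height } = data.image;
  data.layers = data.layers.map((layer) => ({
    ...layer,
    region: {
      x: Math.max(0, layer.region.x),
      y: Math.max(0, layer.region.y),
      w: Math.min(layer.region.w, width - Math.max(0, layer.region.x)),
      h: Math.min(layer.region.h, height - Math.max(0, layer.region.y)),
    },
  }));
  return data;
}
```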
The description above takes the image data as the input of the browser end: the browser end receives the image data, parses it into image protocol data, and sends the image protocol data to the server end. In the second application mode shown in fig. 1, the image data is the input of the server end: the server end obtains the image data, parses it into image protocol data, and sends the image protocol data to the browser end. The server end then renders the image protocol data according to the method provided by the cross-end rendering system to obtain the composed target image, while the browser end renders it with the same rendering logic as the server end and displays the rendered image information on the browser-end node.
In addition, as shown in fig. 1, there is a third application mode: the image data is the input of the server end, which parses it into image protocol data, renders according to the image protocol data to obtain a first intermediate image, and obtains update information for the first intermediate image. The update information is protocol-processed into Json-format data and sent to the browser end. The browser end parses (including edits or modifies) that Json-format data and sends the edited or modified update information back to the server end. The browser end and the server end then both use the same underlying rendering framework to render the update information according to its corresponding rendering resources, rendering environment, and rendering tools.
The application scene is rendered after the image data are respectively used as the input data of the browser end and the server end.
The parsing of the image data into Json-format data described above is performed by the protocol layer of the browser end or the protocol layer of the server end, specifically as follows.
Parsing the image data to obtain parsed image protocol data includes: the protocol layer parses the image data according to predefined data attributes and parsing rules to obtain the parsed image protocol data.
After the protocol layer parses the image data, the method further includes: the protocol layer sends the parsed image protocol data to the application layer of the local end; or the protocol layer receives rendering data exported by the application layer of the local end, packages the exported rendering data according to predefined data attributes, and provides the packaged rendering data to the communication sending port.
As shown in fig. 3, in step S302, a rendering resource, a rendering environment, and a rendering tool for rendering the image data are determined.
This step determines, from the image data, the rendering resources, rendering environment, and rendering tools for rendering it. Specifically, they are determined by the respective layers of the cross-end rendering system, as follows:
Determining the rendering resources, rendering environment, and rendering tools for rendering the image data includes: determining, by the application layer, the rendering resources for rendering the image data; determining, by the canvas layer, the rendering environment for rendering the image data; and determining, by the abstraction layer, the rendering tools for rendering the image data.
The application layer of the cross-end rendering system uses a downloader to download, according to the image protocol data, the rendering resources required to render it. The canvas layer defines the canvas rendering logic and constructs the canvas rendering environment, determining both according to the image protocol data.
The canvas layer determines the rendering environment corresponding to the image protocol data, which may proceed as follows:
Determining, by the canvas layer, the rendering environment for rendering the image data includes:
the canvas layer determining static canvas rendering logic and providing a static canvas rendering environment according to the image protocol data parsed from the image data.
Either the browser side or the server side may determine a static canvas environment for the image protocol data.
In addition to the static canvas rendering environment, the browser end may also define a dynamic canvas rendering environment:
on the basis of the static canvas rendering environment, dynamic canvas rendering logic is determined and a dynamic visual canvas rendering environment is provided according to the image protocol data; and, based on a trigger action monitored on the static canvas, the position of the canvas layer corresponding to the trigger action is adjusted according to the movement track of the trigger action.
The abstraction layer contains various basic tools required for rendering, and the rendering tools required in the image protocol data rendering process are determined according to the image protocol data.
As shown in fig. 3, in step S303, the rendering work is performed on the image data based on the rendering resources, rendering environment, and rendering tools, using the underlying rendering framework corresponding to the opposite end.
This step renders the image protocol data using the rendering resources, rendering environment, and rendering tools determined in the previous steps. The rendering uses an underlying rendering framework corresponding to the opposite end; this includes the case where the local end and the opposite end use the same underlying rendering framework, as well as the case where the frameworks used at the two ends share some functional modules or layers while differing in other operational modules.
The corresponding underlying rendering framework used by the cross-end rendering method constructed in this application is a CanvasKit.wasm environment, which includes Skia rendering logic and API application interfaces.
In the CanvasKit.wasm environment, the image information rendered at the browser end and the target image rendered at the server end are guaranteed to be consistent; the difference between them is indistinguishable from the user's point of view.
Rendering results specifically obtained by the browser side and the server side are respectively as follows:
at the browser end: the performing a rendering job on the image data based on the rendering resources, the rendering environment, and the rendering tool using an underlying rendering frame corresponding to the opposite end, includes:
adopting a bottom layer rendering frame corresponding to the opposite end, and performing rendering operation on the image protocol data based on the rendering resources, the rendering environment and the rendering tool to obtain rendered multi-layer image rendering data; and executing the rendering on-screen operation on the local end node by the multi-layer image rendering data.
At the browser end, rendering processing is carried out on the image protocol data by adopting a cross-end rendering method, namely, the image information is gradually rendered on the browser end node, and finally, the data of each layer is distributed and rendered on the screen and then displayed in the screen for users to browse the final rendering result.
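On-screen rendering at the browser end could look like the following sketch, which draws each rendered layer into the surface bound to the page's canvas node (a minimal illustration using the public CanvasKit API, not the patent's implementation):

```javascript
// Hypothetical on-screen rendering at the browser end: draw every rendered
// layer into the surface bound to the page's canvas node, then present it.
function renderOnScreen(CanvasKit, layerRenderData) {
  const surface = CanvasKit.MakeCanvasSurface('render-node'); // assumed id
  // requestAnimationFrame hands us the surface's canvas and flushes the
  // frame to the screen once the callback returns.
  surface.requestAnimationFrame((canvas) => {
    canvas.clear(CanvasKit.WHITE);
    for (const layer of layerRenderData) {
      layer.draw(canvas, CanvasKit); // abstraction-layer objects, as above
    }
  });
}
```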
At the server end, performing the rendering work on the image data based on the rendering resources, rendering environment, and rendering tools, using the underlying rendering framework corresponding to the opposite end, may proceed as follows:
using the underlying rendering framework corresponding to the opposite end, performing composition processing on the multiple layers corresponding to the data in the image protocol data, based on the rendering resources, rendering environment, and rendering tools, to obtain the target image.
That is, the server end can compose the image protocol data according to the cross-end rendering system and finally obtain the target image.
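Server-side composition that ends in a picture file could be sketched as follows; the snapshot-and-encode calls are the public CanvasKit API, while the output path is illustrative:

```javascript
// Hypothetical server-end export: snapshot the composed surface, encode it
// (PNG by default), and write the target picture to disk.
const fs = require('fs');

function exportTargetImage(surface, outPath /* e.g. 'target.png' */) {
  const snapshot = surface.makeImageSnapshot(); // the composed layers
  const pngBytes = snapshot.encodeToBytes();    // Uint8Array of PNG data
  fs.writeFileSync(outPath, Buffer.from(pngBytes));
  snapshot.delete(); // free wasm-side memory
}
```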
The server side performs rendering processing on the image protocol data, and the method further comprises the following rendering modes:
the performing a rendering job on the image data based on the rendering resources, the rendering environment, and the rendering tool using an underlying rendering frame corresponding to the opposite end, includes:
adopting a bottom layer rendering frame corresponding to the opposite end, and carrying out mapping processing on a plurality of layers corresponding to data in the image protocol data based on the rendering resources, the rendering environment and the rendering tool to obtain a first intermediate image; obtaining update information of the first intermediate image, and carrying out protocol processing on the update information of the first intermediate image to obtain update image protocol data; transmitting the updated image protocol data to an opposite end; and obtaining the analyzed updated image protocol data obtained after the opposite end analyzes the updated image protocol data again.
As shown in fig. 1, the image data serves as input to the server, which parses it into image protocol data and renders it according to that data to obtain a first intermediate image. The update information of the first intermediate image is then obtained and sent to the browser end. The browser end parses the update information, applies editing or modification processing, and sends the edited or modified update information back to the server end. Both ends then render the edited or modified update information with corresponding underlying rendering frameworks, obtaining rendered image information or a target image with consistent effects.
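The round trip of fig. 1 can be sketched as follows; every name here (the update shape, the helper functions, the transport) is a hypothetical stand-in for the updated image protocol data and its transmission, not the patent's literal protocol.

```typescript
// Sketch of the fig. 1 round trip; all names below are hypothetical.
interface ImageUpdate {
  layerId: number;                  // which layer the update applies to
  change: Record<string, unknown>;  // edited or modified properties
}

declare function renderFirstIntermediate(protocolData: unknown): unknown;
declare function collectUpdateInfo(intermediate: unknown): ImageUpdate;
declare function sendToOppositeEnd(packed: string): Promise<string>;

export async function serverRoundTrip(imageJson: string): Promise<ImageUpdate> {
  const protocolData = JSON.parse(imageJson);                  // parse image data
  const intermediate = renderFirstIntermediate(protocolData);  // first intermediate image
  const update = collectUpdateInfo(intermediate);              // update information
  const packed = JSON.stringify(update);                       // protocol processing
  const echoed = await sendToOppositeEnd(packed);              // browser edits/modifies
  return JSON.parse(echoed) as ImageUpdate;                    // parsed again locally
}
```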
The embodiment of the present application provides a cross-end rendering method, comprising: receiving image data; determining a rendering resource, a rendering environment, and a rendering tool for rendering the image data; and performing a rendering job on the image data based on the rendering resource, the rendering environment, and the rendering tool, using an underlying rendering framework corresponding to the opposite end.
After receiving the image data, the method determines the rendering resources, the rendering environment, and the rendering tool for rendering it, and then, using the underlying rendering framework corresponding to the opposite end, performs the rendering job on the image data accordingly. The local end and the opposite end adopt corresponding underlying rendering frameworks and render the image data based on the same rendering code, so the two ends obtain rendered images with consistent effects. In other words, the difference between the rendered images finally obtained at the two ends is negligible, which reduces the image differences and the maintenance cost of isomorphic rendering across the two ends.
Second embodiment
Fig. 4 is a logic framework diagram of a cross-end rendering system according to a second embodiment of the present application. The cross-end rendering method of the first embodiment uses the cross-end rendering system provided by this second embodiment to render the image data. The logic framework diagram is described in detail below with reference to fig. 4; for the specific rendering manner of each layer at the browser end and the server end, reference may also be made to the descriptions in the scenario embodiment and the method embodiment.
The cross-end rendering system 400 in fig. 4 includes a first end 401 and a second end 402, each comprising a protocol layer unit, an application layer unit, a canvas layer unit, an abstraction layer unit, and a rendering layer unit. The first end 401 includes a first protocol layer unit 401-1, a first application layer unit 401-2, a first canvas layer unit 401-3, a first abstraction layer unit 401-4, and a first rendering layer unit 401-5. The second end 402 includes a second protocol layer unit 402-1, a second application layer unit 402-2, a second canvas layer unit 402-3, a second abstraction layer unit 402-4, and a second rendering layer unit 402-5.
The protocol layer unit parses the data exchanged between the first end and the second end and defines the data attributes, parsing rules, and extension rules for cross-end interaction; the application layer unit initializes the rendering environment, loads the required resources, and exports the rendered data; the canvas layer unit defines canvas rendering logic and provides a visible canvas rendering environment; the abstraction layer unit defines the various functions and methods required for rendering. In addition, the first end and the second end each include a rendering layer unit, based on the same underlying rendering framework, for performing the rendering job.
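The division of labor among the layer units can be summarized, purely for illustration, as the following TypeScript interfaces; the names mirror fig. 4 rather than quote the patent.

```typescript
// Illustrative interfaces for the five layer units at each end.
interface ProtocolLayerUnit {
  parse(raw: string): unknown;      // apply data attributes and parsing rules
  pack(rendered: unknown): string;  // package exported rendering data
}

interface ApplicationLayerUnit {
  initEnvironment(): Promise<void>;               // initialize rendering environment
  loadResources(parsed: unknown): Promise<void>;  // load required resources
  exportData(): unknown;                          // export rendered data
}

interface CanvasLayerUnit {
  buildCanvas(parsed: unknown): void; // canvas rendering logic + visible canvas
}

interface AbstractionLayerUnit {
  getTool(name: string): unknown;     // functions and methods needed for rendering
}

interface RenderLayerUnit {
  render(): void;                     // backed by the shared underlying framework
}
```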
Either of the first protocol layer unit of the first end and the second protocol layer unit of the second end is specifically configured to receive the image protocol data sent by the protocol layer unit of the opposite end, parse it according to the defined data attributes and parsing rules, and provide the parsing result to the application layer unit of the local end; or to package the rendering data exported by the application layer according to the defined data attributes and provide the packaged data to the communication sending port.
As shown in fig. 4, the first protocol layer unit 401-1 receives the image protocol data sent by the second protocol layer unit 402-1, parses it according to the defined data attributes and parsing rules, and provides the parsing result to the first application layer unit 401-2; or the rendering data exported by the first application layer unit 401-2 is packaged by the first protocol layer unit 401-1 according to the defined data attributes, and the packaged data is provided to the communication sending port. The communication sending port may transmit data to other layers of the first end, or transmit the packaged data to the second protocol layer unit 402-1 of the second end.
Correspondingly, the second protocol layer unit 402-1 receives the image protocol data sent by the first protocol layer unit 401-1, parses it according to the defined data attributes and parsing rules, and provides the parsing result to the second application layer unit 402-2; or the rendering data exported by the second application layer unit 402-2 is packaged by the second protocol layer unit 402-1 according to the defined data attributes, and the packaged data is provided to the communication sending port. The communication sending port may transmit data to other layers of the second end, or transmit the packaged data to the first protocol layer unit 401-1.
The protocol format adopted by the protocol layer unit is the JSON protocol.
For example, the received image protocol data is a piece of image JSON protocol data; parsing it yields the original text information of the image and the translated text information corresponding to that original text. The goal is to replace the original text information with the translated text information and render a target image whose text information is the translation corresponding to the original text of the original image. In this example, the protocol layer unit at either end parses the image JSON protocol data according to the defined data attributes and parsing rules; the parsed protocol data includes the original text information of the image and the corresponding translated text information, which are provided to the application layer unit of the local end.
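A hypothetical shape for such image JSON protocol data is sketched below; the field names are assumptions introduced only to make the parsing concrete.

```typescript
// Hypothetical shape of the image JSON protocol data in this example.
interface TextRegion {
  box: { x: number; y: number; width: number; height: number };
  originalText: string;    // original text information of the region
  translatedText: string;  // translation corresponding to the original text
}

interface ImageProtocolData {
  imageUrl: string;        // URL of the image to be re-rendered
  regions: TextRegion[];
}

// The protocol layer applies its parsing rules to the raw JSON string and
// hands the result to the local application layer unit.
const raw =
  '{"imageUrl":"https://example.com/a.png","regions":[{"box":' +
  '{"x":10,"y":20,"width":200,"height":40},"originalText":"你好",' +
  '"translatedText":"Hello"}]}';
const parsed: ImageProtocolData = JSON.parse(raw);
```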
The application layer unit is specifically configured to determine, from the received parsing result of the image protocol data, the rendering resources that result requires, and to load those rendering resources.
Continuing the example, the first application layer unit 401-2 of the first end obtains, from the original text information of the image and the translated text information, the rendering resources for rendering the translated text. The second application layer unit 402-2 of the second end receives the original text information and the translated text information from the second protocol layer unit of the second end, and likewise obtains the rendering resources for rendering the translated text.
The canvas layer unit of the first end is specifically configured to define static canvas rendering logic and provide a static canvas rendering environment, extracting from the abstraction layer unit of the first end the rendering tools required to render each layer, according to the rendering method provided by the application layer unit of the first end; the canvas layer unit of the second end does the same with respect to the abstraction layer unit and the application layer unit of the second end.
The first end and the second end are each a client, a browser end, or a server end.
Because the protocol layer, the application layer, the canvas layer, and the abstraction layer of the first end and the second end adopt the same rendering logic, the image display information rendered at the first end is consistent with that rendered at the second end.
If the first end is a client or browser end and the second end is a server end, the first canvas layer unit 401-3 of the first end is further configured to define dynamic canvas rendering logic on top of the static canvas rendering environment and provide a dynamic, visible canvas rendering environment; based on a trigger action monitored in the static canvas, it dynamically adjusts the position of the canvas layer corresponding to the trigger action according to the movement track of that action, and displays the adjusted layer position on the first end node.
The abstraction layer unit is specifically configured to abstract the various graphic rendering tools, defining the graphic definitions and logic base classes of the graphic rendering tools required by the canvas layer. The graphic definitions comprise a number of graphic subclasses and the implementation methods that realize them; the logic base class comprises a number of logic subclasses and the implementation methods that realize them. The abstraction layer unit provides the required graphic rendering tools to the canvas layer unit.
The first abstraction layer unit of the first end and the second abstraction layer unit of the second end adopt the same rendering logic, so that during rendering at either end the abstraction layer units provide the corresponding canvas layer units with the same set of rendering tools.
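As an illustration of these graphic definitions and logic base classes, the sketch below shows one abstract base class and one graphic subclass; the class names are illustrative, and the CanvasKit calls are assumptions based on the public canvaskit-wasm API.

```typescript
// Sketch of the abstraction layer: a logic base class plus one graphic
// subclass with its implementation method. Names are illustrative.
import type { CanvasKit, Canvas } from "canvaskit-wasm";

abstract class GraphicTool {
  // Logic shared by all graphic subclasses (the "logic base class").
  abstract draw(ck: CanvasKit, canvas: Canvas): void;
}

class RectTool extends GraphicTool {
  constructor(
    private x: number, private y: number,
    private w: number, private h: number
  ) { super(); }

  draw(ck: CanvasKit, canvas: Canvas): void {
    const paint = new ck.Paint();
    paint.setColor(ck.Color(0, 0, 0, 1.0));
    canvas.drawRect(ck.XYWHRect(this.x, this.y, this.w, this.h), paint);
    paint.delete(); // CanvasKit objects are WASM-backed and need explicit cleanup
  }
}
```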
The rendering layer unit is specifically configured to perform the rendering operation on each layer of graphics in the virtual canvas of the canvas layer unit. The rendering layer unit of the first end is further configured to render the rendered data on screen at the first end node; the rendering layer unit of the second end is further configured to export the rendered data to a picture, obtaining a rendered picture file.
When the first end is a browser end or client and the second end is a server end, the images rendered at the browser end are not composited into a picture file; the rendered images are displayed on a browser-end node. The server end post-processes each rendered layer and composites the layers into a target picture, which is stored at the server end.
The same underlying rendering framework is a CanvasKit environment, comprising the rendering logic of Skia and its API interfaces, running at both the first end and the second end.
The following describes the process of compositing an original image using the cross-end rendering system provided by the present application, taking the first end as a browser end and the second end as a server end. The JSON protocol data of the original image serves as the input data and may be called the JSON protocol data of the first image; the image obtained after the original text information of the first image is replaced, through rendering, with the corresponding translated text information is called the second image. During compositing, each layer of the first image is processed: the translated text information in each layer is rendered into the corresponding layer, and the rendered layers are then composited.
Step 1: the protocol layer of the browser end obtains a piece of JSON protocol data and, from it, obtains the URL of the first image described therein, the region information contained in the first image, the original text information of each region, the size information of each region, and the translated text information corresponding to the original text. The translated text to be rendered is determined from the JSON protocol data of the first image.
Step 2: the application layer of the browser end uses the browser's downloader to load the first image into the browser and, according to the translated text that needs to be rendered into the first image, obtains the rendering resources for rendering that text. The rendering process is then started from the rendering resources and the translated text information.
Step 3: the canvas layer of the browser end constructs a virtual bottom canvas according to the size information of the first image, and determines the number of layers contained in the first image from its JSON protocol data in the protocol layer. Each layer is then constructed in turn, and while constructing each layer, the positions containing text information are rendered into the corresponding layer using the translated text. Specifically, the first image is divided into a number of regions containing text information; within each region, the background color and text color of the original picture are extracted, and the translation corresponding to that region's text is laid out into the virtual canvas according to the size and layout information of the region's text. In this way, the translated text corresponding to the text information of every region of the first image is added to the corresponding layer. This is the static canvas construction process: the browser end displays the static canvas on a browser-end node and provides the static canvas information to the server end, which composites it into the target picture.
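The per-region replacement of step 3 might look like the following sketch, in which the region shape, the typeface, the font sizing, and the extracted background and text colors (assumed white and black here) are all illustrative assumptions:

```typescript
// Sketch of step 3: cover the original text with the region's background
// color, then lay out the translation in the text color.
import type { CanvasKit, Canvas, Typeface } from "canvaskit-wasm";

interface Region {
  x: number; y: number; width: number; height: number;
  translatedText: string;
}

export function drawTranslatedRegion(
  ck: CanvasKit, canvas: Canvas, region: Region, typeface: Typeface
): void {
  const bg = new ck.Paint();
  bg.setColor(ck.WHITE); // extracted background color (assumed white here)
  canvas.drawRect(ck.XYWHRect(region.x, region.y, region.width, region.height), bg);

  const fg = new ck.Paint();
  fg.setColor(ck.BLACK); // extracted text color (assumed black here)
  const font = new ck.Font(typeface, region.height * 0.8); // size from region height
  canvas.drawText(region.translatedText, region.x, region.y + region.height * 0.8, fg, font);

  bg.delete(); fg.delete(); font.delete(); // free WASM-backed objects
}
```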
In addition, the canvas layer of the browser end can also construct an interactive canvas, as follows:
The static canvas sits on a canvas node at the browser end; on top of it, the trigger actions within the static canvas and their ranges of influence are monitored. The positions of the layers on the canvas and the overlap relations among them are traversed to determine which layer a mouse action has triggered.
Taking a press operation as an example, it consists of a mouse-down operation, a mouse-drag operation, and a mouse-release operation. When a mouse-down is detected, the layer corresponding to it is determined from the coordinate position; as the mouse is pressed and dragged it is displaced, and the pressed layer moves along with that displacement. The layer's movement is determined from the mouse's movement track, and its final position is computed once the mouse stops moving. When a mouse release is detected, the mouse has stopped moving, so the final position of the layer can be determined from the position to which the mouse moved. This is the interactive canvas constructed by the canvas layer of the browser end.
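A minimal sketch of this press-drag-release handling is given below; the layer position fields and the redraw callback are hypothetical.

```typescript
// Sketch of the interactive canvas: hit-test layers on mousedown, move the
// hit layer with the drag, fix its final position on mouseup.
interface MovableLayer { x: number; y: number; width: number; height: number; }

export function attachDrag(
  el: HTMLCanvasElement, layers: MovableLayer[], redraw: () => void
): void {
  let dragged: MovableLayer | null = null;
  let lastX = 0, lastY = 0;

  el.addEventListener("mousedown", (e) => {
    // Traverse layers (topmost first) to find which one the press landed on.
    dragged = [...layers].reverse().find((l) =>
      e.offsetX >= l.x && e.offsetX <= l.x + l.width &&
      e.offsetY >= l.y && e.offsetY <= l.y + l.height) ?? null;
    lastX = e.offsetX; lastY = e.offsetY;
  });

  el.addEventListener("mousemove", (e) => {
    if (!dragged) return;
    dragged.x += e.offsetX - lastX;   // layer follows the mouse displacement
    dragged.y += e.offsetY - lastY;
    lastX = e.offsetX; lastY = e.offsetY;
    redraw();                          // re-render the canvas with new positions
  });

  el.addEventListener("mouseup", () => { dragged = null; }); // final position fixed
}
```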
The canvas layer of the browser end and the canvas layer of the server end adopt the same static rendering logic and provide a visible static canvas rendering environment. The browser end's canvas layer can additionally construct an interactive canvas, which the server end's canvas layer cannot.
Step 4: the abstraction layer of the browser end provides the basic tools for rendering each layer of the canvas layer.
Step 5: the rendering layer of the browser end completes the rendering process.
The above is the process by which the browser end uses the cross-end rendering system provided by the present application to render the translated text into each layer and composite the layers. Each layer at the server end adopts the same logic as the browser end. According to the JSON protocol data of the image, the original text information is replaced by the translated text information, so the text contained in the rendered second image is the translation corresponding to the original text of the first image; apart from the text appearing in its translated form, the display of the other contents is unchanged. For details, refer to the description of the application scenario.
The embodiment of the present application provides a cross-end rendering system, comprising a first end and a second end, each including a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit. The protocol layer unit parses the data between the first end and the second end and defines the data attributes, parsing rules, and extension rules for cross-end interaction; the application layer unit initializes the rendering environment, loads the required resources, and exports the rendered data; the canvas layer unit defines canvas rendering logic and provides a visible canvas rendering environment; the abstraction layer unit defines the various functions and methods required for rendering. The first end and the second end further include rendering layer units, based on the same underlying rendering framework, for performing the rendering job.
The system thus comprises, at each end, a rendering layer unit based on the same underlying rendering framework together with a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit. Because the rendering layer units share the same underlying rendering framework, the rendered images have consistent effects; and because the protocol layer, application layer, canvas layer, and abstraction layer units use the same logic code at the first end and the second end, double-ended isomorphic rendering is achieved. Specifically, the protocol layer unit parses the data and passes the parsed data interactively between the first end and the second end; the application layer unit initializes the rendering environment from the protocol data provided by the protocol layer unit, loads the rendering resources required for rendering, and exports the rendered data; the canvas layer unit defines canvas rendering logic and provides a visible canvas rendering environment; and the abstraction layer unit abstracts the various functions and methods required for rendering. Since each of these units applies the same logic to every step of image rendering, the difference between the rendered images finally obtained at the two ends is negligible, reducing the image differences and the maintenance cost of isomorphic rendering across the two ends.
Third embodiment
On the basis of the cross-end rendering method of the first embodiment, a third embodiment of the present application provides a cross-end rendering device. Fig. 5 is a schematic diagram of that device. The device is described below with reference to fig. 5; for the parts identical to the scenario embodiment and the first embodiment, refer to those embodiments, which are not repeated here.
The embodiments referred to in the following description are intended to illustrate the method principles and not to limit the practical use.
The cross-end rendering device shown in fig. 5 includes:
a receiving unit 501 for receiving image data;
a determining unit 502 for determining a rendering resource, a rendering environment, and a rendering tool for rendering the image data;
and a rendering unit 503, configured to perform a rendering job on the image data based on the rendering resources, the rendering environment, and the rendering tool, using an underlying rendering framework corresponding to the opposite end.
Fourth embodiment
A fourth embodiment of the present application further provides an electronic device corresponding to the method of the first embodiment. Fig. 6 is a schematic diagram of the electronic device according to the fourth embodiment. As shown in fig. 6, the electronic device includes: at least one processor 601, at least one communication interface 602, at least one memory 603, and at least one communication bus 604. Optionally, the communication interface 602 may be an interface of a communication module, such as an interface of a GSM module. The processor 601 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The memory 603 may comprise high-speed RAM, and may further comprise non-volatile memory, such as at least one disk memory. The memory 603 stores a program, and the processor 601 calls the program stored in the memory 603 to execute the method of the first embodiment.
Fifth embodiment
A fifth embodiment of the present application further provides a computer storage medium corresponding to the method of the first embodiment. The computer storage medium stores a computer program which, when executed by a processor, performs the method of the first embodiment.
While preferred embodiments have been described, they are not intended to limit the present application; any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present application, so the scope of protection shall be defined by the claims of the present application.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein within the scope permitted by the applicable laws and regulations of the relevant country, and subject to their requirements (for example, with the user's explicit consent and after actually notifying the user).
It should also be noted that the user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data, and presented data) involved in the present application are information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose authorization or refusal.

Claims (14)

1. A cross-end rendering method, comprising:
receiving image data;
determining a rendering resource, a rendering environment, and a rendering tool for rendering the image data;
and performing a rendering job on the image data based on the rendering resource, the rendering environment, and the rendering tool, using an underlying rendering framework corresponding to an opposite end.
2. The method as recited in claim 1, further comprising:
parsing the image data to obtain parsed image protocol data;
and sending the parsed image protocol data to the opposite end.
3. The method of claim 1, wherein the determining a rendering resource, a rendering environment, and a rendering tool for rendering the image data comprises:
determining, by an application layer, rendering resources for rendering the image data;
determining a rendering environment for rendering the image data through a canvas layer;
a rendering tool for rendering the image data is determined by an abstraction layer.
4. The method of claim 1, wherein the corresponding underlying rendering framework is a CanvasKit/WASM environment comprising the rendering logic of Skia and API interfaces.
5. The method according to claim 2, wherein the parsing the image data to obtain parsed image protocol data includes:
the protocol layer parses the image data according to predefined data attributes and parsing rules to obtain the parsed image protocol data.
6. The method of claim 2, wherein performing a rendering job on the image data based on the rendering resources, rendering environment, and rendering tool using an underlying rendering framework corresponding to the opposite end, comprises:
adopting the underlying rendering framework corresponding to the opposite end, and performing mapping processing on a plurality of layers corresponding to the data in the image protocol data, based on the rendering resources, the rendering environment, and the rendering tool, to obtain a target image.
7. The method of claim 2, wherein performing a rendering job on the image data based on the rendering resources, rendering environment, and rendering tool using an underlying rendering framework corresponding to the opposite end, comprises:
adopting the underlying rendering framework corresponding to the opposite end, and performing a rendering operation on the image protocol data based on the rendering resources, the rendering environment, and the rendering tool, to obtain rendered multi-layer image rendering data;
and performing an on-screen rendering operation on the multi-layer image rendering data at the local end node.
8. The method of claim 2, wherein performing a rendering job on the image data based on the rendering resources, rendering environment, and rendering tool using an underlying rendering framework corresponding to the opposite end, comprises:
adopting the underlying rendering framework corresponding to the opposite end, and performing mapping processing on a plurality of layers corresponding to the data in the image protocol data, based on the rendering resources, the rendering environment, and the rendering tool, to obtain a first intermediate image;
obtaining update information of the first intermediate image, and performing protocol processing on the update information to obtain updated image protocol data;
transmitting the updated image protocol data to the opposite end;
and obtaining the parsed updated image protocol data produced after the opposite end parses the updated image protocol data again.
9. The method as recited in claim 2, further comprising:
the protocol layer sends the parsed image protocol data to an application layer of the local end;
or the protocol layer receives the rendering data exported by the application layer of the local end, packages the exported rendering data according to predefined data attributes, and provides the packaged rendering data to a communication sending port.
10. A method according to claim 3, wherein said determining a rendering environment for rendering said image data by means of a canvas layer comprises:
determining static canvas rendering logic and providing a static canvas rendering environment through the canvas layer, according to the image protocol data parsed from the image data.
11. The method as recited in claim 10, further comprising:
on the basis of the static canvas rendering environment, determining dynamic canvas rendering logic and providing a dynamic visible canvas rendering environment according to the image protocol data;
and based on a trigger action monitored in the static canvas, adjusting the position of the canvas layer corresponding to the trigger action according to the movement track of the trigger action.
12. A cross-end rendering system, comprising: a first end and a second end;
the first end and the second end each comprise a protocol layer unit, an application layer unit, a canvas layer unit, and an abstraction layer unit;
the protocol layer unit is used for parsing data between the first end and the second end, and defining data attributes, parsing rules, and extension rules for realizing cross-end interaction;
the application layer unit is used for realizing initialization of a rendering environment, loading of required resources, and exporting of rendered data;
the canvas layer unit is used for defining canvas rendering logic and providing a visible canvas rendering environment;
the abstraction layer unit is used for defining various functions and methods required by rendering;
wherein the first end and the second end further comprise rendering layer units, based on the same underlying rendering framework, for performing rendering jobs.
13. An electronic device comprising a processor and a memory;
the memory has stored therein a computer program, which, when executed by the processor, performs the method of any of claims 1-11.
14. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by a processor, performs the method of any of claims 1-11.
CN202310425201.5A 2023-04-18 2023-04-18 Cross-end rendering method and system and electronic equipment Pending CN116521165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310425201.5A CN116521165A (en) 2023-04-18 2023-04-18 Cross-end rendering method and system and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310425201.5A CN116521165A (en) 2023-04-18 2023-04-18 Cross-end rendering method and system and electronic equipment

Publications (1)

Publication Number Publication Date
CN116521165A true CN116521165A (en) 2023-08-01

Family

ID=87395218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310425201.5A Pending CN116521165A (en) 2023-04-18 2023-04-18 Cross-end rendering method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN116521165A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240219

Address after: Room 303, 3rd Floor, Building 5, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310056

Applicant after: Hangzhou Alibaba Overseas Internet Industry Co.,Ltd.

Country or region after: China

Address before: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

Country or region before: China