CN111400024B - Resource calling method and device in rendering process and rendering engine - Google Patents

Resource calling method and device in rendering process and rendering engine

Info

Publication number: CN111400024B
Application number: CN201910004657.8A
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN111400024A (in Chinese)
Inventor: 郑宇琦
Assignee (original and current): Beijing Baidu Netcom Science and Technology Co Ltd
Prior art keywords: rendering, engine, command buffer, commands, engines

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 — Allocation of resources to service a request
    • G06F 9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals

Abstract

An embodiment of the invention provides a resource calling method and device in a rendering process, and a rendering engine. The method includes: acquiring rendering requirement information of a canvas object in a rendering process; if the rendering requirement information includes resources that need to call a plurality of rendering engines, storing the rendering commands of the rendering engines in a rendering command buffer in a set order; and executing each rendering command in the rendering command buffer in the set order in the canvas object, so as to call the resources of the plurality of rendering engines in the canvas object. According to the embodiment of the invention, because the rendering commands of the plurality of rendering engines are stored in the rendering command buffer in a set order, the resources of the plurality of rendering engines can be rendered in the same canvas object. This realizes the joint use of multiple rendering engines, makes rendering engines with different architectures compatible, provides richer rendering effects, and enables higher-level creative work.

Description

Resource calling method and device in rendering process and rendering engine
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for invoking resources in a rendering process, and a rendering engine.
Background
With the demand for innovation in Feed-stream (information-flow) advertising, more and more creatives containing special effects and animated elements are being proposed. Achieving these special effects is often very difficult and requires mathematical and graphics knowledge as support.
Existing schemes for implementing graphics effects in a Feed stream include: using the small number of special-effects interfaces provided by Apple native frameworks such as UIKit (User Interface Kit) and CoreGraphics, or integrating a mature third-party game engine.
Current rendering engines come in many varieties, e.g., SpriteKit, SceneKit, and ARKit. SpriteKit is a 2D (two-dimensional) game engine, SceneKit is a 3D (three-dimensional) game engine, and ARKit is an augmented reality engine.
Each of these rendering engines has its own canvas; graphics can only be drawn on each canvas separately, and the engines' architectures are mutually incompatible.
Disclosure of Invention
The embodiments of the present invention provide a resource calling method, a resource calling device, and a rendering engine for use in a rendering process, so as to solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a method for calling a resource in a rendering process, including:
acquiring rendering requirement information of a canvas object in a rendering process;
if the rendering requirement information includes resources that need to call a plurality of rendering engines, storing the rendering commands of the rendering engines in a rendering command buffer in a set order;
and executing each rendering command in the rendering command buffer in the set order in the canvas object, so as to call the resources of the plurality of rendering engines in the canvas object.
In one embodiment, storing the rendering commands of the plurality of rendering engines in the rendering command buffer in a set order includes:
in a ready-to-render stage, serially storing at least two of the rendering commands of a current rendering engine, a two-dimensional game rendering engine, and a three-dimensional game rendering engine in a first rendering command buffer in a set order; and/or
in a graphics rendering stage, serially storing at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, and the three-dimensional game rendering engine in a second rendering command buffer in a set order.
In one embodiment, the method further comprises:
and if the rendering requirement information further includes resources that need to call a shader library, serially storing the rendering commands of the plurality of rendering engines and the rendering commands of the shader library in the rendering command buffer in a set order.
In one embodiment, serially storing the rendering commands of the plurality of rendering engines and the rendering commands of the shader library in the rendering command buffer in a set order includes:
in a ready-to-render stage, serially storing at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine, and the shader library in a first rendering command buffer in a set order; and/or
in a graphics rendering stage, serially storing at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine, and the shader library in a second rendering command buffer in a set order.
In one embodiment, the method further comprises:
and if the rendering requirement information further includes resources that need to call a three-dimensional model tool library, calling the required coordinate information from the three-dimensional model tool library.
In one embodiment, the method further comprises:
and if the rendering requirement information further includes resources that need to call an augmented reality engine, calculating, in a computation stage, the effect parameters corresponding to the augmented reality effect.
In a second aspect, an embodiment of the present invention provides a resource calling device in a rendering process, including:
an acquisition module, configured to acquire rendering requirement information of a canvas object in a rendering process;
a first storage module, configured to store the rendering commands of a plurality of rendering engines in a rendering command buffer in a set order if the rendering requirement information includes resources that need to call the plurality of rendering engines;
and an execution module, configured to execute each rendering command in the rendering command buffer in the set order in the canvas object, so as to call the resources of the plurality of rendering engines in the canvas object.
In one embodiment, the first storage module is further configured to:
in a ready-to-render stage, serially store at least two of the rendering commands of a current rendering engine, a two-dimensional game rendering engine, and a three-dimensional game rendering engine in a first rendering command buffer in a set order; and/or
in a graphics rendering stage, serially store at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, and the three-dimensional game rendering engine in a second rendering command buffer in a set order.
In one embodiment, the apparatus further comprises:
a second storage module, configured to serially store the rendering commands of the plurality of rendering engines and the rendering commands of the shader library in the rendering command buffer in a set order if the rendering requirement information further includes resources that need to call the shader library.
In one embodiment, the second storage module is further configured to:
in a ready-to-render stage, serially store at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine, and the shader library in a first rendering command buffer in a set order; and/or
in a graphics rendering stage, serially store at least two of the rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine, and the shader library in a second rendering command buffer in a set order.
In one embodiment, the apparatus further comprises:
a three-dimensional model calling module, configured to call the required coordinate information from a three-dimensional model tool library if the rendering requirement information further includes resources that need to call the three-dimensional model tool library.
In one embodiment, the apparatus further comprises:
and an augmented reality calling module, configured to calculate, in a computation stage, the effect parameters corresponding to the augmented reality effect if the rendering requirement information further includes resources that need to call an augmented reality engine.
In a third aspect, an embodiment of the present invention provides a resource calling device in a rendering process, the functions of which may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one embodiment, the device includes a processor and a memory, the memory storing a program that supports the device in performing the resource calling method described above, and the processor being configured to execute the program stored in the memory. The device may also include a communication interface for communicating with other devices or communication networks.
In a fourth aspect, an embodiment of the present invention provides a rendering engine, including: the resource calling device in any rendering process of the embodiment of the invention.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer software instructions for use by the resource calling device in a rendering process, including a program for executing the resource calling method in the rendering process described above.
One of the above technical solutions has the following advantage or beneficial effect: because the rendering commands of the plurality of rendering engines are stored in the rendering command buffer in a set order, the resources of the plurality of rendering engines can be rendered in the same canvas object. This realizes the joint use of multiple rendering engines, makes rendering engines with different architectures compatible, provides richer rendering effects, and enables higher-level creative work.
Another of the above technical solutions has the following advantage or beneficial effect: GPU resources are fully pre-cached, which can further improve rendering speed and rendering efficiency.
The foregoing summary is for the purpose of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will become apparent by reference to the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 illustrates a flow diagram of a method of resource invocation in a rendering process according to an embodiment of the invention.
FIG. 2 illustrates an example diagram of one rendering cycle in a rendering engine according to an embodiment of the present invention.
FIG. 3 illustrates a flow chart of a method of resource invocation in a rendering process according to an embodiment of the invention.
FIG. 4 illustrates a flow diagram of a method of resource invocation in a rendering process in accordance with an embodiment of the invention.
Fig. 5 shows a block diagram of a resource calling device in a rendering process according to an embodiment of the present invention.
Fig. 6 shows a block diagram of a resource calling device in a rendering process according to an embodiment of the present invention.
FIG. 7 illustrates an internal structural diagram of canvas objects in a rendering engine in accordance with an embodiment of the present invention.
FIG. 8 shows a schematic diagram of a rendering flow of a rendering engine according to an embodiment of the invention.
Fig. 9 is a block diagram showing the structure of a resource calling device in a rendering process according to an embodiment of the present invention.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
FIG. 1 illustrates a flow diagram of a method of resource invocation in a rendering process according to an embodiment of the invention. As shown in fig. 1, the method may include:
step S11, acquiring rendering requirement information of a canvas object in a rendering process;
step S12, if the rendering requirement information includes resources that need to call a plurality of rendering engines, storing the rendering commands of the rendering engines in a rendering command buffer in a set order;
and step S13, executing each rendering command in the rendering command buffer in the set order in the canvas object, so as to call the resources of the plurality of rendering engines in the canvas object.
In one example, the current rendering engine may include a system screen-refresh notification class, a core controller, and a plurality of canvas objects. One or more canvas objects may run in the current rendering engine, and each canvas object is an instance. If multiple canvas objects run simultaneously, a system screen-refresh notification class such as CADisplayLink may be employed to trigger render-driver events. The system screen-refresh notification class may issue a render-driver event every frame, so that the canvas objects keep the same refresh timing as the screen. The core controller, e.g., VGMetalCore, may post a render-driver notification upon receiving a render-driver event. Multiple canvas objects may be created in the rendering engine in a certain order, and each canvas object may listen for the render-driver notifications from the core controller. Upon receiving a render-driver notification, the canvas objects may trigger rendering events in a certain order, e.g., their order of creation. For example, if the creation order is canvas object A, canvas object B, canvas object C, then canvas object A triggers a rendering event first. After canvas object A has performed this frame's rendering process, canvas object B triggers a rendering event; after canvas object B finishes, canvas object C triggers a rendering event; and after canvas object C finishes, the frame's rendering is complete. Each canvas object may then continue to listen for render-driver notifications, starting the rendering process for the next frame.
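The driving loop described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the class and notification names (CoreController, RenderDriverNotification) are assumptions, and real code would need explicit sequencing of the canvas objects rather than relying on observer order.

```swift
import UIKit

// Hypothetical sketch of the per-frame driving loop: a CADisplayLink (the
// system screen-refresh notification class) fires once per frame, and the
// core controller posts a render-driver notification that canvas objects
// observe to trigger their rendering events in creation order.
final class CoreController {
    static let renderDriver = Notification.Name("RenderDriverNotification")
    private var displayLink: CADisplayLink?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(onFrame(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    // Called once per screen refresh, keeping canvas objects in step with it.
    @objc private func onFrame(_ link: CADisplayLink) {
        NotificationCenter.default.post(name: Self.renderDriver, object: nil)
    }
}

final class CanvasObject {
    let creationIndex: Int
    private var token: NSObjectProtocol?

    init(creationIndex: Int) {
        self.creationIndex = creationIndex
        token = NotificationCenter.default.addObserver(
            forName: CoreController.renderDriver,
            object: nil, queue: .main) { [weak self] _ in
            self?.renderFrame()
        }
    }

    private func renderFrame() {
        // Each canvas object performs this frame's rendering process here;
        // earlier-created objects would be sequenced to render first.
    }
}
```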
In one embodiment, in step S12, the rendering commands of the plurality of rendering engines are stored in a rendering command buffer in a set order, including one or more of the following:
mode one: at a ready-to-render stage, at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, and a three-dimensional game rendering engine are stored in series in a set order in a first rendering command buffer.
Mode two: at least two rendering commands of the current rendering engine, the two-dimensional game rendering engine and the three-dimensional game rendering engine are stored in series in a second rendering command buffer in a set order in a graphics rendering stage.
The two-dimensional game rendering engine (abbreviated as 2D engine) may be SpriteKit. SpriteKit internally contains a large amount of the content needed by a 2D game engine, such as a 2D physics engine and a particle system. The 2D effects that SpriteKit can achieve include conventional effects such as translation, rotation, and cropping, as well as color-conversion effects (e.g., various filters, black-and-white conversion) and smoothing algorithms (e.g., blurring). This content may be drawn in an instance of one canvas object together with the other content of the current rendering engine.
The three-dimensional game rendering engine (abbreviated as 3D engine) may be SceneKit. SceneKit internally contains a large amount of the content required by a 3D game engine, such as three-dimensional model loading, a 3D physics engine, lighting, and shadows. The 3D physics engine can achieve physical effects such as free fall and collision. The 3D effects that SceneKit can achieve also include tessellation, three-dimensional transformations (e.g., making a square undulate after tessellation), and ambient occlusion (e.g., shadowing corners). This content may also be drawn in an instance of one canvas object together with the other content of the current rendering engine. Beyond these, SceneKit supports more advanced game effects; such effects may be created in the SceneKit tool built into Xcode and drawn in an instance of the current rendering engine.
The current rendering engine may open up the underlying rendering layer. SpriteKit and SceneKit each have their own canvas objects, as does the current rendering engine itself; what they have in common is that all of them can use the Metal kernel for rendering. Through the rendering command buffer of the underlying Metal layer, the current rendering engine can chain its own rendering commands together with those of SpriteKit and SceneKit, so that the rendering commands of SpriteKit and SceneKit are submitted to the same rendering command buffer as the other rendering commands of the current rendering engine. When this buffer is executed, all of the content is rendered in the same canvas object, enabling the engines to be used jointly.
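On Apple platforms this joint-submission mechanism can be sketched with the real offscreen renderers: SKRenderer and SCNRenderer both encode into a caller-supplied MTLCommandBuffer, so 2D, 3D, and the engine's own commands can be serialized into one buffer and drawn into the same drawable. The function below is only a minimal sketch; the scene setup and the engine's own encoding pass are placeholders, and a real engine would cache the renderers rather than create them per frame.

```swift
import Metal
import MetalKit
import SpriteKit
import SceneKit

// Sketch: serialize SpriteKit, SceneKit, and the engine's own rendering
// commands into one Metal command buffer targeting the same drawable.
func renderFrame(device: MTLDevice,
                 queue: MTLCommandQueue,
                 drawable: CAMetalDrawable,
                 skScene: SKScene,
                 scnScene: SCNScene,
                 viewport: CGRect,
                 time: CFTimeInterval) {
    guard let commandBuffer = queue.makeCommandBuffer() else { return }

    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = drawable.texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store

    // 1. The 2D engine (SpriteKit) encodes its rendering commands first.
    let skRenderer = SKRenderer(device: device)
    skRenderer.scene = skScene
    skRenderer.render(withViewport: viewport,
                      commandBuffer: commandBuffer,
                      renderPassDescriptor: pass)

    // 2. The 3D engine (SceneKit) appends its commands to the same buffer;
    //    load the previously drawn contents instead of clearing them.
    pass.colorAttachments[0].loadAction = .load
    let scnRenderer = SCNRenderer(device: device, options: nil)
    scnRenderer.scene = scnScene
    scnRenderer.render(atTime: time,
                       viewport: viewport,
                       commandBuffer: commandBuffer,
                       passDescriptor: pass)

    // 3. The current engine's own render encoder could append further
    //    commands here before the buffer is executed.

    // Executing the buffer renders everything in the same canvas object.
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```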
In one embodiment, the rendering requirement information of the canvas object may include the effects (which may also be referred to as special effects) that need to be rendered. These may include effects of the current rendering engine itself, effects of the two-dimensional game rendering engine, effects of the three-dimensional game rendering engine, and so on. The effects of the current rendering engine itself may be saved in an effect list of the canvas object. For example, the effect list of canvas object A includes effects such as illumination, fly-in, and fly-out.
The process of obtaining the rendering requirement information of the canvas object may include traversing the effect list of the canvas object, the effects of the two-dimensional game rendering engine, the effects of the three-dimensional game rendering engine, and so on. After the traversal, all effects to be rendered for the canvas object and their rendering order can be obtained. Then, during the rendering of each frame, the corresponding effects are applied in rendering order to the graphics to be rendered in the canvas object. For example, suppose the traversal finds that canvas object A's own effects include illumination, fly-in, and fly-out; the effect of the two-dimensional game rendering engine includes rotation; and the effect of the three-dimensional game rendering engine includes collision. The rendering order of the effects is fly-in, rotation, collision, illumination, fly-out.
Depending on the content to be rendered, the plurality of rendering commands may be serially saved to the buffer in a set order at the corresponding stage of the rendering process. For example, the rendering commands corresponding to the effects of canvas object A may be serially saved in the buffer in the rendering order described above. When the buffer is executed, the rendering commands are executed in the order in which they were saved; for example, the fly-in, rotation, collision, illumination, and fly-out effects are applied in turn to the graphics in canvas object A.
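The serial save-and-execute behavior described above can be sketched as follows. All names here (RenderCommand, RenderCommandBuffer) are illustrative stand-ins, not the patent's classes.

```swift
// Sketch of a command buffer that executes its stored commands in the order
// they were saved, e.g. fly-in, rotation, collision, illumination, fly-out.
struct RenderCommand {
    let effectName: String
    let encode: () -> Void   // would encode work into the real Metal buffer
}

final class RenderCommandBuffer {
    private var commands: [RenderCommand] = []

    // Commands from the different engines are appended serially, in the
    // set rendering order determined by traversing the effect lists.
    func append(_ command: RenderCommand) {
        commands.append(command)
    }

    // Executing the buffer applies each effect in the order it was stored.
    func execute() {
        commands.forEach { $0.encode() }
    }
}
```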
As shown in FIG. 2, the rendering of a canvas object may include multiple stages: event casting, numerical computation, ready-to-render, graphics rendering, and exchanging buffers.
Generally, each effect has corresponding effect parameters determined by its own characteristics. After a canvas object triggers a rendering event, the rendering process may begin. In the rendering process, if a computation event is triggered, the computation stage is entered and the effect parameters corresponding to each effect are calculated. For example, for a flame effect, the number of flames may be calculated, along with parameters such as the size, height, color, and brightness of each flame at each moment. As another example, some static effects do not change from frame to frame, and their calculated effect parameters may be empty.
Some static effects need no redrawing, while most dynamic effects do. After the effect parameters corresponding to an effect are calculated, whether the effect needs redrawing can be judged from those parameters: if an effect's parameters vary over time, the effect may need to be redrawn. For example, the flame effect is redrawn if the flames' size, height, color, and brightness vary at different moments.
For effects requiring redrawing, a ready-to-render event may be triggered to enter the ready-to-render stage and process GPU resources. A graphics rendering event is then triggered to enter the graphics rendering stage and generate a rendering context. The rendering context is passed to the rendering object, which calls the rendering commands of a rendering application programming interface such as Metal to complete the rendering of the graphics in the canvas object.
In one embodiment, as shown in fig. 3, the method further comprises:
step S14, if the rendering requirement information further includes resources that need to call a shader library, serially storing the rendering commands of the plurality of rendering engines and the rendering commands of the shader library in the rendering command buffer in a set order.
In one example, the shader library can include MPS. MPS (Metal Performance Shaders) is a set of graphics processor (GPU, Graphics Processing Unit) shader libraries. The set includes conventional filter effects such as Gaussian blur, as well as computer-vision-related functions such as image color histograms and edge detection. The Gaussian blur, histogram, and other functions of MPS can be encapsulated in the current rendering engine. Once the resource-interworking interfaces between the current rendering engine and MPS are encapsulated, the output of MPS can be used seamlessly by other rendering processes in the current rendering engine.
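As a sketch of calling one wrapped MPS function: the real MPSImageGaussianBlur kernel encodes into the same kind of Metal command buffer used by the rendering engines, so its output texture can feed later passes. The sigma value is an arbitrary example.

```swift
import Metal
import MetalPerformanceShaders

// Sketch: encode an MPS Gaussian blur into a shared command buffer, so the
// shader-library command is serialized alongside the engines' commands.
func encodeGaussianBlur(device: MTLDevice,
                        commandBuffer: MTLCommandBuffer,
                        source: MTLTexture,
                        destination: MTLTexture,
                        sigma: Float = 4.0) {
    guard MPSSupportsMTLDevice(device) else { return }
    let blur = MPSImageGaussianBlur(device: device, sigma: sigma)
    blur.encode(commandBuffer: commandBuffer,
                sourceTexture: source,
                destinationTexture: destination)
}
```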
In one embodiment, in step S14, the rendering commands of the plurality of rendering engines and the rendering commands of the shader library are stored in series in the rendering command buffer in a set order, including one or more of the following ways:
mode three: at a ready-to-render stage, at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine, and a shader library are stored in series in a set order in a first rendering command buffer.
Mode four: at a graphics rendering stage, at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine and a shader library are stored in series in a second rendering command buffer in a set order.
In one embodiment, the effects of the canvas object that need to be rendered may also include effects of the shader library. In this case, the rendering commands corresponding to those effects may be called from the shader library.
In one embodiment, if the effects of the canvas object that need to be rendered include both effects of the rendering engines and effects of the shader library, the rendering commands of the rendering engines and of the shader library are stored in the buffer, and when the buffer is executed, the rendering commands may be executed in the order in which they were saved.
In one embodiment, as shown in fig. 4, the method further comprises:
step S15, if the rendering requirement information further includes resources that need to call a three-dimensional model tool library, calling the required coordinate information from the three-dimensional model tool library.
In one example, the three-dimensional model tool library may be Model I/O. Model I/O has built-in interfaces for generating three-dimensional objects such as cubes, cylinders, and spheres, and these objects may be drawn directly in an instance of the current rendering engine. Beyond these, Model I/O also supports advanced model transformation operations such as rectangular subdivision and tessellation. The shader code of the current rendering engine may define a set of coordinate standards; the corresponding coordinate standard is also configured in Model I/O, so the shader program of the current rendering engine can accurately identify the vertex, texture, normal, and other coordinate data output by Model I/O. Thus, the various types of three-dimensional graphics output by Model I/O can be used correctly by the current rendering engine.
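A minimal sketch of pulling built-in geometry from Model I/O might look like this: MDLMesh generates the box, and MTKMesh converts its vertex, normal, and texture-coordinate data into Metal buffers the engine's shaders can consume (assuming the vertex layouts are matched, as the text notes). The extent and segment counts are arbitrary example values.

```swift
import Metal
import MetalKit
import ModelIO

// Sketch: obtain a built-in cube from Model I/O and convert it into Metal
// vertex/index buffers for use by the current rendering engine.
func makeCubeMesh(device: MTLDevice) throws -> MTKMesh {
    let allocator = MTKMeshBufferAllocator(device: device)
    let mdlMesh = MDLMesh(boxWithExtent: SIMD3<Float>(1, 1, 1),
                          segments: SIMD3<UInt32>(1, 1, 1),
                          inwardNormals: false,
                          geometryType: .triangles,
                          allocator: allocator)
    return try MTKMesh(mesh: mdlMesh, device: device)
}
```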
In one embodiment, some resources and data may be pre-cached to increase rendering speed. The method further includes: pre-caching, in the GPU, the time-consuming resources that require GPU compilation, where the time-consuming resources include at least one of the rendering pipelines of canvas objects, the key data of built-in graphics, and preset maps, and the key data of built-in graphics includes at least one of the vertex data and the index data of the built-in graphics. When caching certain rendering pipelines of canvas objects, MPS may be called if such a rendering pipeline, e.g., a Gaussian-blur pipeline, exists in MPS; if it does not, the developer can design one. When caching the key data of built-in graphics, if the graphics exist in Model I/O, the coordinate data of the three-dimensional graphics in Model I/O can be called; for graphics not available in Model I/O, the developer can design them.
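The pipeline pre-caching idea can be sketched as follows. The cache class and its string keying are assumptions, not the patent's cache design; the point is that GPU pipeline compilation (MTLDevice.makeRenderPipelineState) is the time-consuming step, so it is done once and reused.

```swift
import Metal

// Sketch: compile render pipeline states once and reuse them across frames.
final class PipelineCache {
    private var pipelines: [String: MTLRenderPipelineState] = [:]
    private let device: MTLDevice

    init(device: MTLDevice) { self.device = device }

    func pipeline(named name: String,
                  descriptor: MTLRenderPipelineDescriptor) throws -> MTLRenderPipelineState {
        if let cached = pipelines[name] { return cached }
        // GPU compilation is the expensive part; pay for it only on a miss.
        let state = try device.makeRenderPipelineState(descriptor: descriptor)
        pipelines[name] = state
        return state
    }
}
```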
In one embodiment, as shown in fig. 4, the method further comprises:
step S16, if the rendering requirement information further includes resources that need to call an augmented reality engine, calculating, in a computation stage, the effect parameters corresponding to the augmented reality effect.
In one example, the augmented reality engine may be ARKit. ARKit internally includes recognition, inertial navigation, and mapping systems, and can assist the engine in achieving AR effect rendering. In the computation stage, parameters such as the coordinates that need to be tracked for the augmented reality effect can be calculated. A three-dimensional coordinate system conforming to physical rules can be built into the current rendering engine, including the three-dimensional transformation of objects when the camera moves. The coordinate system of ARKit's object-tracking output also conforms to physical rules, so the coordinates of an object tracked by ARKit can be output directly to the current rendering engine, achieving a physically plausible AR effect.
The following is an example of a specific rendering process of an AR effect.
In the preparation stage, vertex data for a cube is obtained from Model I/O. The current rendering engine itself loads a picture to paste onto the six faces of the cube. The canvas object creates objects of several 3D engines as desired and then associates several anchors with them. An anchor is an object tracked by ARKit. After an anchor is added, if the terminal device, such as a mobile phone, moves, ARKit modifies the anchor's coordinates in real time. For example, if the phone is pulled back along the z-axis, the z-coordinate of the anchor increases. When rendering, the AR effect can be realized simply by reading out the current anchor coordinates and assigning them to the cubes to be drawn.
In the numerical computation stage, the current rendering engine acquires the coordinates of the anchors from ARKit and updates the coordinates of the cubes in the canvas object.
In the ready-to-render phase, the 2D engine takes the camera's picture from ARKit, makes it into a texture uploaded to the GPU, and adds the command to draw the background of the canvas object to the command buffer.
In the rendering phase, the coordinates of the cube were already updated in the numerical calculation stage, so the 3D engine can directly add the commands for rendering the cube to the command buffer. At this stage, the cube may also be drawn using the current rendering engine's own standard graphics, with the commands to draw the cube added to the command buffer.
Finally, when the command buffer is executed, the background is drawn first and then the cubes, following the order of the rendering commands in the buffer. This achieves the AR effect.
In embodiments of the present invention, the current rendering engine can unify multiple rendering engines and tool libraries. The current rendering engine can cache resources and update coordinates, and may also provide a common canvas object so that the various rendering engines and tool libraries can draw together in that canvas object, producing richer effects.
Fig. 5 shows a block diagram of a resource calling device in a rendering process according to an embodiment of the present invention. As shown in fig. 5, the resource calling device in the rendering process may include:
an obtaining module 51, configured to obtain rendering requirement information of a canvas object in a rendering process;
a first storage module 52, configured to store rendering commands of a plurality of rendering engines in a rendering command buffer according to a set order if the rendering requirement information includes resources that need to call the plurality of rendering engines;
and an execution module 53, configured to execute each rendering command in the rendering command buffer in the set order in the canvas object, so as to call the resources of the plurality of rendering engines in the canvas object.
In one embodiment, the first storage module 52 is further configured to:
at a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine and a three-dimensional game rendering engine in a first rendering command buffer in series according to a set sequence; and/or
At least two rendering commands of the current rendering engine, the two-dimensional game rendering engine and the three-dimensional game rendering engine are stored in series in a second rendering command buffer in a set order in a graphics rendering stage.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
and a second storage module 61, configured to store the rendering commands of the plurality of rendering engines and the rendering commands of the shader library in series in the rendering command buffer according to a set order if the rendering requirement information further includes a resource that needs to call the shader library.
In one embodiment, the second storage module 61 is further configured to:
in a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine and a shader library in series in a first rendering command buffer according to a set sequence; and/or
At a graphics rendering stage, at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine and a shader library are stored in series in a second rendering command buffer in a set order.
In one embodiment, the apparatus further comprises:
and the three-dimensional model calling module 62 is used for calling the needed coordinate information from the three-dimensional model tool library if the rendering requirement information also comprises resources needing to call the three-dimensional model tool library.
In one embodiment, the apparatus further comprises:
and the augmented reality calling module 63 is configured to calculate, in a calculation stage, an effect parameter corresponding to the augmented reality effect if the rendering requirement information further includes a resource for calling the augmented reality engine.
The functions of each module in each device of the embodiments of the present invention may be referred to the corresponding descriptions in the above methods, and are not described herein again.
An embodiment of the invention provides a rendering engine, which comprises the resource calling device of any of the above embodiments.
In one example application, a graphics rendering engine is developed based on a rendering application programming interface such as Metal. The rendering engine may perform the resource calling method of any of the above embodiments. Metal is a low-level rendering application programming interface: it provides the lowest abstraction level required by software and ensures that the software can run on different graphics chips. The rendering engine can be applied to iOS devices and is lightweight, easy to integrate, high-performance, and supports multiple instances. In addition, the rendering engine can render graphical effects such as three-dimensional scenes and illumination across multiple instances in a Feed stream.
In one example, the graphics rendering engine implementation mainly comprises:
1. The Metal infrastructure is managed with a singleton core controller (e.g., a singleton VGMetalCore), which also manages buffer objects (e.g., VGMetalCache objects). Rendering events are driven by a system screen-refresh notification class (e.g., CADisplayLink). For example, a CADisplayLink issuing one rendering drive event per frame causes the canvas objects in the rendering engine to draw at the same frequency as the display's screen refresh.
2. A kernel-function core controller (e.g., VGMetalKernelCore) and an augmented reality core controller (e.g., VGARCore) serve as portals to the system's high-performance shader function tools (e.g., MetalPerformanceShaders) and augmented reality tools (e.g., ARKit).
3. VGMetalCore controls each canvas object, named VGMetalCanvas, to trigger rendering events in turn.
4. The structure inside a canvas object can be seen in FIG. 2. As shown in FIG. 2, in one example, three canvas objects (abbreviated as canvases in FIG. 2) trigger rendering events serially within one rendering period. One rendering period runs from the moment the system screen-refresh notification class sends out a rendering drive event to the end of rendering for all running canvas objects. Each canvas object goes through multiple stages in a rendering process: event dispatch, numerical calculation, preparing to render, graphics rendering, and exchanging buffers. In the event dispatch stage, the canvas object listens for the rendering drive notification thrown by the singleton core controller; the notification may comprise several character strings indicating that the system screen-refresh notification class has issued a rendering drive event. In the numerical calculation stage, the effect parameters for each effect of the canvas object may be calculated. In the preparing-to-render stage, GPU resources may be processed. In the graphics rendering stage, rendering commands may be invoked to finish rendering the effects of the graphics to be drawn within the canvas object. In the buffer exchange stage, the canvas object's currently used buffer may be exchanged with an unused buffer in preparation for displaying the rendering result on the screen. For example, after exchanging the currently used buffer H1 with the unused buffer H2, the next frame may be rendered into H2; after that rendering completes, H2 and H1 are exchanged again, so the rendering effect is more continuous.
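The staged, serial flow described above can be sketched as follows. The stage names mirror the description; the bodies are stand-ins, and `Canvas` is a hypothetical class, not part of the engine.

```python
# Illustrative sketch of one canvas object's rendering period: each canvas
# runs its five stages in order, and the canvases run one after another.

class Canvas:
    STAGES = ["event", "compute", "prepare", "render", "swap_buffers"]

    def __init__(self, name):
        self.name = name
        self.log = []

    def run_period(self):
        # A canvas completes all its stages before the next canvas starts.
        for stage in self.STAGES:
            self.log.append(f"{self.name}:{stage}")

# Three canvases trigger their rendering events serially in one period.
canvases = [Canvas(f"canvas{i}") for i in (1, 2, 3)]
timeline = []
for c in canvases:
    c.run_period()
    timeline.extend(c.log)

assert timeline[0] == "canvas1:event"
assert timeline[5] == "canvas2:event"  # canvas2 starts only after canvas1 finishes
```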
Referring to the example of FIG. 2, each of the three canvas objects has its own two buffers: the first has H1-1 and H2-1, the second H1-2 and H2-2, and the third H1-3 and H2-3. Assume the screen also has two buffers, C1 and C2: one is hidden in the background while the other is displayed in the foreground. Within one rendering period, at one frame, the rendering results of the three canvas objects are in buffers H1-1, H1-2, and H1-3, respectively. The rendering results of these three buffers may all be included in the screen's buffer C1. At this point, C1 may be displayed in the foreground while C2 is hidden. At the next frame, C2 may include the rendering results of H2-1, H2-2, and H2-3, and C1 is then swapped with C2 to display the next frame on the screen. By exchanging the different buffers, rendering results are displayed continuously on the screen.
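The double-buffering scheme above reduces to a simple invariant: render into the hidden buffer, then swap so the fresh frame is shown while the other buffer becomes the next render target. A minimal sketch (hypothetical `DoubleBuffer` class, buffer names following the H1/H2 convention above):

```python
# Minimal double-buffering sketch: the front buffer is displayed, the back
# buffer receives the next frame, and the two roles alternate every frame.

class DoubleBuffer:
    def __init__(self):
        self.front = "H1"  # currently displayed
        self.back = "H2"   # current render target

    def render_frame(self, content):
        rendered = (self.back, content)            # draw into the hidden buffer
        self.front, self.back = self.back, self.front  # swap roles
        return rendered

db = DoubleBuffer()
assert db.render_frame("frame 1") == ("H2", "frame 1")
assert db.front == "H2" and db.back == "H1"
assert db.render_frame("frame 2") == ("H1", "frame 2")
```

Because each frame lands in whichever buffer is hidden at the time, consecutive frames alternate between H1 and H2, matching the H1/H2 exchange described in the text.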
In this example, after the rendering process of the first canvas object ends, the rendering process of the second canvas object begins; after the second ends, the third begins. FIG. 2 shows only the stages involved in rendering the first canvas object; the rendering of the second and third canvas objects, although not shown, is similar.
In one example, FIG. 7 shows a schematic diagram of the internal structure of a Canvas object (Canvas). Assuming the Canvas object is named VanGogh Canvas, it may comprise system classes, for example: a system layer (CAMetalLayer), a system drawable (CAMetalDrawable), and a Color Texture. The CAMetalLayer displays the content rendered by Metal in the layer.
The canvas object may also include a Depth Texture, a pipeline descriptor (MTLRenderPassDescriptor), and an Effect List. The effect list may contain multiple effects (Effect). Each effect may include, for example: a light source descriptor (Multi Light Descriptor), a Camera, a Draw List, and so on. The camera may include a perspective descriptor (Perspective Descriptor), an eye transform descriptor (Eye Transform Descriptor), other descriptors (Other Descriptor), and so on. The draw list includes multiple drawing objects (Draw). Each drawing object may include the resources required to draw one stroke of the effect, for example: a material descriptor (Material Descriptor), Vertex Content, Fragment Content, a pipeline state (Metal Pipeline State), a depth stencil state (Metal Depth Stencil State), a vertex uniform buffer (Vertex Uniform Buffer), and a fragment uniform buffer (Fragment Uniform Buffer). The vertex content may include a Vertex Buffer, an Index Buffer, and other vertex descriptors (Other Vertex Descriptor). The fragment content may include textures (Texture) such as RGB, Y-map, and UV-map, as well as other fragment descriptors (Other Fragment Descriptor). Among these, the vertex uniform buffer, fragment uniform buffer, vertex buffer, index buffer, and textures such as RGB, Y-map, and UV-map may reside in the GPU.
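The containment hierarchy just described — canvas → effect list → draw list → per-stroke resources — can be modeled roughly as nested records. These dataclasses are an illustrative reconstruction, not the engine's actual classes:

```python
# Rough model of FIG. 7's nesting: a canvas holds an effect list; each
# effect holds a camera and a draw list; each draw object holds per-stroke
# resources such as vertex content and fragment content.

from dataclasses import dataclass, field

@dataclass
class Draw:
    material_descriptor: dict = field(default_factory=dict)
    vertex_content: dict = field(default_factory=dict)    # vertex/index buffers
    fragment_content: dict = field(default_factory=dict)  # textures, descriptors

@dataclass
class Effect:
    light_descriptor: dict = field(default_factory=dict)
    camera: dict = field(default_factory=dict)
    draw_list: list = field(default_factory=list)

@dataclass
class Canvas:
    effect_list: list = field(default_factory=list)

canvas = Canvas(effect_list=[
    Effect(camera={"perspective": "default"},
           draw_list=[Draw(vertex_content={"vertex_buffer": [], "index_buffer": []})])
])
assert len(canvas.effect_list) == 1
assert len(canvas.effect_list[0].draw_list) == 1
```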
As shown in fig. 8, the main rendering flow of this application example may include:
In step S81, a system screen-refresh notification class, for example CADisplayLink, triggers a rendering drive event; CADisplayLink may trigger one rendering drive event per frame. The core controller, e.g., VGMetalCore, throws a rendering drive notification upon receipt of the event. One or more canvas objects may listen for the notification. The canvas object is the host of the rendered graphics. If multiple canvas objects exist in an application (App) at the same time, they trigger rendering events serially, in order, after receiving the notification.
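Step S81 is essentially a fan-out: one per-frame tick reaches a single controller, which notifies every registered canvas in order. A sketch under those assumptions (the class names are illustrative stand-ins for CADisplayLink/VGMetalCore, not real APIs):

```python
# Sketch of the drive mechanism: a per-frame tick reaches a singleton core
# controller, which notifies every registered canvas; the canvases then
# render serially, in registration order.

class CoreController:
    def __init__(self):
        self.canvases = []

    def register(self, canvas):
        self.canvases.append(canvas)

    def on_display_tick(self):
        # Stand-in for receiving the CADisplayLink rendering drive event.
        rendered = []
        for canvas in self.canvases:  # serial, in order
            rendered.append(canvas.render())
        return rendered

class Canvas:
    def __init__(self, name):
        self.name = name

    def render(self):
        return self.name

core = CoreController()
for n in ("a", "b", "c"):
    core.register(Canvas(n))
assert core.on_display_tick() == ["a", "b", "c"]
```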
In step S82, a canvas object such as VGMetalCanvas (VGMetalCanvas may be the class name of VanGogh Canvas in code) holds the effect object, such as VGEffect, that should currently be rendered. VGEffect may be a class cluster, with different effects implemented by different subclasses. The effect object to be rendered may comprise a single effect, or an effect list composed of multiple effects to be rendered; the canvas object supports drawing multiple effects together. When the canvas object receives the notification, it traverses the effect objects and triggers their "calculate" events.
If 2D game effects, 3D game effects, or AR effects are to be rendered in the canvas object, these effects can be found during the traversal.
For example, if an AR effect is present, after the "calculate" event triggers entry into the calculation stage, the coordinates of the object tracked by ARKit may be output to the current rendering engine.
In step S83, after receiving the "calculate" event, the effect object performs different numerical calculations according to its class within the class cluster. These calculations may be performed by a central processing unit (CPU). The effect parameters calculated by different effect objects may differ; for example, one effect object's parameters may rotate it by a certain angle, while another's may move it a certain distance, depending on the specific characteristics of the effect object.
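The class-cluster dispatch in step S83 can be sketched with simple subclassing: each effect subclass answers the "calculate" event with its own parameters. The subclasses and their formulas are invented for illustration:

```python
# Sketch of the class-cluster idea: each effect subclass performs its own
# numerical calculation when it receives the "calculate" event.

class Effect:
    def calculate(self, t):
        raise NotImplementedError  # concrete subclasses implement this

class RotateEffect(Effect):
    def calculate(self, t):
        # Hypothetical rule: 30 degrees per time step, wrapped to [0, 360).
        return {"angle": (30.0 * t) % 360.0}

class MoveEffect(Effect):
    def calculate(self, t):
        # Hypothetical rule: move 0.1 units per time step.
        return {"offset": 0.1 * t}

effects = [RotateEffect(), MoveEffect()]
params = [e.calculate(2) for e in effects]
assert params[0] == {"angle": 60.0}
assert params[1] == {"offset": 0.2}
```

Callers only see the `Effect` interface; which calculation runs depends on the concrete subclass, which is the point of the class cluster.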
In step S84, after the calculation is completed, the canvas object may determine from the calculation result whether the effect object needs redrawing. For example, if the calculated effect parameters have not changed, the effect may be static, and some static effects need not be redrawn. For effects that do not need redrawing, rendering commands can be omitted, reducing unnecessary redrawing and saving performance and battery.
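The redraw decision in step S84 amounts to comparing the newly calculated parameters with the previous frame's. A minimal sketch (hypothetical `EffectSlot` helper):

```python
# Sketch of the redraw check: if the newly calculated effect parameters
# equal the previous ones, skip issuing rendering commands for this effect.

class EffectSlot:
    def __init__(self):
        self.last_params = None

    def needs_redraw(self, params):
        changed = params != self.last_params
        self.last_params = params
        return changed

slot = EffectSlot()
assert slot.needs_redraw({"angle": 10}) is True    # first frame: draw
assert slot.needs_redraw({"angle": 10}) is False   # unchanged: skip redraw
assert slot.needs_redraw({"angle": 20}) is True    # changed: draw again
```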
In step S85, for an effect object that needs redrawing, the canvas object further triggers a "ready to render" event. The event may be used to process GPU resources, such as generating GPU buffers or generating texture resources. One example of generating texture resources: in preparation for rendering, the shadow depth map required when the scene renders shadows is generated.
In step S86, after the effect object processes the "ready to render" event, the canvas object triggers a "graphics rendering" event. The effect object prepares for rendering: it calculates the transformation matrix and the light source descriptor, finally generates a rendering context structure, and passes that structure to the multiple rendering objects it holds for rendering. A rendering object may be, for example, a drawing object (Draw) in FIG. 7. The rendering object may also be a class cluster, and different subclasses may have different implementations.
The resources of MPS (MetalPerformanceShaders) may be utilized in the current rendering engine. For example, after entering the ready-to-render stage, MPS may be invoked to generate a Gaussian blur result; in the graphics rendering stage, the Gaussian blur result is then used as a texture map.
In step S87, after receiving the rendering context, the rendering object accesses the two objects it holds internally: the vertex content (VertexContent) and the fragment content (FragmentContent). The two objects are updated into a rendering buffer shared with the GPU according to the rendering context; the FragmentContent also uploads textures from the CPU to the GPU at this time, and the vertex content invokes Metal's rendering commands to perform the final graphics rendering.
After entering the ready-to-render or graphics rendering stage, if 2D game effects, 3D game effects, etc. need to be rendered in the canvas object, the rendering commands for these effects may also be saved, in a certain order, into a buffer ready for rendering. For example, the rendering commands of the current rendering engine, the 2D game engine SpriteKit, and the 3D game engine SceneKit are connected in series through the underlying Metal rendering command buffer. The rendering commands of SpriteKit and SceneKit and the other rendering commands of the current rendering engine are submitted to the same rendering command buffer; when the buffer is executed, all of its contents are rendered into the same canvas, realizing the joint use of the engines.
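The shared-buffer idea can be sketched in a few lines. This models only the ordering semantics — the real mechanism is a Metal command buffer, and the engine names here are just labels on commands:

```python
# Sketch of serializing commands from several engines into one shared
# command buffer, then executing them in submission order so everything
# lands in the same canvas.

class CommandBuffer:
    def __init__(self):
        self.commands = []

    def add(self, engine, command):
        self.commands.append((engine, command))

    def execute(self):
        # Commands run strictly in the order they were submitted.
        return [f"{engine}:{command}" for engine, command in self.commands]

buf = CommandBuffer()
buf.add("current", "draw background")
buf.add("SpriteKit", "draw 2D sprite")
buf.add("SceneKit", "draw 3D node")
assert buf.execute() == [
    "current:draw background",
    "SpriteKit:draw 2D sprite",
    "SceneKit:draw 3D node",
]
```

Because all three engines append to the same buffer, the set order of submission fully determines the layering of the final image.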
In addition, in the graphics rendering stage, if graphics of three-dimensional objects such as cubes, cylinders, and spheres need to be drawn, coordinate data such as vertices, textures, and normals can be called from a three-dimensional model tool library such as Model I/O. Of course, if the coordinate data is already cached in the GPU, for example in a vertex uniform buffer (Vertex Uniform Buffer), a fragment uniform buffer (Fragment Uniform Buffer), a vertex buffer (Vertex Buffer), or an index buffer (Index Buffer), the cached coordinate data may be used directly to draw the three-dimensional object.
In this application example, the rendering application programming interface of the rendering engine uses Metal technology instead of the conventional OpenGL ES technology, and has the following features.
a) It is better suited to modern multi-core GPUs, so the rendering engine can achieve higher performance and a more stable frame rate.
b) By adopting a C/S model, communication with the GPU is easier to manage and the structure of the rendering engine is clearer.
c) The rendering engine has good stability and robustness, with fewer crashes (Crash) online. On one hand, application programming interface (API) checks help developers find problems during debugging. On the other hand, runtime protection prevents the App from crashing outright when the GPU hangs or similar problems occur, reducing risk.
d) The shader language MSL is an extension based on C++14, so the rendering engine's shader code is more modern and performs better.
e) A pre-compilation mechanism generates a syntax tree at compile time, so the rendering engine's shader code loads faster at runtime.
Further, GPU resources are fully cached, which speeds up rendering. In one case, because Feed items frequently slide in and out of the screen, resource loading time is critical; even slightly longer loading can stall the Feed stream and seriously affect the experience. The rendering engine caches time-consuming resources that require GPU compilation, such as rendering pipelines, so pipeline switching and loading can be completed in a very short time. In another case, the rendering engine provides built-in shapes such as rectangles and cubes. Since the vertex and index data of the built-in shapes never change, a built-in shape can be cached when it is first accessed, so that subsequent loading of its vertices, index data, etc. completes in a very short time.
With this rendering engine, advanced creative styles based on graphics rendering can be developed quickly. The eye-catching effect and premium feel of these styles can be favored by Guaranteed Delivery (GD) advertisers. In addition, the rendering engine is lightweight, powerful, easy to port, and depends on no third-party libraries, so it can be quickly ported to other product lines.
By storing the rendering commands of the plurality of rendering engines in the rendering command buffer in the set order, the resources of the multiple rendering engines can be rendered into the same canvas object, realizing the joint use of multiple rendering engines, maintaining compatibility with rendering engines of various architectures, providing richer rendering effects, and enabling higher-level creativity.
Fig. 9 is a block diagram showing the structure of a resource calling device in a rendering process according to an embodiment of the present invention. As shown in fig. 9, the apparatus includes: memory 910 and processor 920, memory 910 stores a computer program executable on processor 920. The processor 920 implements the resource calling method in the rendering process in the above embodiment when executing the computer program. The number of the memories 910 and the processors 920 may be one or more.
The apparatus further comprises:
and the communication interface 930 is used for communicating with external devices and performing interactive data transmission.
The memory 910 may include high-speed RAM memory or may further include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 910, the processor 920, and the communication interface 930 are implemented independently, they may be connected to each other and communicate with each other through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on a chip, the memory 910, the processor 920, and the communication interface 930 may communicate with each other through internal interfaces.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method as in any of the above embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for invoking resources in a rendering process, comprising:
acquiring rendering requirement information of canvas objects in a rendering process;
if the rendering requirement information comprises resources which need to call a plurality of rendering engines, the rendering commands of the rendering engines are stored in a rendering command buffer zone according to a set sequence; wherein the rendering layers of the plurality of rendering engines have a common rendering command buffer;
and executing each rendering command in the rendering command buffer according to the set sequence in the canvas object so as to call the resources of a plurality of rendering engines in the canvas object.
2. The method of claim 1, wherein storing the rendering commands of the plurality of rendering engines in the rendering command buffer in a set order comprises:
At a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine and a three-dimensional game rendering engine in a first rendering command buffer in series according to a set sequence; and/or
At least two rendering commands of the current rendering engine, the two-dimensional game rendering engine and the three-dimensional game rendering engine are stored in series in a second rendering command buffer according to a set order in a graphics rendering stage.
3. The method as recited in claim 1, further comprising:
and if the rendering requirement information also comprises resources which need to call a shader library, serially storing rendering commands of a plurality of rendering engines and the rendering commands of the shader library in the rendering command buffer according to a set sequence.
4. A method according to claim 3, wherein storing rendering commands of a plurality of rendering engines and rendering commands of the shader library in series in a set order in the rendering command buffer comprises:
in a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine and a shader library in series in a first rendering command buffer according to a set sequence; and/or
At the graphics rendering stage, at least two rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine and the shader library are stored in series in a second rendering command buffer according to a set order.
5. The method according to any one of claims 1 to 4, further comprising:
and if the rendering requirement information also comprises resources needing to call a three-dimensional model tool library, calling the required coordinate information from the three-dimensional model tool library.
6. The method according to any one of claims 1 to 4, further comprising:
and if the rendering requirement information also comprises resources required to call the augmented reality engine, calculating an effect parameter corresponding to the augmented reality effect in a calculation stage.
7. A resource calling device in a rendering process, comprising:
the acquisition module is used for acquiring rendering requirement information of the canvas object in the rendering process;
the first storage module is used for storing rendering commands of the plurality of rendering engines in a rendering command buffer according to a set sequence if the rendering demand information comprises resources which need to call the plurality of rendering engines; wherein the rendering layers of the plurality of rendering engines have a common rendering command buffer;
And the execution module is used for executing each rendering command in the rendering command buffer area according to the set sequence in the canvas object so as to call the resources of a plurality of rendering engines in the canvas object.
8. The apparatus of claim 7, wherein the first storage module is further configured to:
in a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine and a three-dimensional game rendering engine in series in a first rendering command buffer according to a set order; and/or
in a graphics rendering stage, storing at least two rendering commands of the current rendering engine, the two-dimensional game rendering engine and the three-dimensional game rendering engine in series in a second rendering command buffer according to a set order.
9. The apparatus as recited in claim 7, further comprising:
a second storage module, configured to, if the rendering requirement information further comprises resources that require calling a shader library, store the rendering commands of the plurality of rendering engines and rendering commands of the shader library in series in the rendering command buffer according to a set order.
10. The apparatus of claim 9, wherein the second storage module is further configured to:
in a preparation rendering stage, storing at least two rendering commands of a current rendering engine, a two-dimensional game rendering engine, a three-dimensional game rendering engine and a shader library in series in a first rendering command buffer according to a set order; and/or
in a graphics rendering stage, storing at least two rendering commands of the current rendering engine, the two-dimensional game rendering engine, the three-dimensional game rendering engine and the shader library in series in a second rendering command buffer according to a set order.
11. The apparatus according to any one of claims 7 to 10, further comprising:
a three-dimensional model calling module, configured to, if the rendering requirement information further comprises a resource that requires calling a three-dimensional model tool library, call the required coordinate information from the three-dimensional model tool library.
12. The apparatus according to any one of claims 7 to 10, further comprising:
an augmented reality calling module, configured to, if the rendering requirement information further comprises a resource that requires calling an augmented reality engine, calculate an effect parameter corresponding to an augmented reality effect in a calculation stage.
13. A resource calling device in a rendering process, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
14. A rendering engine, comprising: a resource calling device in a rendering process according to any one of claims 7 to 13.
15. A computer readable storage medium storing a computer program, which when executed by a processor implements the method of any one of claims 1 to 6.
CN201910004657.8A 2019-01-03 2019-01-03 Resource calling method and device in rendering process and rendering engine Active CN111400024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910004657.8A CN111400024B (en) 2019-01-03 2019-01-03 Resource calling method and device in rendering process and rendering engine


Publications (2)

Publication Number Publication Date
CN111400024A CN111400024A (en) 2020-07-10
CN111400024B true CN111400024B (en) 2023-10-10

Family

ID=71435834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910004657.8A Active CN111400024B (en) 2019-01-03 2019-01-03 Resource calling method and device in rendering process and rendering engine

Country Status (1)

Country Link
CN (1) CN111400024B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797192B * 2020-07-27 2023-09-01 Ping An Technology (Shenzhen) Co., Ltd. GIS point data rendering method and device, computer equipment and storage medium
CN112052097A * 2020-10-15 2020-12-08 Tencent Technology (Shenzhen) Co., Ltd. Rendering resource processing method, device and equipment for virtual scene and storage medium
CN112652025B * 2020-12-18 2022-03-22 Perfect World (Beijing) Software Technology Development Co., Ltd. Image rendering method and device, computer equipment and readable storage medium
CN114756359A * 2020-12-29 2022-07-15 Huawei Technologies Co., Ltd. Image processing method and electronic equipment
CN112445624B * 2021-02-01 2021-04-23 Jiangsu Beigong Intelligent Technology Co., Ltd. Task-oriented GPU resource optimal configuration method and device
CN114247138B * 2022-02-25 2022-05-13 Tencent Technology (Shenzhen) Co., Ltd. Image rendering method, device and equipment and storage medium
CN114821002A * 2022-04-12 2022-07-29 Alipay (Hangzhou) Information Technology Co., Ltd. AR-based interaction method and device and electronic equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101080698A * 2004-12-20 2007-11-28 Nvidia Corp. Real-time display post-processing using programmable hardware
CN101789030A * 2010-02-03 2010-07-28 Nanjing Normal University Virtual geographical environment (VGE) symbolic model and map symbol sharing system and method based on same
CN102246146A * 2008-11-07 2011-11-16 Google Inc. Hardware-accelerated graphics for web applications using native code modules
CN103713891A * 2012-10-09 2014-04-09 Alibaba Group Holding Ltd. Method and device for graphic rendering on mobile device
CN103942823A * 2014-02-27 2014-07-23 UCWeb Inc. Game engine rendering method and device
CN105096373A * 2015-06-30 2015-11-25 Huawei Technologies Co., Ltd. Media content rendering method, user device and rendering system
CN105487929A * 2015-11-19 2016-04-13 Shandong University Method for managing shared data of lens in cluster rendering process
TW201719571A * 2015-09-25 2017-06-01 Intel Corp. Position only shader context submission through a render command streamer
CN108305316A * 2018-03-08 2018-07-20 NetEase (Hangzhou) Network Co., Ltd. Rendering method, apparatus and medium based on AR scene, and computing device
CN108352051A * 2015-11-13 2018-07-31 Intel Corp. Facilitating efficient graphics command processing of bundled state at a computing device
CN108780438A * 2016-01-05 2018-11-09 QuirkLogic Inc. Method for exchanging visual elements and populating individual associated displays with interactive content

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2638453C (en) * 2006-03-14 2010-11-09 Transgaming Technologies Inc. General purpose software parallel task engine


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Aleksandar M. Dimitrijević et al. "Ellipsoidal Clipmaps – A planet-sized terrain rendering algorithm." Computers & Graphics, 2015, vol. 52, pp. 43-61. *
Xu Zehua et al. "Design and Implementation of a Map Rendering Engine Based on OpenGL." Geomatics World, 2015, vol. 22, no. 6, pp. 32-36, 50. *

Also Published As

Publication number Publication date
CN111400024A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111400024B (en) Resource calling method and device in rendering process and rendering engine
US11769294B2 (en) Patched shading in graphics processing
US9978115B2 (en) Sprite graphics rendering system
US9799088B2 (en) Render target command reordering in graphics processing
CN112381918A (en) Image rendering method and device, computer equipment and storage medium
US20130127858A1 (en) Interception of Graphics API Calls for Optimization of Rendering
US11908039B2 (en) Graphics rendering method and apparatus, and computer-readable storage medium
US11094036B2 (en) Task execution on a graphics processor using indirect argument buffers
CN113112579A (en) Rendering method, rendering device, electronic equipment and computer-readable storage medium
US10319068B2 (en) Texture not backed by real mapping
CN111402349B (en) Rendering method, rendering device and rendering engine
CN111402348B (en) Lighting effect forming method and device and rendering engine
CN111402375B (en) Shutter effect forming method and device and rendering engine
CN117557703A (en) Rendering optimization method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant