CN115908685A - Scene rendering method, device, equipment and storage medium - Google Patents

Scene rendering method, device, equipment and storage medium

Info

Publication number
CN115908685A
Authority
CN
China
Prior art keywords
rendering
scene
texture
pixel
target
Prior art date
Legal status
Pending
Application number
CN202211436352.2A
Other languages
Chinese (zh)
Inventor
Wang Xiaosong (王晓松)
Fu Wei (付伟)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211436352.2A
Publication of CN115908685A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Generation (AREA)

Abstract

The application provides a scene rendering method, device, equipment, and storage medium. The method comprises the following steps: when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, if the vertex data of a plurality of consecutive target textures currently to be rendered are the same, processing the vertex data through the rendering pipeline to obtain a corresponding scene fragment; performing pixel shading on the scene fragment according to first pixel data obtained after rendering of the preceding texture of the first target texture is completed and second pixel data obtained after the plurality of target textures are blended, so as to complete rendering of the plurality of target textures; and when the rendering pipeline completes rendering of every texture in the scene to be rendered, generating a scene image of the scene to be rendered. In this way, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, which greatly reduces memory occupancy and power consumption during scene rendering while preserving rendering accuracy.

Description

Scene rendering method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of image processing, and in particular to a scene rendering method, device, equipment, and storage medium.
Background
Any created three-dimensional scene typically contains different scene objects, such as virtual cameras, models, and light sources, at different spatial positions. When a corresponding scene image is rendered on a display screen, the multiple textures represented by these different scene objects generally need to be rendered and blended together before a scene image can be produced.
Currently, any three-dimensional scene is usually rendered by passing its multiple textures through a rendering pipeline in sequence, so that each texture traverses every stage of the pipeline in full. However, each stage of the rendering pipeline needs the graphics resources of a Graphics Processing Unit (GPU) to run. Consequently, when the many textures of a scene pass through the rendering pipeline one after another, a large amount of GPU memory is occupied and GPU power consumption rises.
Disclosure of Invention
The application provides a scene rendering method, device, equipment, and storage medium in which a plurality of consecutive target textures sharing the same vertex data are rendered synchronously through a single pass of the rendering pipeline, greatly reducing GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
In a first aspect, an embodiment of the present application provides a scene rendering method, where the method includes:
when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, if the vertex data of a plurality of consecutive target textures currently to be rendered by the rendering pipeline are the same, processing the vertex data through the rendering pipeline to obtain a corresponding scene fragment;
performing, through the rendering pipeline, pixel shading on the scene fragment according to first pixel data obtained after rendering of the preceding texture of the first target texture is completed and second pixel data obtained after the plurality of target textures are blended, so as to complete rendering of the plurality of target textures;
and when the rendering pipeline finishes the rendering of each texture in the scene to be rendered, generating a scene image of the scene to be rendered.
In a second aspect, an embodiment of the present application provides a scene rendering apparatus, where the apparatus includes:
the vertex processing module is configured to, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, process the vertex data through the rendering pipeline to obtain a corresponding scene fragment if the vertex data of a plurality of consecutive target textures currently to be rendered by the rendering pipeline are the same;
the texture rendering module is configured to perform pixel shading on the scene fragment according to first pixel data obtained after rendering of the preceding texture of the first target texture is completed and second pixel data obtained after the plurality of target textures are blended, so as to complete rendering of the plurality of target textures;
and the scene rendering module is configured to generate a scene image of the scene to be rendered when the rendering pipeline completes rendering of every texture in the scene to be rendered.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, the memory being configured to store a computer program, the processor being configured to call and run the computer program stored in the memory to perform the scene rendering method provided in the first aspect of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing a computer program, where the computer program makes a computer execute the scene rendering method provided in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, which includes a computer program/instructions, the computer program/instructions causing a computer to execute the scene rendering method as provided in the first aspect of the present application.
According to the technical solution, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline and the vertex data of a plurality of consecutive target textures currently to be rendered are the same, the vertex data are first processed through the rendering pipeline to obtain a corresponding scene fragment. Pixel shading is then performed on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, completing the rendering of the plurality of target textures. When the rendering pipeline has rendered every texture in the scene to be rendered, a scene image of the scene can be generated, achieving accurate rendering of the scene. Moreover, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, which greatly reduces GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a prior art rendering pipeline workflow;
fig. 2 is a flowchart of a scene rendering method according to an embodiment of the present application;
FIG. 3 is a schematic view of an improved work flow of a rendering pipeline according to an embodiment of the present application;
fig. 4 is a flowchart of another scene rendering method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a scene rendering apparatus according to an embodiment of the present application;
fig. 6 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be practiced in sequences other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the embodiments of the present application, the words "exemplary" and "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or solution described as "exemplary" or "such as" is not to be construed as preferred or advantageous over other embodiments or solutions; rather, these words are intended to present related concepts in a concrete fashion.
Before introducing the specific technical solutions of the present application, the following explains the existing architecture of the rendering pipeline involved in the present application:
typically, the primary purpose of the rendering pipeline is to convert a three-dimensional scene into a corresponding two-dimensional image, which is ultimately rendered on a display screen for display to a user. In any three-dimensional scene to be rendered, various scene objects are generally present at different spatial positions. Different irregular patterns are mapped on the surfaces of different scene objects in different modes and serve as textures of the scene objects, so that different scene objects have different textures, and a three-dimensional scene can comprise a plurality of textures.
Therefore, when any three-dimensional scene is converted into a two-dimensional scene image, each texture mapped on the surface of different scene objects in the three-dimensional scene is usually rendered sequentially through a rendering pipeline, and then a scene image can be successfully rendered.
As shown in fig. 1, the workflow according to the rendering pipeline may be several stages as follows: an application phase, a geometry phase, and a rasterization phase.
1. Application phase
Various scene data of the three-dimensional scene to be rendered are acquired, such as the camera position, the view frustum, the models contained in the scene, and light source information. Coarse-grained culling is then performed on the scene data to remove the invisible three-dimensional objects from the scene. Finally, by setting the corresponding rendering states (including but not limited to textures, shaders, and the like), the geometric information (i.e., vertex data) and pixel data of each texture required for rendering are output.
2. Geometric phase
This stage may include, but is not limited to, five processing steps: vertex shading, primitive assembly, geometry shading, clipping, and screen mapping. The vertex shader transforms the vertex data of the texture currently to be rendered from the object coordinate system into the world coordinate system, so that the vertices of all scene objects in the three-dimensional scene share a unified coordinate system. The vertex data in the world coordinate system are then transformed into the camera coordinate system, yielding the scene vertices as observed from the virtual camera, which serves as the origin of view space. The vertices are then assembled into corresponding geometric primitives through primitive assembly, and the geometry shader operates on each primitive to output its constituent vertices. Vertices of a primitive that fall outside the camera viewport are then clipped against the display screen size. Finally, because the clipped geometric primitives are generally three-dimensional, they are mapped into the two-dimensional display screen space to obtain two-dimensional geometric primitives.
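The coordinate-space transforms of this stage can be illustrated with the following sketch. It is not code from the patent; the GLM math library, the camera parameters, and the function name toClipSpace are assumptions made for illustration:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec4 toClipSpace(const glm::vec3& vertex,   // vertex in object space
                      const glm::mat4& model,    // object -> world transform
                      const glm::vec3& cameraPos) {
    // World space: all scene objects share one unified coordinate system.
    glm::vec4 world = model * glm::vec4(vertex, 1.0f);

    // View (camera) space: the virtual camera becomes the origin of view space.
    glm::mat4 view = glm::lookAt(cameraPos,
                                 glm::vec3(0.0f),               // look-at target (assumed)
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up vector

    // Projection: map the view frustum to clip space; vertices outside it
    // are clipped before screen mapping.
    glm::mat4 proj = glm::perspective(glm::radians(60.0f),  // field of view (assumed)
                                      16.0f / 9.0f,          // aspect ratio (assumed)
                                      0.1f, 100.0f);         // near/far planes (assumed)
    return proj * view * world;
}
```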
3. Stage of rasterization
Rasterization maps the two-dimensional geometric primitives to fragments, and the fragment shader obtains the pixel values of the texture currently being rendered. The pixel values of this texture are then blended with the pixel values already present in the frame buffer (Framebuffer), and the new blended pixel values are written back to the Framebuffer, completing the rendering of one texture.
For any three-dimensional scene to be rendered, the vertex data and pixel data of its multiple textures are generally acquired in the application stage. The textures are then fed into the rendering pipeline one by one in their rendering order, so that each texture passes through every stage of the pipeline in full.
However, each stage of the rendering pipeline needs the graphics resources of the GPU to run. Consequently, when the multiple textures of a three-dimensional scene to be rendered pass through the rendering pipeline in sequence, a large amount of GPU memory is occupied and GPU power consumption rises.
To solve the above problems, the present application designs a new rendering scheme based on the rendering pipeline workflow. When a plurality of textures in a scene to be rendered are rendered in sequence through the rendering pipeline and the vertex data of a plurality of consecutive target textures currently to be rendered are the same, the vertex data are first processed through the pipeline to obtain a corresponding scene fragment. Pixel shading is then performed on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, completing the rendering of the plurality of target textures. When the pipeline has rendered every texture in the scene, a scene image can be generated, achieving accurate rendering of the scene. Moreover, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, greatly reducing GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
Fig. 2 is a flowchart of a scene rendering method according to an embodiment of the present application. The method can be executed by the scene rendering apparatus provided by the present application, which can be implemented in any software and/or hardware manner. Illustratively, the scene rendering apparatus may be, but is not limited to, a tablet computer, a mobile phone (e.g., a folding-screen or large-screen phone), a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart television, a smart screen, a high-definition television, a 4K television, a smart speaker, a smart projector, or another Internet of Things (IoT) device; the present disclosure places no limit on the specific type of electronic device.
Specifically, as shown in fig. 2, the method may include the following steps:
s210, when a plurality of textures in a scene to be rendered are rendered sequentially through a rendering pipeline, if the vertex data of a plurality of continuous target textures to be rendered currently through the rendering pipeline are the same, the vertex data are processed through the rendering pipeline to obtain corresponding scene fragments.
Any scene to be rendered usually contains multiple textures mapped onto the surfaces of different scene objects, and each texture carries corresponding vertex data and pixel data indicating its shape, pattern, and so on.
The vertex data of each texture represent its geometric shape, and the pixel data of each texture represent information such as the color and transparency of each pixel in the texture.
In some implementations, the pixel data of each texture may be represented by four values in a color space composed of four channels, red (R), green (G), blue (B), and transparency (alpha), referred to as RGBA space. The pixel data of each texture thus include the pixel color (the RGB values) and the pixel transparency (the alpha value) of each pixel in the texture.
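As a minimal sketch of this representation (the type names Rgba and Texture are illustrative assumptions, not from the patent):

```cpp
#include <vector>

// Pixel data in RGBA space, as carried by each texture.
struct Rgba {
    float r, g, b;  // pixel color (the RGB values), each in [0, 1]
    float a;        // pixel transparency (the alpha value), in [0, 1]
};

struct Texture {
    std::vector<float> vertexData;  // geometric shape of the texture
    std::vector<Rgba>  pixelData;   // color and transparency of each pixel
};
```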
In general, when any scene to be rendered is rendered through a rendering pipeline, each texture is rendered in turn, in the rendering order of the textures in the scene, so that every texture traverses every stage of the pipeline.
Considering that a scene to be rendered may contain textures with the same geometry but different pixel colors, the geometric processing operations for such textures are identical and only their pixel shading differs. Therefore, to improve the efficiency of multi-texture rendering, each time a texture in the scene is about to be rendered through the rendering pipeline, it is first determined whether the vertex data of the texture currently to be rendered and of the next texture after it are the same, so as to detect whether several consecutive textures starting from the current one share the same geometric processing.
If the vertex data of the texture currently about to be rendered differ from those of the next texture, the current texture is rendered conventionally through the rendering pipeline on its own. After its rendering is complete, the pipeline continues with the next texture in the same manner.
If, however, the vertex data of the texture currently about to be rendered and the next texture are the same, the two textures share at least the same geometric processing. The application therefore searches, starting from the current texture, for the consecutive run of subsequent textures whose vertex data match it; this run forms the plurality of consecutive target textures currently to be rendered by the rendering pipeline, and it contains at least the current texture and the next texture.
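The grouping step can be sketched as follows, reusing the Texture type from the earlier sketch; the helper name collectTargetTextures is an assumption made for illustration:

```cpp
// Starting from the texture about to be rendered, collect the consecutive
// run of textures whose vertex data match it, so that one pipeline pass
// can render the whole run.
std::vector<Texture*> collectTargetTextures(std::vector<Texture>& renderQueue,
                                            size_t current) {
    std::vector<Texture*> targets{&renderQueue[current]};
    for (size_t i = current + 1; i < renderQueue.size(); ++i) {
        if (renderQueue[i].vertexData != renderQueue[current].vertexData)
            break;  // the run of identical geometry ends here
        targets.push_back(&renderQueue[i]);
    }
    // A single-element result means there is no run to merge: the texture
    // is rendered conventionally on its own, as described above.
    return targets;
}
```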
Then, as shown in fig. 3, the shared vertex data of the consecutive target textures are input into the rendering pipeline and processed through its vertex shading, primitive assembly, geometry shading, clipping, and screen mapping stages, yielding geometric primitives mapped into the display screen space.
To ensure the accuracy of the subsequent pixel rendering of each target texture, the obtained geometric primitives are then rasterized so that each primitive is mapped to its fragments, producing the corresponding scene fragment, which is then pixel-shaded precisely by the fragment shader in the rendering pipeline.
And S220, performing pixel shading on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, so as to complete the rendering of the plurality of target textures.
Each texture that precedes the plurality of target textures in the scene to be rendered has already completed its rendering flow through the rendering pipeline. After each such texture is rendered, its finished first pixel data are determined and buffered in the frame buffer (Framebuffer) of the rendering pipeline.
Then, after the shared vertex data of the target textures have been processed through the rendering pipeline into the corresponding scene fragment, the existing first pixel data are fetched from the Framebuffer and used as the original pixel data of the scene fragment. On this basis, the pixel data of each target texture are blended in turn onto the scene fragment, giving it its new texture.
In the present application, each run of the rendering pipeline performs one actual texture blending operation. Therefore, to blend the scene fragment with multiple target textures in one pass, the application first determines the blending factors defined in the blend function of the rendering pipeline, and then determines, from the first pixel data and the pixel data of each target texture, the blending parameters used when the plurality of target textures are blended.
The blending parameters used when the plurality of target textures are blended are the target pixel transparencies, matched to the blending factors defined by the rendering pipeline, within the first pixel data of the rendered preceding texture and the pixel data of each target texture.
It should be understood that texture blending in the rendering pipeline usually combines the pixel data already present before blending with the pixel data to be blended this time to compute new pixel data. The pixel data already present serve as the target pixel, and the pixel data to be blended this time serve as the source pixel. Using the blending factors defined for the source and target pixels in the blend function, the source and target pixels are each multiplied by their corresponding factor and then added, producing the new blended pixel. In the blend function, the factor defined for the source pixel is the source factor, and the factor defined for the target pixel is the target factor.
The blending factors that can be defined for the source and target pixels in the blend function of the rendering pipeline include the following:
1) GL_ZERO: 0 is used as the blending factor, which is equivalent to this pixel not participating in the blending operation;
2) GL_ONE: 1 is used as the blending factor, which is equivalent to this pixel participating fully in the blending operation;
3) GL_SRC_ALPHA: the pixel transparency of the source pixel is used as the blending factor;
4) GL_DST_ALPHA: the pixel transparency of the target pixel is used as the blending factor;
5) GL_ONE_MINUS_SRC_ALPHA: one minus the pixel transparency of the source pixel is used as the blending factor;
6) GL_ONE_MINUS_DST_ALPHA: one minus the pixel transparency of the target pixel is used as the blending factor.
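To make the factor semantics concrete, the following sketch evaluates the blend equation on the CPU for the common factor pair (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); on the GPU the same configuration is selected with glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). The Rgba type is the one sketched earlier; this is an illustration, not code from the patent:

```cpp
// new pixel = source * source factor + target * target factor, here with
// source factor = GL_SRC_ALPHA and target factor = GL_ONE_MINUS_SRC_ALPHA.
Rgba blendOver(const Rgba& src, const Rgba& dst) {
    float srcFactor = src.a;         // GL_SRC_ALPHA
    float dstFactor = 1.0f - src.a;  // GL_ONE_MINUS_SRC_ALPHA
    return {src.r * srcFactor + dst.r * dstFactor,
            src.g * srcFactor + dst.g * dstFactor,
            src.b * srcFactor + dst.b * dstFactor,
            src.a * srcFactor + dst.a * dstFactor};
}
```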
Using the blending factors defined by the blend function in the rendering pipeline, the application first computes, from the pixel results of sequentially rendering the plurality of target textures on top of the first pixel data of the rendered preceding texture, the second pixel data obtained after the target textures are blended. A new pixel value obtained by blending the first pixel data with the second pixel data is then computed, again using the blending factors defined by the blend function. Finally, as shown in fig. 3, the fragment shader in the rendering pipeline shades the scene fragment with this new pixel value, completing the rendering of the multiple target textures in a single pass.
It should be understood that since the pixel data of each texture in the scene to be rendered include pixel color and pixel transparency, the pixel data obtained after the rendering pipeline blends each texture also include pixel color and pixel transparency. That is, the first pixel data and the second pixel data used while rendering each texture likewise contain both pixel color and pixel transparency.
For each texture that follows the plurality of target textures, the same texture rendering process is then carried out through the rendering pipeline until every texture in the scene to be rendered has been rendered.
S230, when the rendering pipeline finishes rendering of each texture in the scene to be rendered, generating a scene image of the scene to be rendered.
When the rendering pipeline has finished rendering every texture in the scene to be rendered, the blended rendering of all textures is complete and a final two-dimensional image is obtained; this two-dimensional image is the scene image of the scene to be rendered.
According to the technical solution provided by this embodiment, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline and the vertex data of a plurality of consecutive target textures currently to be rendered are the same, the vertex data are first processed through the rendering pipeline to obtain a corresponding scene fragment. Pixel shading is then performed on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, completing the rendering of the plurality of target textures. When the pipeline has rendered every texture in the scene, a scene image of the scene can be generated, achieving accurate rendering. Moreover, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, greatly reducing GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
According to one or more embodiments of the present application, because the rendering positions within the scene of the consecutive target textures currently to be rendered can vary, the target textures may have no preceding texture at all. Different blending operations therefore need to be performed on the target textures in the different situations to guarantee the accuracy of multi-texture blending in the scene to be rendered.
Next, the present application describes in detail a specific process of rendering a plurality of target textures through a rendering pipeline.
Fig. 4 is a flowchart of another scene rendering method provided in an embodiment of the present application, where the method may include the following steps:
s410, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, if the vertex data of a plurality of continuous target textures to be rendered at present are the same, the vertex data are processed through the rendering pipeline to obtain corresponding scene fragments.
And S420, if the first target texture has no preceding texture, sequentially blending the pixel data of each target texture according to a set sequential blending rule to obtain the second pixel data after the plurality of target textures are blended.
If the first of the plurality of target textures has no preceding texture, it is the first texture rendered in the rendering pipeline, and no rendered first pixel data exist yet. That is, the target textures lie at the bottom layer of the picture in the scene to be rendered; they must be rendered first, and the other textures after them are rendered afterwards.
Therefore, according to the blending factors of the blend function in the rendering pipeline, the pixel data of each target texture are blended in sequence using the conventional sequential blending rule, yielding the second pixel data after the plurality of target textures are blended.
Taking two target textures as an example, assume their pixel data are s1 and s2 with pixel transparencies a1 and a2 respectively, the rendering order is s1 -> s2, and the blending factors of the blend function are set to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). The source pixel is then s2 and the target pixel is s1; the source factor is the source pixel's transparency a2, and the target factor is one minus that transparency, (1 - a2). Accordingly, the second pixel data after the two target textures are blended is: outColor = s2 * a2 + s1 * (1 - a2).
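A quick numeric check of this formula, with illustrative values:

```cpp
#include <cstdio>

int main() {
    // Single-channel pixel values of the two target textures (illustrative).
    float s1 = 0.8f;              // first target texture (bottom layer)
    float s2 = 0.4f, a2 = 0.25f;  // second target texture (source pixel)

    // Sequential rule with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
    float outColor = s2 * a2 + s1 * (1.0f - a2);
    std::printf("outColor = %.2f\n", outColor);  // 0.10 + 0.60 = 0.70
    return 0;
}
```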
And S430, performing pixel shading on the scene fragment according to the second pixel data.
Since no preceding texture exists, i.e., there are no rendered first pixel data, the fragment shader in the rendering pipeline shades the scene fragment directly with the second pixel data obtained by blending the target textures, thereby completing their rendering.
And S440, if the first target texture has a preceding texture, synchronously blending the pixel data of each target texture according to the synchronous blending rule corresponding to the plurality of target textures to obtain the second pixel data after the plurality of target textures are blended.
If the first of the plurality of target textures has a preceding texture, the rendering pipeline has already finished rendering some texture before the target textures, and the pixel data of the target textures must continue to be blended on top of the first pixel data of that rendered preceding texture. That is, the target textures lie at an upper layer of the picture in the scene to be rendered and are rendered on top of the previously completed textures.
For the texture blending process in the rendering pipeline, different synchronous blending rules are set for different numbers of target textures. A suitable synchronous blending rule is first determined according to the number of target textures; the pixel data of each target texture are then blended synchronously according to that rule, yielding the second pixel data after the plurality of target textures are blended.
As an optional implementation, to ensure that blending the plurality of target textures first and then blending the result with the first pixel data of the preceding texture produces the same final rendering effect as sequentially blending each target texture onto that first pixel data through the rendering pipeline, the synchronous blending rule corresponding to the plurality of target textures can be determined by the following steps:
the method comprises the following steps of firstly, determining a pixel variable of each target texture, a first pixel variable after rendering of the texture is completed and a second pixel variable after mixing of a plurality of target textures.
And determining a synchronous mixing rule of the target textures, namely analyzing a calculation formula of the second pixel data after the target textures are mixed. Therefore, the present application can set a corresponding pixel variable for each pixel data related to the second pixel data obtained by calculating the mixture of the plurality of target textures.
That is, a corresponding pixel variable is set for the pixel data of each target texture, a corresponding first pixel variable is set for the first pixel data after the rendering of the texture is completed, and a corresponding second pixel variable is set for the second pixel data after the mixing of a plurality of target textures.
And secondly, sequentially blending the first pixel variable with the pixel variable of each target texture according to a set sequential blending rule to obtain a first sequential blending result.
The first pixel variable of the rendered preceding texture and the pixel variables of the target textures are blended in order according to the set sequential blending rule, yielding the first sequential blending result. This result represents the final outcome of sequentially blending the plurality of target textures on top of the first pixel variable.
Taking two target textures as an example, assume their pixel variables are S1 and S2 with pixel transparencies A1 and A2 respectively, and the first pixel variable of the rendered preceding texture is Screen with pixel transparency A0. The rendering order of the preceding texture and the target textures is Screen -> S1 -> S2, and the blending factors of the blend function are set so that the source factor is GL_SRC_ALPHA and the target factor is GL_ONE_MINUS_SRC_ALPHA.
Then the first sequential blending result of the first pixel variable and the pixel variables of the target textures is: S2*A2 + (S1*A1 + Screen*(1-A1))*(1-A2) = S2*A2 + S1*A1*(1-A2) + Screen*(1-(A1+A2-A1*A2)).
And thirdly, sequentially blending the first pixel variable with the second pixel variable according to the sequential blending rule to obtain a second sequential blending result.
After the second pixel variable of the blended target textures is set, the first pixel variable of the rendered preceding texture and the second pixel variable are blended according to the same sequential blending rule, yielding the second sequential blending result. This result represents the final outcome of blending the pre-blended target textures, in a single step, on top of the first pixel variable.
Taking two target textures as an example, assume the second pixel variable after blending them is Si with pixel transparency Ai, and the first pixel variable of the rendered preceding texture is Screen with pixel transparency A0. The rendering order is Screen -> Si, and the blending factors of the blend function are set so that the source factor is GL_SRC_ALPHA and the target factor is GL_ONE_MINUS_SRC_ALPHA.
Then the second sequential blending result of the first pixel variable and the second pixel variable is: Si*Ai + Screen*(1-Ai).
And fourthly, determining the variable solving function of the second pixel variable from the equivalence between the first sequential blending result and the second sequential blending result, and using it as the synchronous blending rule corresponding to the plurality of target textures.
The independent variables of the variable solving function for the second pixel variable may include the first pixel variable and the pixel variable of each target texture.
The first sequential blending result of the second step and the second sequential blending result of the third step are equivalent, since both represent the final outcome of blending the plurality of target textures on top of the first pixel variable of the rendered preceding texture. A variable solving function for the second pixel variable can therefore be derived from the equation between the two results; its independent variables may include the first pixel variable and the pixel variable of each target texture, and its dependent variable is the second pixel variable. This solving function is used as the synchronous blending rule corresponding to the plurality of target textures, so that the corresponding second pixel data can subsequently be calculated.
Continuing the two-target-texture example, equating the first sequential blending result with the second sequential blending result gives S2*A2 + S1*A1*(1-A2) + Screen*(1-(A1+A2-A1*A2)) = Si*Ai + Screen*(1-Ai). Matching the Screen terms yields Ai = A1 + A2 - A1*A2.
Then, the variable solving function for the second pixel variable is:
Si = (S2*A2 + S1*A1*(1-A2)) / (A1 + A2 - A1*A2).
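Under these formulas the derived rule can be sketched in code as follows; this is an illustrative sketch, not code from the patent, and the names PreBlended and preBlendTwo are assumptions:

```cpp
// Synchronous blending rule for two target textures: pre-blend (S1, A1)
// and (S2, A2) into a single source pixel (Si, Ai) so that compositing it
// once over the first pixel data reproduces the sequential result.
struct PreBlended { float Si, Ai; };

PreBlended preBlendTwo(float S1, float A1, float S2, float A2) {
    float Ai = A1 + A2 - A1 * A2;  // combined coverage of the two layers
    float Si = (Ai > 0.0f)
             ? (S2 * A2 + S1 * A1 * (1.0f - A2)) / Ai
             : 0.0f;  // both layers fully transparent: nothing to blend
    return {Si, Ai};
}
```

As a consistency check, compositing (Si, Ai) over Screen with the sequential rule gives Si*Ai + Screen*(1-Ai) = S2*A2 + S1*A1*(1-A2) + Screen*(1-(A1+A2-A1*A2)), which matches the first sequential blending result above.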
Further, in the present application, calculating the second pixel data after the plurality of target textures are blended may specifically be: substituting the first pixel data of the rendered preceding texture and the pixel data of each target texture into the variable solving function of the second pixel variable, and computing the blended second pixel data.
That is, the target pixel transparency represented by each blending factor can be determined from the first pixel data of the rendered preceding texture and the pixel data of each target texture. The actual values of those pixel data and of the corresponding target pixel transparencies are then substituted into the corresponding independent variables of the variable solving function, yielding the second pixel data after the plurality of target textures are blended.
S450, sequentially blending the first pixel data of the rendered preceding texture with the second pixel data according to the sequential blending rule to obtain third pixel data for the scene fragment.
After the second pixel data of the blended target textures are calculated, they must be blended once more, sequentially, on top of the first pixel data of the rendered preceding texture to compute the corresponding third pixel data, which are the pixel data applicable to the scene fragment.
And S460, performing pixel shading on the scene fragment according to the third pixel data.
The fragment shader in the rendering pipeline shades the scene fragment directly with the third pixel data of the scene fragment, thereby completing the rendering of the plurality of target textures.
It should be understood that S420-S430 and S440-S460 are different rendering processes for the two cases in which the target textures do or do not have a preceding texture. The appropriate path is selected according to whether a preceding texture exists, so that the rendering pipeline completes the rendering of the plurality of target textures.
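The selection between the two paths can be sketched as follows for two target textures, reusing preBlendTwo from the earlier sketch; shadeFragment is a hypothetical stand-in for the fragment shader, not a patent or OpenGL API:

```cpp
void shadeFragment(float value);  // hypothetical: writes the shaded fragment

void renderTargetTextures(bool hasPrecedingTexture, float screen,
                          float S1, float A1, float S2, float A2) {
    if (!hasPrecedingTexture) {
        // S420-S430: no preceding texture, so the sequential rule alone
        // yields the second pixel data, which shades the fragment directly.
        float second = S2 * A2 + S1 * (1.0f - A2);
        shadeFragment(second);
    } else {
        // S440-S460: pre-blend the targets synchronously, then blend the
        // result once over the first pixel data (screen) to obtain the
        // third pixel data for the scene fragment.
        PreBlended p = preBlendTwo(S1, A1, S2, A2);
        float third = p.Si * p.Ai + screen * (1.0f - p.Ai);
        shadeFragment(third);
    }
}
```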
S470, when the rendering pipeline finishes rendering each texture in the scene to be rendered, generating a scene image of the scene to be rendered.
According to the technical solution provided by this embodiment, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline and the vertex data of a plurality of consecutive target textures currently to be rendered are the same, the vertex data are first processed through the rendering pipeline to obtain a corresponding scene fragment. Pixel shading is then performed on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, completing the rendering of the plurality of target textures. When the pipeline has rendered every texture in the scene, a scene image of the scene can be generated, achieving accurate rendering. Moreover, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, greatly reducing GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
Fig. 5 is a schematic diagram of a scene rendering apparatus according to an embodiment of the present application, where the scene rendering apparatus 500 may include:
the vertex processing module 510 is configured to, when a plurality of textures in a scene to be rendered are sequentially rendered through a rendering pipeline, if vertex data of a plurality of continuous target textures to be rendered currently by the rendering pipeline are the same, process the vertex data through the rendering pipeline to obtain a corresponding scene fragment;
a texture rendering module 520, configured to perform pixel rendering on the scene fragment according to the first pixel data obtained by completing rendering on the aforementioned texture of the first target texture by the rendering pipeline and the second pixel data obtained by mixing multiple target textures, so as to complete rendering of multiple target textures;
a scene rendering module 530, configured to generate a scene image of the scene to be rendered when the rendering pipeline completes rendering of each texture in the scene to be rendered.
In some implementations, the texture rendering module 520 may include:
the first texture rendering unit is used for sequentially mixing the pixel data of each target texture according to a set sequential mixing rule if the texture of the first target texture is empty, so as to obtain second pixel data after mixing of a plurality of target textures; performing pixel rendering on the scene fragment according to the second pixel data;
the second texture rendering unit is used for synchronously mixing the pixel data of each target texture according to a synchronous mixing rule corresponding to a plurality of target textures to obtain second pixel data after the plurality of target textures are mixed if the texture of the first target texture is not empty; sequentially mixing the first pixel data and the second pixel data after the texture rendering is finished according to the sequential mixing rule to obtain third pixel data of the scene fragment; and performing pixel coloring on the scene fragment according to the third pixel data.
In some implementations, the scene rendering apparatus 500 may further include a synchronous blending determination module. The synchronous blending determination module may be configured to:
determine a pixel variable for each target texture, a first pixel variable for the rendered preceding texture, and a second pixel variable for the blended plurality of target textures;
sequentially blend the first pixel variable with the pixel variable of each target texture according to a set sequential blending rule to obtain a first sequential blending result;
sequentially blend the first pixel variable with the second pixel variable according to the sequential blending rule to obtain a second sequential blending result;
determine the variable solving function of the second pixel variable from the equivalence between the first sequential blending result and the second sequential blending result, as the synchronous blending rule corresponding to the plurality of target textures;
wherein the independent variables of the variable solving function include the first pixel variable and the pixel variable of each target texture.
In some implementations, the second texture rendering unit may be specifically configured to:
and respectively substituting the first pixel data after the texture rendering and the pixel data of each target texture into a variable solving function of the second pixel variable, and calculating the second pixel data after the multiple target textures are mixed.
In some implementations, the pixel data of each texture in the scene to be rendered include pixel color and pixel transparency, so the pixel data obtained after rendering each texture in the scene through the rendering pipeline also include pixel color and pixel transparency.
In some implementations, the blending parameters used when the plurality of target textures are blended are the target pixel transparencies, matched to the blending factors defined by the rendering pipeline, within the first pixel data of the rendered preceding texture and the pixel data of each target texture.
In the embodiment of the application, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline and the vertex data of a plurality of consecutive target textures currently to be rendered are the same, the vertex data are first processed through the rendering pipeline to obtain a corresponding scene fragment. Pixel shading is then performed on the scene fragment according to the first pixel data obtained after rendering of the preceding texture of the first target texture is completed and the second pixel data obtained after the plurality of target textures are blended, completing the rendering of the plurality of target textures. When the pipeline has rendered every texture in the scene, a scene image of the scene can be generated, achieving accurate rendering. Moreover, a single run of the rendering pipeline completes the synchronous rendering of a plurality of consecutive target textures sharing the same vertex data, greatly reducing GPU occupancy and power consumption during scene rendering while preserving rendering accuracy.
It is to be understood that the apparatus embodiments and the method embodiments in the present application may correspond to each other and similar descriptions may be made with reference to the method embodiments in the present application. To avoid repetition, further description is omitted here.
Specifically, the apparatus 500 shown in fig. 5 may perform any method embodiment provided by the present application, and the foregoing and other operations and/or functions of each module in the apparatus 500 shown in fig. 5 are respectively for implementing corresponding flows of the above method embodiments, and are not described herein again for brevity.
The method embodiments of the present application are described above from the perspective of functional modules in conjunction with the accompanying drawings. It should be understood that the functional modules may be implemented in hardware, in software instructions, or in a combination of hardware and software modules. Specifically, the steps of the method embodiments may be carried out by integrated hardware logic circuits in a processor and/or by instructions in software form; the steps of the methods disclosed in conjunction with the embodiments may be executed directly by a hardware decoding processor or by a combination of hardware and software modules within a decoding processor. Optionally, the software modules may be located in storage media mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 6 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
As shown in fig. 6, the electronic device 600 may include:
a memory 610 and a processor 620, the memory 610 being configured to store a computer program and to transfer the program code to the processor 620. In other words, the processor 620 may call and run the computer program from the memory 610 to implement the method in the embodiment of the present application.
For example, the processor 620 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 620 may include, but is not limited to:
general purpose processors, digital Signal Processors (DSPs), application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 610 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules, which are stored in the memory 610 and executed by the processor 620 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of the computer program by the electronic device 600.
As shown in fig. 6, the electronic device may further include:
a transceiver 630, the transceiver 630 may be connected to the processor 620 or the memory 610.
The processor 620 may control the transceiver 630 to communicate with other devices, and specifically, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 630 may include a transmitter and a receiver. The transceiver 630 may further include one or more antennas.
It should be understood that the various components in the electronic device 600 are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments.
Embodiments of the present application also provide a computer program product comprising a computer program/instructions, which when executed by a computer, causes the computer to perform the method of the above method embodiments.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of scene rendering, comprising:
when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, if the vertex data of a plurality of consecutive target textures currently to be rendered by the rendering pipeline are the same, processing the vertex data through the rendering pipeline to obtain a corresponding scene fragment;
performing, by the rendering pipeline, pixel shading on the scene fragment according to first pixel data obtained after rendering of the texture previous to the first target texture is completed and second pixel data obtained after the plurality of target textures are mixed, so as to complete rendering of the plurality of target textures;
and when the rendering pipeline finishes the rendering of each texture in the scene to be rendered, generating a scene image of the scene to be rendered.
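To make the batching in claim 1 concrete, the following is a minimal C++ sketch, not the claimed implementation: the types Texture and Fragment and the helpers runVertexStage, shadeWithBlending and presentSceneImage are hypothetical names introduced here, and "same vertex data" is reduced to a pointer comparison.

    #include <cstddef>
    #include <vector>

    // Hypothetical stand-ins; the application defines no concrete API.
    struct Texture { const void* vertexData = nullptr; };
    struct Fragment { };  // rasterized scene fragment

    Fragment runVertexStage(const Texture&) { return {}; }          // stub
    void shadeWithBlending(Fragment&, std::size_t, std::size_t) { } // stub; see claim 2
    void presentSceneImage() { }                                    // stub

    // Render the textures in order, running the pipeline's vertex stage only
    // once for each run of consecutive textures sharing the same vertex data.
    void renderScene(const std::vector<Texture>& textures) {
        std::size_t i = 0;
        while (i < textures.size()) {
            std::size_t j = i + 1;
            while (j < textures.size() &&
                   textures[j].vertexData == textures[i].vertexData)
                ++j;                                     // extend the run of targets
            Fragment frag = runVertexStage(textures[i]); // one vertex pass per run
            shadeWithBlending(frag, i, j);               // pixel shading, claim 2
            i = j;
        }
        presentSceneImage();  // all textures rendered: emit the scene image
    }

The point of the grouping is that the vertex stage runs once per run of consecutive target textures rather than once per texture, which is where the memory and power savings described in the abstract come from.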
2. The method of claim 1, wherein performing pixel shading on the scene fragment according to the first pixel data obtained after rendering of the texture previous to the first target texture is completed and the second pixel data obtained after the plurality of target textures are mixed comprises:
if the texture previous to the first target texture is empty, sequentially mixing the pixel data of each target texture according to a set sequential mixing rule to obtain the second pixel data after the plurality of target textures are mixed;
performing pixel shading on the scene fragment according to the second pixel data;
if the texture previous to the first target texture is not empty, synchronously mixing the pixel data of each target texture according to a synchronous mixing rule corresponding to the plurality of target textures to obtain the second pixel data after the plurality of target textures are mixed;
sequentially mixing the first pixel data obtained after rendering of the previous texture is completed and the second pixel data according to the sequential mixing rule to obtain third pixel data of the scene fragment;
and performing pixel shading on the scene fragment according to the third pixel data.
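The two branches above can be sketched as follows (again a hedged illustration: the set sequential mixing rule is assumed to be the common source-over blend, and Pixel, blendOver, solveCombined and shadeFragment are hypothetical names; the claim fixes neither the rule nor an API):

    #include <optional>
    #include <vector>

    struct Pixel { float r, g, b, a; };  // pixel color + pixel transparency

    // Assumed sequential mixing rule: source-over, out = src*a + dst*(1 - a).
    Pixel blendOver(const Pixel& dst, const Pixel& src) {
        const float ia = 1.0f - src.a;
        return { src.r * src.a + dst.r * ia,
                 src.g * src.a + dst.g * ia,
                 src.b * src.a + dst.b * ia,
                 src.a + dst.a * ia };
    }

    // Synchronous mixing: collapse all target textures into one equivalent
    // pixel in a single pass (the solving function derived after claim 3).
    Pixel solveCombined(const std::vector<Pixel>& targets) {
        float keep = 1.0f;                // running product of (1 - a_i)
        float r = 0.f, g = 0.f, b = 0.f;  // premultiplied accumulation
        for (const Pixel& t : targets) {
            const float ia = 1.0f - t.a;
            r = t.r * t.a + r * ia;
            g = t.g * t.a + g * ia;
            b = t.b * t.a + b * ia;
            keep *= ia;
        }
        const float aM = 1.0f - keep;     // combined transparency
        if (aM > 0.f) { r /= aM; g /= aM; b /= aM; }  // back to straight alpha
        return { r, g, b, aM };
    }

    // first: pixel data left by the texture previous to the first target
    // texture, empty when no such texture exists.
    Pixel shadeFragment(const std::optional<Pixel>& first,
                        const std::vector<Pixel>& targets) {
        if (!first) {                     // branch 1: fold targets in order
            Pixel out{0.f, 0.f, 0.f, 0.f};
            for (const Pixel& t : targets) out = blendOver(out, t);
            return out;                   // second pixel data
        }
        Pixel second = solveCombined(targets);  // branch 2: synchronous mix,
        return blendOver(*first, second);       // then one sequential mix
    }

On this rule, blendOver(*first, solveCombined(targets)) reproduces folding each target texture over first in order, which is exactly the equivalence claim 3 formalizes below.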
3. The method according to claim 2, wherein before synchronously mixing the pixel data of each target texture according to the synchronous mixing rule corresponding to the plurality of target textures to obtain the second pixel data after the plurality of target textures are mixed, the method further comprises:
determining a pixel variable of each target texture, a first pixel variable after the previous texture is rendered, and a second pixel variable after the plurality of target textures are mixed;
sequentially mixing the first pixel variable and the pixel variable of each target texture according to the set sequential mixing rule to obtain a first sequential mixing result;
sequentially mixing the first pixel variable and the second pixel variable according to the sequential mixing rule to obtain a second sequential mixing result;
and determining, according to the equivalence between the first sequential mixing result and the second sequential mixing result, a variable solving function of the second pixel variable as the synchronous mixing rule corresponding to the plurality of target textures;
wherein the arguments of the variable solving function include the first pixel variable and the pixel variable of each target texture.
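To make the equivalence concrete, the following is a worked derivation for two target textures under the source-over rule assumed in the sketch above (an illustration only; the claim leaves the sequential mixing rule to the pipeline). Write $P$ for the first pixel variable, $(C_i, \alpha_i)$ for the pixel variable of target texture $i$, and $(C_M, \alpha_M)$ for the second pixel variable. The first sequential mixing result folds the targets over $P$ in order:

    $$R_1 = C_2\alpha_2 + \bigl(C_1\alpha_1 + P(1-\alpha_1)\bigr)(1-\alpha_2)
          = C_1\alpha_1(1-\alpha_2) + C_2\alpha_2 + P(1-\alpha_1)(1-\alpha_2)$$

The second sequential mixing result mixes $P$ with the combined pixel once:

    $$R_2 = C_M\alpha_M + P(1-\alpha_M)$$

Requiring $R_1 = R_2$ for every $P$ yields the variable solving function:

    $$\alpha_M = 1-(1-\alpha_1)(1-\alpha_2), \qquad
      C_M = \frac{C_1\alpha_1(1-\alpha_2) + C_2\alpha_2}{\alpha_M} \quad (\alpha_M \neq 0)$$

Under this particular rule the first pixel variable cancels; the claim nonetheless keeps it among the arguments of the solving function, since other blending factors need not eliminate it. The loop in solveCombined above computes exactly these two expressions, extended from two target textures to any number.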
4. The method according to claim 3, wherein synchronously mixing the pixel data of each target texture according to the synchronous mixing rule corresponding to the plurality of target textures to obtain the second pixel data after the plurality of target textures are mixed comprises:
substituting the first pixel data obtained after rendering of the previous texture is completed and the pixel data of each target texture into the variable solving function of the second pixel variable, and calculating the second pixel data after the plurality of target textures are mixed.
5. The method according to any one of claims 1 to 4, wherein the pixel data of each texture in the scene to be rendered comprise a pixel color and a pixel transparency, and accordingly the pixel data obtained after each texture in the scene to be rendered is rendered through the rendering pipeline also comprise a pixel color and a pixel transparency.
6. The method according to claim 5, wherein the mixing parameters used when the plurality of target textures are mixed comprise the first pixel data obtained after rendering of the previous texture is completed and the target pixel transparency in the pixel data of each target texture, matched against the blending factors defined by the rendering pipeline.
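As context for "blending factors defined by the rendering pipeline": in an OpenGL-style pipeline, for example, the source-over rule used in the sketches above is selected by the SRC_ALPHA / ONE_MINUS_SRC_ALPHA factor pair, so the target pixel transparency drives the mix. This is an illustration only; the application names no specific graphics API.

    #include <GL/gl.h>

    // Fixed-function blending with the factor pair matching blendOver() above:
    // out = src * src.a + dst * (1 - src.a).
    void enableSourceOverBlending() {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }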
7. A scene rendering apparatus, comprising:
the vertex processing module is configured to, when a plurality of textures in a scene to be rendered are rendered in sequence through a rendering pipeline, process the vertex data through the rendering pipeline to obtain a corresponding scene fragment if the vertex data of a plurality of consecutive target textures currently to be rendered by the rendering pipeline are the same;
the texture rendering module is configured to perform pixel shading on the scene fragment according to first pixel data obtained after rendering of the texture previous to the first target texture is completed and second pixel data obtained after the plurality of target textures are mixed, so as to complete rendering of the plurality of target textures;
and the scene rendering module is configured to generate a scene image of the scene to be rendered when the rendering pipeline completes the rendering of each texture in the scene to be rendered.
8. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the scene rendering method of any of claims 1-6 via execution of the executable instructions.
9. A computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the scene rendering method of any one of claims 1-6.
10. A computer program product comprising computer programs/instructions for causing an electronic device to perform the scene rendering method of any of claims 1-6 when the computer program product is run on the electronic device.
CN202211436352.2A 2022-11-16 2022-11-16 Scene rendering method, device, equipment and storage medium Pending CN115908685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211436352.2A CN115908685A (en) 2022-11-16 2022-11-16 Scene rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211436352.2A CN115908685A (en) 2022-11-16 2022-11-16 Scene rendering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115908685A true CN115908685A (en) 2023-04-04

Family

ID=86485104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211436352.2A Pending CN115908685A (en) 2022-11-16 2022-11-16 Scene rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115908685A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597063A (en) * 2023-07-19 2023-08-15 腾讯科技(深圳)有限公司 Picture rendering method, device, equipment and medium
CN116597063B (en) * 2023-07-19 2023-12-05 腾讯科技(深圳)有限公司 Picture rendering method, device, equipment and medium
CN117710502A (en) * 2023-12-12 2024-03-15 摩尔线程智能科技(北京)有限责任公司 Rendering method, rendering device and storage medium

Similar Documents

Publication Publication Date Title
US10164459B2 (en) Selective rasterization
US8456479B2 (en) Methods, systems, and data structures for generating a rasterizer
CN115908685A (en) Scene rendering method, device, equipment and storage medium
CN115147579B (en) Block rendering mode graphic processing method and system for expanding block boundary
US9396515B2 (en) Rendering using multiple render target sample masks
US20160078671A1 (en) Render-Time Linking of Shaders
CA3164771A1 (en) Video generating method, device and computer system
US10825129B2 (en) Eliminating off screen passes using memoryless render target
CN105550973B (en) Graphics processing unit, graphics processing system and anti-aliasing processing method
US20150015574A1 (en) System, method, and computer program product for optimizing a three-dimensional texture workflow
US11748911B2 (en) Shader function based pixel count determination
US10062140B2 (en) Graphics processing systems
CN109767379B (en) Data normalization processing method and device, storage medium and electronic equipment
TW202131277A (en) Graphics system and graphics processing method thereof
US7336275B2 (en) Pseudo random number generator and method
CN116957899A (en) Graphics processor, system, apparatus, device, and method
CN113313800A (en) Texture-based pixel count determination
CN114511657A (en) Data processing method and related device
CN116263981A (en) Graphics processor, system, apparatus, device, and method
CN112115015A (en) Graphics processor and associated method for displaying a set of pixels, associated platform and avionics system
JP2012155610A (en) Drawing apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination