CN117671104A - Rendering method, rendering device, electronic equipment and computer readable storage medium - Google Patents


Publication number
CN117671104A
Authority
CN
China
Prior art keywords
information
light source
rendering
primitive
virtual object
Legal status
Pending
Application number
CN202311512190.0A
Other languages
Chinese (zh)
Inventor
杨念健
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202311512190.0A
Publication of CN117671104A


Abstract

The application discloses a rendering method, a rendering device, an electronic device and a computer-readable storage medium, applied to a mobile terminal. The rendering method comprises the following steps: obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises light source information of light sources in the virtual scene and model information of at least one non-transparent virtual object, and the light sources comprise a first type light source and a second type light source; determining, according to the model information of the virtual object, basic rendering information which corresponds to the virtual object and does not carry an illumination result attribute; determining, according to the light source information of the first type light source and the basic rendering information, first rendering information carrying the illumination result attribute of the first type light source, and storing the first rendering information into a geometry buffer; and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometry buffer. In this method, the light and shadow calculation of the first type light source is not limited by the size of the geometry buffer, so the illumination effect of the first type light source on the virtual object can be extended more freely.

Description

Rendering method, rendering device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular, to a rendering method, a rendering device, an electronic device, and a computer readable storage medium.
Background
Unreal Engine 5 (UE5) provides a lighting rendering scheme based on deferred rendering for mobile terminals. Deferred rendering first calculates the geometric rendering information of the object to be rendered in a geometry processing pass, then stores the geometric rendering information into a geometry Buffer (G-Buffer), and then, in a lighting processing pass, fetches all the relevant geometric rendering information from the geometry buffer and performs the light and shadow calculation for the light sources.
At present, this deferred rendering method is limited by the hardware performance of mobile terminals. On some mobile terminals, the hardware requires that the geometry buffer occupied by the geometric rendering information not exceed 128 bits, so the illumination rendering effect of objects cannot be further enriched on those terminals. On mobile terminals whose hardware does allow the geometry buffer occupied by the geometric rendering information to exceed 128 bits, additional geometry buffers must be added in order to enrich the illumination rendering effect of objects, which increases the performance overhead of illumination rendering. No effective solution to these problems has yet been found.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The application provides a rendering method, a rendering device, an electronic device and a computer-readable storage medium, which can reduce the limitation that the size of the geometry buffer imposes on extending the illumination rendering performance of objects, that is, enrich the illumination rendering effect of objects without additionally increasing rendering performance consumption or rendering cost.
In a first aspect, an embodiment of the present application provides a rendering method, where the method is applied to a mobile terminal, and the method includes:
obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises: light source information of light sources in the virtual scene and model information of at least one non-transparent virtual object; the light sources comprise a first type light source comprising a global light source for generating a lighting effect on the entire virtual scene and a second type light source comprising a local light source for generating a lighting effect on a part of the virtual scene;
determining, according to the model information of the virtual object, basic rendering information which corresponds to the virtual object and does not carry an illumination result attribute;
determining, according to the light source information of the first type light source and the basic rendering information, first rendering information carrying the illumination result attribute of the first type light source, and storing the first rendering information into a geometry buffer;
and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometry buffer.
In a second aspect, an embodiment of the present application provides a rendering apparatus, comprising: an acquisition unit, a determination unit and a rendering unit;
the acquisition unit is configured to obtain virtual scene data of a virtual scene to be rendered, where the virtual scene data includes: light source information of light sources in the virtual scene and model information of at least one non-transparent virtual object; the light sources comprise a first type light source comprising a global light source for generating a lighting effect on the entire virtual scene and a second type light source comprising a local light source for generating a lighting effect on a part of the virtual scene;
the determining unit is used for determining basic rendering information which corresponds to the virtual object and does not have illumination result attribute according to the model information of the virtual object;
the determining unit is further configured to determine, according to the light source information of the first type light source and the basic rendering information, first rendering information carrying the illumination result attribute of the first type light source, and to store the first rendering information into the geometry buffer;
and the rendering unit is configured to render the virtual object according to the light source information of the second type light source and the first rendering information in the geometry buffer.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor; and
and a memory for storing an information processing program, wherein after the electronic device is powered on, the processor executes the program to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing an information processing program, the program being executed by a processor to perform a method as in the first aspect.
According to the rendering method provided by the embodiment of the application, virtual scene data of a virtual scene to be rendered is obtained, wherein the virtual scene data comprises: light source information of a light source in the virtual scene and model information of at least one non-transparent virtual object; it can be appreciated that, the light source is configured to add an illumination effect to each virtual object in the virtual scene, so that the virtual scene is more realistic, and in this embodiment, the light source may include a first type light source and a second type light source, where the first type light source includes a global light source for generating an illumination effect on the entire virtual scene, and the second type light source includes a local light source for generating an illumination effect on a portion of the virtual scene; after the model information of the virtual object is obtained, the basic rendering information corresponding to the virtual object can be obtained through processing the model information of the virtual object according to a rendering pipeline, wherein the basic rendering information is geometric rendering information without illumination result attribute; after the basic rendering information is obtained, determining first rendering information with illumination result attribute of the first type light source according to the light source information of the first type light source and the basic rendering information, namely, performing light shadow calculation on the virtual object according to the light source information of the first type light source to obtain the first rendering information with illumination result attribute, and storing the first rendering information into a geometric buffer area; the geometric buffer may be understood as a plurality of texture maps storing first rendering information, and after storing the first rendering information to the geometric buffer, the virtual object is rendered according to the light source information of the second type light source and the first rendering information in the geometric buffer.
According to the rendering method provided by the embodiment of the application, the lighting effect of the first type light source among the light sources is calculated before the rendering information is written into the geometry buffer, so that the first rendering information carrying the illumination result attribute of the first type light source is stored in the geometry buffer; the subsequent light and shadow calculation of the second type light source is then performed based on this first rendering information in the geometry buffer. Compared with the deferred rendering flow in the related art, and without increasing rendering performance consumption, the light and shadow calculation of the first type light source is not limited by the size of the geometry buffer, so more light and shadow rendering features of the first type light source can be realized on the mobile terminal, thereby enriching the light and shadow expression of the virtual object. Meanwhile, within the limits of the hardware performance of the mobile terminal's GPU, the method keeps the light and shadow calculation of the second type light source in the lighting processing pass, reducing the impact on the original illumination rendering performance. The rendering method is therefore highly compatible with different mobile GPU hardware: for example, on mobile terminals that do not support a geometry buffer exceeding 128 bits, the light and shadow rendering features of the virtual object can still be extended, so that these terminals can support more stylized light and shadow rendering effects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view illustrating an example of illumination of parallel light according to an embodiment of the present disclosure;
FIG. 2 is a schematic view illustrating an example of illumination of a point light source according to an embodiment of the present disclosure;
FIG. 3 is a schematic view illustrating an example of illumination of a spotlight light source according to an embodiment of the present disclosure;
fig. 4 is a schematic comparison, provided in an embodiment of the present application, between the rendering flow of deferred rendering in the related art and the rendering flow of the technical solution of the present application;
FIG. 5 is a flowchart illustrating an example of a rendering method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an example of a rendering device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for rendering according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is, however, susceptible of embodiment in many other ways than those herein described and similar generalizations can be made by those skilled in the art without departing from the spirit of the application and the application is therefore not limited to the specific embodiments disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, specification, and drawings herein are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The information so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or" is merely an association relationship describing an association object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. "comprising A, B and/or C" means comprising any 1 or any 2 or 3 of A, B, C.
It should be understood that in the embodiments of the present application, "B corresponding to a", "a corresponding to B", or "B corresponding to a", means that B is associated with a, from which B may be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information.
Before describing the rendering method provided in the present application in detail, related concepts related to the embodiments of the present application will be described first.
1. Rendering pipeline: the rendering pipeline generates, or renders, a two-dimensional image from given scene elements such as a virtual camera, a three-dimensional scene, and light sources.
2. Unreal Engine 5 (UE5): a game development platform that provides game developers with a large amount of core technology, data generation tools and basic support. The rendering method provided by the embodiments of the present application is an improvement of the mobile-terminal lighting rendering scheme based on deferred rendering provided by the UE5 engine.
3. Shader (shader): a program running on a graphics processor (Graphics Processing Unit, GPU), through which a user can custom-program the GPU's rendering pipeline.
4. Vertex shader: a shader that performs a series of operations on the vertices of the three-dimensional virtual model to be rendered. The vertices of the three-dimensional model carry at least a basic position attribute and may also contain attribute information such as texture coordinates and normals. Through the vertex shader, the GPU knows where the vertices of the three-dimensional virtual model should be drawn on the display screen at rendering time. In this embodiment, the vertex shader may be used to spatially transform the vertex coordinates of the virtual object.
5. Pixel Shader (Pixel Shader): which may also be referred to as a fragment shader, may be used to determine the final color of pixels on a screen.
6. G-Buffer: a geometry buffer for storing shadow calculation parameters during rendering. The shadow calculation parameters include, but are not limited to, position (Position), normal (Normal), diffuse Color (Diffuse Color) and other useful material parameters for each screen pixel.
In game development, a multi-render-target technique is generally used to generate the G-Buffer, that is, color, normals and world-space coordinates are each rendered into different floating-point texture maps in a single draw. The geometry buffer can therefore also be understood as a set of two-dimensional texture maps that store the light and shadow calculation parameters during rendering, and the resolution of these two-dimensional texture maps is consistent with the resolution of the two-dimensional image being rendered. Because of the hardware limitations of some mobile terminals, and in order to adapt to them, the mobile-terminal G-Buffer structure provided by the UE5 engine comprises 4 two-dimensional texture maps and occupies 128 bits in total.
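For illustration only, the following C++ sketch shows one possible way such a 128-bit-per-pixel G-Buffer could be laid out as four 32-bit (RGBA8) render targets. The channel assignment is an assumption made for this sketch and does not reproduce the actual UE5 mobile layout.

```cpp
#include <cstdint>

// Hypothetical 128-bit mobile G-Buffer layout: four 32-bit (RGBA8) render
// targets, i.e. 16 bytes per screen pixel. The channel assignment below is an
// assumption for illustration, not the actual UE5 layout.
struct GBufferPixel {
    // Render target 0: base color (RGB) + ambient occlusion (A)
    uint8_t baseColor[3];
    uint8_t ambientOcclusion;
    // Render target 1: encoded world-space normal (RGB) + shading model id (A)
    uint8_t normal[3];
    uint8_t shadingModelId;
    // Render target 2: material parameters
    uint8_t metallic;
    uint8_t specular;
    uint8_t roughness;
    uint8_t unused;
    // Render target 3: custom / precomputed lighting data
    uint8_t customData[4];
};

static_assert(sizeof(GBufferPixel) == 16, "G-Buffer budget is 128 bits per pixel");
```

Any additional stylized-lighting parameter would have to fit into this fixed per-pixel budget, which is exactly the constraint the method of this application works around.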
7. Early depth culling (Early Z-Culling): a depth buffer is initialized when the rendering of each frame begins, and each fragment undergoes an early depth test in the rasterization stage. If one fragment is completely behind another fragment, that fragment does not need to be rendered. This avoids unnecessary computation and memory access and improves rendering efficiency. It should be noted that early depth culling is a stage executed automatically by the GPU hardware and requires no additional rendering pass; it is a "hardware" behavior of the GPU.
8. Forward rendering: a rendering scheme in which light-source illumination calculations are performed object by object and light by light. For example, if an object is affected by n light sources, then for each fragment of the object, all n light sources must be passed into the shader and the shader code executed to perform the illumination calculation.
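As a minimal C++ sketch of this per-fragment, per-light cost (the types and the diffuse term are assumptions for illustration, not engine code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Light    { Vec3 position; Vec3 color; float intensity; };
struct Fragment { Vec3 worldPos; Vec3 normal; Vec3 baseColor; };

// Placeholder single-light diffuse term, for structure only.
Vec3 shadeOneLight(const Fragment& f, const Light& l) {
    Vec3 d{ l.position.x - f.worldPos.x, l.position.y - f.worldPos.y, l.position.z - f.worldPos.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) + 1e-6f;
    float ndl = std::max(0.0f, (f.normal.x * d.x + f.normal.y * d.y + f.normal.z * d.z) / len);
    return { f.baseColor.x * l.color.x * l.intensity * ndl,
             f.baseColor.y * l.color.y * l.intensity * ndl,
             f.baseColor.z * l.color.z * l.intensity * ndl };
}

// Forward rendering: for every fragment of every object, all n lights are
// passed in and accumulated, so the cost grows with (fragments x lights).
Vec3 forwardShade(const Fragment& frag, const std::vector<Light>& lights) {
    Vec3 color{ 0.0f, 0.0f, 0.0f };
    for (const Light& light : lights) {
        Vec3 c = shadeOneLight(frag, light);
        color.x += c.x; color.y += c.y; color.z += c.z;
    }
    return color;
}
```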
9. Deferred rendering (Deferred Rendering): a rendering scheme that performs the multi-light-source illumination calculation for all objects within the screen. Compared with forward rendering, it delays the fragment shading calculation until after depth culling, and is mainly used to reduce the rendering cost of a large amount of lighting. Deferred rendering can be understood as first rendering all objects into a screen-space G-Buffer, and then shading the buffer light source by light source, thereby avoiding the unnecessary overhead of shading fragments that are discarded by depth culling. That is, the basic idea of deferred rendering is to perform depth culling first and rendering calculation afterwards, moving the illumination calculation originally performed in object space (three-dimensional space) into image space (two-dimensional space). Rendering efficiency is then independent of the geometric complexity of the objects, and the amount of shading and illumination calculation is related only to the number of light sources. The rendering method of the embodiments of the present application is an improvement on this deferred rendering method.
10. Parallel light (Directional Light): also called directional light. It consists of parallel light rays with no fixed emission point and no attenuation, and is generally used to simulate sunlight. In virtual games, parallel light represents a large, distant light source located outside the game world and is typically used to simulate the global illumination received by the entire virtual scene; the light source object can be placed anywhere in the scene and all objects are illuminated. The effect of parallel light in a scene is shown in fig. 1, which comprises a virtual scene 10 with a parallel light source 11 arranged outside it; the illumination of the virtual scene 10 can be adjusted by adjusting the direction of the light rays 12 emitted by the parallel light source 11. For example, when the light rays 12 are oriented upward, the entire virtual scene 10 is in a night state, and when the light rays 12 are tilted downward from above, shadows are generated, similar to daylight. In games that usually run on mobile terminals, a single parallel light is generally sufficient to meet the game requirements.
11. Point Light source (Point Light): the point light source is located at a certain point in space and uniformly emits light in all directions. For example, electric lamps, candles, etc. in real life are suitable for making illumination simulating single-point light emission. Point light sources are also known as spherical light, and the intensity of the emitted light decreases as the distance from the light source increases, forming a spherical range. The effect in the scene is shown in fig. 2, a point light source 13 is included in the virtual scene 10, and the illumination range of the point light source 13 is shown as a circle 14, it being understood that in a three-dimensional scene, the illumination range is spherical, and the circle is only an example here.
12. Spotlight Light source (Spot Light): like point sources, spotlight sources have a specific position and range, but are limited to an angle, forming a cone-shaped illumination area. The light source can be used for simulating light sources such as flashlights, automobile headlights, searchlights or desk lamps. The effect in the scene is shown in fig. 3, where there is a spotlight source 15 in the virtual scene 10, the light rays of which form a cone-shaped illumination area 16. It can be seen that both point light sources and spotlight sources are localized light sources. Global and local in the embodiments of the present application are for virtual scenes.
It will be appreciated that the above-described source of collimated light and the spotlight light source are one type of collimated light.
Next, the related art will be further described.
The scheme of additionally adding G-Buffer space on the mobile terminal to enrich the illumination rendering effect of objects has, besides increasing the performance overhead of illumination rendering, the drawback that the hardware performance of the mobile GPU limits the G-Buffer space: only a small amount of G-Buffer space can be added, so only a small amount of extra illumination rendering effect can be provided, and some mobile-terminal hardware does not support expanding the G-Buffer space at all. In addition, different mobile terminals handle an enlarged G-Buffer differently, so the implementation and maintenance cost of illumination rendering is high.
In addition, in the related art, rendering performance consumption can also be reduced by compressing the G-Buffer, but the compression is often lossy, which results in a poorer rendered illumination effect and may not meet players' requirements for the simulation of game scenes in current games.
Based on the above-mentioned problems, embodiments of the present application provide a rendering method, apparatus, electronic device, and computer-readable storage medium.
First, the current deferred rendering flow is compared with the concept of the improved technical solution of the present application.
At present, existing deferred rendering mainly includes two processing passes: a geometry processing pass (Geometry Pass or Base Pass) and a lighting processing pass (Lighting Pass). Referring to fig. 4, fig. 4 compares the rendering flow of deferred rendering in the related art with the rendering flow after the improvement of the present application. Fig. 4 (a) shows a flowchart of the existing deferred rendering.
In the geometry processing pass, the objects in the scene undergo no illumination processing; only the relevant geometric processing is performed, for example vertex processing, rasterization, early depth culling and pixel processing, and the processed geometric information is then written into a plurality of G-Buffers. For example, the normal, depth, color, AO, roughness, metallicity and the like of each fragment corresponding to an object are written into different G-Buffers. Because of early depth culling, only the geometric information of the fragment closest to the virtual camera is finally written into the G-Buffer, which means that every fragment in the G-Buffer will certainly undergo the illumination calculation.
In the lighting processing pass, each screen pixel is traversed in turn, the geometric data corresponding to that screen pixel is fetched from the G-Buffers, and the illumination calculation is then performed. Since, when the G-Buffer is created, early depth culling keeps only the fragment closest to the virtual camera and all other fragments are discarded, the illumination calculation is performed only once per screen pixel.
The geometry processing pass in deferred rendering contains no complicated illumination calculation and becomes lighter, which greatly reduces rendering cost compared with forward rendering; illumination is calculated only on visible fragments, so the GPU does not waste time computing invisible fragments, which in theory reduces GPU pressure and helps improve performance.
For this deferred rendering flow, the main idea of the rendering method provided by the present application is to move the light and shadow calculation of the global light source, originally performed in the lighting processing pass, into the geometry processing pass, and to delete the global-light-source light and shadow calculation originally implemented in the lighting processing pass. For local light sources such as point light sources and spotlight light sources, the original light and shadow calculation flow is retained and remains in the lighting processing stage. As shown in fig. 4 (b), which is the rendering flowchart after the improvement of the present application, and compared with the rendering flow shown in fig. 4 (a), in the embodiment of the present application the light and shadow calculation of the global light source is brought forward to the geometry processing stage, and the first rendering information obtained after the light and shadow calculation of the global light source is stored in the geometry buffer, where the first rendering information may include the illumination result data of the global light source and the related geometric data.
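The following C++ sketch shows, at frame level, where the two kinds of light shading sit in the improved flow. All structure and helper names are assumptions made for illustration rather than actual UE5 code; only the placement of the global-light and local-light shading reflects the scheme described above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct FragmentIn   { Vec3 baseColor; Vec3 normal; Vec3 worldPos; };
struct GBufferTexel { Vec3 firstRenderInfoColor; Vec3 normal; Vec3 worldPos; }; // "first rendering information"
struct LocalLight   { Vec3 position; Vec3 color; float intensity; float radius; };

// Geometry processing pass: the global (first type) light is shaded here, per
// surviving fragment, *before* anything is written to the G-Buffer, so this
// step is not constrained by the 128-bit G-Buffer budget.
void geometryPass(const std::vector<FragmentIn>& fragments, Vec3 sunIncidenceDir, Vec3 sunColor,
                  std::vector<GBufferTexel>& gbuffer) {
    gbuffer.resize(fragments.size());
    for (size_t i = 0; i < fragments.size(); ++i) {
        const FragmentIn& f = fragments[i];
        float ndl = std::max(0.0f, -dot3(f.normal, sunIncidenceDir));   // directional-light term
        gbuffer[i] = { { f.baseColor.x * sunColor.x * ndl,
                         f.baseColor.y * sunColor.y * ndl,
                         f.baseColor.z * sunColor.z * ndl },
                       f.normal, f.worldPos };
    }
}

// Lighting processing pass: kept where it was, but it now only accumulates the
// local (second type) lights on top of the already-lit colour read back from
// the G-Buffer (a simplified range test and linear falloff are used here).
void lightingPass(const std::vector<GBufferTexel>& gbuffer,
                  const std::vector<LocalLight>& locals, std::vector<Vec3>& output) {
    output.resize(gbuffer.size());
    for (size_t i = 0; i < gbuffer.size(); ++i) {
        Vec3 c = gbuffer[i].firstRenderInfoColor;
        for (const LocalLight& l : locals) {
            Vec3 d{ l.position.x - gbuffer[i].worldPos.x,
                    l.position.y - gbuffer[i].worldPos.y,
                    l.position.z - gbuffer[i].worldPos.z };
            float dist = std::sqrt(dot3(d, d));
            if (dist >= l.radius) continue;                             // outside illumination range
            float ndl = std::max(0.0f, dot3(gbuffer[i].normal, { d.x / dist, d.y / dist, d.z / dist }));
            float s = ndl * (1.0f - dist / l.radius) * l.intensity;
            c.x += l.color.x * s; c.y += l.color.y * s; c.z += l.color.z * s;
        }
        output[i] = c;
    }
}
```

In the related-art flow of fig. 4 (a), the directional-light term evaluated inside geometryPass above would instead be computed in lightingPass, after the G-Buffer has been read back.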
The following description is mainly directed to a rendering system for implementing the rendering method provided in the present application.
The rendering method provided by the embodiments of the present application may be executed by an electronic device, which may be a terminal, a server or another device. The terminal may be a mobile terminal such as a smart phone, a tablet computer or a notebook computer, or a non-mobile terminal device such as a desktop computer. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms.
The embodiment of the application takes rendering game pictures as an example to carry out method introduction.
In an alternative embodiment, the terminal device stores an application program for rendering the game scene when the rendering method is run on the terminal device. The terminal device interacts with the user through a graphical user interface. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection.
In an alternative embodiment, when the rendering method runs on a server, it may be implemented and executed based on a cloud gaming system. A cloud gaming system refers to a service based on cloud computing and comprises a server and client devices. The body that runs the game application and the body that presents the game picture are separated: the storage and execution of the rendering method are completed on the server, while the game picture is presented on the client. The client is mainly used for receiving and sending game information and presenting the game picture; for example, the client may be a display device with an information transmission function near the user side, such as a mobile terminal, a television, a computer, a palm computer, a personal digital assistant or a head-mounted display device, but the electronic device performing the rendering is the cloud server. When playing the game, the user operates the client to send an instruction to the server; the server controls the rendering of the game picture according to the instruction, encodes and compresses the rendered game picture and other information, returns it to the client through the network, and finally the game picture is decoded and output by the client.
It should be noted that, for the non-mobile terminal and the server, the GPU hardware has less limitation on the size of the geometric Buffer area, so that the influence of the GPU hardware performance on the illumination rendering effect can be avoided. Therefore, the preferred execution body of the rendering method provided in the embodiment of the present application is a mobile terminal.
The technical scheme of the present application is described in detail below through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 5 is a flowchart illustrating an example of a rendering method according to an embodiment of the present application. It should be noted that the steps shown may be performed in a logical order different from that shown in the flowchart. The method may include the following steps S210 to S240. In the embodiments of the present application, the deferred rendering pipeline of the UE5 engine on the mobile terminal is taken as an example for explanation.
Step S210: obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises: light source information of a light source in the virtual scene and model information of at least one non-transparent virtual object; the light sources include a first type of light source and a second type of light source.
It will be appreciated that during the rendering of a three-dimensional virtual scene, a virtual camera is required to receive the scene image, and the field of view of the virtual camera determines the picture that the GPU ultimately displays on the display screen. The virtual scene is obtained by shooting with a virtual camera and is to be output to a display screen of the electronic equipment for display. For example, the virtual scene may be a game scene of a virtual game.
The virtual scene data may include various information required for rendering the virtual scene. For example, the virtual scene data may include three-dimensional model information of each virtual object in the virtual scene, light source information corresponding to each light source in the virtual scene, or other information in the game scene, which is not limited.
For example, virtual objects in a virtual scene may include, but are not limited to, model information for virtual models of virtual characters, virtual animals, virtual buildings, or virtual items in a game. The light source information of the light source in the virtual scene includes, but is not limited to, information such as the number of light sources, the type of the light source, the light source position or the light incidence direction, the illumination intensity, and the like.
In general, model information of a virtual object is stored by vertex information corresponding to each vertex of a model surface of the virtual object. The model information may be understood as basic geometric information required for rendering the virtual model. For example, the model information of the virtual object may include, but is not limited to, information such as a type, a name, the number of vertices, position information of the vertices, depth information, normal line information, texture information, color information, and the like of the virtual object, which is not limited.
The depth information is used for representing the distance between the primitives forming the virtual object and the virtual camera shooting the virtual scene, or can be understood as the distance between the primitives forming the virtual object and the display screen of the electronic equipment;
the normal line information is used to simulate the illumination effect of the virtual object, and for example, the illumination brightness of the virtual object may be calculated according to the normal line corresponding to each vertex of the virtual object, the position of the light source, and the like.
The color information is the most basic color with light and shadow variations removed; it may include both metallic and non-metallic colors and should not carry any light and shadow information.
The texture information is information for characterizing a model surface property of the virtual object, for example, information of glossiness, roughness, transparency, and the like of the model surface of the virtual object.
It is understood that transparent virtual objects and non-transparent virtual objects may be included in the virtual scene. Transparent virtual objects are virtual objects in a virtual scene that have a transparent effect, and that can be seen through the background or surrounding environment without affecting other virtual objects, such as smoke, lake surfaces, waterfalls, water drops, glass, etc. in a game scene. Non-transparent virtual objects are virtual objects that do not have a transparent effect, and in games, opaque virtual objects occupy most game scenes, such as virtual buildings, virtual characters, virtual vehicles, etc.
The rendering process of transparent virtual objects requires more factors to be considered and the rendering process is more complex than the rendering process of non-transparent virtual objects. For example, when multiple transparent virtual objects are superimposed, the ordering and mixing of the transparent virtual objects needs to be properly considered. Based on this, the present embodiment mainly introduces a non-transparent virtual object in a virtual scene. Accordingly, in this embodiment, the virtual scene data at least includes model information of at least one non-transparent virtual object in the virtual scene.
The light sources include a first type light source and a second type light source, and in this embodiment, the first type light source may be a global light source, and the second type light source may be a local light source. Wherein the global light source may be used to generate a lighting effect for the entire virtual scene. The local light source may be used to create a lighting effect for a portion of the virtual scene. For example, as shown in fig. 1, the first type of light source may comprise a parallel light source. As shown in fig. 2 and 3, the second type of light source may include, but is not limited to, a local light source such as a point light source or a spotlight light source. In the present embodiment, the number of light sources of the first type light source and the second type light source is not particularly limited, and the number of the first type light source and the number of the second type light source may include one or more, respectively.
It should be noted that as the number of global light sources in the virtual scene increases, rendering performance consumption increases accordingly. Therefore, the preferred embodiment of the present application renders a virtual scene containing one global light source, so that the mobile terminal can achieve a better balance between rendering performance consumption and scene rendering effect; in addition, the illumination effect of the first type light source can thus be extended while this embodiment stays on a par with the rendering performance consumption of the deferred rendering scheme in the related art. It will be appreciated that for current games on mobile terminals, only one light source of the first type is typically required to meet the game requirements.
Step S220: and determining basic rendering information which corresponds to the virtual object and does not have the illumination result attribute according to the model information of the virtual object.
The basic rendering information is geometric rendering information obtained by processing the model information of the virtual object, without illumination rendering having been performed; a two-dimensional image with basic color information could be rendered from the basic rendering information, but it would carry no illumination. The basic rendering information may include, but is not limited to, vertex position information in screen space, depth information corresponding to each vertex, basic color information, normal information, and the like.
It is understood that step S220 is performed in the geometry processing pass of the deferred rendering pipeline. The geometry processing pass handles the set of steps of all geometry-related flows required for rendering. For example, the geometry processing pass determines what the primitives corresponding to the virtual object to be drawn are, how to draw them, and where on the screen to start drawing. In a specific embodiment, as shown in fig. 4 (b), the main steps of determining, in the geometry processing pass, the information to be rendered corresponding to the virtual object according to the model information of the virtual object may include: vertex processing, rasterization, early depth culling, and pixel processing.
In the vertex processing stage, vertex coordinates of each vertex of the virtual object in the model information are obtained; converting the vertex coordinates of each vertex in space to obtain corresponding screen coordinates of each vertex in a screen space; the screen coordinates are used to indicate the display position of the vertex in screen space.
In a specific embodiment, the vertex coordinates of each vertex of the virtual object may be spatially transformed by the vertex shader, that is, the vertex coordinates located in the three-dimensional model space are transformed into the two-dimensional screen space, so as to obtain the position information of the virtual object in the two-dimensional screen space, that is, the display position of each vertex on the display screen. The display position may be expressed as two-dimensional screen coordinate values in a screen coordinate system established for the display screen of the electronic device. Besides the two-dimensional screen coordinates of each vertex, the information output by the vertex shader may include, but is not limited to: depth information, normal information, texture coordinates, basic color information and the like corresponding to each vertex.
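A minimal C++ sketch of the space transformation described above; the combined model-view-projection matrix, the viewport size and the function names are assumptions made for illustration.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{0.f, 0.f, 0.f, 0.f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

struct ScreenVertex { float x, y; float depth; };

// Model-space position -> clip space via a combined model-view-projection
// matrix, then perspective divide and viewport mapping to screen pixels.
ScreenVertex vertexToScreen(const Vec4& modelPos, const Mat4& mvp,
                            float screenWidth, float screenHeight) {
    Vec4 clip = mul(mvp, modelPos);
    float invW = 1.0f / clip[3];                 // perspective divide
    float ndcX = clip[0] * invW;                 // normalised device coords in [-1, 1]
    float ndcY = clip[1] * invW;
    float ndcZ = clip[2] * invW;
    ScreenVertex out;
    out.x = (ndcX * 0.5f + 0.5f) * screenWidth;  // viewport mapping
    out.y = (1.0f - (ndcY * 0.5f + 0.5f)) * screenHeight;
    out.depth = ndcZ;                            // used later for early depth culling
    return out;
}
```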
Next, a rasterization processing stage is performed, which converts the vector graphics of the virtual object into fragments; a fragment can be regarded as the data representation of a screen pixel, and the final color of a screen pixel is generated from the information carried by the fragments.
The specific processing of the rasterization stage is as follows. First, primitive assembly is performed on the vertices output by the vertex shader to obtain a number of simple basic shapes, that is, a number of primitives. A primitive may be a point, a line or a triangle; in this embodiment, the method is described taking the assembly of the vertices into triangles as an example.
After a number of primitives have been obtained, rasterization is performed for each primitive. Specifically, each screen pixel on the display screen of the electronic device is traversed and checked to see whether it is covered by one of the assembled triangles; if so, a fragment is generated, so that a number of fragments corresponding to each primitive are obtained.
It should be noted that these fragments are not actual screen pixels on the display screen, but aggregates of various pieces of information, including, for example, but not limited to, screen coordinates, depth information, normal information, texture information and/or transparency information. This information is used to finally calculate the color of each pixel. Each fragment corresponds to at least one screen pixel in screen space. That is, a fragment may correspond to one screen pixel or to a group of screen pixels, which is determined by the specific rendering method adopted and is not described in detail here.
After the fragments are generated, the fragment information of each fragment is determined according to the position of the fragment within its primitive and the model information of the vertices of that primitive, so that the fragment information corresponding to every fragment is obtained.
In a specific embodiment, the distance between the fragment and each vertex of its primitive can be determined from the position of the fragment within the primitive; the weight corresponding to each vertex is then determined from the distance between the fragment and that vertex, where the weight characterizes the extent to which each vertex influences the fragment. Then, the model information corresponding to each vertex of the primitive is interpolated according to the weights of the vertices, and the interpolation result is taken as the fragment information of the fragment.
For example, barycentric interpolation is used to interpolate the normal information, texture coordinates and/or basic color information in the vertex information of the vertices of the primitive in which the fragment lies, so as to obtain the fragment information of the fragment. Accordingly, the kinds of information in the fragment information correspond to the kinds of information in the model data, and may include, but are not limited to, the two-dimensional screen coordinates, basic color information, normal information, depth information and texture coordinates of the fragment. The fragment information provides the relevant shading information for the subsequent pixel shading stage.
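A minimal C++ sketch of the barycentric interpolation described above, interpolating one per-vertex attribute (here a color) at the fragment position; the names are assumptions made for illustration.

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Signed, doubled area of triangle (a, b, p); the building block of barycentric weights.
float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Interpolates a per-vertex attribute (here a color) at fragment position p
// inside triangle (v0, v1, v2). Each weight reflects how strongly the
// corresponding vertex influences the fragment.
Vec3 interpolateAttribute(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                          const Vec3& c0, const Vec3& c1, const Vec3& c2,
                          const Vec2& p) {
    float area = edge(v0, v1, v2);
    float w0 = edge(v1, v2, p) / area;   // weight of vertex 0
    float w1 = edge(v2, v0, p) / area;   // weight of vertex 1
    float w2 = edge(v0, v1, p) / area;   // weight of vertex 2
    return { w0 * c0.x + w1 * c1.x + w2 * c2.x,
             w0 * c0.y + w1 * c1.y + w2 * c2.y,
             w0 * c0.z + w1 * c1.z + w2 * c2.z };
}
```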
After the fragments and their fragment information are obtained, each fragment can undergo early depth culling: after the rasterization stage and before the pixel processing stage, a depth test is performed, and fragments that fail the test do not take part in the calculations of the subsequent pixel processing stage, which reduces the number of pixels participating in shading and light and shadow calculation and improves rendering performance. After early depth culling, the fragment closest to the virtual camera is the one that will finally be rendered, and the fragments remaining after early depth culling can be regarded as the screen pixels in the frame buffer that are waiting to be rendered.
The specific steps of early depth culling may include: for each fragment, obtaining the depth information in the fragment information of the fragment, where the depth information characterizes the distance between the fragment and the virtual camera shooting the virtual scene; comparing the depth information of the fragment with the depth threshold stored in the depth buffer for the screen pixel corresponding to the fragment, and if the depth information of the fragment is smaller than the depth threshold, writing the depth information of the fragment into the depth buffer to update the depth threshold, until the screen pixels correspond to the fragments one to one, thereby obtaining the fragments remaining after depth culling.
It will be appreciated that the Depth Buffer (or Z-Buffer) is a buffer for storing depth information that is initialized at the beginning of rendering. Comparing the depth information of a fragment with the depth threshold of its corresponding screen pixel in the depth buffer means sequentially feeding the depth information of each fragment into the depth buffer and comparing it with the depth threshold stored there for the corresponding screen pixel; if the depth information of the incoming fragment is smaller than the depth threshold, the depth information of that fragment is written into the depth buffer to overwrite the depth threshold, yielding a new depth threshold, and the depth information of the next fragment for that screen pixel is then compared with the new depth threshold and the threshold is updated again. When the screen pixels correspond to the fragments one to one, that is, when each screen pixel corresponds to only one fragment, early depth culling is finished and the fragments remaining after depth culling are obtained.
In this embodiment, depth culling is performed according to the real occlusion relationship between fragments: the depth information of the fragment closer to the virtual camera is the one written into the depth buffer. Some special rendering is sometimes needed, for example rendering that does not follow the real occlusion relationship, in which case other decision rules can be adopted. In that case, it is determined whether a preset condition is satisfied between the depth information of the fragment fed into the depth buffer and the depth threshold, and if the preset condition is satisfied, the depth information is written into the depth buffer and the depth threshold in the depth buffer is updated.
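A minimal C++ sketch of the depth test and threshold update described above, assuming the simple "smaller depth wins" rule; the structures are hypothetical.

```cpp
#include <vector>
#include <limits>

struct Fragment {
    int   pixelX;
    int   pixelY;
    float depth;     // distance from the virtual camera, smaller = closer
    // ... color, normal and other fragment information omitted
};

struct DepthBuffer {
    int width, height;
    std::vector<float> depthThreshold;  // one threshold per screen pixel

    DepthBuffer(int w, int h)
        : width(w), height(h),
          depthThreshold(static_cast<size_t>(w) * h,
                         std::numeric_limits<float>::max()) {}

    // Early depth test: the fragment survives only if it is closer than the
    // current threshold for its screen pixel; the threshold is then updated.
    bool testAndUpdate(const Fragment& f) {
        float& threshold = depthThreshold[static_cast<size_t>(f.pixelY) * width + f.pixelX];
        if (f.depth < threshold) {       // a different predicate could be used for special rendering
            threshold = f.depth;
            return true;                 // fragment continues to the pixel processing stage
        }
        return false;                    // fragment is culled, no shading cost
    }
};
```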
In this embodiment, after early depth culling, the fragment information of the fragments that survive the culling may be input to a pixel shader, and the pixel corresponding to each fragment is colored using the basic color information in the fragment information, so as to obtain the basic rendering information corresponding to each pixel.
It should be noted that the coloring performed by the pixel shader here only applies the basic color of each screen pixel; no illumination calculation is performed at this point.
After the basic rendering information corresponding to each screen pixel is obtained, the light and shadow calculation of each light source in the virtual scene is performed. In the present embodiment, the shadow calculation may include step S230 and step S240. Wherein, step S230 calculates the light shadow of the first type light source, and step S240 calculates the light shadow of the second type light source.
In the following, the light shadow calculations for the two types of light sources are described separately. Step S230 adds the illumination result attribute of the first type light source to the base rendering information.
Step S230: and determining first rendering information with illumination result attribute of the first type light source according to the light source information of the first type light source and the basic rendering information, and storing the first rendering information into a geometric buffer area.
It can be understood that after the basic rendering information corresponding to each screen pixel is obtained, the light shadow calculation of the first type light source can be performed on the non-transparent virtual object based on the basic rendering information and the light source information of the first type light source, so as to obtain the first rendering information with the illumination result attribute of the first type light source. That is, the first rendering information may include illumination result information of the first type light source and geometric information required for performing light shadow calculation of the second type light source, for example, the geometric information may include normal line information, material information, and the like of each virtual object, and the embodiment is not particularly limited.
It should be noted that, in the embodiment of the present application, step S230 is performed in the geometry processing pass of the deferred rendering pipeline. As can be seen from the foregoing description, the first type light source is a global light source, and may specifically include a parallel light source in a game scene. In a mobile-terminal game, one parallel light source is arranged to meet the requirement of global illumination. In this case, whether the parallel-light light and shadow calculation for the virtual object is placed in the geometry processing pass and rendered pixel by pixel after early depth culling, or placed in the lighting processing pass and rendered there, the number of screen pixels participating in the rendering is the same, namely the number of pixels remaining after depth culling, and no additional performance consumption is generated. The deferred rendering technical solution of the present application is therefore on a par with the related art in terms of performance consumption.
In addition, placing the light and shadow calculation of the first type light source in the geometry processing pass of deferred rendering means that the light and shadow calculation of the global light source in the virtual scene avoids the limit of the geometry buffer size, and a user can enrich the illumination rendering effect of the global light source of the virtual scene by adjusting the material information of the virtual object in the geometry processing pass. Compared with the related-art approach of increasing the size of the geometry buffer so as to store more material information for the virtual object and thereby enrich its illumination expression, the technical solution provided by this embodiment can adjust various kinds of material information of the virtual object within the geometry processing pass; the amount of data after the material information is adjusted is not limited by the size of the geometry buffer, the range of adjustable parameters is wider, and the rendered illumination expression of the global light source is richer.
It should be noted that when the material information of the virtual object is adjusted, the relevant adjustment information may be cached in other available buffers in the GPU, such as, but not limited to, a Uniform Buffer, a Constant Buffer or a Structured Buffer. When performing the light and shadow calculation of the first type light source, the GPU can obtain the relevant information from these available buffers and extend the illumination effect of the first type light source on the virtual object, so the amount of information stored in the geometry buffer is not affected.
In a specific embodiment, the above-mentioned first rendering information carrying the illumination result attribute of the first type light source may be determined from the light source information of the first type light source and the basic rendering information in the following way, namely by performing the illumination calculation screen pixel by screen pixel: the color of each screen pixel is calculated individually based on factors such as the light source, the normal of the object surface and the position of the pixel.
Specifically, the light incidence direction in the light source information of the first type light source is obtained first; the angle between the normal corresponding to the screen pixel and the light incidence direction is calculated, and the projected area of the light on the screen pixel is calculated from this angle. The brightness information of the first type light source is then calculated from the projected area and the illumination intensity in the light source information. Finally, the brightness information is superimposed on the color information of the screen pixel to obtain the first rendering information carrying the illumination result attribute for that screen pixel. In this way, the first rendering information carrying the illumination result attribute can be calculated for every screen pixel. This method can precisely control the color of each screen pixel and achieve a high-quality illumination effect.
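The following C++ sketch illustrates one way the per-screen-pixel computation just described could look, using a Lambert-style diffuse term; the structure names and the exact formula are assumptions made for illustration rather than the claimed method itself.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

struct DirectionalLight { Vec3 incidenceDir; Vec3 color; float intensity; };

// Per-screen-pixel shading for the first type (global) light source: the angle
// between the pixel's normal and the light incidence direction gives the
// projected-area factor, which scales the light intensity; the resulting
// brightness is then superimposed on the pixel's base color.
Vec3 shadeGlobalLight(Vec3 baseColor, Vec3 normal, const DirectionalLight& light) {
    Vec3 n = normalize(normal);
    Vec3 toLight = normalize({ -light.incidenceDir.x, -light.incidenceDir.y, -light.incidenceDir.z });
    float projectedArea = std::max(0.0f, dot(n, toLight));   // cosine of the angle
    float brightness = projectedArea * light.intensity;
    return { baseColor.x * light.color.x * brightness,
             baseColor.y * light.color.y * brightness,
             baseColor.z * light.color.z * brightness };
}
```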
After the first rendering information is obtained, the first rendering information may be stored into a plurality of geometric buffers. Storing into the geometry buffer may also be considered as generating a plurality of texture maps for the first rendering information. For example, a color map, a depth map, a normal map, a texture map, etc. may be generated for the first rendering information, which is not limited and may be set according to specific requirements. The color information stored in the color map is color information with the illumination result attribute of the first type light source superimposed on the basis of the basic color information.
After the first rendering information is stored to the geometry buffer, the light and shadow calculation of the second type light source may be performed as in step S240 below.
Step S240: and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometric buffer.
This step S240 may be performed in the lighting processing path of the deferred rendering pipeline.
It should be noted that, as can be seen from the foregoing description, the second type of light source is a local light source, and the local light source can only generate an illumination effect on the pixels within the illumination range, so that each local light source only needs to perform the light shadow calculation on the screen pixels within the illumination range.
Therefore, when the light and shadow calculation of the second type light source is performed on the virtual object, the local light source corresponding to each screen pixel is first determined according to the illumination range in the light source information of the second type light source, then the light and shadow calculation of the local light source is performed pixel by pixel, and the output color corresponding to each screen pixel is produced. In a specific embodiment, the illumination is calculated by fetching the required first rendering information from the corresponding geometry buffers. Assuming that the final output color of the screen pixel with the screen coordinate value (x, y) needs to be calculated, the GPU may first read, from the first rendering information in the multiple geometry buffers, the light and shadow calculation parameters of the pixel corresponding to the (x, y) coordinate value, for example, the color data with the illumination result attribute of the first type light source, the roughness, the diffuse reflection color, the normal information, the world coordinates, and so on. Then, according to the other light source information of the local light source transmitted to the GPU, for example, the light source irradiation direction of the local light source that has an illumination influence on the pixel, these parameters are substituted into the mathematical formula to complete the calculation. The specific calculation method may refer to the foregoing light and shadow calculation method for the first type light source, which is not described herein again.
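The following C++ sketch illustrates, for a single screen pixel, how the lighting processing path could combine the first rendering information read back from the geometry buffers with the light source information of one local light source. The attenuation model, the data layout and all names are assumptions of this example; the embodiment itself refers back to the formula used for the first type light source.

```cpp
// Lighting-pass calculation for one screen pixel affected by a local (second-type)
// light source; the geometry-buffer values were written during the geometry pass.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

struct GBufferSample {         // values fetched from the geometry buffers at (x, y)
    Vec3 litColor;             // color already containing the first-type light's result
    Vec3 normal;               // unit surface normal
    Vec3 worldPos;             // reconstructed world position
};

struct LocalLight {
    Vec3  position;            // world-space position of the local light
    Vec3  color;               // light color
    float intensity;           // illumination intensity
    float radius;              // illumination range
};

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 shadeLocalLight(const GBufferSample& g, const LocalLight& light) {
    Vec3 toLight = { light.position.x - g.worldPos.x,
                     light.position.y - g.worldPos.y,
                     light.position.z - g.worldPos.z };
    float dist = std::sqrt(dot(toLight, toLight));
    if (dist > light.radius) return g.litColor;          // pixel outside the illumination range
    Vec3 l = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
    float ndotl = std::max(dot(g.normal, l), 0.0f);
    float atten = 1.0f - dist / light.radius;             // simple linear falloff (assumption)
    float b = ndotl * atten * light.intensity;
    return { g.litColor.x + light.color.x * b,            // local contribution added on top of
             g.litColor.y + light.color.y * b,            // the first-type light's result
             g.litColor.z + light.color.z * b };
}
```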
Since a local light source can only produce an illumination effect on the pixels within its illumination range, and since the geometry processing path in the deferred rendering pipeline cannot determine the illumination range of a local light source, the calculation of each local light source would have to be performed for every pixel there, which leads to a large amount of calculation and causes extra rendering consumption. In addition, the number of local light sources in a virtual scene is usually greater than the number of global light sources; if the local light sources were moved to the geometry processing path so as not to be limited by the geometry buffer, then, when the rendering features are enriched, the running consumption would increase many times over because of the increased number of light sources, and could not be kept level with the performance consumption of the related deferred rendering scheme. Therefore, advancing the light and shadow calculation of the local light sources to the geometry processing path of the deferred rendering pipeline is not a preferred implementation.
In addition, because the parameters required for the calculation of each local light of the whole virtual scene are stored in the geometry buffer, they possess spatial continuity, completeness and flexibility; keeping the calculation of the local light sources in the lighting processing path therefore makes it possible to achieve better screen-space-based light and shadow effects.
Based on the above analysis, in this embodiment, the light and shadow calculation of the local light sources remains in the lighting processing path of the deferred rendering pipeline, which is sufficient to satisfy the basic rendering effect of the local light sources, while the expansion of the illumination rendering effect of the virtual object, namely the expansion of the illumination effect of the global light source, can be realized in the geometry processing path.
The above describes the method provided by this embodiment. The rendering method provided by the embodiment of the present application advances the light and shadow calculation of the first type light source from the lighting processing path of the original deferred rendering pipeline to the geometry processing path, so that, without increasing the rendering performance consumption, the light and shadow calculation of the first type light source is not limited by the size of the G-Buffer, and the light and shadow rendering features of the first type light source can thus be realized on the mobile terminal. Meanwhile, within the limits of the GPU hardware performance of the mobile terminal, the method keeps the light and shadow calculation of the second type light source in the lighting processing path, reducing the influence on the performance of the original deferred rendering pipeline. The rendering method has high compatibility with the GPU hardware performance of mobile terminals; for example, for mobile terminals that do not support a G-Buffer larger than 128 bits, the expansion of the light and shadow rendering features of the virtual object can still be realized, so that these mobile terminals can support more stylized light and shadow rendering effects.
In addition, when the illumination rendering features of the virtual object are enriched in this way, the G-Buffer expansion problem in the related art does not need to be considered, which reduces the maintenance cost of the deferred rendering pipelines of different mobile terminals.
Corresponding to the rendering method provided in the embodiment of the present application, the embodiment of the present application further provides a rendering apparatus 200, as shown in fig. 6, including: an acquisition unit 201, a determination unit 202, and a rendering unit 203;
an obtaining unit 201, configured to obtain virtual scene data of a virtual scene to be rendered, where the virtual scene data includes: light source information of a light source in the virtual scene and model information of at least one non-transparent virtual object; the light sources comprise a first type of light source and a second type of light source, the first type of light source comprises a global light source for generating illumination effects on the virtual scene, and the second type of light source comprises a local light source for generating illumination effects on a part of the virtual scene;
a determining unit 202, configured to determine, according to model information of the virtual object, base rendering information corresponding to the virtual object that does not have an illumination result attribute;
the determining unit 202 is further configured to determine first rendering information having an illumination result attribute of the first type light source according to the light source information and the base rendering information of the first type light source, and store the first rendering information to the geometry buffer;
and a rendering unit 203, configured to render the virtual object according to the light source information of the second type light source and the first rendering information in the geometry buffer.
Optionally, the determining unit 202 is specifically configured to sequentially perform vertex processing, rasterization processing, early depth culling, and pixel processing on the model information of the virtual object, so as to obtain the base rendering information corresponding to the virtual object.
Optionally, the determining unit 202 is specifically configured to obtain vertex coordinates of each vertex of the virtual object in the model information; performing space conversion on vertex coordinates of the vertexes to obtain screen coordinates corresponding to the vertexes in a screen space; the screen coordinates are used to indicate a display position of the vertex in the screen space.
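As an illustration of the space conversion step handled by the determining unit 202, the following C++ sketch maps a model-space vertex to screen coordinates through a combined model-view-projection matrix, a perspective divide and a viewport mapping. The matrix layout, the top-left screen origin and all function names are assumptions of this example, not details specified by the embodiment.

```cpp
// Vertex-processing space conversion: model-space vertex -> clip space -> screen space.
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major

struct ScreenCoord { float x, y, depth; };

static Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// mvp           : combined model-view-projection matrix for the virtual object
// vertex        : vertex coordinates from the model information (w assumed to be 1)
// width, height : screen-space resolution in pixels
ScreenCoord toScreenSpace(const Mat4& mvp, const Vec4& vertex, float width, float height) {
    Vec4 clip = mul(mvp, vertex);
    float invW = 1.0f / clip[3];                      // perspective divide
    float ndcX = clip[0] * invW, ndcY = clip[1] * invW, ndcZ = clip[2] * invW;
    return { (ndcX * 0.5f + 0.5f) * width,            // viewport mapping to pixel coordinates
             (1.0f - (ndcY * 0.5f + 0.5f)) * height,  // flip Y so the origin is top-left
             ndcZ };                                   // kept as depth for the later depth test
}
```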
Optionally, the determining unit 202 is specifically further configured to perform primitive assembly on each vertex in the screen space to generate a plurality of primitives; rasterize the primitives to obtain a plurality of fragments corresponding to each primitive, where each of the fragments corresponding to the primitive corresponds to at least one screen pixel in the screen space; and determine the fragment information corresponding to each fragment according to the position of the fragment in the primitive and the model information corresponding to each vertex of the primitive, so as to obtain the fragment information corresponding to each fragment.
Optionally, the determining unit 202 is specifically further configured to determine the distance between the fragment and each vertex of the primitive according to the position of the fragment in the primitive; determine the weight corresponding to each vertex according to the distance between the fragment and each vertex of the primitive; and perform interpolation processing on the model information corresponding to each vertex of the primitive according to the weight corresponding to each vertex to obtain an interpolation processing result, and determine the interpolation processing result as the fragment information corresponding to the fragment.
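The following C++ sketch illustrates the weight-then-interpolate idea in the preceding paragraph. It uses inverse-distance weighting purely for illustration; a real rasterizer would typically use barycentric weights, and all names here are assumptions of the example.

```cpp
// Producing fragment information from the three vertices of a triangle primitive
// by distance-based weighting and interpolation.
#include <array>
#include <cmath>

struct Vec2 { float x, y; };
struct VertexAttr { float color[3]; float depth; };

static float distanceTo(const Vec2& a, const Vec2& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

VertexAttr interpolateFragment(const Vec2& fragPos,
                               const std::array<Vec2, 3>& vertexPos,
                               const std::array<VertexAttr, 3>& vertexAttr) {
    // Weight each vertex by the inverse of its distance to the fragment, then normalize.
    float w[3], sum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        w[i] = 1.0f / (distanceTo(fragPos, vertexPos[i]) + 1e-6f);
        sum += w[i];
    }
    VertexAttr out{};
    for (int i = 0; i < 3; ++i) {
        float k = w[i] / sum;
        out.color[0] += k * vertexAttr[i].color[0];
        out.color[1] += k * vertexAttr[i].color[1];
        out.color[2] += k * vertexAttr[i].color[2];
        out.depth    += k * vertexAttr[i].depth;
    }
    return out;   // interpolation result used as the fragment information
}
```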
Optionally, the determining unit 202 is specifically further configured to obtain, for each fragment, the depth information in the fragment information of the fragment, the depth information being used for representing the distance between the fragment and a virtual camera shooting the virtual scene; and compare the depth information of the fragment with the depth threshold, stored in a depth buffer, of the screen pixel corresponding to the fragment, and if the depth information of the fragment is smaller than the depth threshold, write the depth information of the fragment into the depth buffer to update the depth threshold, until the screen pixels correspond to the fragments one to one, so as to obtain the fragments remaining after depth culling.
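A CPU-side C++ sketch of this early depth culling logic is given below. On a GPU the test is performed by fixed-function hardware; the container-based formulation and the names are assumptions made only to make the comparison-and-update rule explicit.

```cpp
// Early depth culling: keep, for every covered screen pixel, only the closest fragment.
#include <limits>
#include <vector>

struct Fragment { int pixelX, pixelY; float depth; /* ...other fragment information... */ };

std::vector<Fragment> earlyDepthCull(const std::vector<Fragment>& fragments,
                                     int width, int height) {
    std::vector<float> depthBuffer(width * height, std::numeric_limits<float>::max());
    std::vector<int> keptIndex(width * height, -1);
    for (int i = 0; i < static_cast<int>(fragments.size()); ++i) {
        const Fragment& f = fragments[i];
        int idx = f.pixelY * width + f.pixelX;
        if (f.depth < depthBuffer[idx]) {        // closer than the current depth threshold
            depthBuffer[idx] = f.depth;          // update the depth threshold
            keptIndex[idx] = i;                  // this fragment now represents the pixel
        }
    }
    std::vector<Fragment> kept;                  // one surviving fragment per screen pixel
    for (int idx : keptIndex)
        if (idx >= 0) kept.push_back(fragments[idx]);
    return kept;
}
```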
Optionally, the determining unit 202 is specifically further configured to color the screen pixel corresponding to each fragment according to the basic color information in the fragment information of the fragments remaining after depth culling, obtain the base rendering information corresponding to each screen pixel in the screen space, and determine the base rendering information corresponding to each screen pixel as the base rendering information corresponding to the virtual object.
Corresponding to the rendering method provided by the embodiment of the present application, the embodiment of the present application further provides an electronic device for rendering, as shown in fig. 7, where the electronic device includes: a processor 301; and a memory 302 for storing a program of the rendering method. After being powered on and running the program of the rendering method through the processor, the electronic device performs the following steps:
obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises: light source information of a light source and model information of at least one non-transparent virtual object in the virtual scene; the light sources comprise a first type of light source and a second type of light source, wherein the first type of light source comprises a global light source for generating illumination effects on the virtual scene, and the second type of light source comprises a local light source for generating illumination effects on part of the virtual scene;
Determining basic rendering information which corresponds to the virtual object and does not have illumination result attribute according to the model information of the virtual object;
determining first rendering information with illumination result attribute of the first type light source according to the light source information of the first type light source and the basic rendering information, and storing the first rendering information into a geometric buffer area;
and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometric buffer.
Corresponding to the rendering method provided in the embodiment of the present application, the embodiment of the present application further provides a computer readable storage medium storing a program of the rendering method, the program being executed by a processor, performing the steps of:
obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises: light source information of a light source and model information of at least one non-transparent virtual object in the virtual scene; the light sources comprise a first type of light source and a second type of light source, wherein the first type of light source comprises a global light source for generating illumination effects on the virtual scene, and the second type of light source comprises a local light source for generating illumination effects on part of the virtual scene;
Determining basic rendering information which corresponds to the virtual object and does not have illumination result attribute according to the model information of the virtual object;
determining first rendering information with illumination result attribute of the first type light source according to the light source information of the first type light source and the basic rendering information, and storing the first rendering information into a geometric buffer area;
and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometric buffer.
It should be noted that, for the detailed description of the apparatus, the electronic device, and the computer readable storage medium provided in the embodiments of the present application, reference may be made to the related description of the embodiment of the rendering method provided in the embodiments of the present application, which is not repeated here.
In one typical configuration, the electronic device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
While the preferred embodiment has been described, it is not intended to limit the invention thereto, and any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present invention, so that the scope of the present invention shall be defined by the claims of the present application.

Claims (10)

1. A rendering method applied to a mobile terminal, the method comprising:
obtaining virtual scene data of a virtual scene to be rendered, wherein the virtual scene data comprises: light source information of a light source and model information of at least one non-transparent virtual object in the virtual scene; the light sources comprise a first type of light source and a second type of light source, the first type of light source comprising a global light source for generating a lighting effect on the virtual scene, the second type of light source comprising a local light source for generating a lighting effect on a portion of the virtual scene;
Determining basic rendering information which corresponds to the virtual object and does not have illumination result attribute according to the model information of the virtual object;
determining first rendering information with illumination result attribute of the first type light source according to the light source information of the first type light source and the basic rendering information, and storing the first rendering information into a geometric buffer area;
and rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometric buffer.
2. The method according to claim 1, wherein the determining, according to the model information of the virtual object, the base rendering information corresponding to the virtual object without the illumination result attribute includes:
and sequentially carrying out vertex processing, rasterization processing, early depth culling and pixel processing on the model information of the virtual object to obtain basic rendering information which corresponds to the virtual object and does not have the illumination result attribute.
3. The method according to claim 2, wherein the processing step of vertex processing the model information of the virtual object includes:
obtaining vertex coordinates of each vertex of the virtual object in the model information;
Performing space conversion on vertex coordinates of the vertexes to obtain screen coordinates corresponding to the vertexes in a screen space; the screen coordinates are used to indicate a display position of the vertex in the screen space.
4. A method according to claim 3, wherein the step of rasterizing the model information of the virtual object comprises:
performing primitive assembly on each vertex in the screen space to generate a plurality of primitives;
rasterizing the primitives to obtain a plurality of fragments corresponding to each primitive, wherein each of the fragments corresponding to the primitive corresponds to at least one screen pixel in the screen space;
and determining the fragment information corresponding to the fragment according to the position of the fragment in the primitive and the model information corresponding to each vertex of the primitive, so as to obtain the fragment information corresponding to each fragment.
5. The method of claim 4, wherein determining the fragment information corresponding to the fragment according to the position of the fragment in the primitive and the model information corresponding to each vertex of the primitive comprises:
determining the distance between the fragment and each vertex of the primitive according to the position of the fragment in the primitive;
determining the weight corresponding to each vertex according to the distance between the fragment and each vertex of the primitive;
and carrying out interpolation processing on the model information corresponding to each vertex of the primitive according to the weight corresponding to each vertex to obtain an interpolation processing result, and determining the interpolation processing result as the fragment information corresponding to the fragment.
6. The method according to claim 4, wherein the step of performing the early depth culling on the model information of the virtual object comprises:
for each fragment, acquiring depth information in fragment information of the fragment; the depth information is used for representing the distance between the fragment and a virtual camera shooting the virtual scene;
and comparing the depth information of the fragment with the depth threshold, stored in a depth buffer, of the screen pixel corresponding to the fragment, and if the depth information of the fragment is smaller than the depth threshold, writing the depth information of the fragment into the depth buffer to update the depth threshold, until the screen pixels and the fragments are in one-to-one correspondence, so as to obtain the fragments remaining after depth culling.
7. The method according to claim 6, wherein the processing step of performing the pixel processing on the model information of the virtual object includes:
coloring the screen pixels corresponding to the fragments according to the basic color information in the fragment information of the fragments remaining after depth culling, to obtain basic rendering information corresponding to each screen pixel in the screen space, and determining the basic rendering information corresponding to each screen pixel as the basic rendering information corresponding to the virtual object.
8. A rendering apparatus, the apparatus comprising: the device comprises an acquisition unit, a determination unit and a rendering unit;
the obtaining unit is configured to obtain virtual scene data of a virtual scene to be rendered, where the virtual scene data includes: light source information of a light source and model information of at least one non-transparent virtual object in the virtual scene; the light sources comprise a first type of light source and a second type of light source, the first type of light source comprising a global light source for generating a lighting effect on the virtual scene, the second type of light source comprising a local light source for generating a lighting effect on a portion of the virtual scene;
The determining unit is used for determining basic rendering information which corresponds to the virtual object and does not have illumination result attribute according to the model information of the virtual object;
the determining unit is further configured to determine first rendering information having an illumination result attribute of the first type light source according to the light source information of the first type light source and the base rendering information, and store the first rendering information into a geometric buffer;
the rendering unit is used for rendering the virtual object according to the light source information of the second type light source and the first rendering information in the geometric buffer zone.
9. An electronic device, comprising:
a processor; and
a memory for storing an information processing program, the electronic device being powered on and executing the program by the processor, to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which an information processing program is stored, the program being executed by a processor to perform the method according to any one of claims 1 to 7.
CN202311512190.0A 2023-11-13 2023-11-13 Rendering method, rendering device, electronic equipment and computer readable storage medium Pending CN117671104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311512190.0A CN117671104A (en) 2023-11-13 2023-11-13 Rendering method, rendering device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311512190.0A CN117671104A (en) 2023-11-13 2023-11-13 Rendering method, rendering device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117671104A true CN117671104A (en) 2024-03-08

Family

ID=90076131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311512190.0A Pending CN117671104A (en) 2023-11-13 2023-11-13 Rendering method, rendering device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117671104A (en)

Similar Documents

Publication Publication Date Title
Mittring Finding next gen: Cryengine 2
US7411592B1 (en) Graphical processing of object perimeter information
US7583264B2 (en) Apparatus and program for image generation
US7414625B1 (en) Generation of glow effect
US11302058B2 (en) System for non-planar specular reflections in hybrid ray tracing
US11386613B2 (en) Methods and systems for using dynamic lightmaps to present 3D graphics
CN109364481B (en) Method, device, medium and electronic equipment for real-time global illumination in game
CN112116692A (en) Model rendering method, device and equipment
JP2010505164A (en) Pixel color determination method and image processing system in ray tracing image processing system
Widmer et al. An adaptive acceleration structure for screen-space ray tracing
CN111968214A (en) Volume cloud rendering method and device, electronic equipment and storage medium
JP4827250B2 (en) Program, information storage medium, and image generation system
CN117671104A (en) Rendering method, rendering device, electronic equipment and computer readable storage medium
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
Yang et al. Visual effects in computer games
Roughton Interactive Generation of Path-Traced Lightmaps
CN116883572B (en) Rendering method, device, equipment and computer readable storage medium
Sousa et al. Cryengine 3: Three years of work in review
US20230090732A1 (en) System and method for real-time ray tracing in a 3d environment
CN115526977B (en) Game picture rendering method and device
CN116310026A (en) Cloud distributed graphics rendering system, method, electronic device and medium
Russo Improving the StarLogo nova graphics renderer with lights and shadows
Bashford-Rogers et al. Approximate visibility grids for interactive indirect illumination
WO2023285161A1 (en) System and method for real-time ray tracing in a 3d environment
CN117218271A (en) Dough sheet generation method and device, storage medium and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination