CN115131482A - Rendering method, device and equipment for illumination information in game scene - Google Patents
- Publication number
- CN115131482A (application CN202210325539.9A)
- Authority
- CN
- China
- Prior art keywords
- illumination
- information
- voxels
- game scene
- voxel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Abstract
The application discloses a method, an apparatus, and a device for rendering illumination information in a game scene, in the technical field of 3D rendering. The method adaptively distributes illumination probes, reducing the update, transmission, and storage costs of sampling illumination information in a game scene and improving the rendering efficiency of the illumination information. The method comprises the following steps: segmenting a spatial region in a game scene by using a data structure of the region, where the data structure is grid data containing multiple levels, and extracting the spatial voxels that contain objects; traversing the multi-level grid data and setting effective illumination probes for the spatial voxels containing objects, to obtain a first illumination probe grid of the spatial region; additionally setting virtual illumination probes for the spatial voxels that do not contain objects, to obtain a second illumination probe grid of the spatial region; and transferring the illumination information collected by the second illumination probe grid into texture resource information, then rendering the illumination information in the game scene according to that texture resource information.
Description
Technical Field
The present application relates to the field of 3D rendering technologies, and in particular, to a method, an apparatus, and a device for rendering illumination information in a game scene.
Background
With the rise of the game industry, many 3D games need to build rendered scenes, and physically based effects make those scenes believable, improving the player's experience. A game scene is a virtual world described by computer technology and made to resemble the real world: it contains light sources and objects, and the rays emitted by scene light sources are reflected or refracted by object surfaces. To improve the realism of a game scene, rendering generally uses global illumination, which computes how rays bounce from light sources onto object surfaces through a series of complex algorithms. Accurate simulation is usually performed at runtime, and its computational cost is high.
In actual development, the global illumination of dynamic objects is generally sampled pixel by pixel using illumination probes distributed through the game scene; each probe is a position in space that stores light samples arriving from all directions, and together the probes form the illumination information of the scene. However, sampling illumination per pixel requires dense, accurate probes: the number of probes is determined by the number of pixel textures in the game scene, and the GPU hardware must quickly sample a large number of pixel textures. As a result, the sampling cost of illumination information in a game scene is high, the transmission and storage costs are high, and the rendering efficiency of the illumination information suffers.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and a device for rendering illumination information in a game scene, mainly aiming to solve the prior-art problem that the update, transmission, and storage costs of sampling illumination information in a game scene are high, which degrades the rendering efficiency of the illumination information.
According to a first aspect of the present application, there is provided a rendering method of lighting information in a game scene, including:
segmenting a spatial region in a game scene by using a data structure of the spatial region, and extracting the spatial voxels that contain objects, wherein the data structure is grid data containing multiple levels;
traversing the multi-level grid data, and setting effective illumination probes for the spatial voxels containing objects, to obtain a first illumination probe grid of the spatial region, the first illumination probe grid containing the effective illumination probes;
additionally setting virtual illumination probes for the spatial voxels that do not contain objects, to obtain a second illumination probe grid of the spatial region, the second illumination probe grid containing both the effective illumination probes and the virtual illumination probes;
and transferring the illumination information collected by the second illumination probe grid into texture resource information, and rendering the illumination information in the game scene according to the texture resource information.
Further, the data structure is a tree structure formed over the spatial region. Segmenting the spatial region in the game scene by using its data structure and extracting the spatial voxels containing objects specifically includes: performing multi-level segmentation of the spatial region using the tree structure to form spatial voxels at multiple levels; during the multi-level segmentation, judging whether each spatial voxel at the current level contains an object; and if it does, performing the next level of segmentation on that spatial voxel, until the number of levels reaches a preset threshold, and then extracting the spatial voxels that contain objects.
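The multi-level segmentation above can be sketched as a recursive octree-style subdivision: a voxel is split into eight children only while it overlaps an object and the level threshold has not been reached. The following Python sketch is purely illustrative (all names are invented; the patent does not prescribe an implementation, and objects are simplified to axis-aligned bounding boxes):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    min_corner: tuple  # (x, y, z) of the voxel's minimum corner
    size: float        # edge length (cubic voxel)
    level: int         # depth in the tree

def contains_object(voxel, objects):
    """True if the voxel overlaps any object AABB (each given as (min, max))."""
    vmin = voxel.min_corner
    vmax = tuple(m + voxel.size for m in vmin)
    return any(all(vmin[i] < omax[i] and vmax[i] > omin[i] for i in range(3))
               for omin, omax in objects)

def subdivide(voxel):
    """Split a voxel into its eight half-size octants."""
    half = voxel.size / 2
    x, y, z = voxel.min_corner
    return [Voxel((x + dx * half, y + dy * half, z + dz * half), half, voxel.level + 1)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

def extract_object_voxels(root, objects, max_level):
    """Recursively refine voxels containing objects down to max_level."""
    if not contains_object(root, objects):
        return []              # empty voxels are not refined further
    if root.level == max_level:
        return [root]          # preset threshold reached: extract this voxel
    leaves = []
    for child in subdivide(root):
        leaves.extend(extract_object_voxels(child, objects, max_level))
    return leaves
```

Only the voxels that actually intersect scene geometry survive to the deepest level, which is what makes the later probe placement adaptive.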
Further, setting effective illumination probes for the spatial voxels containing objects to obtain the first illumination probe grid of the spatial region specifically includes: for each spatial voxel containing an object, acquiring a first grid position of the spatial voxel in the multi-level grid data; and arranging effective illumination probes at the vertices of the first grid position to obtain the first illumination probe grid of the spatial region.
Further, additionally setting virtual illumination probes for the spatial voxels that do not contain objects to obtain the second illumination probe grid of the spatial region specifically includes: traversing the multi-level grid data and, for each spatial voxel not containing an object at each level, acquiring a second grid position of that spatial voxel in the multi-level grid data; judging whether effective illumination probes are already arranged at the four vertices of the grid position; and if not, additionally arranging virtual illumination probes at the vertices of the grid position that lack an effective illumination probe, to obtain the second illumination probe grid of the spatial region.
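The two-pass probe placement above (effective probes on the vertices of object cells, virtual probes supplementing the remaining vertices) can be illustrated on a single 2D grid level, matching the "four vertices" wording. This is a minimal sketch with invented names, not the patented implementation:

```python
def build_probe_grids(object_cells, all_cells):
    """object_cells / all_cells: sets of (i, j) cells on one grid level.
    Effective probes sit on the vertices of cells that contain objects;
    virtual probes fill in the remaining vertices of empty cells so that
    every cell ends up with probes at all four of its vertices."""
    def corners(cell):
        i, j = cell
        return {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}

    effective = set()
    for cell in object_cells:
        effective |= corners(cell)            # first illumination probe grid

    virtual = set()
    for cell in all_cells - object_cells:
        virtual |= corners(cell) - effective  # supplement missing vertices only

    return effective, virtual                 # together: second probe grid
```

Vertices shared with an object cell keep their effective probe; virtual probes are only created where no effective probe exists, which is the condition checked in the step above.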
Further, before transferring the illumination information collected by the second illumination probe grid into texture resource information and rendering the illumination information in the game scene according to the texture resource information, the method further includes: extracting the spatial logical relationship mapped by the illumination information in the second illumination probe grid, and storing the spatial logical relationship in the texture resource information.
Further, rendering the illumination information in the game scene according to the texture resource information specifically includes: expanding the illumination information in the texture resource information into three-dimensional block data sets according to the spatial logical relationship, and recording the hierarchical relationship between the three-dimensional block data sets in an indirect texture; merging the three-dimensional block data sets by using that hierarchical relationship to form three-dimensional block texture information with a tree structure, in which the spatial position of the illumination information in the game scene is recorded; and, according to the spatial position of the viewpoint in the game scene, reading the illumination information of the corresponding spatial position from the tree-structured three-dimensional block texture information for rendering.
Further, expanding the illumination information in the texture resource information into the three-dimensional block data sets according to the spatial logical relationship and recording the hierarchical relationship between the three-dimensional block data sets in an indirect texture specifically includes: extracting, according to the spatial logical relationship, the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid; and, according to that hierarchical distribution, expanding the illumination information in the texture resource information into the three-dimensional block data sets and recording the hierarchical relationship between them in an indirect texture.
Further, reading the illumination information of the corresponding spatial position from the tree-structured three-dimensional block texture information for rendering according to the spatial position of the viewpoint in the game scene specifically includes: acquiring, according to the spatial position of the viewpoint in the game scene, the indirect texture representing the hierarchical relationship between the three-dimensional block data sets from the tree-structured three-dimensional block texture information; and reading, by using that indirect texture, the illumination information at the corresponding spatial position in the tree-structured three-dimensional block texture information for rendering.
According to a second aspect of the present application, there is provided a rendering method of lighting information in a game scene, including:
acquiring the close-range voxels contained in the spatial voxels of a game scene, wherein a close-range voxel is a voxel, among the voxels of a preset level formed by segmenting the spatial voxels of the game scene, that meets a distance condition, the distance condition being that the bounding box of the voxel intersects an object bounding box in the game scene;
creating effective illumination probes and virtual illumination probes for the close-range voxels contained in the spatial voxels, and generating a probe grid of the spatial voxels, the probe grid being used to capture illumination information in the game scene;
merging the illumination information captured by the probe grids of the spatial voxels and storing it into texture resource information according to the viewpoint position in the game scene;
and, in response to a rendering instruction for the illumination information, establishing a rendering task with the texture resource information and rendering the illumination information in the game scene.
Further, acquiring the close-range voxels of the object surfaces in the game scene specifically includes: acquiring the spatial region covered by the objects to be mounted in the game scene, and segmenting each spatial voxel in the spatial region into voxels of a preset level; traversing each voxel of the preset level and judging whether the bounding box of the segmented voxel intersects an object bounding box in the game scene; and if it does, determining that the segmented voxel is a close-range voxel of an object surface in the game scene.
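The distance condition above is a standard axis-aligned bounding-box (AABB) overlap test: two boxes intersect exactly when they overlap on every axis. A minimal sketch (invented names, boxes given as (min, max) corner tuples):

```python
def aabb_intersects(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding boxes intersect iff they overlap on all three axes."""
    return all(a_min[i] <= b_max[i] and a_max[i] >= b_min[i] for i in range(3))

def close_range_voxels(voxels, object_boxes):
    """Keep the voxels whose bounding box meets any object bounding box.
    voxels / object_boxes: lists of ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    return [v for v in voxels
            if any(aabb_intersects(v[0], v[1], o[0], o[1]) for o in object_boxes)]
```

Voxels that fail the test lie away from every object surface and therefore never receive an effective probe.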
Further, creating effective illumination probes and virtual illumination probes for the close-range voxels contained in the spatial voxels to generate the probe grid of the spatial voxels specifically includes: creating effective illumination probes for the close-range voxels within each level of the spatial voxels; and adding virtual voxels corresponding to each level of the spatial voxels and creating virtual illumination probes for the virtual voxels that meet an addition condition, the virtual illumination probes being used to seamlessly interpolate the illumination data sampled by the close-range voxels within each level of the spatial voxels.
Further, adding the virtual voxels corresponding to each level of the spatial voxels and creating virtual illumination probes for the virtual voxels that meet the addition condition specifically includes: for each level above the first in the spatial voxels, adding virtual voxels corresponding to that level, the virtual voxels being mapped to the voxels of the corresponding level in the spatial voxels; traversing the virtual voxels of each level and judging whether the mapped voxel in the spatial voxels already has an effective illumination probe; and if not, creating a virtual illumination probe for the virtual voxel.
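The per-level mapping above can be sketched assuming octree refinement, where each coarse voxel maps onto eight finer voxels; virtual probes are created only for mapped voxels that lack an effective probe. All names and the 2x refinement factor are illustrative assumptions:

```python
def add_virtual_probes(levels):
    """levels: dict level -> set of (x, y, z) voxel coords carrying an
    effective probe. For every level above the first, mark a virtual probe
    wherever a mapped voxel has no effective probe of its own."""
    octants = [(dx, dy, dz) for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    virtual = {}
    for lv in range(1, max(levels) + 1):
        # a coarse voxel at lv-1 maps onto 2^3 voxels at lv (octree refinement)
        mapped = {tuple(2 * c + d for c, d in zip(coord, delta))
                  for coord in levels.get(lv - 1, set())
                  for delta in octants}
        # virtual probes go only where the mapped voxel lacks an effective probe
        virtual[lv] = mapped - levels.get(lv, set())
    return virtual
```

This guarantees every refined cell has some probe coverage, so interpolation across level boundaries never hits a gap.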
Further, before merging and storing the illumination information captured by the probe grids of the spatial voxels into texture resource information according to the viewpoint position in the game scene, the method further includes: expanding the illumination information captured by the probe grids of the spatial voxels into three-dimensional block data sets according to the viewpoint position in the game scene, to form three-dimensional block texture information with a multi-level tree structure; and, during the expansion into the three-dimensional block data sets, recording the mapped hierarchical relationship of the three-dimensional block data sets into indirect texture information.
Merging and storing the illumination information captured by the probe grids of the spatial voxels into texture resource information according to the viewpoint position in the game scene then specifically includes: merging the three-dimensional block texture information and the indirect texture information and storing them as texture resource information.
Further, expanding the illumination information captured by the probe grids of the spatial voxels into three-dimensional block data sets according to the viewpoint position in the game scene to form the three-dimensional block texture information with a multi-level tree structure specifically includes: sampling the illumination data in the game scene with the effective illumination probes in the probe grids and interpolating that data to obtain first illumination information at the viewpoint position in the game scene; seamlessly interpolating, with the virtual illumination probes in the probe grids, the illumination data sampled by the close-range voxels within each level of the spatial voxels to obtain second illumination information at the viewpoint position; and expanding the first illumination information and the second illumination information at the viewpoint position into three-dimensional block data sets to form the three-dimensional block texture information with a multi-level tree structure.
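The interpolation step above is typically trilinear: the illumination at a viewpoint inside a cell is a distance-weighted blend of the eight probes at the cell's corners (effective or virtual alike). A minimal sketch with scalar probe values (real probes store directional radiance, e.g. spherical harmonics; names are illustrative):

```python
def trilinear(probes, p):
    """probes: dict mapping the 8 corner offsets (0/1, 0/1, 0/1) of a unit
    cell to illumination values; p: fractional (x, y, z) inside the cell."""
    x, y, z = p
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # each corner's weight is the volume of the opposite sub-box
                w = ((x if dx else 1 - x) *
                     (y if dy else 1 - y) *
                     (z if dz else 1 - z))
                value += w * probes[(dx, dy, dz)]
    return value
```

Because virtual probes fill every corner that would otherwise be empty, the blend varies continuously across cell and level boundaries, which is what "seamless interpolation" buys.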
Further, establishing a rendering task with the texture resource information and rendering the illumination information in the game scene specifically includes: establishing the rendering task with the texture resource information and acquiring, from the indirect texture information, the hierarchical relationship mapped by the three-dimensional block texture information; querying the position information of the three-dimensional block texture information in the tree structure according to that hierarchical relationship; and, according to the position of the three-dimensional block texture information in the tree structure, sampling the illumination information captured by the probe grids of the spatial voxels from the three-dimensional block texture information and rendering it.
Further, querying the position information of the three-dimensional block texture information in the tree structure according to the mapped hierarchical relationship specifically includes: extracting the level and the offset of the three-dimensional block texture information in the tree structure according to the mapped hierarchical relationship; and calculating the position information of the three-dimensional block texture information in the tree structure from that level and offset.
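Computing a position from a (level, offset) pair is straightforward when the tree is flattened breadth-first: all nodes of shallower levels precede the current level, so the flat position is a geometric-series prefix plus the offset. The sketch below assumes an octree (branching factor 8); the patent does not fix the branching factor, so this is illustrative:

```python
def flat_index(level, offset):
    """Breadth-first position of a node in a flattened octree:
    8^0 + 8^1 + ... + 8^(level-1) shallower nodes come first,
    then `offset` steps within the level."""
    nodes_before = (8 ** level - 1) // 7   # closed form of the geometric sum
    return nodes_before + offset

def level_and_offset(index):
    """Inverse mapping: recover (level, offset) from a flat index."""
    level = 0
    while (8 ** (level + 1) - 1) // 7 <= index:
        level += 1
    return level, index - (8 ** level - 1) // 7
```

The same arithmetic works in a shader, which is presumably why the patent stores only level and offset in the indirect texture rather than absolute positions.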
According to a third aspect of the present application, there is provided an apparatus for rendering illumination information in a game scene, including:
an extraction unit for segmenting a spatial region in a game scene by using a data structure of the spatial region and extracting the spatial voxels that contain objects, the data structure being grid data containing multiple levels;
a first setting unit for traversing the multi-level grid data and setting effective illumination probes for the spatial voxels containing objects, to obtain a first illumination probe grid of the spatial region, the first illumination probe grid containing the effective illumination probes;
a second setting unit for additionally setting virtual illumination probes for the spatial voxels that do not contain objects, to obtain a second illumination probe grid of the spatial region, the second illumination probe grid containing both the effective illumination probes and the virtual illumination probes;
and a first rendering unit for transferring the illumination information collected by the second illumination probe grid into texture resource information and rendering the illumination information in the game scene according to the texture resource information.
Further, the data structure is a tree structure formed over the spatial region, and the extraction unit includes: a segmentation module for performing multi-level segmentation of the spatial region using the tree structure to form spatial voxels at multiple levels; a first judgment module for judging, during the multi-level segmentation, whether each spatial voxel at the current level contains an object; and an extraction module for performing, if it does, the next level of segmentation on that spatial voxel until the number of levels reaches a preset threshold, and extracting the spatial voxels that contain objects.
Further, the first setting unit includes: a first acquisition module for acquiring the first grid positions of the spatial voxels containing objects in the multi-level grid data; and a first setting module for arranging effective illumination probes at the vertices of the first grid positions to obtain the first illumination probe grid of the spatial region.
Further, the second setting unit includes: a second acquisition module for traversing the multi-level grid data and, for each spatial voxel not containing an object at each level, acquiring the second grid position of that spatial voxel in the multi-level grid data; a second judgment module for judging whether effective illumination probes are already arranged at the four vertices of the grid position; and a second setting module for additionally arranging, if not, virtual illumination probes at the vertices of the grid position that lack an effective illumination probe, to obtain the second illumination probe grid of the spatial region.
Further, the apparatus further includes: an extraction unit for extracting, before the illumination information collected by the second illumination probe grid is transferred into the texture resource information and the illumination information in the game scene is rendered according to the texture resource information, the spatial logical relationship mapped by the illumination information in the second illumination probe grid, and storing the spatial logical relationship in the texture resource information.
Further, the first rendering unit includes: an expansion module for expanding the illumination information in the texture resource information into the three-dimensional block data sets according to the spatial logical relationship and recording the hierarchical relationship between the three-dimensional block data sets in an indirect texture; a merging module for merging the three-dimensional block data sets by using that hierarchical relationship to form tree-structured three-dimensional block texture information, in which the spatial position of the illumination information in the game scene is recorded; and a reading module for reading, according to the spatial position of the viewpoint in the game scene, the illumination information of the corresponding spatial position from the tree-structured three-dimensional block texture information for rendering.
Further, the expansion module is specifically configured to extract, according to the spatial logical relationship, the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid, and to expand the illumination information in the texture resource information into the three-dimensional block data sets according to that hierarchical distribution, recording the hierarchical relationship between the three-dimensional block data sets in an indirect texture.
Further, the reading module is specifically configured to acquire, according to the spatial position of the viewpoint in the game scene, the indirect texture representing the hierarchical relationship between the three-dimensional block data sets from the tree-structured three-dimensional block texture information, and to read, by using that indirect texture, the illumination information at the corresponding spatial position in the tree-structured three-dimensional block texture information for rendering.
According to a fourth aspect of the present application, there is provided an apparatus for rendering illumination information in a game scene, including:
an acquisition unit for acquiring the close-range voxels contained in the spatial voxels of a game scene, wherein a close-range voxel is a voxel, among the voxels of a preset level formed by segmenting the spatial voxels of the game scene, that meets a distance condition, the distance condition being that the bounding box of the voxel intersects an object bounding box in the game scene;
a creating unit for creating effective illumination probes and virtual illumination probes for the close-range voxels contained in the spatial voxels and generating a probe grid of the spatial voxels, the probe grid being used to capture illumination information in the game scene;
a storage unit for merging the illumination information captured by the probe grids of the spatial voxels and storing it into texture resource information according to the viewpoint position in the game scene;
and a second rendering unit for establishing, in response to a rendering instruction for the illumination information, a rendering task with the texture resource information and rendering the illumination information in the game scene.
Further, the acquisition unit includes: a segmentation module for acquiring the spatial region covered by the objects to be mounted in the game scene and segmenting each spatial voxel in the spatial region into voxels of a preset level; a third judgment module for traversing each voxel of the preset level and judging whether the bounding box of the segmented voxel intersects an object bounding box in the game scene; and a determining module for determining, if it does, that the segmented voxel is a close-range voxel of an object surface in the game scene.
Further, the creating unit includes: a creation module for creating effective illumination probes for the close-range voxels within each level of the spatial voxels; and an adding module for adding virtual voxels corresponding to each level of the spatial voxels and creating virtual illumination probes for the virtual voxels that meet an addition condition, the virtual illumination probes being used to seamlessly interpolate the illumination data sampled by the close-range voxels within each level of the spatial voxels.
Further, the adding module includes: an adding submodule for adding, for each level above the first in the spatial voxels, virtual voxels corresponding to that level, the virtual voxels being mapped to the voxels of the corresponding level in the spatial voxels; a judgment submodule for traversing the virtual voxels of each level and judging whether the mapped voxel in the spatial voxels already has an effective illumination probe; and a creating submodule for creating, if not, a virtual illumination probe for the virtual voxel.
Further, the apparatus further includes: an expansion unit for expanding, before the illumination information captured by the probe grids of the spatial voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, that illumination information into three-dimensional block data sets according to the viewpoint position, to form three-dimensional block texture information with a multi-level tree structure; and a recording unit for recording, during the expansion into the three-dimensional block data sets, the mapped hierarchical relationship of the three-dimensional block data sets into indirect texture information. The storage unit is further configured to merge the three-dimensional block texture information and the indirect texture information and store them as texture resource information.
Further, the unfolding unit includes: an operation module, configured to sample illumination data in the game scene with the effective illumination probes in the probe grid and interpolate the sampled data to obtain first illumination information for the viewpoint position in the game scene; an interpolation module, configured to use the virtual illumination probes in the probe grid to perform seamless interpolation on illumination data sampled by near voxels of adjacent levels in the spatial voxels, obtaining second illumination information for the viewpoint position in the game scene; and an unfolding module, configured to unfold the first and second illumination information into a three-dimensional block data set to form three-dimensional block texture information of a multi-level tree structure.
Further, the second rendering unit includes: an acquisition module, configured to establish a rendering task with the texture resource information and acquire, from the indirect texture information, the hierarchical relationship mapped by the three-dimensional block texture information; a query module, configured to query the position of the three-dimensional block texture information in the tree structure according to that hierarchical relationship; and a sampling module, configured to sample, according to that position, the illumination information captured by the probe grid of the spatial voxels from the three-dimensional block texture information, and render the illumination information.
Further, the query module includes: an extraction submodule, configured to extract the level and offset of the three-dimensional block texture information within the tree structure from the mapped hierarchical relationship; and a calculation submodule, configured to calculate the position of the three-dimensional block texture information in the tree structure from that level and offset.
According to a fifth aspect of the present application, there is provided a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the method of the first aspect.
According to a sixth aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of the first aspect described above.
In a game scene, illumination has a large visual impact and is an indispensable part of the visual style. Both static and dynamic objects in a game scene may have different volumes or complex model structures, making it difficult to bake such art assets into an effective light map. An illumination probe that samples pixel by pixel avoids both the incongruity between the lighting of moving objects and a scene lit by static light maps; it can bring uniform indirect illumination to the scene, simplify the rendering pipeline, and improve rendering efficiency. During operation, an illumination probe samples illumination information at its own position point, illumination information is also sampled at the positions of its neighboring illumination probes, and the sampled values are then interpolated to calculate the illumination information at a position between the probes.
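The interpolation between neighboring probes described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: it assumes illumination has already been sampled at the eight corner probes of a voxel cell and blends the eight values trilinearly at a point inside the cell; the names `trilinear_interpolate`, `corner_values`, `u`, `v`, `w` are all hypothetical.

```python
def trilinear_interpolate(corner_values, u, v, w):
    """Trilinearly blend illumination sampled by the 8 corner probes of a
    voxel cell; (u, v, w) are normalized local coordinates in [0, 1].
    corner_values is indexed as [z][y][x]."""
    def lerp(a, b, t):
        return a + (b - a) * t
    # interpolate along x on the four cell edges
    c00 = lerp(corner_values[0][0][0], corner_values[0][0][1], u)
    c01 = lerp(corner_values[0][1][0], corner_values[0][1][1], u)
    c10 = lerp(corner_values[1][0][0], corner_values[1][0][1], u)
    c11 = lerp(corner_values[1][1][0], corner_values[1][1][1], u)
    # then along y, then along z
    c0 = lerp(c00, c01, v)
    c1 = lerp(c10, c11, v)
    return lerp(c0, c1, w)
```

On the GPU this blend is what 3D-texture hardware filtering performs for free, which is why the patent stores probe data in textures.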
As one way for an illumination probe to acquire illumination information, sampling can be performed object by object in the game scene, which includes sampling the parameters corresponding to each object and sampling the 3D texture of each object. When sampling per-object parameters, the illumination change depends on the object's surface normals, which must be used in the resolving stage; this is sufficient for small objects, but causes illumination mismatch and discontinuity against an adjacent large model. Collection, updating and sampling are therefore performed one by one at the CPU end, and each related object holds a correspondingly interpolated SH (Spherical Harmonics) coefficient, which is relatively easy to maintain and low in cost. When sampling the 3D texture of each object, sampling and interpolation are carried out at the GPU end using hardware acceleration, which improves the sampling of illumination information to a certain extent compared with sampling per-object parameters.
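The SH coefficients mentioned above can be illustrated with a minimal first-order spherical-harmonics sketch. This is the standard textbook construction, not the patent's specific encoding, and all function names are assumptions: directional radiance samples are projected into four coefficients, and radiance in a direction is reconstructed from those coefficients.

```python
import math

SH_C0 = 0.282095          # Y_0^0 basis constant, sqrt(1 / (4*pi))
SH_C1 = 0.488603          # Y_1^{-1,0,1} basis constant

def sh_basis(d):
    """First-order (4-coefficient) SH basis for a unit direction d = (x, y, z)."""
    x, y, z = d
    return (SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x)

def sh_project(samples):
    """Project (direction, radiance) samples into 4 SH coefficients.
    Assumes the directions are distributed uniformly over the sphere."""
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for d, radiance in samples:
        basis = sh_basis(d)
        for i in range(4):
            coeffs[i] += radiance * basis[i]
    weight = 4.0 * math.pi / len(samples)   # Monte Carlo quadrature weight
    return [c * weight for c in coeffs]

def sh_evaluate(coeffs, d):
    """Reconstruct radiance in direction d from the stored coefficients."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

A constant environment projects entirely into the first coefficient and reconstructs to the same constant, which is why a handful of floats per probe suffices for low-frequency indirect light.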
As another way for an illumination probe to acquire illumination information, sampling can be performed pixel by pixel in the game scene. Because screen pixels are fixed, acquiring illumination information in pixel units rather than object units provides more accurate probes, and when linear interpolation is performed among multiple probes in a pixel or compute shader, GPU hardware can quickly sample the illumination information captured by millions of illumination probes per frame.
By means of the above technical solution, compared with the existing mode of sampling illumination information object by object, the rendering method, apparatus and device for illumination information in a game scene provided by the present application sample illumination information pixel by pixel, which can provide indirect illumination for a large number of complex objects, suits various illumination acquisition scenes, and avoids the limit on the total number of sampled objects imposed by the update cost and CPU-to-GPU transmission bandwidth of 3D textures under object-by-object sampling. Compared with the existing mode of sampling illumination information pixel by pixel, near voxels contained in the spatial voxels of the game scene are acquired, where a near voxel is a voxel close to an object surface and requires a larger number of illumination probes; effective illumination probes and virtual illumination probes are then created for these near voxels to generate the probe grid of the spatial voxels, so that the probes are distributed adaptively and the virtual illumination probes assist the effective illumination probes in providing seamless interpolation for the game scene. The illumination information captured by the probe grid of the spatial voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, and the texture resource information can sample the illumination information of different game scenes, so that the same screen pixel samples interpolation results of different surface densities according to its different spatial positions. When a rendering instruction for the illumination information is responded to, a rendering task is established with the texture resource information and the illumination information in the game scene is rendered. Because the three-dimensional texture address of the game scene is cached in the texture resource information, the number of illumination probes is determined by the number of pixel textures in the game scene, and a large number of voxels are covered adaptively by the probes, the number of illumination probes can be reduced, GPU hardware no longer needs to rapidly sample millions of probes per frame, the sampling, updating, transmission and storage cost of illumination information in the game scene is reduced, and the rendering efficiency of the illumination information is improved.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood and implemented in accordance with the content of this specification, and in order that the above and other objects, features and advantages of the present application may become more apparent, the detailed description of the present application is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 illustrates a flowchart of a rendering method for lighting information in a game scene according to an embodiment of the present application;
figs. 2a-2c show schematic diagrams of a process for creating an illumination probe in a game scene according to an embodiment of the present application;
fig. 3 is a flowchart illustrating another rendering method for lighting information in a game scene according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating another rendering method for lighting information in a game scene according to an embodiment of the present application;
figs. 5a-5b are schematic diagrams illustrating a rendering process of lighting information in a game scene according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of a rendering apparatus for lighting information in a game scene according to an embodiment of the present application;
fig. 7 is a schematic structural diagram illustrating a rendering apparatus for lighting information in another game scene according to an embodiment of the present application;
fig. 8 is a schematic structural diagram illustrating another apparatus for rendering lighting information in a game scene according to an embodiment of the present application;
fig. 9 is a schematic structural diagram illustrating another apparatus for rendering lighting information in a game scene according to an embodiment of the present application;
fig. 10 is a schematic device structure diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
The content of the invention will now be discussed with reference to a number of exemplary embodiments. It is to be understood that these examples are discussed only to enable those of ordinary skill in the art to better understand and thus implement the teachings of the present invention, and are not meant to imply any limitations on the scope of the invention.
As used herein, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment".
The embodiment provides a rendering method of illumination information in a game scene, as shown in fig. 1, the method is applied to a client of a scene rendering tool, and includes the following steps:
101. Segment the spatial region using the data structure of the spatial region in the game scene, and extract the spatial voxels containing objects.
102. Traverse the grid data of the multiple levels, and set effective illumination probes for the spatial voxels containing objects to obtain a first illumination probe grid of the spatial region.
103. Additionally set virtual illumination probes for the spatial voxels that do not contain objects to obtain a second illumination probe grid of the spatial region.
104. Transmit the illumination information acquired by the second illumination probe grid to texture resource information, and render the illumination information in the game scene according to the texture resource information.
In the rendering method for illumination information in a game scene provided by this embodiment of the invention, the spatial region is segmented using its data structure and the spatial voxels containing objects are extracted. The data structure includes grid data of multiple levels, each level containing a corresponding number of spatial voxels, and the grid data forms the relative spatial position relationship between the spatial voxels and the objects in the region, yielding the spatial positions where the illumination changes most strongly. Effective illumination probes are set for the spatial voxels containing objects to obtain the first illumination probe grid, so that probes are placed adaptively at positions of strong illumination change. The multi-level grid data is then traversed, and virtual illumination probes are additionally set for the spatial voxels that do not contain objects, obtaining the second illumination probe grid of the spatial region, so that spatial voxels between different levels can be effectively interpolated and transitioned during illumination sampling, and more effective sampling results can be provided for positions with uneven illumination density distribution in the game scene. The illumination information collected by the second illumination probe grid is transmitted to texture resource information, and the illumination information in the game scene is rendered according to that texture resource information. Because the illumination information is collected by effective probes arranged around objects, qualifying illumination information can be collected without a large number of effective probes, simplifying rendering complexity, while the virtual probes supply an interpolation supplement during collection by the effective probes, ensuring the rich expressiveness of real-time rendering.
In this embodiment of the invention, the data structure of the spatial region can be a tree structure formed over the spatial region, such as a quadtree, an octree or a hexadecatree. Specifically, the spatial region is segmented level by level using the tree structure to form spatial voxels of multiple levels. During the segmentation of each level, it is judged whether a segmented voxel at that level contains an object; if so, that voxel is segmented into the next level, until the number of levels reaches a preset threshold, and the spatial voxels containing objects are extracted, where a voxel "containing an object" refers to a spatial voxel close to the object surface.
It can be understood that, in the process of segmenting the spatial region, a large voxel is first taken as the starting spatial region to be segmented. The starting region can be set based on the level of the minimum spatial voxel (for example, with a minimum spatial voxel of 1 × 1 and three levels, the spatial region is 8 × 8) or set by the user. Before the large voxel is segmented, it is judged whether the region contains an object; only if so is the large voxel segmented. For each sub-voxel formed after segmentation, it is likewise judged whether the sub-voxel contains an object, and only object-containing sub-voxels are segmented further. Each round of segmentation forms a level of spatial voxels, and the voxel size shrinks as the level increases: the first level, that is, the initial large voxel, corresponds to the largest voxel, and the highest level, formed after multiple segmentations, corresponds to the smallest voxel, with each additional level halving the voxel size. In the end, each spatial region is segmented into grid data of multiple levels, but the level distribution in the grid data is not uniform, because only the spatial region around objects is segmented, and the levels of the segmented voxels are likewise non-uniform. For example, for an 8 × 8 spatial region containing an object in its upper-left corner: the first segmentation forms 4 × 4 spatial voxels, the spatial voxels of the first level; only the first-level voxel containing the object is segmented into second-level voxels, while the other three first-level voxels are left unsegmented; the second segmentation forms four 2 × 2 spatial voxels; and similarly, if all four second-level voxels contain the object, each of them is segmented into four 1 × 1 third-level spatial voxels.
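The adaptive segmentation described above can be sketched in Python as a 2D quadtree, matching the 8 × 8 example. This is an illustrative sketch under stated assumptions, not the patent's implementation: `subdivide` and `contains_object` are hypothetical names, and `contains_object` stands in for the engine's actual object test.

```python
def subdivide(region, contains_object, max_level, level=0):
    """Recursively split a square region (x, y, size) into quadrants,
    but only where it contains an object, up to max_level levels.
    Returns the leaf voxels produced, each tagged with its level."""
    x, y, size = region
    # stop at the preset level threshold or where no object is present
    if level >= max_level or not contains_object(region):
        return [(x, y, size, level)]
    half = size // 2
    leaves = []
    for (qx, qy) in [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]:
        leaves.extend(subdivide((qx, qy, half), contains_object, max_level, level + 1))
    return leaves
```

With an object at point (1, 1) of an 8 × 8 region and three levels, this yields three unsplit 4 × 4 voxels, three 2 × 2 voxels, and four 1 × 1 voxels around the object, matching the non-uniform distribution described above.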
Because the level distribution in the grid data is not uniform, in order to distribute illumination probes of different densities in the game scene, the multi-level grid data can be traversed: for each spatial voxel containing an object, its grid position in the multi-level grid data is obtained and effective illumination probes are set at the four vertices of that grid position, obtaining the first illumination probe grid of the spatial region. Because spatial voxels exist at different levels, repeated vertices may occur; for example, the four vertex positions of the spatial region may exist only in the first level, or also in the second or third level during splitting. For the multi-level grid data, it is then judged whether effective illumination probes are already set at the four vertices of each grid position; if not, a virtual illumination probe is additionally set at each vertex lacking an effective probe, obtaining the second illumination probe grid of the spatial region, so that the multi-level grid data is filled by the effective and virtual illumination probes. The virtual illumination probe does not capture illumination information itself; instead, it is used for seamless interpolation during the subsequent sampling of illumination information by the effective illumination probes, improving the sampling result.
Taking a 2D quadtree as a concrete application scene, as shown in figs. 2a-2c: first, regarding the segmentation of the spatial region, as shown in fig. 2a, the spatial voxel formed over the spatial region is taken as the first-level voxel; since it contains an object, it is segmented into four second-level voxels, each of which is in turn checked for an object and, if one is present, segmented until the third-level voxels are reached. Second, illumination probes (effective and virtual) are set adaptively in the spatial voxels. As shown in fig. 2b, effective illumination probes, drawn as solid circles, are placed at the four corners of the first-level voxel because it contains an object; for the second level, effective probes are placed only at the four corners of the voxels containing the object, that is, only at the intersections on the solid lines in fig. 2c. A corner may coincide with a first-level position, in which case the effective probe need not be placed again. For the corners at which no effective probe is placed across the different levels, shown as the dashed part of fig. 2c, virtual illumination probes, drawn as dashed circles, are additionally set at the intersections on the dashed lines, eventually forming an illumination probe grid comprising effective and virtual probes.
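The two-tier placement just walked through can be sketched as follows. This is an illustrative 2D Python sketch, not the patent's implementation: `place_probes`, `corners` and the tuple layout `(x, y, size, level)` are assumptions; effective probes go on every corner of an object-containing voxel, and the remaining corners receive virtual probes, with shared vertices deduplicated.

```python
def corners(voxel):
    """The four corner vertices of a 2D voxel (x, y, size, level)."""
    x, y, size, _level = voxel
    return {(x, y), (x + size, y), (x, y + size), (x + size, y + size)}

def place_probes(voxels, contains_object):
    """Effective probes go on every corner of a voxel containing the object;
    the remaining corners receive virtual probes used only for interpolation."""
    effective, virtual = set(), set()
    for v in voxels:
        target = effective if contains_object(v) else virtual
        target |= corners(v)
    # a vertex shared with an object voxel is effective, never virtual
    virtual -= effective
    return effective, virtual
```

Using sets makes the deduplication of vertices shared across levels automatic, mirroring the rule that a probe is never placed twice at the same position.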
Furthermore, because the collection of illumination information is influenced by the probe positions in the illumination probe grid, and the illumination information of different spatial positions influences each other, before the illumination information is rendered, the spatial logical relationship mapped by the illumination information in the second illumination probe grid is extracted and stored into the texture resource information. In the subsequent rendering process, the illumination information in the texture resource information is unfolded into three-dimensional block data sets according to that spatial logical relationship, the hierarchical relationship among the block data sets is recorded with an indirect texture, and the block data sets are then merged using that hierarchical relationship to form three-dimensional block texture information of a tree structure, which records the spatial position of the illumination information in the game scene. According to the spatial position of the viewpoint position in the game scene, where the viewpoint position corresponds to the spatial position of the camera's viewing angle, that position is taken as the illumination acquisition position, and the illumination information of the corresponding spatial position is read from the three-dimensional block texture information of the tree structure for rendering. By pre-storing the spatial logical relationship of the illumination information, large resources need not be consumed on storing and transmitting probe data, saving storage and transmission overhead while ensuring rendering precision.
Further, considering the distribution of the effective and virtual illumination probes in the second illumination probe grid, in the process of unfolding the illumination information in the texture resource information into three-dimensional block data sets according to the spatial logical relationship and recording their hierarchical relationship with an indirect texture, the hierarchical distribution of the effective and virtual probes in the second illumination probe grid is first extracted from the spatial logical relationship; the illumination information is then unfolded into three-dimensional block data according to that distribution, and the hierarchical relationship among the block data sets is recorded by the indirect texture. Because the indirect texture is not direct texture information, a large amount of texture data need not be transmitted when transmitting the texture resource information, improving transmission efficiency and the sampling efficiency of illumination information in subsequent rendering.
Furthermore, in order to accurately acquire the actual illumination information in the game world, in the process of reading the illumination information of the corresponding spatial position from the three-dimensional block texture information of the tree structure according to the spatial position of the viewpoint position in the game scene, the indirect texture representing the hierarchical relationship among the block texture information is first acquired from the three-dimensional block texture information of the tree structure, and the illumination information of the corresponding spatial position is then read for rendering using that indirect texture. Because the indirect texture stores the level and offset of each three-dimensional block within the tree structure, the illumination information acquired by the effective illumination probes can be obtained by sampling the corresponding level and offset in the indirect texture according to the world-space position of the screen pixel.
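The level-and-offset lookup described above can be sketched as follows. This is an illustrative Python sketch under assumptions, not the patent's addressing scheme: `atlas_address`, `indirect_entry`, `brick_size` and the dictionary layout are hypothetical, standing in for an indirect-texture fetch that resolves where a voxel's brick of probe data lives in the merged 3D block atlas.

```python
def atlas_address(indirect_entry, local_pos, brick_size):
    """Resolve a sample address in the 3D block (brick) atlas from an
    indirect-texture entry holding (level, offset). local_pos is the
    position inside the voxel, normalized to [0, 1)."""
    level, offset = indirect_entry["level"], indirect_entry["offset"]
    # texel coordinate inside the brick for this level
    texel = tuple(int(p * brick_size) for p in local_pos)
    # offset is the brick's base corner within the atlas
    return level, tuple(o + t for o, t in zip(offset, texel))
```

On the GPU the same two-step fetch would be one sample of the indirect texture followed by one sample of the block texture, so the per-pixel cost stays constant regardless of how deep the tree is.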
The embodiment provides another rendering method of lighting information in a game scene, as shown in fig. 3, the method is applied to a client of a scene rendering tool, and includes the following steps:
201. Acquire the near voxels contained in the spatial voxels of the game scene.
A game is a real-time, dynamic, interactive computer simulation. Many three-dimensional games express object surfaces with triangle meshes and store the expressed surface detail levels in textures; at rendering time, the objects intersected by rays are considered first, and the multi-level textures of the corresponding surface detail levels are then selected for the rendering calculation. Spatial voxels serve as the volume units for rendering textures in the game space, and the objects contained in spatial voxels can be represented by volume rendering or by extracting a polygonal isosurface at a given threshold.
For the illumination effects of a game scene, global illumination is generally used for rendering. Global illumination accounts for direct light and as much diffuse reflection of light as possible, so the final illumination effect is closer to the real world. Specifically, global illumination refers to calculating the reflection of light around the game scene and is responsible for many fine shading effects, atmosphere, and glossy metallic reflections in the environment. In the existing global illumination approach, all indirect illumination is pre-computed and stored in light-map texture information, which gives the game scene an effect similar to global illumination. For the global illumination of non-static objects, an illumination probe can be used to simulate the effect of a light map: in a pre-computation stage before runtime, the probe samples the illumination arriving at a specified point in 3D space, and the collected information is encoded and packed with spherical harmonics. At runtime, the shader program can decode the illumination information to quickly reconstruct the lighting effect. Like the light map, the illumination probe stores illumination information of the scene; the difference is that the light map stores the light arriving at object surfaces, while the illumination probe stores the light passing through empty space.
The spatial voxels in the game scene correspond to three-dimensional space units of the game world. The near voxels are the voxels, among the preset levels formed by segmenting the spatial voxels, that satisfy a distance condition, corresponding to voxels close to an object surface. The distance condition serves as the basis for judging closeness to the object surface and can be that the bounding box of the voxel intersects an object bounding box in the game scene: during segmentation, both the voxels to be segmented and the already segmented voxels can be judged against the distance condition, and if a voxel's bounding box intersects an object bounding box in the game scene, the voxel is close to the object surface and is a near voxel.
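The distance condition above is a standard axis-aligned bounding-box overlap test; it can be sketched as follows. The function names `aabb_intersects` and `is_near_voxel` are hypothetical, and this is an illustrative sketch rather than the patent's implementation.

```python
def aabb_intersects(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box overlap: the boxes intersect iff their
    extents overlap on every axis."""
    return all(amin <= bmax and bmin <= amax
               for amin, amax, bmin, bmax in zip(a_min, a_max, b_min, b_max))

def is_near_voxel(voxel_min, voxel_max, object_boxes):
    """A voxel is a near voxel if its bounding box intersects any object's
    bounding box in the scene."""
    return any(aabb_intersects(voxel_min, voxel_max, bmin, bmax)
               for bmin, bmax in object_boxes)
```

Because the test is per-axis and short-circuits, it stays cheap even when run for every candidate voxel during segmentation.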
In this embodiment of the application, illumination probes in the game world are usually placed manually, with placement positions adjusted according to the actual rendering result of the illumination information, which wastes considerable time and space and cannot effectively cover the probes along the object surface. By acquiring the near voxels contained in the spatial voxels of the game scene, the region of the object surface can be located in the game world, illumination probes can be set adaptively according to that region, the illumination information carrying environmental influence in the game scene can be acquired, and the sampling efficiency of the illumination information is improved.
The execution body of this embodiment can be a rendering apparatus or device for illumination information in a game scene, which can be configured at the client of a scene rendering tool. After the game scene is arranged, the positions of the illumination probes in the scene need to be arranged. Because an illumination probe cannot be mounted directly on a game object, it usually depends on a specified spatial region in the scene; when adding illumination probes to the game scene, the near voxels contained in the spatial voxels can be obtained for the specified spatial region, and the positions of the near voxels in the scene are taken as the preferred probe placements, so that illumination information better matching the requirements of the game scene can be collected and the illumination rendering effect is improved.
202. Create effective illumination probes and virtual illumination probes for the near voxels contained in the spatial voxels, and generate the probe grid of the spatial voxels.
The effective illumination probe is an illumination collector arranged at a sampling point with a real position in the game world; it captures light from all directions at the sampling point and encodes the color information of the captured light into a set of coefficients that can be evaluated quickly while the game is running.
In this application example, the probe grid of a space voxel corresponds to a grid of a tree structure, and the tree structure has a preset number of levels formed by segmenting the space voxel. For example, in an octree structure with a preset depth of 3, each space voxel is segmented into 8 sub-voxels at every level, and the corners of each voxel form a 2 × 2 × 2 probe grid. In order to distribute illumination probes with different densities in the game world and adaptively minimize the total number created, the short-distance voxels are recorded while the space voxels are segmented; the short-distance voxels have a better illumination information sampling effect, so effective illumination probes are created on the short-distance voxels.
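As an illustrative (non-limiting) sketch of one octree split, the following code divides a cubic space voxel into 8 children and collects the 2 × 2 × 2 corner positions of each child; the names `subdivide` and `corner_grid` are hypothetical and not taken from the patent:

```python
# Hypothetical sketch: one octree split of a cubic space voxel into 8 children,
# plus the 2x2x2 corner positions (candidate probe positions) of each child.
from itertools import product

def subdivide(origin, size):
    """Split the voxel at `origin` with edge length `size` into 8 children."""
    half = size / 2.0
    return [((origin[0] + dx * half, origin[1] + dy * half, origin[2] + dz * half), half)
            for dx, dy, dz in product((0, 1), repeat=3)]

def corner_grid(origin, size):
    """The 2x2x2 probe positions at the corners of a single voxel."""
    return [(origin[0] + dx * size, origin[1] + dy * size, origin[2] + dz * size)
            for dx, dy, dz in product((0, 1), repeat=3)]

children = subdivide((0.0, 0.0, 0.0), 2.0)
assert len(children) == 8
# Corners shared between neighbouring children dedupe to a 3x3x3 grid of 27 probes,
# which is why recording shared corners keeps the total probe count down.
unique = {c for child_origin, child_size in children
          for c in corner_grid(child_origin, child_size)}
assert len(unique) == 27
```

Deduplicating shared corners is one way the adaptive placement described above can minimize the total number of probes created.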
It can be understood that although effective illumination probes can provide illumination information with an interpolation effect, they still occupy more than gigabytes of resources. In order to handle game scenes of various scales and types while reducing resource occupation, virtual illumination probes are additionally arranged between different levels of the space voxels. The virtual illumination probes can be associated with the effective illumination probes in each level, so that interpolation results of different densities are sampled according to the different space voxels of the effective illumination probes in the game world, reducing the resource overhead of the illumination information acquisition process.
203. And merging and storing the illumination information captured by the probe grids of the space voxels into texture resource information according to the position of the viewpoint in the game scene.
In the process of interpolating the illumination data acquired by the effective illumination probes, the virtual illumination probes provide sampling interpolation results of different densities, so the illumination information captured by the probe grid is merged and stored into the texture resource information.
It can be understood that the illumination information captured by the probe grid of a space voxel includes pre-computed illumination data, and much of its cost is incurred at editing time. While the game is not running, the illumination data of the scene space is stored and merged into texture resource information, which is equivalent to an illumination map for dynamic objects. It also includes the direct light sources projected onto object surfaces in the scene and the indirect light reflected between different objects, and the surface information and concave-convex information of an object can be described through a shader on the object's material.
204. And responding to a rendering instruction of the illumination information, establishing a rendering task by using the texture resource information, and rendering the illumination information in the game scene.
Because the texture resource information can be used to accurately obtain the illumination information of the current game scene, a rendering task is established using the texture resource information to render the illumination information in the game scene. It can be understood that, in the same manner, a rendering task is established for each scene data frame of the game scene, and the scene rendering tool submits the scene spaces of the game scene to the rendering queue at one time, where each renderable scene space includes, in addition to its own mesh and material, a bounding box and its matrix in the game scene.
Compared with the existing method of sampling illumination information object by object, the method of sampling illumination information pixel by pixel can provide indirect illumination for a large number of complex objects, is suitable for various illumination collection scenes, and avoids the limit on the total number of objects sampled one by one imposed by the update cost and transmission bandwidth of 3D textures from the CPU side to the GPU side. Compared with the existing method of sampling illumination information pixel by pixel, this method obtains the short-distance voxels contained in the space voxels of the game scene; the short-distance voxels are voxels close to object surfaces and need a larger number of illumination probes, so effective illumination probes and virtual illumination probes are created for the short-distance voxels contained in the space voxels to generate the probe grids of the space voxels. The illumination probes can thus be distributed adaptively, with the virtual illumination probes assisting the effective illumination probes to provide seamless interpolation for the game scene. The illumination information captured by the probe grids of the space voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, and the texture resource information can sample the illumination information of different game scenes, so that the same screen pixel samples interpolation results of different surface densities according to its different spatial positions. When a rendering instruction for the illumination information is received, a rendering task is established using the texture resource information and the illumination information in the game scene is rendered. Because the three-dimensional texture addresses of the game scene are cached in the texture resource information and the number of illumination probes is determined by the number of pixel textures in the game scene, a large number of voxels are covered by adaptively distributed illumination probes. This saves illumination probes, avoids requiring the GPU hardware to rapidly sample millions of illumination probes per frame, reduces the sampling, update, transmission and storage costs of the illumination information in the game scene, and improves the rendering efficiency of the illumination information.
Further, as a refinement and an extension of the specific implementation of the foregoing embodiment, in order to fully describe the specific implementation process of the embodiment, the embodiment provides another rendering method of lighting information in a game scene, as shown in fig. 4, the method includes:
301. The method comprises the steps of obtaining the space area covered by an object to be attached in the game scene, and segmenting each space voxel in the space area into voxels of a preset level.
A game scene is composed of objects. Some are solid, e.g. a brick; some have no fixed shape, e.g. a plume of smoke; but all of them occupy a volume of three-dimensional space. An object may be opaque, i.e. light cannot pass through it, or transparent, i.e. light can pass through it. When rendering an opaque object, only its surface needs to be considered, and it is not necessary to know what is inside the object; when rendering a transparent or semitransparent object, effects such as reflection, refraction, scattering and absorption caused by light passing through the object need to be considered, combined with knowledge of the internal structure and properties of the object.
The lighting in a game scene can guide the activities of characters, influence the mood of players, and affect how various events are perceived. A game engine, as a tool for game development, allows effects including intensity, color and shadow to be adjusted flexibly and observed in real time while the game is being made. In general, for static objects in a game scene, a light map can be baked using global illumination: when a light map is baked, the objects in the game scene calculate a map result based on the influence of light, and the result is superimposed on the objects to create the lighting effect. The light map may include the direct light sources projected onto object surfaces in the scene and the indirect light reflected between different objects, and the surface information and concave-convex information of an object can be described through a shader on the object's material. Although the light map of a static object cannot change the lighting of the game scene while the game is running, a precomputed real-time global illumination system can compute complex light source interactions in the scene in real time; by precomputing global illumination, a game environment with rich global illumination reflections can be established and changes of the light sources can be reflected in real time.
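As a minimal illustrative sketch (not the patent's method) of how a baked light map is superimposed on an object, the direct and indirect contributions can be summed and modulated onto the base color; the function name `shade` and the scalar-per-channel model are assumptions:

```python
# Hypothetical sketch: applying a baked light map by summing the precomputed
# direct and indirect contributions and modulating the object's base colour.
def shade(base_color, direct, indirect):
    """Per-channel modulation of base colour by direct + indirect lighting."""
    return tuple(min(1.0, b * (d + i)) for b, d, i in zip(base_color, direct, indirect))

# With direct + indirect summing to full intensity, the base colour passes through.
assert shade((1.0, 0.5, 0.25), (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) == (1.0, 0.5, 0.25)
```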
For dynamic objects in a game scene, the sampling points of the illumination probes can be arranged in a designated area to collect its light and shade information. The designated area can be the space area covered by an object to be attached in the game scene. Since places with little illumination change in the game scene generate less illumination information, arranging too many illumination probes there is wasteful, and dense illumination probes are preferably arranged in areas of illumination change, shadow and illumination transition. A corresponding optimal position can be selected according to the space area covered by the object to be attached, and each space voxel in the space area is segmented into voxels of a preset level. The preset level is the limiting depth of the tree structure and can be set according to the practical application scene; the higher the number of levels, the larger the number of illumination probes required.
302. And traversing each voxel in the preset level, and judging whether the bounding box of the segmented voxel intersects the object bounding box in the game scene.
It can be understood that, in order to arrange more illumination probes at the space voxels close to an object surface and only a small number of illumination probes at open space voxels, the distance condition of each voxel in the preset level is judged. The bounding box of a segmented voxel is equivalent to the smallest hexahedron that surrounds the segmented voxel and is parallel to the coordinate axes, and the bounding box of an object in the game scene is equivalent to the smallest hexahedron that surrounds the object and is parallel to the coordinate axes. A bounding box has a simple structure and occupies little storage space, but is poorly suited to complex virtual environments containing deformable objects.
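The distance condition described above reduces to an axis-aligned bounding box (AABB) overlap test. A minimal sketch, assuming simple min/max corner tuples (names are illustrative, not from the patent):

```python
# Hypothetical sketch of the distance condition: a voxel counts as short-distance
# when its axis-aligned bounding box intersects an object's bounding box.
def aabb_intersects(min_a, max_a, min_b, max_b):
    """Axis-aligned boxes intersect iff their ranges overlap on every axis."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

# A voxel overlapping the object's bounding box satisfies the distance condition.
assert aabb_intersects((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2))
# A voxel in open space does not.
assert not aabb_intersects((5, 5, 5), (6, 6, 6), (0, 0, 0), (1, 1, 1))
```

The per-axis interval test is what makes AABBs cheap to store and evaluate, at the cost of being a loose fit for deformable geometry, as noted above.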
303. And if so, determining that the segmented voxel is a short-distance voxel of an object surface in the game scene.
The judging process can be performed before and after each intermediate voxel is segmented: voxels meeting the judgment condition are segmented repeatedly, and voxels not meeting it are not segmented further. First, a large voxel is taken as the original space voxel and segmented uniformly. The segmentation principle is that if a voxel is close to an object surface, it is subdivided, and each sub-voxel is segmented repeatedly until the specified minimum voxel size, i.e. the preset level, is reached. This process generates a tree structure of the preset level.
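The recursive segmentation above can be sketched as follows; this is an illustrative simplification under assumed names (`collect_short_distance`), using the bounding-box overlap as the judgment condition and recording only the leaf voxels that reach the preset depth:

```python
# Hypothetical sketch of the recursive segmentation: subdivide only voxels whose
# bounding box intersects the object bounding box, down to a preset depth, and
# record the surviving leaf voxels as short-distance voxels.
from itertools import product

def collect_short_distance(origin, size, obj_min, obj_max, depth, max_depth, out):
    vmax = tuple(o + size for o in origin)
    near = all(origin[i] <= obj_max[i] and obj_min[i] <= vmax[i] for i in range(3))
    if not near:
        return                      # open space: stop, few probes needed here
    if depth == max_depth:
        out.append((origin, size))  # minimum voxel size reached: record it
        return
    half = size / 2.0
    for dx, dy, dz in product((0, 1), repeat=3):
        child = (origin[0] + dx * half, origin[1] + dy * half, origin[2] + dz * half)
        collect_short_distance(child, half, obj_min, obj_max, depth + 1, max_depth, out)

leaves = []
collect_short_distance((0.0, 0.0, 0.0), 8.0, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0, 3, leaves)
# Only the corner of the tree near the unit-cube object is refined to depth 3.
assert len(leaves) == 8 and all(size == 1.0 for _, size in leaves)
```

Note how refinement concentrates around the object while open space terminates at coarse levels, which is the source of the adaptive probe density.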
304. And creating effective illumination probes and virtual illumination probes for the short-distance voxels contained in the space voxel, and generating the probe grid of the space voxel.
In the embodiment of the present application, an effective illumination probe is an illumination probe arranged in the game scene, generally close to an object surface, to obtain the illumination brightness of that surface. Specifically, effective illumination probes may be created for the short-distance voxels within the levels of a space voxel, virtual voxels corresponding to the levels are added to the space voxel, and virtual illumination probes are created for the voxels meeting an addition condition among the virtual voxels, where the virtual illumination probes are used to perform seamless interpolation on the illumination data sampled by the short-distance voxels within the levels of the space voxel.
Specifically, in the process of adding virtual voxels corresponding to the levels of a space voxel and creating virtual illumination probes for the voxels meeting the addition condition, virtual voxels can be added for each level deeper than the first level of the space voxel, each virtual voxel being mapped to a voxel of the corresponding level in the space voxel. The virtual voxels of each level are then traversed, and whether an effective illumination probe exists in the mapped voxel of the space voxel is judged; if not, a virtual illumination probe is created for the virtual voxel. In this way, virtual illumination probes are additionally created wherever no effective illumination probe exists around the short-distance voxels for which effective illumination probes were created.
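A minimal 2D sketch of this addition condition, under simplifying assumptions (voxels keyed by grid coordinates; the name `create_virtual_probes` is hypothetical): for each level deeper than the first, every voxel whose mapped counterpart carries no effective probe receives a virtual probe instead.

```python
# Hypothetical 2D sketch: mirror each level's voxels in a "virtual" grid and
# create a virtual probe wherever the mapped voxel has no effective probe.
def create_virtual_probes(levels):
    """levels: dict level -> set of (x, y) voxel keys holding an effective probe.
    Returns dict level -> set of voxel keys that receive a virtual probe."""
    virtual = {}
    for level, with_probe in levels.items():
        if level < 2:
            continue                    # only levels deeper than the first
        side = 2 ** level               # voxels per axis at this quadtree level
        all_voxels = {(x, y) for x in range(side) for y in range(side)}
        virtual[level] = all_voxels - with_probe
    return virtual

# Quadtree example: level 2 has 16 voxels, 3 of them near a surface.
effective = {1: {(0, 0)}, 2: {(0, 0), (0, 1), (1, 0)}}
virtual = create_virtual_probes(effective)
assert len(virtual[2]) == 13 and 1 not in virtual
```

The complement-set formulation is only illustrative; the point is that virtual probes fill the gaps so interpolation never falls off the edge of an effective-probe region.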
305. And expanding the illumination information captured by the probe grids of the space voxels to a three-dimensional block data set according to the position of the viewpoint in the game scene to form three-dimensional block texture information of a multi-level tree structure.
In the embodiment of the application, the illumination data in the game scene may be sampled by the effective illumination probes in the probe grid and interpolated to obtain the first illumination information of the viewpoint position in the game scene; seamless interpolation is performed by the virtual illumination probes in the probe grid on the illumination data sampled by the short-distance voxels within the levels of the space voxels to obtain the second illumination information of the viewpoint position; and the first and second illumination information of the viewpoint position are then expanded into the three-dimensional block data set to form the three-dimensional block texture information of the multi-level tree structure.
It should be noted that the three-dimensional block texture information of the tree structure combines the illumination information captured by the effective illumination probes and the virtual illumination probes, and the illumination information can be transmitted to the GPU to form the texture resource information.
306. And recording the mapping hierarchical relation of the three-dimensional block data set into indirect texture information in the process of expanding the three-dimensional block data set.
It should be noted that, in order to complete the storage of the three-dimensional block data set, indirect texture information recording the expanded hierarchy also needs to be constructed using the tree structure. The indirect texture information is sampled at runtime, and the content obtained is the hierarchical relationship of the three-dimensional block data in the tree structure, from which the sampling position of a cached illumination probe in the three-dimensional block data can be calculated.
307. And merging and storing the three-dimensional block texture information and the indirect texture information into texture resource information.
It will be appreciated that the texture resource information stores the illumination information of each space voxel in the game scene; then, by interpolating between the illumination information captured by the nearest illumination probes, the illumination information at any location within a space voxel can be estimated and projected onto moving objects.
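The estimation at an arbitrary position inside a voxel is the standard trilinear blend of the eight corner probes of its 2 × 2 × 2 probe grid. A sketch with scalar "brightness" per probe (real probes store encoded radiance coefficients; `trilerp` is an assumed name):

```python
# Hypothetical sketch: estimating illumination anywhere inside a voxel by
# trilinear interpolation of the eight corner probes (a 2x2x2 probe grid).
def trilerp(corners, u, v, w):
    """corners[i][j][k] is the probe value at corner (i, j, k); u, v, w in [0, 1]."""
    c00 = corners[0][0][0] * (1 - u) + corners[1][0][0] * u
    c10 = corners[0][1][0] * (1 - u) + corners[1][1][0] * u
    c01 = corners[0][0][1] * (1 - u) + corners[1][0][1] * u
    c11 = corners[0][1][1] * (1 - u) + corners[1][1][1] * u
    c0 = c00 * (1 - v) + c10 * v
    c1 = c01 * (1 - v) + c11 * v
    return c0 * (1 - w) + c1 * w

# Dark probes on one face (i = 0), bright probes on the other (i = 1).
corners = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
assert trilerp(corners, 0.5, 0.5, 0.5) == 0.5   # the centre blends both faces evenly
assert trilerp(corners, 1.0, 0.0, 0.0) == 1.0   # at a corner, its probe dominates
```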
308. And responding to a rendering instruction of the illumination information, establishing a rendering task by using the texture resource information, and rendering the illumination information in the game scene.
In the embodiment of the application, a rendering task can be established using the texture resource information: the hierarchical relationship mapped by the three-dimensional block texture information is obtained from the indirect texture information, the position information of the three-dimensional texture information in the tree structure is then queried according to that hierarchical relationship, and finally the illumination information captured by the probe grids of the space voxels is sampled from the three-dimensional block texture information according to that position to render the illumination information.
Specifically, in the process of querying the position information of the three-dimensional texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information, the level and offset of the three-dimensional block texture information in the tree structure can be extracted from that hierarchical relationship, and the position information of the three-dimensional texture information in the tree structure can be calculated from the level and the offset.
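One way the level-and-offset record can resolve to a cache position, sketched under assumed conventions (blocks of uniform size laid out row-major in a per-level cache; the name `block_sample_origin` and the layout parameters are illustrative, not from the patent):

```python
# Hypothetical sketch of the indirect addressing step: the indirect texture yields
# a (level, offset) record; the texel origin of the block in the block cache is
# computed from the shared block size and the blocks-per-row layout.
def block_sample_origin(level, offset, block_size, blocks_per_row):
    """Map a (level, offset) record to the block's texel origin in the cache."""
    row, col = divmod(offset, blocks_per_row)
    return (col * block_size, row * block_size, level)

# Block #5 in a cache laid out 4 blocks per row, 8 texels per block.
assert block_sample_origin(2, 5, 8, 4) == (8, 8, 2)
```

Because every block shares the same size layout, this arithmetic is the whole lookup, which is what makes indirect addressing cheap at render time.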
Taking a 2D quadtree as a specific application scene as an example, effective illumination probes are first created adaptively in the space voxels of the game scene, with one effective illumination probe placed at each voxel corner. Specifically, each space voxel is divided into 4 voxels, and each voxel is then divided repeatedly, forming a quadtree structure over the space voxel with a 2 × 2 probe grid per voxel. The voxels close to an object surface are recorded, and effective illumination probes are placed on those voxel nodes. Virtual voxels and virtual illumination probes are then added to the game scene at the corresponding levels of the space voxels; they enable seamless interpolation of the illumination information when sampling between voxels of a larger level and a smaller level. The illumination information captured by the effective and virtual illumination probes is merged and stored into the texture resource information, which can be expanded layer by layer during rendering. Each piece of three-dimensional block data in each layer of the texture resource information is numbered individually; the block data of each layer is usually combined with the illumination information captured by the effective and virtual illumination probes, and block data that has not yet been merged and stored can continue to be merged. Because the expanded three-dimensional block data set is a storage structure consisting of multiple textures with the same block-size layout, indirect addressing can be performed on this structure during rendering, so that after the caching of the three-dimensional block data is completed, the data can be obtained from the cache. The whole rendering process is shown in figures 5a-5b: the tree structure is used to construct indirect textures that are expanded layer by layer, the indirect textures are sampled during rendering, and the content obtained is the level and offset of the three-dimensional block data cached in the tree structure, from which the sampling positions of the illumination probes in the block cache can be calculated.
The space voxels with hierarchical relationships in the tree structure are expanded into an illumination probe set to form a three-dimensional block data set, and each piece of three-dimensional block data is stored, with its hierarchical relationship in the tree structure, in an indirect mapping texture. Then, in the game scene rendering process, each frame of scene data updates the illumination information captured by the illumination probes in the tree structure according to the viewpoint position and expands it into the three-dimensional block data set; the hierarchical relationships of the expansion process are recorded into the indirect textures and transmitted to the GPU. During sampling, the hierarchical relationship in the corresponding tree structure is first obtained from the indirect textures according to the world position of an object surface sampling point, and finally the illumination information formed by the illumination probes is sampled from the three-dimensional block data using that hierarchical relationship and rendered.
Further, as a specific implementation of the method in fig. 1, an embodiment of the present application provides a rendering apparatus for lighting information in a game scene, as shown in fig. 6, the apparatus includes: an extraction unit 41, a first setting unit 42, a second setting unit 43, a first rendering unit 44.
The extracting unit 41 may be configured to segment a space region in a game scene by using a data structure of the space region, and extract a space voxel including an object, where the data structure is mesh data including multiple levels;
a first setting unit 42, configured to traverse the grid data of the multiple levels, and set an effective illumination probe for the spatial voxel that includes the object, to obtain a first illumination probe grid of the spatial region, where the first illumination probe grid includes the effective illumination probe;
a second setting unit 43, configured to additionally set a virtual illumination probe for a spatial voxel that does not include an object, to obtain a second illumination probe grid of the spatial region, where the second illumination probe grid includes an effective illumination probe and the virtual illumination probe;
the first rendering unit 44 may be configured to transmit the illumination information collected by the second illumination probe grid to texture resource information, and render the illumination information in the game scene according to the texture resource information.
In a specific application scenario, as shown in fig. 7, the data structure is a tree structure formed by the spatial regions, and the extracting unit 41 includes:
a segmentation module 411, configured to perform multi-level segmentation on the space region by using a tree structure formed by the space region, so as to form multiple levels of space voxels;
the first determining module 412 may be configured to determine, in the process of segmenting a multi-level spatial region, whether a spatial voxel in a segmented level contains an object;
the extracting module 413 may be configured to, if yes, perform next-level segmentation on the spatial voxels in the segmented levels until the number of levels reaches a preset threshold, and extract the spatial voxels including the object.
In a specific application scenario, as shown in fig. 7, the first setting unit 42 includes:
a first obtaining module 421, configured to obtain, for a spatial voxel containing an object, a grid position where the spatial voxel is located in grid data of multiple levels;
a first setting module 422 may be configured to set active illumination probes at vertices in the first grid location, resulting in a first illumination probe grid of the spatial region.
In a specific application scenario, as shown in fig. 7, the second setting unit 43 includes:
a second obtaining module 431, configured to traverse the multiple levels of grid data, and obtain, for a spatial voxel that does not contain an object in each level, a second grid position where the spatial voxel is located in the multiple levels of grid data;
a second determining module 432, configured to determine whether effective light probes are set at four vertices of the grid location;
the second setting module 433 may be configured to, if not, additionally set a virtual illumination probe for a vertex where no effective illumination probe is set in the grid position, so as to obtain a second illumination probe grid of the spatial region.
In a specific application scenario, as shown in fig. 7, the apparatus further includes:
the extracting unit 45 may be configured to extract a spatial logical relationship mapped by the illumination information in the second illumination probe grid before the illumination information acquired by the second illumination probe grid is transmitted to the texture resource information and the illumination information in the game scene is rendered according to the texture resource information, and store the spatial logical relationship in the texture resource information.
In a specific application scenario, as shown in fig. 7, the first rendering unit 44 includes:
the expansion module 441 is configured to expand the illumination information in the texture resource information into the three-dimensional block data set according to the spatial logical relationship, and record a hierarchical relationship between the three-dimensional block data sets by using indirect textures;
a merging module 442, configured to merge the three-dimensional block data sets according to a hierarchical relationship between the three-dimensional block data sets in the indirect texture to form three-dimensional block texture information with a tree structure, where a spatial position of illumination information in a game scene is recorded in the three-dimensional block texture information;
the reading module 443 may be configured to read, according to a spatial position of a viewpoint position in a game scene, illumination information of a corresponding spatial position from the three-dimensional block texture information of the tree structure for rendering.
In a specific application scenario, the unfolding module 441 may be specifically configured to extract, according to the spatial logical relationship, a hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid;
the expansion module 441 may be further configured to expand the illumination information in the texture resource information into the three-dimensional block data according to the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid, and record the hierarchical relationship between the three-dimensional block data sets by using indirect textures.
In a specific application scenario, the reading module 443 may be specifically configured to obtain, from the three-dimensional block texture information of the tree structure, an indirect texture representing a hierarchical relationship between three-dimensional block data sets according to a spatial position of a viewpoint position in a game scenario;
the reading module 443 may be further specifically configured to read, by using the indirect texture representing the hierarchical relationship between the three-dimensional block data sets, the illumination information at the corresponding spatial position in the three-dimensional block texture information of the tree structure for rendering.
It should be noted that other corresponding descriptions of the functional units related to the rendering apparatus for lighting information in a game scene provided in this embodiment may refer to the corresponding descriptions in fig. 1, and are not repeated herein.
Further, as a specific implementation of the method in fig. 3 and fig. 4, an embodiment of the present application provides a rendering apparatus for lighting information in a game scene, as shown in fig. 8, the apparatus includes: an acquisition unit 51, a creation unit 52, a storage unit 53, and a second rendering unit 54.
The obtaining unit 51 may be configured to obtain a short-distance voxel included in a spatial voxel in a game scene, where the short-distance voxel is a voxel meeting a distance condition in voxels of a preset hierarchy formed by spatial voxel segmentation in the game scene, and the distance condition is that a bounding box of the voxel intersects with an object bounding box in the game scene;
a creating unit 52, configured to create an effective illumination probe and a virtual illumination probe for a near-distance voxel included in the spatial voxel, and generate a probe grid of the spatial voxel, where the probe grid is used to capture illumination information in a game scene;
the storage unit 53 may be configured to combine and store the illumination information captured by the probe grid of the spatial voxel into texture resource information according to a viewpoint position in a game scene;
the second rendering unit 54 may be configured to, in response to a rendering instruction of the illumination information, establish a rendering task using the texture resource information, and render the illumination information in the game scene.
Compared with the existing method of sampling illumination information object by object, the rendering apparatus for illumination information in a game scene provided by the embodiment of the invention, by sampling illumination information pixel by pixel, can provide indirect illumination for a large number of complex objects, is suitable for various illumination collection scenes, and avoids the limit on the total number of objects sampled one by one imposed by the update cost and transmission bandwidth of 3D textures from the CPU side to the GPU side. Compared with the existing method of sampling illumination information pixel by pixel, the apparatus obtains the short-distance voxels contained in the space voxels of the game scene; the short-distance voxels are voxels close to object surfaces and need a larger number of illumination probes, so effective illumination probes and virtual illumination probes are created for the short-distance voxels contained in the space voxels to generate the probe grids of the space voxels. The illumination probes can thus be distributed adaptively, with the virtual illumination probes assisting the effective illumination probes to provide seamless interpolation for the game scene. The illumination information captured by the probe grids of the space voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, and the texture resource information can sample the illumination information of different game scenes, so that the same screen pixel samples interpolation results of different surface densities according to the illumination information at different spatial positions. When a rendering instruction for the illumination information is received, a rendering task is established using the texture resource information and the illumination information in the game scene is rendered. Because the three-dimensional texture addresses of the game scene are cached in the texture resource information and the number of illumination probes is determined by the number of pixel textures in the game scene, a large number of voxels are covered by adaptively distributed illumination probes. This saves illumination probes, avoids requiring the GPU hardware to rapidly sample millions of illumination probes per frame, reduces the sampling, update, transmission and storage costs of the illumination information in the game scene, and improves the rendering efficiency of the illumination information.
In a specific application scenario, as shown in fig. 9, the obtaining unit 51 includes:
the segmentation module 511 may be configured to acquire the spatial region covered by the objects to be mounted in the game scene, and segment each spatial voxel in the spatial region into voxels of a preset hierarchy level;
a third determining module 512, configured to traverse each voxel in the preset hierarchy, and determine whether the bounding box of a segmented voxel intersects an object bounding box in the game scene;
the determining module 513 may be configured to determine, if so, that the segmented voxel is a near-distance voxel on the surface of an object in the game scene.
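The distance condition applied by modules 511–513 — a voxel counts as near-distance when its bounding box intersects an object's bounding box — is a standard axis-aligned box overlap test. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def intersects(self, other: "AABB") -> bool:
        # Axis-aligned boxes overlap iff their intervals overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

def near_distance_voxels(voxel_boxes, object_boxes):
    """Keep the voxels whose bounding box intersects any object bounding box."""
    return [v for v in voxel_boxes
            if any(v.intersects(o) for o in object_boxes)]
```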
In a specific application scenario, as shown in fig. 9, the creating unit 52 includes:
a creation module 521, which may be configured to create an effective illumination probe for each near-distance voxel within the hierarchy of the spatial voxels;
the adding module 522 may be configured to add virtual voxels of the corresponding hierarchy levels to the spatial voxels, and create a virtual illumination probe for each voxel in the virtual voxels that meets the adding condition, where the virtual illumination probes are used to perform seamless interpolation on the illumination data sampled by the near-distance voxels within the hierarchy of the spatial voxels.
In a specific application scenario, as shown in fig. 9, the adding module 522 includes:
an adding sub-module 5221, which may be configured to add, for each level above the first level in the spatial voxels, virtual voxels of the corresponding level, where the virtual voxels are mapped to the voxels of the corresponding level in the spatial voxels;
the determining sub-module 5222 may be configured to traverse the virtual voxels of each level, and judge whether the correspondingly mapped voxels in the spatial voxels already have effective illumination probes;
a creation sub-module 5223, which may be configured to create a virtual illumination probe for the virtual voxel when no effective illumination probe exists in the mapped voxel.
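The behavior of sub-modules 5221–5223 — for levels above the first, add a virtual voxel per mapped voxel, and create a virtual probe only where the mapped voxel carries no effective probe — might be sketched as follows; the data layout is an assumption, not the patent's actual structure:

```python
def create_virtual_probes(voxels_by_level, voxels_with_effective_probe):
    """voxels_by_level: {level: [voxel_id, ...]}, level 1 being the first level.
    A virtual probe is created for every voxel above level 1 whose mapped
    voxel does not already carry an effective illumination probe."""
    virtual = []
    for level, voxel_ids in voxels_by_level.items():
        if level <= 1:
            continue  # the first level holds only effective probes
        for vid in voxel_ids:
            if vid not in voxels_with_effective_probe:
                virtual.append((level, vid))
    return virtual
```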
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
the unfolding unit 55 may be configured to, before the illumination information captured by the probe grid of the spatial voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, unfold that illumination information into a three-dimensional block data set according to the viewpoint position, so as to form three-dimensional block texture information of a multi-level tree structure;
a recording unit 56, configured to record, during the unfolding into the three-dimensional block data set, the hierarchical relationship mapped by the three-dimensional block data set into indirect texture information;
the storage unit 53 may be further configured to combine and store the three-dimensional block texture information and the indirect texture information into texture resource information.
In a specific application scenario, as shown in fig. 9, the expansion unit 55 includes:
the operation module 551 is configured to sample illumination data in a game scene by using the effective illumination probe in the probe grid, and perform interpolation operation on the illumination data to obtain first illumination information of a viewpoint position in the game scene;
an interpolation module 552, configured to perform, by using the virtual illumination probes in the probe grid, seamless interpolation on the illumination data sampled by the near-distance voxels within the hierarchy of the spatial voxels, so as to obtain second illumination information of the viewpoint position in the game scene;
the expansion module 553 is configured to expand the first illumination information and the second illumination information of the viewpoint position in the game scene into a three-dimensional tile data set, so as to form three-dimensional tile texture information of a multi-level tree structure.
In a specific application scenario, as shown in fig. 9, the second rendering unit 54 includes:
an obtaining module 541, configured to establish a rendering task by using the texture resource information, and obtain a hierarchical relationship mapped by three-dimensional block texture information from the indirect texture information;
the query module 542 may be configured to query, according to the hierarchical relationship mapped by the three-dimensional block texture information, position information of the three-dimensional texture information in a tree structure;
the sampling module 543 may be configured to sample, according to the position of the three-dimensional texture information in the tree structure, the illumination information captured by the probe grid of the space voxel from the three-dimensional block texture information, and render the illumination information.
In a specific application scenario, as shown in fig. 9, the query module 542 includes:
the extracting submodule 5421 may be configured to extract, according to the hierarchical relationship mapped by the three-dimensional block texture information, a hierarchy and an offset of the three-dimensional block texture information in a tree structure;
the calculating submodule 5422 may be configured to calculate the position information of the three-dimensional texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.
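Sub-modules 5421 and 5422 recover a node's position from its level and offset. For a tree laid out level by level with a fixed branching factor (8 for an octree-like split), the position is the running sum of the preceding level sizes plus the offset — a hedged sketch, since the patent does not specify the actual layout:

```python
def level_start(level, branching=8):
    # Level l begins after all nodes of levels 0..l-1 in a level-by-level layout.
    return sum(branching ** i for i in range(level))

def tree_position(level, offset, branching=8):
    """Flat position of a node, given its level and its offset within that level."""
    return level_start(level, branching) + offset
```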
It should be noted that other corresponding descriptions of the functional units related to the rendering apparatus for lighting information in a game scene provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the method shown in fig. 1, correspondingly, an embodiment of the present application further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for rendering the lighting information in the game scene shown in fig. 1 is implemented.
Based on the methods shown in fig. 3 to 4, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for rendering lighting information in a game scene shown in fig. 3 to 4 is implemented.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the method shown in fig. 1, fig. 3 to fig. 4, and the virtual device embodiment shown in fig. 6 to fig. 9, to achieve the above object, an embodiment of the present application further provides an entity device for rendering illumination information in a game scene, which may be specifically a computer, a smart phone, a tablet computer, a smart watch, a server, or a network device, where the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the above-mentioned rendering method of lighting information in a game scene as shown in fig. 1, 3-4.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
In an exemplary embodiment, referring to fig. 10, the entity device includes a communication bus, a processor, a memory and a communication interface, and may further include an input/output interface and a display device, where these functional units communicate with each other through the bus. The memory stores a computer program, and the processor executes the program stored in the memory to perform the rendering method of illumination information in a game scene in the above embodiments.
Those skilled in the art will appreciate that the physical device structure for rendering the illumination information in the game scene provided by the present embodiment does not constitute a limitation to the physical device, and may include more or less components, or combine some components, or arrange different components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above entity device for rendering illumination information in a game scene, and supports the operation of the information processing program and other software and/or programs. The network communication module is used for realizing communication among the components in the storage medium, and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware. Compared with the prior art, the technical solution of the present application adaptively distributes effective illumination probes and virtual illumination probes over the near-distance voxels of the spatial voxels, and merges the illumination information captured by the probe grid into texture resource information according to the viewpoint position, so that GPU hardware does not need to rapidly sample millions of illumination probes in each frame, the sampling, updating, transmission and storage cost of the illumination information is reduced, and the rendering speed of the illumination information in the game scene is improved.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.
Claims (11)
1. A rendering method of illumination information in a game scene is characterized by comprising the following steps:
obtaining a near distance voxel contained in a space voxel in a game scene, wherein the near distance voxel is a voxel meeting a distance condition in voxels of a preset level formed by space voxel segmentation in the game scene, and the distance condition is that a bounding box of the voxel is intersected with an object bounding box in the game scene;
creating an effective illumination probe and a virtual illumination probe for a near distance voxel contained in the space voxel, and generating a probe grid of the space voxel, wherein the probe grid is used for capturing illumination information in a game scene;
merging and storing the illumination information captured by the probe grids of the space voxels into texture resource information according to the position of a viewpoint in a game scene;
and responding to a rendering instruction of the illumination information, establishing a rendering task by using the texture resource information, and rendering the illumination information in the game scene.
2. The method according to claim 1, wherein the acquiring of the close-range voxels of the object surface in the game scene specifically comprises:
acquiring a space region covered by an object to be hung in a game scene, and dividing each space voxel in the space region into voxels of a preset level;
traversing each voxel in the preset hierarchy, and judging whether the bounding box of a segmented voxel intersects an object bounding box in the game scene;
and if so, determining that the segmented voxels are close-range voxels of the surface of the object in the game scene.
3. The method according to claim 1, wherein the creating an effective light probe and a virtual light probe for the near voxels included in the spatial voxel and generating a probe grid of spatial voxels comprises:
creating an active light probe for a near voxel within a hierarchy of the spatial voxels;
adding virtual voxels corresponding to the levels for the space voxels, and creating a virtual illumination probe for the voxels meeting the adding conditions in the virtual voxels, wherein the virtual illumination probe is used for performing seamless interpolation on illumination data sampled by the voxels which are close to the level in the space voxels.
4. The method according to claim 1, wherein the adding of virtual voxels corresponding to the hierarchy to the spatial voxels and the creating of a virtual light probe for voxels meeting the addition condition in the virtual voxels specifically include:
aiming at the level which is more than the first order in the space voxels, adding virtual voxels corresponding to the level, wherein the virtual voxels are mapped with the voxels of the corresponding level in the space voxels;
traversing the virtual voxels of each level, and judging whether the correspondingly mapped voxels in the spatial voxels have effective illumination probes;
if not, a virtual illumination probe is created for the virtual voxel.
5. The method of claim 4, wherein prior to said merging and storing the lighting information captured by the probe grid of spatial voxels into texture resource information according to viewpoint locations in a game scene, the method further comprises:
expanding the illumination information captured by the probe grid of the space voxel to a three-dimensional block data set according to the position of a viewpoint in a game scene to form three-dimensional block texture information of a multi-level tree structure;
recording the mapping hierarchical relation of the three-dimensional block data set into indirect texture information in the process of expanding the three-dimensional block data set;
the merging and storing the illumination information captured by the probe grid of the space voxel into texture resource information according to the position of the viewpoint in the game scene specifically includes:
and merging and storing the three-dimensional block texture information and the indirect texture information into texture resource information.
6. The method according to any one of claims 1 to 5, wherein the expanding the illumination information captured by the probe mesh of spatial voxels into a three-dimensional tile data set according to the viewpoint position in the game scene, forming three-dimensional tile texture information of a multi-level tree structure, specifically comprises:
sampling illumination data in a game scene by using an effective illumination probe in the probe grid, and carrying out interpolation operation on the illumination data to obtain first illumination information of a viewpoint position in the game scene;
utilizing the virtual illumination probes in the probe grid to perform seamless interpolation on the illumination data sampled by the near-distance voxels within the hierarchy of the spatial voxels, so as to obtain second illumination information of the viewpoint position in the game scene;
and expanding the first illumination information and the second illumination information of the viewpoint position in the game scene to a three-dimensional block data set to form three-dimensional block texture information of a multi-level tree structure.
7. The method according to claim 6, wherein the creating a rendering task by using the texture resource information to render the illumination information in the game scene specifically comprises:
establishing a rendering task by using the texture resource information, and acquiring a hierarchical relation mapped by the three-dimensional block texture information from the indirect texture information;
inquiring the position information of the three-dimensional texture information in a tree structure according to the hierarchical relation mapped by the three-dimensional block texture information;
and sampling illumination information captured by the probe grid of the space voxel from the three-dimensional block texture information according to the position of the three-dimensional texture information in the tree structure, and rendering the illumination information.
8. The method according to claim 7, wherein the querying the position information of the three-dimensional texture information in the tree structure according to the hierarchical relationship of the mapping of the three-dimensional block texture information specifically comprises:
extracting the level and the offset of the three-dimensional block texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information;
and calculating the position information of the three-dimensional texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.
9. An apparatus for rendering illumination information in a game scene, comprising:
the system comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for acquiring a short-distance voxel contained in a space voxel in a game scene, the short-distance voxel is a voxel which meets a distance condition in voxels of a preset level formed by space voxel segmentation in the game scene, and the distance condition is that a bounding box of the voxel is intersected with an object bounding box in the game scene;
the creating unit is used for creating an effective illumination probe and a virtual illumination probe for the short-distance voxels contained in the space voxels, and generating a probe grid of the space voxels, wherein the probe grid is used for capturing illumination information in a game scene;
the storage unit is used for merging and storing the illumination information captured by the probe grids of the space voxels into texture resource information according to the position of a viewpoint in a game scene;
and the second rendering unit is used for responding to a rendering instruction of the illumination information, establishing a rendering task by using the texture resource information and rendering the illumination information in the game scene.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor when executing the computer program realizes the steps of a method for rendering lighting information in a game scene according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of a method for rendering lighting information in a game scene according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210325539.9A CN115131482A (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110342331.3A CN113034657B (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
CN202210325539.9A CN115131482A (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110342331.3A Division CN113034657B (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115131482A true CN115131482A (en) | 2022-09-30 |
Family
ID=76452932
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110342331.3A Active CN113034657B (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
CN202210325539.9A Pending CN115131482A (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110342331.3A Active CN113034657B (en) | 2021-03-30 | 2021-03-30 | Rendering method, device and equipment for illumination information in game scene |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113034657B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117876572A (en) * | 2024-03-13 | 2024-04-12 | 腾讯科技(深圳)有限公司 | Illumination rendering method, device, equipment and storage medium |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114299220A (en) * | 2021-11-19 | 2022-04-08 | 腾讯科技(成都)有限公司 | Data generation method, device, equipment, medium and program product of illumination map |
CN116740255A (en) * | 2022-03-02 | 2023-09-12 | 腾讯科技(深圳)有限公司 | Rendering processing method, device, equipment and medium |
CN116934946A (en) * | 2022-04-02 | 2023-10-24 | 腾讯科技(深圳)有限公司 | Illumination rendering method and device for virtual terrain, storage medium and electronic equipment |
CN118096985A (en) * | 2023-07-11 | 2024-05-28 | 北京艾尔飞康航空技术有限公司 | Real-time rendering method and device for virtual forest scene |
CN118070403B (en) * | 2024-04-17 | 2024-07-23 | 四川省建筑设计研究院有限公司 | BIM-based method and system for automatically generating lamp loop influence area space |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204701A (en) * | 2016-06-22 | 2016-12-07 | 浙江大学 | A kind of rendering intent based on light probe interpolation dynamic calculation indirect reference Gao Guang |
EP3337585A2 (en) * | 2015-08-17 | 2018-06-27 | Lego A/S | Method of creating a virtual game environment and interactive game system employing the method |
CN111340926A (en) * | 2020-03-25 | 2020-06-26 | 北京畅游创想软件技术有限公司 | Rendering method and device |
CN111744183A (en) * | 2020-07-02 | 2020-10-09 | 网易(杭州)网络有限公司 | Illumination sampling method and device in game and computer equipment |
CN112169324A (en) * | 2020-09-22 | 2021-01-05 | 完美世界(北京)软件科技发展有限公司 | Rendering method, device and equipment of game scene |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103198515B (en) * | 2013-04-18 | 2016-05-25 | 北京尔宜居科技有限责任公司 | In a kind of instant adjusting 3D scene, object light is according to the method for rendering effect |
US9390548B2 (en) * | 2014-06-16 | 2016-07-12 | Sap Se | Three-dimensional volume rendering using an in-memory database |
CN104574489B (en) * | 2014-12-16 | 2017-11-03 | 中国人民解放军理工大学 | Landform and motion vector integrated approach based on lamination quaternary tree atlas |
CN111798558A (en) * | 2020-06-02 | 2020-10-20 | 完美世界(北京)软件科技发展有限公司 | Data processing method and device |
2021
- 2021-03-30 CN CN202110342331.3A patent/CN113034657B/en active Active
- 2021-03-30 CN CN202210325539.9A patent/CN115131482A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3337585A2 (en) * | 2015-08-17 | 2018-06-27 | Lego A/S | Method of creating a virtual game environment and interactive game system employing the method |
CN106204701A (en) * | 2016-06-22 | 2016-12-07 | 浙江大学 | A kind of rendering intent based on light probe interpolation dynamic calculation indirect reference Gao Guang |
CN111340926A (en) * | 2020-03-25 | 2020-06-26 | 北京畅游创想软件技术有限公司 | Rendering method and device |
CN111744183A (en) * | 2020-07-02 | 2020-10-09 | 网易(杭州)网络有限公司 | Illumination sampling method and device in game and computer equipment |
CN112169324A (en) * | 2020-09-22 | 2021-01-05 | 完美世界(北京)软件科技发展有限公司 | Rendering method, device and equipment of game scene |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117876572A (en) * | 2024-03-13 | 2024-04-12 | 腾讯科技(深圳)有限公司 | Illumination rendering method, device, equipment and storage medium |
CN117876572B (en) * | 2024-03-13 | 2024-08-16 | 腾讯科技(深圳)有限公司 | Illumination rendering method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113034657A (en) | 2021-06-25 |
CN113034657B (en) | 2022-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113034657B (en) | Rendering method, device and equipment for illumination information in game scene | |
CN110738721B (en) | Three-dimensional scene rendering acceleration method and system based on video geometric analysis | |
CN113034656B (en) | Rendering method, device and equipment for illumination information in game scene | |
US8570322B2 (en) | Method, system, and computer program product for efficient ray tracing of micropolygon geometry | |
US11804002B2 (en) | Techniques for traversing data employed in ray tracing | |
US7773087B2 (en) | Dynamically configuring and selecting multiple ray tracing intersection methods | |
US11816783B2 (en) | Enhanced techniques for traversing ray tracing acceleration structures | |
CN113674389B (en) | Scene rendering method and device, electronic equipment and storage medium | |
US11373358B2 (en) | Ray tracing hardware acceleration for supporting motion blur and moving/deforming geometry | |
CN112755535B (en) | Illumination rendering method and device, storage medium and computer equipment | |
JP2009525526A (en) | Method for synthesizing virtual images by beam emission | |
CN102157008A (en) | Large-scale virtual crowd real-time rendering method | |
JP6864495B2 (en) | Drawing Global Illumination in 3D scenes | |
US11508112B2 (en) | Early release of resources in ray tracing hardware | |
US20240009226A1 (en) | Techniques for traversing data employed in ray tracing | |
KR100624455B1 (en) | Lightmap processing method in 3 dimensional graphics environment and apparatus therefor | |
CN115423917B (en) | Real-time drawing method and system for global three-dimensional wind field | |
CN116993894B (en) | Virtual picture generation method, device, equipment, storage medium and program product | |
Hoppe et al. | Adaptive meshing and detail-reduction of 3D-point clouds from laser scans | |
Favorskaya et al. | Large scene rendering | |
Atanasov et al. | Efficient Rendering of Digital Twins Consisting of Both Static And Dynamic Data | |
Forstmann et al. | Visualizing large procedural volumetric terrains using nested clip-boxes | |
CN118037962A (en) | Lightweight three-dimensional model construction method, system and medium for experimental equipment | |
CN117058301A (en) | Knitted fabric real-time rendering method based on delayed coloring | |
Vyatkin et al. | Parallel architecture and algorithms for real-time synthesis of high-quality images using Voxel-Based surfaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20220930 Assignee: Beijing Xuanguang Technology Co.,Ltd. Assignor: Perfect world (Beijing) software technology development Co.,Ltd. Contract record no.: X2022990000514 Denomination of invention: Rendering method, device and equipment of lighting information in game scene License type: Exclusive License Record date: 20220817 |
|
EE01 | Entry into force of recordation of patent licensing contract | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |