CN114299220A - Data generation method, device, equipment, medium and program product of illumination map - Google Patents


Info

Publication number
CN114299220A
Authority
CN
China
Prior art keywords: illumination, virtual, target, data, map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111642785.9A
Other languages
Chinese (zh)
Inventor
夏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd filed Critical Tencent Technology Chengdu Co Ltd
Publication of CN114299220A publication Critical patent/CN114299220A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)

Abstract

The application discloses a data generation method, apparatus, device, medium and program product for an illumination map, and relates to the field of computer technology. The method includes: acquiring a virtual scene, where the virtual scene contains a virtual object and is illuminated by a virtual light source; setting a first number of uniformly distributed illumination probes in the virtual scene at a target spacing, where the illumination probes are used to detect how the light of the virtual light source is distributed in the virtual scene; obtaining irradiance data of the surface of the virtual object through the first number of illumination probes, where the irradiance data indicates the amount of light the virtual light source casts onto the surface of the virtual object; and encoding the irradiance data to generate a target illumination map, where the target illumination map is used to bake the illumination effect cast onto the virtual object in the virtual scene. Because the irradiance data of the virtual object surfaces in the virtual scene is collected by illumination probes, the illumination map corresponding to the virtual scene is generated with a reduced amount of data.

Description

Data generation method, device, equipment, medium and program product of illumination map
The present application claims priority from Chinese patent application No. 202111374087.5, entitled "method, apparatus, device, medium, and program product for generating data for a light map", filed on November 19, 2021, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, a medium, and a program product for generating data of an illumination map.
Background
In an application program supporting a virtual scene, in order to enhance expressiveness in the virtual scene, a lighting effect needs to be added in a rendering process.
In the related art, while an application program is running, an illumination map (Lightmap) blends an illumination texture on top of the rendering of the original object model textures in the virtual scene, so that an illumination effect can be rendered. Generating the illumination map of the whole virtual scene involves rendering the lit scene, generating an illumination texture map for each individual polygon, and merging the polygon illumination texture maps of the whole scene.
Therefore, when generating the illumination map, every model in the virtual scene needs illumination map UV coordinates, the overall data volume of the illumination map depends on the unwrapped area of the object models in the virtual scene, the generation process of the illumination map is complex, and the data volume of the generated illumination map is large.
Disclosure of Invention
The embodiment of the application provides a data generation method, a device, equipment, a medium and a program product of an illumination map. The technical scheme is as follows:
in one aspect, a data generation method for an illumination map is provided, where the method includes:
acquiring a virtual scene, wherein the virtual scene comprises a virtual object and a virtual light source illuminates the virtual scene;
setting a first number of uniformly distributed illumination probes at a target interval in the virtual scene, wherein the illumination probes are used for detecting the light distribution condition of the virtual light source in the virtual scene;
obtaining irradiance data of the surface of the virtual object through the first number of illumination probes, wherein the irradiance data is used for indicating the quantity of light rays irradiated on the surface of the virtual object by the virtual light source;
and encoding the irradiance data to generate a target illumination map, wherein the target illumination map is used to bake the illumination effect cast onto the virtual object in the virtual scene.
In another aspect, there is provided an illumination mapping data generation apparatus, the apparatus including:
an acquisition module, configured to acquire a virtual scene, wherein the virtual scene comprises a virtual object and a virtual light source illuminates the virtual scene;
a setting module, configured to set a first number of uniformly distributed illumination probes in the virtual scene at a target spacing, wherein the illumination probes are used to detect the light distribution of the virtual light source in the virtual scene;
the acquisition module is further configured to obtain irradiance data of the surface of the virtual object through the first number of illumination probes, wherein the irradiance data indicates the amount of light the virtual light source casts onto the surface of the virtual object;
and a generating module, configured to encode the irradiance data to generate a target illumination map, wherein the target illumination map is used to bake the illumination effect cast onto the virtual object in the virtual scene.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the data generation method of an illumination map according to any one of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the data generation method of an illumination map described in any of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the data generation method of the light map according to any one of the above embodiments.
The technical scheme provided by the application at least comprises the following beneficial effects:
When an illumination map needs to be generated according to the illumination effect that the virtual light source produces on the virtual object in the virtual scene, the illumination probes arranged in the virtual scene detect the illumination distribution in the virtual scene to determine the irradiance data corresponding to the object surface, and the irradiance data is encoded to generate the illumination map. Because the irradiance data is obtained through the illumination probes, the data volume of the illumination map is unrelated to the unwrapped area of the object models in the virtual scene, so the data volume of the generated illumination map is reduced; illumination map UV coordinates of the object models do not need to be obtained, which reduces the amount of data required to generate the illumination map.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a data generation method for an illumination map provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic view of a ray tracing direction corresponding to an illumination probe provided in an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a data generation method for an illumination map provided by another exemplary embodiment of the present application;
FIG. 5 is a graph illustrating an illumination probe distribution corresponding to a dense volumetric illumination map provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a data generation method for an illumination map provided by another exemplary embodiment of the present application;
FIG. 7 is a schematic view of an illumination probe distribution corresponding to a sparse volumetric illumination map provided by an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a data generation apparatus for an illumination map provided in an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a data generation apparatus for an illumination map according to another exemplary embodiment of the present application;
fig. 10 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms related to embodiments of the present application will be schematically described.
Virtual scene: the virtual environment is a virtual open space, and the virtual scene can be a simulation scene of a real world, a semi-simulation semi-fictional scene, or a pure fictional scene. Alternatively, the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene, and the following embodiments are illustrated in a case where the virtual scene is a three-dimensional virtual scene, but are not limited thereto.
Illustratively, the virtual scene includes a virtual object, and optionally, the virtual object may be a scene component element in the virtual scene, such as a virtual building in the virtual scene, or may also be a scene participating element located in the virtual scene, such as a virtual object that moves in the virtual scene.
Illumination map (Lightmap): used to blend an illumination texture on top of the texture rendering of the original object models in the virtual scene, so that the object models are rendered with an illumination effect. Illumination mapping is a technique for enhancing the illumination effect of static scenes.
In an ordinary illumination map, the illumination effect corresponding to each polygon forming the virtual scene is stored in its own illumination map; that is, if there are N polygons in the virtual scene, they correspond to N illumination maps, where N is a positive integer, and the illumination maps corresponding to the polygons are then merged to obtain the illumination map of the entire virtual scene. Generating the illumination map of the entire virtual scene includes: 1. rendering the virtual scene with illumination; 2. generating the illumination texture map of each individual polygon; 3. merging the illumination texture maps of the polygons of the entire virtual scene. When generating the illumination map of a single polygon, the Lightmap UV coordinates corresponding to the polygon model need to be obtained.
In the present application, the data generation method of an illumination map is implemented based on a volume illumination map (Volume Lightmap). Compared with an ordinary illumination map, the volume illumination map has the following advantages:
1. The models of the virtual objects in the virtual scene do not need Lightmap UV coordinates, which saves vertex data.
2. The data volume of the illumination map is unrelated to the unwrapped area of the virtual object models and depends only on the size of the space the models occupy, so the data volume is lower than that of a Lightmap.
3. Normal mapping is supported, enabling fine detail effects.
In conjunction with the above noun explanations, an implementation environment of the embodiments of the present application will be explained. FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a second terminal 120, a server 130 and a communication network 140.
The first terminal 110 is provided with a first application that generates the target illumination map. Illustratively, the first application includes a first graphics engine, and the first graphics engine can be used in the development process of the virtual scene. Optionally, the graphics engine includes Unity3D, Unreal Engine, Frostbite engine, and the like, which is not limited herein. The first terminal 110 includes various types of terminal devices such as a mobile phone, a tablet computer, a desktop computer, and a laptop computer.
The second terminal 120 installs and runs a second application program supporting the virtual scene. Illustratively, the second application includes a second graphics engine, and the second graphics engine can be used for the running and display of the virtual scene. Optionally, the first graphics engine and the second graphics engine may be the same graphics engine, or may be different application versions of the same graphics engine (e.g., the first graphics engine is a developer version and the second graphics engine is a runtime version), which is not limited herein. The second application program may be any one of a virtual reality application, a three-dimensional map program, a First-Person Shooting (FPS) game, a Third-Person Shooting (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, a Massively Multiplayer Online Role Playing Game (MMORPG), or a multiplayer battle survival game. The user controls the master virtual object located in the virtual scene through the second terminal 120. The second terminal 120 includes various types of terminal devices such as a mobile phone, a tablet computer, a desktop computer, and a laptop computer.
The server 130 is configured to provide backend services for the first application and/or the second application, such as providing backend data computing support for the first application and backend application logic support for the second application. Optionally, the server 130 undertakes primary computational work and the first terminal 110 and the second terminal 120 undertakes secondary computational work; alternatively, the server 130 undertakes the secondary computing work, and the first terminal 110 and the second terminal 120 undertake the primary computing work; or, the server 130, the first terminal 110, and the second terminal 120 perform cooperative computing by using a distributed computing architecture.
It should be noted that the server 130 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. In some embodiments, the server 130 described above may also be implemented as a node in a blockchain system. The Blockchain (Blockchain) is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like.
In the embodiment of the present application, taking the second application program as a game program as an example, a developer creates a virtual scene through the first terminal 110, or acquires virtual scene data from the server 130. After the first terminal 110 obtains the virtual scene, the irradiance data produced by the virtual light source on the surfaces of the virtual objects in the virtual scene is obtained through the illumination probes, an illumination map is obtained by encoding the irradiance data, and the first terminal 110 sends the illumination map to the server 130. The second terminal 120 downloads the illumination map from the server 130 and stores it; when the player needs to run the virtual scene through the second application program, the second terminal 120 reads the illumination map and bakes the illumination effect cast onto the virtual objects in the virtual scene, so that the displayed virtual scene includes the illumination effect of the virtual light source.
Illustratively, the first terminal 110 and the server 130, and the second terminal 120 and the server 130 are connected through a communication network 140.
Referring to fig. 2, a data generation method of an illumination map according to an embodiment of the present application is shown. In this embodiment, the method is described as applied to the first terminal shown in fig. 1; the method may also be applied to a server or to other terminals in the computer system, which is not limited herein. The method includes:
step 201, acquiring a virtual scene, wherein the virtual scene includes a virtual object, and a virtual light source is present to illuminate the virtual scene.
Optionally, the virtual scene includes a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene. In some embodiments, the first terminal acquires resources of a virtual scene from a server, wherein the resources of the virtual scene are pre-manufactured resource data stored in the server; or the first terminal acquires the resources of the virtual scene from the storage area, wherein the resources of the virtual scene are generated by the developer through the first terminal.
The virtual object includes a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a static object in the virtual scene, or may be a dynamic object in the virtual scene. In some embodiments, the virtual objects include scene objects in the virtual scene, such as virtual buildings, virtual mountains, virtual trees, and virtual river channels, as well as entity objects in the virtual scene, such as virtual heroes, virtual soldiers, and virtual props, which are not limited herein. In one example, the virtual ground used to support the virtual objects also belongs to the virtual objects in the virtual scene.
In some embodiments, when the virtual object is a two-dimensional virtual object, the virtual object is composed of a plurality of pixels (pixels), and when the virtual object is a three-dimensional virtual object, the virtual object is composed of a plurality of Voxel blocks (voxels). In the embodiment of the present application, a virtual object is taken as an example of a three-dimensional virtual object. Illustratively, the higher the resolution of a three-dimensional virtual object when displayed, the greater the number of voxel blocks that it constitutes.
Optionally, the virtual objects in the virtual scene are voxelized by Surface Voxelization or Solid Voxelization, where surface voxelization generates voxel blocks from the surface data of the virtual object, and solid voxelization generates voxel blocks from the entire data of the virtual object (both surface data and internal data). In the embodiment of the present application, since the illumination map is only related to surface data, surface voxelization is used as an example to reduce the amount of data.
Optionally, the virtual light source includes light sources capable of producing illumination effects such as point lights, parallel (directional) lights, and spotlights. The virtual scene may correspond to one or more virtual light sources, which is not limited herein.
In step 202, a first number of evenly distributed illumination probes are arranged in a virtual scene with a target spacing.
The illumination probe (Probe) is used to detect the light distribution of the virtual light source in the virtual scene.
In some embodiments, a functional component for setting an illumination probe is provided in the first application for generating the target illumination map, and the setting of the illumination probe in the virtual scene can be realized through the functional component.
Optionally, the first number may be preset by the system, may also be customized by a developer, and may also be determined according to a virtual scene.
Optionally, the distribution positions of the illumination probes in the virtual scene may be specified by a user, or may be generated according to a preset rule, for example, after the probe density is specified, the first terminal automatically generates a first number of illumination probes corresponding to the specified probe density in the virtual scene.
In this embodiment of the present application, the first number of the illumination probes are uniformly distributed in the virtual scene, and the Volume illumination map generated corresponding to the uniformly distributed illumination probes is a Dense Volume illumination map (Dense Volume lighting map), that is, intervals between each of the illumination probes set in the virtual scene are all the same. Alternatively, the first number of illumination probes may be non-uniformly distributed in the virtual scene, e.g. the density of virtual objects in the virtual scene is proportional to the number of illumination probes.
Irradiance data of the surface of the virtual object is obtained by a first number of illumination probes, step 203.
The Irradiance (Irradiance) data is used to indicate the amount of light that the virtual light source irradiates the surface of the virtual object.
Illustratively, for a given spatial position, the irradiance is a spherical function, and calculating the irradiance of a single illumination probe means calculating the irradiance over the sphere centered at the probe's center point. Illustratively, the irradiance data for a given ray tracing direction is calculated by formula one. For an incident ray direction ωi, the radiance arriving from ωi is given by the function TraceRadiance(ωi), where D is the direction for which the irradiance needs to be calculated, i.e., the ray tracing direction, and TraceRadiance indicates that the illumination probe traces a ray to obtain the ray data arriving from direction ωi. The term max(0, ωi · D) takes the larger of 0 and ωi · D: rays whose angle with the ray tracing direction D exceeds 90° contribute nothing to the irradiance in direction D, so the inner product of the vectors ωi and D is computed, and if it is less than 0 (meaning the angle between them is greater than 90°), 0 is used instead. The irradiance in the ray tracing direction D is finally obtained by integrating the ray data obtained by ray tracing over all directions.
Formula one: Irradiance(D) = ∫ TraceRadiance(ωi) · max(0, ωi · D) dωi
Because the light emitted by the light source is a free unknown quantity, the integral in formula one cannot be solved analytically, so formula one is converted into formula two by the Monte Carlo method: the irradiance in the ray tracing direction D is approximated by sampling, using the ray data of directions sampled (by Monte Carlo sampling) on the sphere corresponding to the illumination probe, where N is the number of rays, i.e., the number of Monte Carlo samples.
Formula two: Irradiance(D) ≈ (1/N) Σ (i = 1 to N) TraceRadiance(ωi) · max(0, ωi · D) / p(ωi), where p(ωi) is the probability density with which the direction ωi is sampled.
That is, to compute the irradiance of a given ray tracing direction D for a single illumination probe, a large number of rays need to be emitted for path tracing, and to reduce the amount of computation during baking, the result of every traced ray should be reused.
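As an illustration of formula two, the following HLSL-style helper sketches the Monte Carlo estimate for one ray tracing direction D. The buffer and function names are assumptions made for the sketch (not taken from the patent), and directions are assumed to be sampled uniformly over the sphere so that p(ωi) = 1/(4π).

    #define PI 3.14159265

    StructuredBuffer<float3> gSampleDirs;      // N Monte Carlo sample directions (assumed input)
    StructuredBuffer<float3> gSampleRadiance;  // radiance traced along each sampled direction

    float3 EstimateIrradiance(float3 D, uint N)
    {
        float3 sum = float3(0.0, 0.0, 0.0);
        for (uint i = 0; i < N; ++i)
        {
            // Rays more than 90 degrees away from D contribute nothing.
            float w = max(0.0, dot(gSampleDirs[i], D));
            sum += gSampleRadiance[i] * w;
        }
        // Uniform spherical sampling: p = 1 / (4*PI), so the estimator
        // multiplies the sum by 4*PI / N.
        return sum * (4.0 * PI / (float)N);
    }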
In some embodiments, the light tracing direction corresponding to the illumination probe corresponds to the encoding manner of the illumination map, for example, when the encoding manner of the illumination map is an Ambient light Cube (Ambient Cube) encoding, the light tracing direction corresponding to the illumination probe is as shown in fig. 3, where six corresponding surfaces of a rectangular parallelepiped 300 in fig. 3 are six light tracing directions, an illumination probe 301 is located at the center of the rectangular parallelepiped 300, the illumination probe 301 corresponds to a plurality of ray directions, and each ray direction belongs to a different light tracing direction.
And step 204, generating a target illumination map by encoding the irradiance data.
The target illumination map is used for baking illumination effects irradiated on the virtual object in the virtual scene. In some embodiments, the target illumination map is pre-fabricated rendering data for an illumination effect of a static light source, the static light source is a light source that does not change with time in the virtual scene, that is, the target illumination map is implemented by pre-fabricating the illumination map, and when the second application generates the virtual scene, the illumination effect in the virtual scene can be rendered by directly reading the fabricated illumination map.
Schematically, the irradiance data is encoded in a target encoding manner to obtain the target illumination map. Optionally, the target encoding manner may be any one of Spherical Harmonics encoding, Spherical Gaussian encoding, Ambient Cube (ambient light cube) encoding, or Ambient Dice encoding.
In some embodiments, after the first terminal encodes the irradiance data into the target illumination map, the first terminal uploads the target illumination map to the server for provision to a second application in the second terminal to generate the virtual scene. Or in other embodiments, the first terminal further includes a second application program capable of generating a virtual scene screen, and after the target illumination map is generated, the terminal generates the virtual scene through the second application program.
Optionally, when the virtual scene is generated through the second application program, the second terminal obtains the target illumination map from the server in real time to map the target illumination map to the model surface of the virtual object in the virtual scene, so as to form an illumination effect; or the second terminal acquires the target illumination mapping from the server and prestores the target illumination mapping to the storage area when the second application program is installed, and acquires the target illumination mapping from the storage area and pastes the target illumination mapping to the model surface of the virtual object in the virtual scene when the virtual scene needs to be generated.
In some embodiments, after the target illumination map is generated, to facilitate storage of the map data, the target illumination map may be compressed by a preset compression technique to obtain compressed map data. The preset compression technique may be a three-dimensional Texture Compression (3D Texture Compression) technique (e.g., FXT1), or other texture compression techniques such as Vector Quantization Texture Compression (VQTC), PowerVR Texture Compression (PVRTC), etc.
In the embodiment of the present application, taking a virtual scene used in a MOBA game as an example, when the overall shape of the virtual scene of the MOBA game is regular (e.g., a rectangle) and its height is limited, it is suitable to compress the target illumination map with a three-dimensional texture compression technique, that is, the target illumination map is compressed by the three-dimensional texture compression technique to obtain compressed map data (a 3D map). Moreover, in the virtual scene of a MOBA game, the virtual camera captures the scene picture from a top-down view, so the visible depth and range of the scene picture are small; therefore, when three-dimensional texture compression is used for the target illumination map, the cache (Cache) utilization of the 3D map storage layout is high.
When the target illumination map is compressed by using a three-dimensional Texture Compression technology to obtain a 3D map, an Adaptive Scalable Texture Compression (ASTC) format can be used to reduce the storage cost of the map data. The 3D map may be map data stored in other Compression formats, such as a DirectX Texture Compression (DXTC) format, an Ericsson Texture Compression (ETC) format, and the like, but is not limited thereto.
Optionally, different compression schemes are used for the processing interfaces of different Graphics Processing Units (GPUs). Illustratively, when the GPU processing interface is OpenGL ES, if the GL_KHR_texture_compression_astc_hdr extension is supported, a High Dynamic Range (HDR) compression scheme (e.g., 3D ASTC HDR) is used, and ASTC compression with a block size (Block Size) of 3×3 pixels or 4×4 pixels can be selected according to quality requirements. When the GPU processing interface is Metal, a low-level rendering application programming interface, 3D ASTC HDR is not supported, so the Low Dynamic Range (LDR) ASTC format can be used with slice-based (Slice-Based) 3D maps: the HDR irradiance is encoded into LDR data using, for example, an encoding similar to the Unreal lightmap encoding, compressed slice by slice (Slice), and then assembled into a 3D map.
In some embodiments, when the second terminal used to run the virtual scene supports neither 3D maps nor slice-based 3D maps, the 3D map may be unrolled into a 2D map, and sampling of the 3D map is simulated in a shader (Shader).
To sum up, in the data generation method of an illumination map provided by the embodiments of the present application, when an illumination map needs to be generated according to the illumination effect that the virtual light source produces on the virtual objects in the virtual scene, illumination probes arranged in the virtual scene detect the illumination distribution in the virtual scene to determine the irradiance data corresponding to the object surfaces, and the irradiance data is encoded to generate the illumination map. Because the irradiance data is obtained through the illumination probes, the data volume of the illumination map is unrelated to the unwrapped area of the object models in the virtual scene, so the data volume of the generated illumination map is reduced; illumination map UV coordinates of the object models do not need to be obtained, which reduces the amount of data required to generate the illumination map.
Referring to fig. 4, a data generating method of an illumination map according to an embodiment of the present application is shown, in the embodiment of the present application, taking an example in which a target illumination map is a dense volume illumination map, the method includes:
step 401, acquiring a virtual scene.
The virtual scene comprises a virtual object, and a virtual light source is used for illuminating the virtual scene. Illustratively, the virtual scene is composed of at least one virtual object, and the virtual object comprises a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a static object in a virtual scene, or may be a dynamic object in the virtual scene.
Optionally, the virtual light source includes a light source capable of forming lighting effects such as spot light, parallel light, spotlight, and the like. The virtual scene may correspond to one or more virtual light sources, which is not limited herein.
In step 402, a first number of evenly distributed illumination probes is arranged at a target distance in a virtual scene.
In this embodiment of the present application, the first number of illumination probes are uniformly distributed in the virtual scene, and the volume illumination maps generated corresponding to the uniformly distributed illumination probes are dense volume illumination maps, that is, intervals between each of the illumination probes set in the virtual scene are all the same.
Alternatively, the first number of the illumination probes may be uniformly distributed in the entire space corresponding to the virtual scene, or may be uniformly distributed in a specified space in the virtual scene, for example, a space in which a virtual object exists.
Referring to fig. 5, the distribution of illumination probes corresponding to a dense volume illumination map provided by an exemplary embodiment of the present application is shown schematically, taking a two-dimensional virtual scene as an example: the space 500 corresponding to the virtual scene includes a virtual scene 510 formed by a plurality of virtual objects, the virtual scene corresponds to a virtual light source 520, a first number of illumination probes 530 are uniformly distributed in the space 500 corresponding to the virtual scene, and the illumination probes 530 can detect the light emitted by the virtual light source 520.
At step 403, a thread group is invoked for each lighting probe.
In this embodiment of the present application, each illumination probe calls a corresponding thread group (Thread Group); that is, each illumination probe calls a different thread group, and the different thread groups process the different illumination probes in parallel, which improves data processing efficiency. A thread group includes a first number of target threads (Threads), and the threads in the thread group trace the light emitted by the virtual light source. In some embodiments, the first thread number is determined by the number of ray directions obtained by Monte Carlo sampling.
In some embodiments, the target thread in the thread group corresponds to a target direction in the ray tracing directions, i.e., the target thread performs path tracing on rays in the target direction.
And step 404, tracking the light of the virtual light source in the target direction through the target thread to obtain light data corresponding to the target direction.
In a thread group, each thread is responsible for path tracing in one direction and collects the ray data for that direction; that is, by assigning a different thread group to each illumination probe to perform ray tracing for that probe, the processing is parallelized and the efficiency of obtaining irradiance data is improved. In some embodiments, the ray data may be the radiance (Radiance) of the rays corresponding to the target direction, or the ray data may be the number of rays passing through the target direction.
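A minimal compute-shader sketch of this dispatch layout is given below: one thread group per illumination probe and one thread per sampled direction. The structure, buffer names, group size, and the stubbed TraceRadiance helper are assumptions made for illustration, not the patent's code.

    struct ProbeData { float3 position; float pad; };

    StructuredBuffer<ProbeData> gProbes;        // one entry per illumination probe
    StructuredBuffer<float3>    gSampleDirsCS;  // Monte Carlo directions shared by all probes
    RWStructuredBuffer<float3>  gRadianceOut;   // one entry per (probe, sample)

    #define SAMPLES_PER_PROBE 64

    float3 TraceRadiance(float3 origin, float3 dir)
    {
        // Stub: a real baker would path-trace the virtual scene here.
        return float3(0.0, 0.0, 0.0);
    }

    [numthreads(SAMPLES_PER_PROBE, 1, 1)]
    void TraceProbeCS(uint3 gid : SV_GroupID, uint3 tid : SV_GroupThreadID)
    {
        ProbeData probe = gProbes[gid.x];   // one thread group handles one probe
        float3 dir = gSampleDirsCS[tid.x];  // one thread handles one sampled direction
        gRadianceOut[gid.x * SAMPLES_PER_PROBE + tid.x] = TraceRadiance(probe.position, dir);
    }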
In some embodiments, thread divergence is reduced by setting thread bundles (Warp): the thread group includes thread bundles of a second thread number, and each thread bundle includes a third number of target threads. In some embodiments, the second thread number is determined by the number of ray tracing directions; for example, when the illumination probe has 6 ray tracing directions, the second thread number is 6, and the third thread number is determined by the number of sampled rays in each ray tracing direction. Illustratively, the first thread number is determined jointly by the second thread number and the third thread number. The rays traced by the target threads in the same thread bundle share the same tracing stack (Tracing Stack); the tracing stack indicates how the target threads trace the rays, and if an exception prompt occurs during tracing, the exception prompt includes the function call trace of the tracing process. That is, for exceptions occurring in the ray tracing of the same thread bundle of an illumination probe, packet traversal (Packet Traversal) is used to determine the existing exception, which reduces the number of memory accesses and improves data processing efficiency.
In some embodiments, the rays emitted by the virtual light source are distributed to the thread bundles in Morton order, so that the directions of the rays in the ray set processed by one thread bundle fall within the range of the target direction; the ray set includes the rays in the target direction, and the rays in the target direction are traced by the target threads in the thread bundle to obtain the ray data corresponding to the target direction. Morton mapping performs bitwise interleaving (at least one of bit expansion, shift, or exclusive-or operations) on the coordinates corresponding to a ray direction ω to generate a binary value representing those coordinates, and adjacent ray directions map to adjacent values, so the rays processed by the same thread bundle access memory (Memory) in a relatively coherent pattern, which improves data processing efficiency.
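The bit interleaving can be sketched as follows using a standard 3D Morton code; quantizing the direction to a 10-bit grid per axis is an assumption made for the illustration.

    // Spread the low 10 bits of x so there are two zero bits between each bit.
    uint Part1By2(uint x)
    {
        x &= 0x000003ff;
        x = (x ^ (x << 16)) & 0xff0000ff;
        x = (x ^ (x << 8))  & 0x0300f00f;
        x = (x ^ (x << 4))  & 0x030c30c3;
        x = (x ^ (x << 2))  & 0x09249249;
        return x;
    }

    uint MortonCode(float3 dir)
    {
        // Quantize the direction from [-1,1] to a 10-bit grid per axis,
        // then interleave the bits so nearby directions get nearby codes.
        float3 n = saturate(dir * 0.5 + 0.5);
        uint3 q = (uint3)(n * 1023.0);
        return Part1By2(q.x) | (Part1By2(q.y) << 1) | (Part1By2(q.z) << 2);
    }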
The hardware in a GPU includes a work scheduler (Work Scheduler), a thread model used for load balancing. Because the workload of ray tracing threads is unbalanced, the work scheduler is inefficient here; in some embodiments, a persistent thread (Persistent Thread) concurrency model can therefore be adopted to bypass the work scheduler: all ray directions of the illumination probes that need ray tracing are placed in a target queue, which is located in a block of memory with a long life cycle, and the target threads in the thread bundles read from the target queue, so the work scheduler does not need to participate in distributing the ray directions, which improves data processing efficiency.
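A rough sketch of this persistent-thread model follows; the task structure, buffer names, and counter layout are assumptions made for the illustration, not the patent's code.

    struct RayTask { uint probeIndex; float3 direction; };

    cbuffer QueueParams { uint gTotalTasks; };

    RWStructuredBuffer<uint>   gWorkCounter;  // slot 0 holds the next task index
    StructuredBuffer<RayTask>  gRayQueue;     // every (probe, direction) pair to trace

    [numthreads(64, 1, 1)]
    void PersistentTraceCS(uint3 dtid : SV_DispatchThreadID)
    {
        // Each thread keeps pulling work until the queue is drained, so the
        // hardware work scheduler never has to redistribute ray directions.
        for (;;)
        {
            uint taskIndex;
            InterlockedAdd(gWorkCounter[0], 1, taskIndex);
            if (taskIndex >= gTotalTasks)
                break;
            RayTask t = gRayQueue[taskIndex];
            // ... trace the ray for probe t.probeIndex along t.direction ...
        }
    }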
Optionally, when the threads are set up to trace rays, the heterogeneous structure between the Central Processing Unit (CPU) and the graphics processor can be considered; when designing the code, the number of communications and the amount of data transferred between the CPU and the GPU should be reduced as much as possible, and scheduling should be completed on the GPU as much as possible.
Optionally, since the global memory (Global Memory) in a GPU is slow to access, the number of global memory accesses should be reduced when designing the code; shared memory (Shared Memory) is often used instead, and it is preferable to access a contiguous segment of memory.
Optionally, to increase the number of blocks (Block) and thread bundles running concurrently, the use of shared memory and registers can be reduced when designing the code. A block is the execution unit of the GPU's Compute Unified Device Architecture (CUDA) during actual execution; the threads in each block are organized in units of thread bundles, and threads in the same block can communicate and synchronize through shared memory.
Step 405, determining irradiance data of the surface of the virtual object according to ray data traced by the threads in the first number of thread groups.
Alternatively, serial or parallel computation may be used when collecting the radiance traced by the rays corresponding to an illumination probe to calculate the irradiance data. In some embodiments, when the radiance traced by each ray is accumulated in parallel to obtain the irradiance data, a parallel reduction (Parallel Reduction) algorithm may be used: the radiance values of the rays traced by the illumination probe in the target direction are placed in the same array, and the radiance data in the array is summed in parallel. For example, if the radiance values are obtained by N threads, N/2 threads first sum them in parallel (pairwise), then N/4 threads sum the partial results, and so on until the last thread outputs the final sum.
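The parallel reduction can be sketched as a compute shader as follows; the group size and buffer names are assumptions, and the Monte Carlo normalization from formula two is assumed to be applied elsewhere.

    #define NUM_SAMPLES 64
    groupshared float3 sPartial[NUM_SAMPLES];

    StructuredBuffer<float3>   gWeightedRadiance;  // per-ray radiance already weighted by max(0, dot(wi, D))
    RWStructuredBuffer<float3> gIrradianceSum;     // one summed value per (probe, direction)

    [numthreads(NUM_SAMPLES, 1, 1)]
    void ReduceRadianceCS(uint3 gid : SV_GroupID, uint3 tid : SV_GroupThreadID)
    {
        sPartial[tid.x] = gWeightedRadiance[gid.x * NUM_SAMPLES + tid.x];
        GroupMemoryBarrierWithGroupSync();

        // Tree reduction: N/2 threads add pairs, then N/4, ... until one value remains.
        for (uint stride = NUM_SAMPLES / 2; stride > 0; stride >>= 1)
        {
            if (tid.x < stride)
                sPartial[tid.x] += sPartial[tid.x + stride];
            GroupMemoryBarrierWithGroupSync();
        }

        if (tid.x == 0)
            gIrradianceSum[gid.x] = sPartial[0];
    }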
Step 406, generating a target illumination map by encoding the irradiance data.
The target illumination map is used for baking illumination effects irradiated on the virtual object in the virtual scene.
In the embodiment of the application, the irradiance data obtained by the illumination probe is coded into the target illumination map in an ambient light cube coding mode, and the target illumination map is stored in a three-dimensional texture compression mode.
To sum up, in the data generation method of an illumination map provided by the embodiments of the present application, when an illumination map needs to be generated according to the illumination effect that the virtual light source produces on the virtual objects in the virtual scene, illumination probes arranged in the virtual scene detect the illumination distribution in the virtual scene to determine the irradiance data corresponding to the object surfaces, and the irradiance data is encoded to generate the illumination map. Because the irradiance data is obtained through the illumination probes, the data volume of the illumination map is unrelated to the unwrapped area of the object models in the virtual scene, so the data volume of the generated illumination map is reduced; illumination map UV coordinates of the object models do not need to be obtained, which reduces the amount of data required to generate the illumination map.
The dense volume illumination mapping provided by the embodiment of the application can reduce the data volume corresponding to the illumination mapping under the condition that the overall shape of the virtual scene is regular, and meanwhile, the rendering efficiency of the illumination mapping during use can be ensured in the operation process.
Referring to fig. 6, a data generation method of an illumination map according to an embodiment of the present application is shown. In this embodiment, the target illumination map is a sparse volume illumination map (Sparse Volume Lightmap), which is taken as an example for explanation. The method includes:
step 601, acquiring a virtual scene.
The virtual scene comprises a virtual object, and a virtual light source is used for illuminating the virtual scene. Illustratively, the virtual scene is composed of at least one virtual object, and the virtual object comprises a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a static object in a virtual scene, or may be a dynamic object in the virtual scene.
Optionally, the virtual light source includes a light source capable of forming lighting effects such as spot light, parallel light, spotlight, and the like. The virtual scene may correspond to one or more virtual light sources, which is not limited herein.
In step 602, a first number of evenly distributed illumination probes are arranged at a target distance in a virtual scene.
The sparse volumetric illumination map provided in the embodiments of the present application is a further optimization of the dense volumetric illumination map, and therefore, the intervals between the first number of illumination probes provided in the embodiments of the present application are all the same.
Step 603, determining a second number of illumination probes from the first number of illumination probes according to the target screening condition.
Optionally, the target screening condition may indicate that a distance between the illumination probe and a virtual object in the virtual scene needs to satisfy a condition; or, indicating that a distance between the illumination probe and a virtual light source corresponding to the virtual scene needs to satisfy a condition.
In an embodiment of the application, the sparse volumetric illumination map is generated by a second number of illumination probes, whereas the core of the sparse volumetric illumination map is the distribution of the illumination probes. Illustratively, to maximize the data efficiency of a single illumination probe, the position of the illumination probe preferably satisfies the following three conditions:
(1) the center point of the illumination probe and the voxel block of the virtual object cannot overlap.
If the central point of the illumination probe is overlapped with the voxel block of the virtual object, the illumination probe is positioned in the virtual object, the ray corresponding to the illumination probe is blocked by the virtual object, the ray direction corresponding to the illumination probe cannot track the ray, and the calculated irradiance data indicates that the position is black.
(2) The distance between the center point of the illumination probe and the voxel block of the virtual object cannot be too close.
If the center point of the illumination probe is very close to a voxel block of the virtual object, the irradiance data contains little useful information, because the illumination probe can only "see" a small area.
(3) The distance between the center point of the illumination probe and the voxel block of the virtual object cannot be too far.
If the center point of the illumination probe is far from the voxel blocks of the virtual object, the information in the irradiance data obtained by the probe is of little use, because the area the probe can "see" is large while too little storage is allocated to represent it.
For the above three conditions, the target screening condition is taken as an example in which the distance between the light probe and the virtual object satisfies the condition. Illustratively, obtaining distance information between a first number of the illumination probes and the virtual object; a second number of illumination probes is screened from the first number of illumination probes based on the distance information.
In some embodiments, the distance information between each illumination probe and the surface of the virtual object, or a voxel block of the virtual object, is computed by a Signed Distance Field (SDF) method.
Illustratively, the size data of the virtual scene is acquired, and the resolution of a prefabricated illumination map is determined according to the size data and the target spacing, where the prefabricated illumination map is generated from the irradiance data acquired by the first number of illumination probes. Texture data is allocated according to this resolution and is used to store the prefabricated illumination map. The distance information between the first number of illumination probes and the virtual objects is determined based on the texture data, where the virtual scene contains a target number of virtual objects and the distance information indicates the distance between each illumination probe and the virtual object closest to it.
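As a small illustration of this step, the resolution of the prefabricated volume map can be derived from the scene bounds and the probe spacing roughly as follows; the parameter names are assumptions made for the sketch.

    cbuffer SceneBounds { float3 gSceneSize; float gProbeSpacing; };

    uint3 ComputeVolumeResolution()
    {
        // One probe cell every gProbeSpacing units along each axis,
        // plus one probe to close the last interval.
        return (uint3)ceil(gSceneSize / gProbeSpacing) + 1;
    }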
In some embodiments, when the distance information between an illumination probe and a virtual object satisfies a distance threshold condition, the illumination probe is determined to be an effective illumination probe corresponding to that virtual object. The effective illumination probes among the first number of illumination probes are the second number of illumination probes used to determine the sparse volume illumination map; that is, an illumination probe whose distance to a virtual object satisfies the distance threshold condition is determined to be an effective illumination probe corresponding to that virtual object, and the effective illumination probes corresponding to the target number of virtual objects are determined to be the second number of illumination probes.
In some embodiments, initial texture data is allocated at the resolution; the voxel blocks of the virtual objects are voxelized into the prefabricated illumination map, and object occupancy data is determined for the probe areas corresponding to the first number of illumination probes; and the initial texture data is updated according to the object occupancy data to obtain the texture data.
Illustratively, in response to a target voxel block corresponding to the virtual object falling within a target probe area (Cell), the object occupancy in the target probe area is accumulated to obtain the object occupancy data corresponding to the target probe area. The target probe area is the area in which a target illumination probe can effectively detect light, and the object occupancy in the target probe area indicates the number of objects for which the light detected by the target illumination probe can effectively irradiate the virtual object, where effective irradiation means the light can reach the virtual object and the intensity of the light reaching it meets the illumination intensity requirement. In response to the object occupancy data corresponding to the target probe area exceeding a target number threshold, the target probe area is marked as an object-occupied area.
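The counting step can be sketched as follows; the cell-count texture and parameter names are assumptions made for the illustration.

    RWTexture3D<uint> gCellCounts;   // one counter per probe area (Cell)

    cbuffer VoxelizeParams { float3 gVolumeOrigin; float gCellSize; };

    void AccumulateVoxel(float3 voxelCenter)
    {
        // Find the Cell the voxel block falls into and bump its counter;
        // Cells whose count later exceeds the threshold are marked "occupied by an object".
        uint3 cell = (uint3)floor((voxelCenter - gVolumeOrigin) / gCellSize);
        InterlockedAdd(gCellCounts[cell], 1);
    }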
After the object occupancy has been determined for the probe areas corresponding to the illumination probes, a Jump Flooding algorithm can be used to determine the distance information for the texture data.
In one example, the resolution of the dense volume illumination map to be generated is first calculated from the size of the scene and the spacing between the illumination probes, and a 3D texture of this resolution is allocated. In the voxelization (Voxelization) stage, the data recorded in the texture is a count value: for each voxel block, if it falls within a Cell corresponding to the dense volume illumination map, the count value corresponding to that Cell is incremented by one. After the voxelization stage ends, each Cell whose count value exceeds a threshold is marked as "occupied by an object". The Jump Flooding algorithm can then be used to calculate, for each Cell, the distance to the nearest occupied Cell, and this distance is used as the distance information between the illumination probe and the virtual object.
Schematically, a two-dimensional map is taken as an example to illustrate the Jump Flooding algorithm. Assume the size of the map is n×n. Before the algorithm runs, for each Cell c marked as "occupied by an object", the map data corresponding to c is initialized to <cx, cy>, i.e., the Cell closest to c is c itself. The flooding process is then run log n times; in each flooding pass with step size l, each Cell propagates the nearest-seed information it has recorded to at most 8 other Cells, whose coordinates are <cx + i, cy + j> where i, j ∈ {-l, 0, l}, and the step size l takes the values n/2, n/4, ..., 1 over the log n iterations. After the log n flooding passes, each Cell has recorded the coordinates of the "occupied by an object" Cell closest to it.
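One flooding pass can be sketched as a compute shader as follows, ping-ponging between an input and an output seed texture; the resource names and the int2 seed storage are assumptions made for the illustration. The sketch gathers from the 8 neighbours at the current step, which is equivalent to the scatter formulation described above.

    Texture2D<int2>   gSeedIn;    // per-Cell coordinates of the nearest occupied Cell, or (-1,-1)
    RWTexture2D<int2> gSeedOut;

    cbuffer JfaParams { int gStep; int2 gMapSize; };

    [numthreads(8, 8, 1)]
    void JumpFloodPassCS(uint3 id : SV_DispatchThreadID)
    {
        int2 c = (int2)id.xy;
        int2 best = gSeedIn[c];
        float bestDist = (best.x < 0) ? 1e30 : length(float2(best - c));

        // Check the seeds recorded by the 8 neighbours at offset gStep (and the cell itself).
        for (int j = -1; j <= 1; ++j)
        {
            for (int i = -1; i <= 1; ++i)
            {
                int2 n = c + int2(i, j) * gStep;
                if (any(n < 0) || any(n >= gMapSize))
                    continue;
                int2 seed = gSeedIn[n];
                if (seed.x < 0)
                    continue;
                float d = length(float2(seed - c));
                if (d < bestDist) { bestDist = d; best = seed; }
            }
        }
        gSeedOut[c] = best;
    }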
In some embodiments, once each Cell knows the coordinates of the "occupied by an object" Cell closest to it, the closest distance can be calculated. From the foregoing analysis, irradiance data only needs to be calculated for Cells whose distance lies within a valid range, which can be set by the developer. After the valid Cells are determined, they can be recorded in an Octree (Octree) or a K-dimensional tree (KD-Tree). At run time, to obtain the irradiance value in the normal direction of a shading point (Shading Point), the octree is first traversed from top to bottom to find the illumination probes corresponding to the surrounding Cells, and the irradiance data corresponding to those illumination probes is then interpolated to render the illumination effect.
Irradiance data for the surface of the virtual object is obtained with a second number of illumination probes, step 604.
The irradiance data is used to indicate the amount of light that the virtual light source shines on the surface of the virtual object.
Illustratively, for a certain spatial position, the irradiance is a spherical function, and calculating the irradiance of a single illumination probe is to calculate the irradiance spherical data where the central point of the illumination probe is located.
In the embodiment of the present application, the target threads in the thread group are called to track the light of the virtual light source, so as to obtain the radiance corresponding to the target direction, and the radiance of the target direction is converted into irradiance data, where the generation method of the irradiance data corresponding to the target direction is the same as that in steps 403 to 405, and is not described herein again.
In an example, as shown in fig. 7, a distribution of the illumination probes corresponding to the sparse volumetric illumination map provided by an exemplary embodiment of the present application is shown, where a space 700 corresponding to a virtual scene includes a virtual scene 710 composed of a plurality of virtual objects, the virtual scene corresponds to a virtual light source 720, a second number of illumination probes 730 are distributed in the space 700 corresponding to the virtual scene, and the illumination probes 730 can detect light emitted by the virtual light source 720.
Step 605, generating a target illumination map by encoding the irradiance data.
The target illumination map is used for baking illumination effects irradiated on the virtual object in the virtual scene.
In the embodiment of the application, the irradiance data obtained by the illumination probe is coded into the target illumination map in an ambient light cube coding mode, and the target illumination map is stored in a three-dimensional texture compression mode.
In some embodiments, when the second terminal acquires the target illumination map and renders the illumination effect in the virtual scene through it, the second terminal decodes the map according to its corresponding encoding mode to obtain decoded irradiance data, performs normal mapping with the decoded irradiance data, and uses the decoded irradiance data for the interpolation applied when displaying the illumination effect on the virtual object while the virtual scene is running. That is, a baking normal of the virtual object is acquired, and the target illumination map is sampled based on the baking normal and a target decoding mode to generate the decoded irradiance data. Rendering the illumination effect through normal mapping makes the rendered virtual scene more refined and improves the display of details in the virtual scene.
In one example, when the target illumination map is encoded using the Ambient Cube scheme, 3 directions are selected according to the baking normal and the corresponding data is then sampled; the corresponding code is designed as follows:
[Code listing provided as an image in the original publication]
the parameter posWorld represents the world coordinates of the rendered pixel, the parameter normalWorld represents the world normal of the rendered pixel, the variable nSquared stores the square of the world normal, and the variable isPositive records whether each component of the normal is greater than 0 for use in subsequent calculation. The function GetVolumeLightMapUv computes, from the world coordinates, the three-dimensional uv value used to sample the volume illumination map, and the function SampleTex3D performs the data sampling of the three-dimensional map. The value of the parameter irradiance returned by the overall function SampleVolumeLightMap3D is the calculated irradiance data in the normalWorld direction at the posWorld position.
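Working only from the parameter description above, a sampling function of this kind might be reconstructed as follows. This is a hedged sketch, not the application's actual code: the storage layout (one three-dimensional texture per ambient cube axis), the volume constants, and the uv mapping are assumptions made for illustration. Because the three squared normal components sum to 1, the blend of the three selected faces is already normalized.

// Assumed layout: one 3D texture per ambient-cube axis.
Texture3D    VolumeLightMapPosX, VolumeLightMapNegX;
Texture3D    VolumeLightMapPosY, VolumeLightMapNegY;
Texture3D    VolumeLightMapPosZ, VolumeLightMapNegZ;
SamplerState LinearClampSampler;

cbuffer VolumeConstants
{
    float3 VolumeOriginWorld; // assumed world-space origin of the probe volume
    float3 VolumeInvSize;     // assumed reciprocal of the volume's world extent
};

// Assumed mapping from world coordinates to the 3D uv of the volume map.
float3 GetVolumeLightMapUv(float3 posWorld)
{
    return (posWorld - VolumeOriginWorld) * VolumeInvSize;
}

float3 SampleTex3D(Texture3D tex, float3 uv)
{
    return tex.SampleLevel(LinearClampSampler, uv, 0).rgb;
}

float3 SampleVolumeLightMap3D(float3 posWorld, float3 normalWorld)
{
    float3 uv         = GetVolumeLightMapUv(posWorld);
    float3 nSquared   = normalWorld * normalWorld; // blend weights, sum to 1
    bool3  isPositive = normalWorld > 0.0;         // which three faces to use

    // Blend the three ambient-cube faces selected by the normal's signs.
    float3 irradiance = float3(0.0, 0.0, 0.0);
    if (isPositive.x) irradiance += nSquared.x * SampleTex3D(VolumeLightMapPosX, uv);
    else              irradiance += nSquared.x * SampleTex3D(VolumeLightMapNegX, uv);
    if (isPositive.y) irradiance += nSquared.y * SampleTex3D(VolumeLightMapPosY, uv);
    else              irradiance += nSquared.y * SampleTex3D(VolumeLightMapNegY, uv);
    if (isPositive.z) irradiance += nSquared.z * SampleTex3D(VolumeLightMapPosZ, uv);
    else              irradiance += nSquared.z * SampleTex3D(VolumeLightMapNegZ, uv);
    return irradiance;
}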
To sum up, according to the data generation method of the illumination map provided by the embodiment of the present application, when an illumination map needs to be generated for the illumination effect produced by the virtual light source on the virtual objects in the virtual scene, the illumination distribution in the virtual scene is detected by the illumination probes arranged in the virtual scene so as to determine the irradiance data corresponding to the object surfaces, and the irradiance data is encoded to generate the illumination map. Because the irradiance data is obtained through the illumination probes, the data volume of the illumination map is independent of the unwrapped (UV) area of the object models in the virtual scene; the data volume of the generated illumination map is reduced, the illumination map UV of the object models does not need to be obtained, and the amount of data required when generating the illumination map is reduced.
With the sparse volumetric illumination map provided by the embodiment of the present application, the number of illumination probes that are set is reduced according to the distance between the illumination probes and the virtual objects, which further reduces the data volume of the generated map and the storage resources consumed by the map.
Referring to fig. 8, a block diagram of a data generating apparatus for an illumination map according to an exemplary embodiment of the present application is shown, where the apparatus includes the following modules:
an obtaining module 810, configured to obtain a virtual scene, where the virtual scene includes a virtual object and a virtual light source illuminates the virtual scene;
a setting module 820, configured to set, in the virtual scene, a first number of uniformly distributed illumination probes at a target interval, where the illumination probes are configured to detect a light distribution condition of the virtual light source in the virtual scene;
an obtaining module 830, configured to obtain irradiance data of the surface of the virtual object through the first number of light probes, where the irradiance data is used to indicate an amount of light that the virtual light source irradiates the surface of the virtual object;
a generating module 840, configured to generate a target illumination map based on the irradiance data code, where the target illumination map is used to bake an illumination effect irradiated on the virtual object in the virtual scene.
In some optional embodiments, as shown in fig. 9, the obtaining module 830 further includes:
a calling unit 831, configured to call a thread group for each illumination probe, where the thread group includes target threads of a first thread number, and the target threads correspond to target directions;
a tracking unit 832, configured to track, by using the target thread, light of the virtual light source in the target direction, so as to obtain light data corresponding to the target direction;
a determining unit 833, configured to determine the irradiance data of the surface of the virtual object according to ray data tracked by threads in the first number of thread groups.
In some optional embodiments, the thread group includes a second number of threads, the thread bundle includes a third number of target threads, and rays traced by the target threads in the same thread bundle share the same stack trace.
In some optional embodiments, as shown in fig. 9, the obtaining module 830 further includes:
a distributing unit 834, configured to distribute the light rays emitted by the virtual light source to the thread bundles in Morton order, where the directions corresponding to the light rays in a light ray set processed by one thread bundle are within a target direction range, and the light ray set includes the light rays in the target direction;
the tracking unit 832 is further configured to track the light in the target direction through the target thread in the thread bundle, so as to obtain light data corresponding to the target direction.
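As a hedged illustration of the Morton-order distribution, a ray direction can be quantized and converted to a Morton code; sorting rays by this key places rays with similar directions next to each other, so the rays handled by one thread bundle fall within a narrow direction range and their traversal stays coherent. The function names below are assumptions of this sketch, not the application's actual code.

// Spread the lowest 10 bits of v so that two zero bits separate each input bit.
uint Part1By2(uint v)
{
    v &= 0x000003FF;
    v = (v ^ (v << 16)) & 0xFF0000FF;
    v = (v ^ (v <<  8)) & 0x0300F00F;
    v = (v ^ (v <<  4)) & 0x030C30C3;
    v = (v ^ (v <<  2)) & 0x09249249;
    return v;
}

// Quantize a unit direction to a 10-bit grid per axis and interleave the bits.
uint MortonCodeForDirection(float3 dir)
{
    uint3 q = (uint3)clamp((dir * 0.5 + 0.5) * 1023.0, 0.0, 1023.0);
    return (Part1By2(q.z) << 2) | (Part1By2(q.y) << 1) | Part1By2(q.x);
}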
In some optional embodiments, as shown in fig. 9, the setting module 820 further includes:
an obtaining unit 821, configured to obtain distance information between the first number of illumination probes and the virtual object;
a screening unit 822 for screening the second number of illumination probes from the first number of illumination probes based on the distance information;
the obtaining module 830 is further configured to obtain the irradiance data of the surface of the virtual object through the second number of illumination probes.
In some optional embodiments, the obtaining unit 821 is further configured to obtain size data of the virtual scene;
the obtaining unit 821 is further configured to determine a resolution of a prefabricated illumination map according to the size data and a target interval, where the prefabricated illumination map is generated by irradiance data obtained by the first number of illumination probes, and the target interval is an interval between the first number of illumination probes;
the obtaining unit 821 is further configured to obtain texture data according to the resolution, where the texture data is used to store the prefabricated illumination map;
the obtaining unit 821 is further configured to determine the distance information between the first number of illumination probes and the virtual object based on the texture data, where the virtual scene includes a target number of virtual objects, and the distance information is used to indicate a distance between the illumination probe and the virtual object closest to the illumination probe.
In some optional embodiments, the screening unit 822 is further configured to determine, as an effective illumination probe corresponding to the virtual object, an illumination probe for which the distance information between the virtual object and the illumination probe satisfies a distance threshold condition;
the screening unit 822 is further configured to determine the effective light probes corresponding to the target number of virtual objects as the second number of light probes.
In some alternative embodiments, the virtual object is comprised of a block of voxels;
the obtaining unit 821, further configured to obtain initial texture data at the resolution;
as shown in fig. 9, the setting module 820 further includes:
a determining unit 823, configured to perform voxelization on the voxel blocks of the virtual object through the prefabricated illumination map, and determine object occupancy data in the probe regions corresponding to the first number of illumination probes;
the obtaining unit 821 is further configured to update the initial texture data according to the object occupation data to obtain the texture data.
In some optional embodiments, the determining unit 823 is further configured to, in response to that a target voxel block corresponding to the virtual object falls in a target probe region, add up the object occupancy in the target probe region to obtain the object occupancy data corresponding to the target probe region, where the target probe region is an area range where the target illumination probe can effectively detect light.
In some optional embodiments, the determining unit 823 is further configured to mark the target probe area as an object-full area in response to the object occupancy data corresponding to the target probe area exceeding a target quantity threshold.
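As a hedged illustration of the occupancy accumulation and marking described above, each voxel block can atomically increment the counter of the probe region it falls into, and a region whose counter exceeds the target quantity threshold is then marked as occupied by an object. The resource names, the assumption that voxel blocks lie inside the probe volume, and the thread-group sizes are illustrative only.

RWTexture3D<uint> OccupancyCount; // one counter per probe region
RWTexture3D<uint> OccupiedFlag;   // 1 = region occupied by an object, 0 = empty

cbuffer VoxelizeConstants
{
    float3 VolumeOriginWorld;
    float  ProbeSpacing;       // the target interval between probes
    uint   OccupancyThreshold; // target quantity threshold
};

StructuredBuffer<float3> VoxelBlockCenters; // world-space centres of voxel blocks

[numthreads(64, 1, 1)]
void AccumulateOccupancy(uint3 id : SV_DispatchThreadID)
{
    uint count, stride;
    VoxelBlockCenters.GetDimensions(count, stride);
    if (id.x >= count)
        return;

    // Find the probe region that this voxel block falls into
    // (out-of-range UAV writes are ignored by the hardware).
    float3 local  = (VoxelBlockCenters[id.x] - VolumeOriginWorld) / ProbeSpacing;
    uint3  region = (uint3)floor(local);

    uint previous;
    InterlockedAdd(OccupancyCount[region], 1, previous);
}

[numthreads(4, 4, 4)]
void MarkOccupiedRegions(uint3 region : SV_DispatchThreadID)
{
    OccupiedFlag[region] = (OccupancyCount[region] > OccupancyThreshold) ? 1 : 0;
}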
In some optional embodiments, as shown in fig. 9, the apparatus further comprises:
and the compression module 850 is configured to compress the target illumination map by using a three-dimensional texture compression technique to obtain map compressed data.
In some optional embodiments, the apparatus further comprises:
a decoding module 860 for obtaining a baking normal of the virtual object;
the decoding module 860 is further configured to sample the target illumination map based on the baking normal and a target decoding manner, and generate decoded irradiance data, where the decoded irradiance data is used for interpolation used when displaying an illumination effect on the virtual object during the running of the virtual scene.
To sum up, with the data generation apparatus for an illumination map provided by the embodiment of the present application, when an illumination map needs to be generated for the illumination effect produced by the virtual light source on the virtual objects in the virtual scene, the illumination distribution in the virtual scene is detected by the illumination probes arranged in the virtual scene so as to determine the irradiance data corresponding to the object surfaces, and the irradiance data is encoded to generate the illumination map. Because the irradiance data is obtained through the illumination probes, the data volume of the illumination map is independent of the unwrapped (UV) area of the object models in the virtual scene; the data volume of the generated illumination map is reduced, the illumination map UV of the object models does not need to be obtained, and the amount of data required when generating the illumination map is reduced.
It should be noted that: the data generating apparatus for an illumination map provided in the foregoing embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data generating device of the illumination map provided by the above embodiment and the data generating method embodiment of the illumination map belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and is not described herein again.
Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present application. The terminal 1000 may be: a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a Graphics Processing Unit (GPU) which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1001 may also include an Artificial Intelligence (AI) processor for processing computational operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, which is executed by the processor 1001 to implement the data generation method of an illumination map provided by the method embodiments herein.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, display screen 1005, audio circuitry 1007, and power supply 1009.
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer-readable storage medium. The computer-readable storage medium may be a computer-readable storage medium contained in the memory of the above embodiments, or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the data generation method of an illumination map of any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (16)

1. A method for generating data of an illumination map, the method comprising:
acquiring a virtual scene, wherein the virtual scene comprises a virtual object and a virtual light source illuminates the virtual scene;
setting a first number of uniformly distributed illumination probes at a target interval in the virtual scene, wherein the illumination probes are used for detecting the light distribution condition of the virtual light source in the virtual scene;
obtaining irradiance data of the surface of the virtual object through the first number of illumination probes, wherein the irradiance data is used for indicating the quantity of light rays irradiated on the surface of the virtual object by the virtual light source;
and generating a target illumination map based on the irradiance data code, wherein the target illumination map is used for baking the illumination effect irradiated on the virtual object in the virtual scene.
2. The method of claim 1, wherein said obtaining irradiance data for the surface of the virtual object with the first number of illumination probes comprises:
calling a thread group for each illumination probe, wherein the thread group comprises target threads with a first thread number, and the target threads correspond to target directions;
tracking the light of the virtual light source in the target direction through the target thread to obtain light data corresponding to the target direction;
determining the irradiance data of the virtual object surface from ray data traced by threads in a first number of thread groups.
3. The method of claim 2, wherein the thread group comprises a second number of threads, wherein the thread bundle comprises a third number of threads, and wherein rays traced by the target threads in the same thread bundle share the same stack trace.
4. The method of claim 3, wherein the tracking, by the target thread, the light of the virtual light source in the target direction to obtain light data corresponding to the target direction comprises:
distributing the light rays emitted by the virtual light source to the thread bundles in Morton order, wherein the directions corresponding to the light rays in a light ray set processed by one thread bundle are within a target direction range, and the light ray set comprises the light rays in the target direction;
and tracking the light rays in the target direction through the target thread in the thread bundle to obtain light ray data corresponding to the target direction.
5. The method of any of claims 1 to 4, wherein said obtaining irradiance data for the surface of the virtual object with the first number of illumination probes comprises:
acquiring distance information between the first number of illumination probes and the virtual object;
selecting a second number of illumination probes from the first number of illumination probes based on the distance information;
obtaining the irradiance data for the virtual object surface with the second number of illumination probes.
6. The method of claim 5, wherein obtaining distance information between the first number of light probes and the virtual object comprises:
acquiring size data of the virtual scene;
determining the resolution of a prefabricated illumination map according to the size data and a target interval, wherein the prefabricated illumination map is generated by irradiance data obtained by the first number of illumination probes, and the target interval is the interval between the first number of illumination probes;
acquiring texture data according to the resolution, wherein the texture data is used for storing the prefabricated illumination map;
determining the distance information between the first number of illumination probes and the virtual objects based on the texture data, the virtual scene including a target number of virtual objects, the distance information indicating a distance between the illumination probes and the virtual objects closest in distance.
7. The method of claim 6, wherein said screening a second number of illumination probes from said first number of illumination probes based on said distance information comprises:
determining the illumination probe with the distance information between the illumination probe and the virtual object meeting the distance threshold condition as an effective illumination probe corresponding to the virtual object;
and determining the effective illumination probes corresponding to the virtual objects of the target number as the illumination probes of the second number.
8. The method of claim 6, wherein the virtual object is comprised of a block of voxels;
the acquiring texture data according to the resolution includes:
obtaining initial texture data at the resolution;
carrying out voxelization on the voxel blocks of the virtual object through the prefabricated illumination map, and determining object occupation data in probe areas corresponding to the first number of illumination probes;
and updating the initial texture data according to the object occupation data to obtain the texture data.
9. The method of claim 8, wherein the determining object occupancy data in the probe region corresponding to the first number of illumination probes by voxelizing a voxel block of the virtual object through the pre-fabricated illumination map comprises:
and responding to that the target voxel block corresponding to the virtual object falls in a target probe area, and accumulating the object occupation amount in the target probe area to obtain the object occupation data corresponding to the target probe area, wherein the target probe area is an area range within which the target illumination probe can effectively detect light.
10. The method of claim 9, further comprising:
and in response to the object occupancy data corresponding to the target probe area exceeding a target quantity threshold, marking the target probe area as an object occupied area.
11. The method of any one of claims 1 to 4, wherein after the encoding generating the target illumination map based on the irradiance data, further comprising:
and compressing the target illumination map by a three-dimensional texture compression technology to obtain compressed data of the map.
12. The method of any of claims 1 to 4, further comprising:
acquiring a baking normal of the virtual object;
and sampling the target illumination map based on the baking normal and a target decoding mode to generate decoded irradiance data, wherein the decoded irradiance data is used for interpolation used when the illumination effect on the virtual object is displayed during the running of the virtual scene.
13. An apparatus for generating data for an illumination map, the apparatus comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a virtual scene, the virtual scene comprises a virtual object, and a virtual light source is used for illuminating the virtual scene;
the device comprises a setting module, a control module and a display module, wherein the setting module is used for setting a first number of uniformly distributed illumination probes in the virtual scene at a target interval, and the illumination probes are used for detecting the light distribution condition of the virtual light source in the virtual scene;
the acquisition module is used for acquiring irradiance data of the surface of the virtual object through the first number of the illumination probes, wherein the irradiance data is used for indicating the light quantity of the virtual light source irradiating the surface of the virtual object;
and the generating module is used for generating a target illumination map based on the irradiance data code, and the target illumination map is used for baking the illumination effect irradiated on the virtual object in the virtual scene.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the data generation method of a light map according to any one of claims 1 to 12.
15. A computer-readable storage medium, having at least one program code stored therein, the program code being loaded and executed by a processor to implement the data generation method of an illumination map according to any one of claims 1 to 12.
16. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the data generation method of an illumination map according to any of claims 1 to 12.
CN202111642785.9A 2021-11-19 2021-12-29 Data generation method, device, equipment, medium and program product of illumination map Pending CN114299220A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021113740875 2021-11-19
CN202111374087 2021-11-19

Publications (1)

Publication Number Publication Date
CN114299220A true CN114299220A (en) 2022-04-08

Family

ID=80970954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111642785.9A Pending CN114299220A (en) 2021-11-19 2021-12-29 Data generation method, device, equipment, medium and program product of illumination map

Country Status (1)

Country Link
CN (1) CN114299220A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984444A (en) * 2023-03-21 2023-04-18 成都信息工程大学 Illumination information cache calculation method and system for volume data global illumination
CN116030180A (en) * 2023-03-30 2023-04-28 北京渲光科技有限公司 Irradiance cache illumination calculation method and device, storage medium and computer equipment
CN116030180B (en) * 2023-03-30 2023-06-09 北京渲光科技有限公司 Irradiance cache illumination calculation method and device, storage medium and computer equipment
CN117392251A (en) * 2023-12-06 2024-01-12 海马云(天津)信息技术有限公司 Decoding performance optimization method for texture data in ASTC format in Mesa 3D graphics library
CN117392251B (en) * 2023-12-06 2024-02-09 海马云(天津)信息技术有限公司 Decoding performance optimization method for texture data in ASTC format in Mesa 3D graphics library


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination