CN115082613A - Target radiance calculation method and device and terminal equipment - Google Patents


Info

Publication number
CN115082613A
Authority
CN
China
Prior art keywords
target
model
gpu
ray
ray tracing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210829013.4A
Other languages
Chinese (zh)
Inventor
王静
王放
马岩
赵军明
陈红
朱肇昆
徐非凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Beijing Institute of Environmental Features
63921 Troops of PLA
Original Assignee
Harbin Institute of Technology
Beijing Institute of Environmental Features
63921 Troops of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology, Beijing Institute of Environmental Features, and 63921 Troops of PLA
Priority to CN202210829013.4A
Publication of CN115082613A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining

Abstract

The invention relates to the technical field of computer graphics processing, and in particular to a method and device for calculating target radiance and to a terminal device. The method is applied to a terminal device that comprises a central processing unit (CPU) and a graphics processing unit (GPU) and is configured with the ray tracing application framework Optix. The method comprises the following steps: the CPU obtains a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model; the simulation model is transferred to the GPU; the GPU builds an acceleration structure for the target model; the GPU starts ray tracing of the simulation model based on the Optix framework and accelerates the ray tracing with the acceleration structure to obtain a ray tracing result; the GPU determines the target radiance from the ray tracing result; and the GPU sends the target radiance to the CPU. The target radiance calculation method provided by the application achieves high calculation efficiency while guaranteeing calculation accuracy.

Description

Target radiance calculation method and device and terminal equipment
Technical Field
The invention relates to the technical field of computer graphics processing, and in particular to a method and a device for calculating target radiance and to a terminal device.
Background
With the development of science and technology, the required precision of target radiance simulation keeps increasing. Meeting this requirement means refining the target model, for example by increasing the number of grids of the target or by reducing algorithmic simplifications. However, this significantly increases the simulation time, so high precision and real-time performance cannot be balanced.
Therefore, a method for calculating the target radiance is needed to solve the above technical problem.
Disclosure of Invention
The embodiment of the invention provides a method and a device for calculating target radiance and terminal equipment, which can greatly shorten the calculation time while ensuring the calculation accuracy.
In a first aspect, an embodiment of the present invention provides a target radiance calculation method, which is applied to a terminal device, where the terminal device includes a central processing unit CPU and a graphics processing unit GPU, and a ray tracing application Optix framework is configured in the terminal device, and the method includes:
the CPU obtains a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model;
transferring the simulation model to the GPU;
the GPU builds an acceleration structure of the target model;
the GPU starts ray tracing of the simulation model based on the Optix framework, accelerates the ray tracing by using the acceleration structure and obtains a ray tracing result;
and the GPU determines the target radiance according to the ray tracing result and sends the target radiance to the CPU.
In one possible design, the transferring the simulation model to the GPU includes:
transferring the target model to the GPU by constructing a Shader Binding Table (SBT), wherein the target model comprises a position, a geometric parameter, a texture and a physical parameter of a target;
updating the camera model and the light source model to the GPU by updating a starting parameter Launch Params; the camera model comprises the position, resolution and pitch angle of the camera, and the light source model comprises the geometric parameters, position, direction and radiance of the light source.
In one possible design, before the GPU constructs the acceleration structure of the target model, the method further includes:
initialize the Optix framework, create contexts, modules, program groups, and program pipelines.
In one possible design, the acceleration structure is a BVH structure.
In one possible design, each pixel in the simulation model imaging plane corresponds to a number of rays;
before the GPU initiates ray tracing based on the Optix framework, the method further comprises the following steps:
according to the position, the resolution and the pitch angle of the camera, distributing a thread ID to each pixel in the simulation model imaging plane;
according to the thread ID of each pixel, initializing Prd data of each ray in the pixel, wherein the Prd data comprises the direction of the ray, the radiance of the ray and a random number sequence.
In one possible design, the GPU initiates ray tracing of the simulation model based on the Optix framework and accelerates the ray tracing using the acceleration structure to obtain a ray tracing result, including:
for each ray of each pixel in the imaging plane, performing:
s1, judging whether the ray intersects with the target surface;
if the ray does not intersect, using the background light radiation brightness corresponding to the ray as a return value, updating the Prd data according to the return value, stopping tracking the ray, and executing S3;
if the intersection exists, calling a preset function at a first intersection point, updating the Prd data according to the calculation result of the preset function, transmitting shadow rays to a light source by taking the first intersection point as a starting point, and judging whether the shadow rays can reach the light source; if yes, updating the light source radiation brightness corresponding to the shadow ray to the Prd data, and executing S2; if not, not updating the Prd data and executing S2; the first intersection point is the intersection point which is closest to the camera in the intersection points of the light rays and the target surface;
s2, adding 1 to the depth value, judging whether the updated depth value is equal to the depth threshold value, if yes, stopping tracing the light, and executing S3; if not, continuing to track the reflected ray generated by the ray at the first intersection point, and returning to execute S1;
s3, the radiance of the light in the current Prd data is used as the tracking result of the light.
In one possible design, the preset function is a close_hit function.
In one possible design, the GPU determines a target radiance from the ray tracing result, including:
taking the average value of the radiance of a plurality of light rays in each pixel as the radiance of the corresponding pixel;
and combining the radiance of each pixel to obtain the radiance of the target.
In a second aspect, an embodiment of the present invention further provides a target radiance calculation apparatus, including:
the system comprises a first acquisition module, a second acquisition module and a control module, wherein the first acquisition module is used for acquiring a simulation model, and the simulation model comprises a target model, a camera model and a light source model;
the first sending module is used for transferring the simulation model to the GPU;
a construction module for constructing an acceleration structure of the target model;
the second obtaining module is used for starting ray tracing of the simulation model based on the Optix framework, accelerating the ray tracing by using the accelerating structure and obtaining a ray tracing result;
the determining module is used for determining the target radiation brightness according to the ray tracing result;
and the second sending module is used for sending the target radiation brightness to the CPU.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a CPU and a GPU, and is configured with a ray tracing application Optix framework;
the CPU is used for acquiring a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model, and transferring the simulation model to the GPU;
the GPU is used for constructing an acceleration structure of the target model, starting ray tracing of the simulation model based on the Optix framework, accelerating the ray tracing by using the acceleration structure and obtaining a ray tracing result; and determining the target radiation brightness according to the ray tracing result, and sending the target radiation brightness to the CPU.
The embodiment of the invention provides a target radiance calculation method and device and a terminal device. Specifically, the CPU first obtains a simulation model comprising a target model, a camera model and a light source model. Because the GPU has strong parallel computing capability, the simulation model is transferred to the GPU, which builds an acceleration structure for the target model to speed up the traversal of rays through the simulated scene. Ray tracing of the simulation model is then started on the GPU based on the Optix framework to obtain a ray tracing result, the target radiance is determined from that result, and finally the target radiance is sent to the CPU. From the above analysis, in the embodiment of the invention the CPU and the GPU compute the target radiance heterogeneously and in parallel under the Optix framework, so the calculation accuracy is guaranteed while the calculation efficiency remains high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of a method for calculating radiance of a target according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a ray tracing scenario of a ray tracing algorithm according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of rendering equations provided by an embodiment of the invention;
FIG. 4 is a diagram illustrating the calculation of shadow rays according to an embodiment of the present invention;
FIG. 5(a) is a schematic front view of the satellite model according to an embodiment of the present invention;
FIG. 5(b) is a schematic side view of the satellite model according to an embodiment of the present invention;
FIG. 6(a) is an image calculated in the red wavelength band by the method of the present invention according to an embodiment of the present invention;
FIG. 6(b) is a diagram of an image calculated in the green band using the method of the present invention according to an embodiment of the present invention;
FIG. 6(c) is a diagram of an image calculated in the blue band by the method of the present invention according to an embodiment of the present invention;
fig. 6(d) is an image calculated by the method of the present invention and displayed by placing three colors of red, green and blue on three channels in a visible light band according to an embodiment of the present invention;
fig. 6(e) is a gray image calculated by the method of the present invention and displayed by placing three colors of red, green, and blue in the same channel in the visible light band according to an embodiment of the present invention;
FIG. 7 is a diagram of a hardware architecture of a computing device according to an embodiment of the present invention;
fig. 8 is a block diagram of a target radiance calculation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As described above, the conventional target radiance calculation method cannot balance the calculation accuracy with the calculation time.
In order to solve the problem, the inventor proposes that the CPU and the GPU can be enabled to perform heterogeneous parallel computation on the radiance of the target under an Optix framework, and an acceleration structure is used at the GPU end to shorten the computation time.
Specific implementations of the above concepts are described below.
Referring to fig. 1, an embodiment of the present invention provides a method for calculating target radiance, where the method is applied to a terminal device, where the terminal device includes a central processing unit CPU and a graphics processing unit GPU, and a ray tracing application Optix framework is configured in the terminal device, and the method includes:
step 100, a CPU obtains a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model;
step 102, transferring the simulation model to a GPU;
104, the GPU builds an acceleration structure of the target model;
106, starting ray tracing of the simulation model by the GPU based on an Optix framework, and accelerating the ray tracing by using the acceleration structure to obtain a ray tracing result;
step 108, the GPU determines the target radiation brightness according to the ray tracing result;
and step 110, the GPU sends the target radiance to the CPU.
In this embodiment, the CPU is used to obtain the simulation model. Because the GPU has strong parallel computing capability, the simulation model is transferred to the GPU, which builds an acceleration structure for the target model to speed up the traversal of rays through the simulated scene; ray tracing of the simulation model is then started on the GPU based on the Optix framework to obtain a ray tracing result, the target radiance is determined from the ray tracing result, and finally the target radiance is sent to the CPU. In this embodiment, the CPU and the GPU compute the target radiance heterogeneously and in parallel under the Optix framework, so the calculation accuracy is guaranteed while the calculation efficiency remains high.
For better understanding, before describing the specific implementation of each step, the principle of determining the radiance of an object using ray tracing and the rendering equation need to be explained:
Fig. 2 is a schematic diagram of a ray tracing scene of a ray tracing algorithm. A specified number of rays are emitted from the camera toward each pixel of the imaging plane, the transmission process of each ray is traced and recorded, and the rendering equation is solved from that transmission process; that is, the radiance of the object (target) is obtained by solving the rendering equation.
FIG. 3 is a schematic diagram of rendering equations, which in computer graphics are the classical rendering equations as follows:
L_o(x,\omega_o,\lambda,t) = L_e(x,\omega_o,\lambda,t) + \int_{\Omega} f_\lambda(x,\omega_i,\omega_o,\lambda,t)\, L_i(x,\omega_i,\lambda,t)\,(\omega_i \cdot n)\, d\omega_i \qquad (1-1)
where L_o(x, ω_o, λ, t) is the spectral radiance leaving the object surface point x in the direction ω_o at wavelength λ; L_e(x, ω_o, λ, t) is the radiance emitted at the surface point x in the direction ω_o; L_i(x, ω_i, λ, t) is the spectral radiance incident from the direction ω_i; ω_o is the reflection direction of the light ray, which in the calculation is equivalent to the opposite of the view ray emitted from the camera; t is time; f_λ(x, ω_i, ω_o, λ, t) is the bidirectional reflectance distribution function (BRDF) of the object at the surface point x; ω_i·n is the geometric term, the dot product of the incident unit vector and the unit normal vector, i.e. cos θ_i; dω_i is the differential solid angle in the incident direction; and θ_i is the angle between the incident ray and the normal.
It can be seen that the rendering equation (1-1) is an integral equation. In conventional ray tracing, a large number of photons must be emitted for the result to converge and, limited by the available computing power, this takes an unacceptably long time. Many researchers have therefore proposed illumination models such as Phong and Blinn-Phong that simplify the equation so it can be solved quickly; these simplified models usually ignore multiple scattering of rays in the scene and only account for the direct illumination of the light source on the surface radiance, so their results lack realism and have low accuracy.
With the improvement of computing power and the development of NVIDIA's CUDA parallel GPU computing architecture, the ray tracing algorithm has been widely studied thanks to its high degree of parallelism. In the calculation method of this application, the GPU starts ray tracing based on the Optix framework and the solution for each pixel is independent, so a computation thread can be allocated to each pixel according to the camera resolution, which further accelerates the calculation.
In addition, to improve the accuracy of the model solution, the integral term of the rendering equation (1-1) is solved with a path tracing algorithm based on Monte Carlo integration. Because the calculation is treated as instantaneous, the time term of the equation can be omitted; for readability, the position and wavelength parameters are also omitted. The integral term of the simplified rendering equation (1-1) is then computed as equation (1-2):
\int_{\Omega} f(\omega_j,\omega_o)\, L_i(\omega_j)\,(\omega_j \cdot n)\, d\omega_j \approx \frac{1}{N}\sum_{j=1}^{N} \frac{f(\omega_j,\omega_o)\, L_i(\omega_j)\,(\omega_j \cdot n)}{p(\omega_j)} \qquad (1-2)
where p(ω_j) is the probability density used for sampling in the hemispherical space and N is the number of photons.
It can be seen that equation (1-2) converts the integral into the computation of an expectation by mathematical statistics. Compared with solving the integral by a discrete-ordinates method, only one photon has to be tracked at each reflection and the integral is approximated through a large number of photons, instead of reflecting one ray into every grid of a discretized hemispherical space at each bounce; this also suits the small per-thread cache available in CUDA parallel computation.
At present, the commonly used sampling probability densities are uniform sampling over the hemispherical space and sampling according to the Lambertian probability density. The formulas of the two sampling methods are as follows:
(1) hemispherical uniform sampling:
p_u(\omega_j) = \frac{1}{2\pi} \qquad (1-3)
\theta_j = \arccos(1-\xi_1) \qquad (1-4)
\phi_j = 2\pi\xi_2 \qquad (1-5)
where p_u(ω_j) is the probability density of uniform sampling, θ_j and φ_j are the zenith angle and circumferential angle of the sampled reflection direction ω_j, and ξ_1 and ξ_2 are random numbers between 0 and 1.
(2) Lambertian probability density sampling:
p_l(\omega_j) = \frac{\cos\theta_j}{\pi} \qquad (1-6)
\theta_j = \arccos\left(\sqrt{1-\xi_1}\right) \qquad (1-7)
\phi_j = 2\pi\xi_2 \qquad (1-8)
where p_l(ω_j) is the probability density of Lambertian sampling, and the other parameters have the same meaning as above.
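As an illustration of the two sampling schemes above, the following CUDA device functions sketch how a direction (θ_j, φ_j) can be drawn from the two random numbers ξ_1 and ξ_2 and converted to a unit vector in a local frame whose z-axis is the surface normal. This is a minimal sketch for illustration only; the function names and the choice of the normal's local frame are assumptions, not taken from the patent.

```cuda
#include <cuda_runtime.h>
#include <math.h>

#define PI_F 3.14159265358979f

// Hemispherical uniform sampling, p_u(w_j) = 1/(2*pi), Eqs. (1-3)-(1-5).
__device__ float3 sampleUniformHemisphere(float xi1, float xi2, float* pdf)
{
    float theta = acosf(1.0f - xi1);          // zenith angle, Eq. (1-4)
    float phi   = 2.0f * PI_F * xi2;          // circumferential angle, Eq. (1-5)
    *pdf = 1.0f / (2.0f * PI_F);              // Eq. (1-3)
    float st = sinf(theta);
    return make_float3(st * cosf(phi), st * sinf(phi), cosf(theta));
}

// Lambertian (cosine-weighted) sampling, p_l(w_j) = cos(theta_j)/pi, Eqs. (1-6)-(1-8).
__device__ float3 sampleLambert(float xi1, float xi2, float* pdf)
{
    float theta = acosf(sqrtf(1.0f - xi1));   // Eq. (1-7)
    float phi   = 2.0f * PI_F * xi2;          // Eq. (1-8)
    *pdf = cosf(theta) / PI_F;                // Eq. (1-6)
    float st = sinf(theta);
    return make_float3(st * cosf(phi), st * sinf(phi), cosf(theta));
}
```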
Further, the summand in equation (1-2),
\frac{f(\omega_j,\omega_o)\, L_i(\omega_j)\,(\omega_j \cdot n)}{p(\omega_j)},
is complicated to evaluate. If we let
p(\omega_j) = \frac{f(\omega_j,\omega_o)\,(\omega_j \cdot n)}{\int_{\Omega} f(\omega_j,\omega_o)\,(\omega_j \cdot n)\, d\omega_j} \qquad (1-9)
the BRDF term in the numerator can be eliminated, as can the geometric term. If, in addition, the object surface is a diffuse reflecting surface, the BRDF satisfies
f = \frac{\rho_\lambda}{\pi} \qquad (1-10)
where ρ_λ is the reflectivity of the surface, and further
p(\omega_j) = \frac{\omega_j \cdot n}{\pi} = \frac{\cos\theta_j}{\pi} \qquad (1-11)
It can be found that equation (1-11) is exactly the Lambertian sampling probability density of equation (1-6). In reality, most object surfaces are close to diffuse reflecting surfaces, so the Lambertian sampling probability density generally achieves a faster convergence rate; the embodiments of the present application therefore adopt the Lambertian sampling probability density.
The manner in which the various steps shown in fig. 1 are performed is described below.
First, with respect to step 100, the CPU obtains a simulation model including a target model, a camera model, and a light source model.
As shown in fig. 2, the simulation model is used to simulate a ray tracing scene containing a target, a camera and a light source. Within the imaging plane, a number of rays are emitted from the camera toward the target. Some rays do not intersect the object and fall into the background; others intersect the object and generate reflected rays, which continue to propagate through the scene and either fall into the background or reach the light source. Ray tracing records the transmission process of each ray, accumulates the energy along this process, and finally determines the radiance of the target from the accumulated result.
In the simulation model there may be one or more light sources and targets, which may be of the same type or of different types; the present application is not particularly limited in this respect.
Next, for step 102, transferring the simulation model to the GPU includes:
and step A, transferring a target model to the GPU by constructing a Shader Binding Table (SBT), wherein the target model comprises the position, the geometric parameters, the texture and the physical parameters (such as reflectivity, temperature, refractive index and the like) of the target. In this step, the SBT is composed of sharder record through which the data is transferred to the GPU.
B, updating the camera model and the light source model to the GPU by updating a starting parameter Launch Params; the camera model includes the position, resolution and pitch angle of the camera, and the light source model includes the geometric parameters, position, orientation and radiance of the light source.
In steps a and B, the camera model and the light source model are stored in different locations of the GPU than the target model, so that when the light source or camera parameters need to be adjusted while the target model is unchanged, there is no need to reconstruct the acceleration structure.
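The split between steps A and B can be sketched with two host-side data structures in the style of the OptiX 7 API. The field names and layouts below are assumptions chosen for illustration, not the patent's actual definitions; only OPTIX_SBT_RECORD_ALIGNMENT and OPTIX_SBT_RECORD_HEADER_SIZE come from the OptiX headers.

```cuda
#include <optix.h>
#include <cuda_runtime.h>

// Step A: per-geometry data carried in a shader record of the SBT (assumed layout).
struct TargetData {
    float3* vertices;             // geometric parameters of the target
    float3* normals;
    float2* texcoords;
    cudaTextureObject_t texture;  // texture of the target
    float   reflectivity;         // physical parameters: reflectivity, temperature, ...
    float   temperature;
};

struct HitgroupRecord {
    __align__(OPTIX_SBT_RECORD_ALIGNMENT) char header[OPTIX_SBT_RECORD_HEADER_SIZE];
    TargetData data;
};

// Step B: camera and light source passed through the launch parameters (assumed layout),
// stored separately from the target model so they can be changed without rebuilding
// the acceleration structure.
struct LaunchParams {
    float3 camPosition;                  // camera position
    int2   camResolution;                // camera resolution (width, height)
    float  camPitch;                     // camera pitch angle
    float3 lightPosition;                // light source position
    float3 lightDirection;               // light source direction
    float  lightRadiance;                // light source radiance
    float  lightArea;                    // light source area A, used for shadow rays
    float* radianceBuffer;               // output: one radiance value per pixel
    OptixTraversableHandle traversable;  // handle of the acceleration structure
};
```

In OptiX 7 the hit-group record header is filled on the host with optixSbtRecordPackHeader() and the record is copied into the OptixShaderBindingTable, while a LaunchParams instance is copied to device memory and its address passed to optixLaunch().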
Then, before step 104, i.e. before the GPU constructs the acceleration structure of the target model, the method further includes: initializing the Optix framework, creating a context, and creating the modules, program groups and program pipeline. This makes it possible to check whether the terminal device can run the Optix framework normally and whether the GPU is ready to be started.
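A rough outline of this initialization with the OptiX 7 host API might look as follows; error handling, the module compile options and the program-group and pipeline creation details are omitted, so this is a sketch rather than the patent's actual code.

```cuda
#include <cuda.h>
#include <cuda_runtime.h>
#include <optix.h>
#include <optix_function_table_definition.h>  // defines the OptiX function table (once per program)
#include <optix_stubs.h>

OptixDeviceContext initOptix()
{
    cudaFree(0);   // force initialization of the CUDA runtime; fails if no usable GPU is present
    optixInit();   // load the OptiX entry points; fails if the driver cannot run Optix

    CUcontext cuCtx = 0;  // 0 means: use the current CUDA context
    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    optixDeviceContextCreate(cuCtx, &options, &context);

    // Next steps (not shown): optixModuleCreateFromPTX() for the module,
    // optixProgramGroupCreate() for the raygen/miss/hitgroup program groups,
    // and optixPipelineCreate() for the program pipeline.
    return context;
}
```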
With respect to step 104, the GPU builds an acceleration structure of the target model.
In this step, the acceleration structure may be a BVH (bounding volume hierarchy) structure. Its construction depends on the geometric parameters of the target, and once built it is used to accelerate the traversal of rays through the scene.
For example, assume that the scene shown in fig. 2 contains m grids. With CPU computation, which uses no acceleration structure, every reflection of every pixel's rays must traverse all m grids. After the GPU uses the acceleration structure, at most on the order of log2(m) grids need to be traversed each time, and for pixels corresponding to the background not a single grid needs to be traversed, so the computation time can be greatly reduced.
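As a concrete illustration with the satellite model used in the experiments below (34,981 grids, see Table 1), the bound works out to roughly
\log_2(34981) \approx 15.1
i.e. on the order of 15 node visits per traversal instead of 34,981 grid intersection tests.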
In the simulation model shown in fig. 2, each pixel in the imaging plane corresponds to several light rays, and before performing step 106, the method further includes:
according to the position, the resolution and the pitch angle of the camera, allocating a thread ID to each pixel in the imaging plane of the simulation model;
according to the thread ID of each pixel, the Prd data of each ray in the pixel is initialized, and the Prd data comprises the direction of the ray, the radiance of the ray and a random number sequence.
In this step, after the position, resolution and pitch angle of the camera are determined, the ID of each pixel, i.e. the thread ID, can be determined according to the position of the pixel in the resolution, and the initial state of each ray, i.e. the starting point of ray tracing, can be determined according to the thread ID.
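A sketch of the corresponding ray-generation program is given below. optixGetLaunchIndex() and optixGetLaunchDimensions() are real OptiX device functions that supply the per-pixel thread index; the Prd layout, the simple pinhole direction computation (which ignores the pitch-angle rotation) and the per-pixel seeding are illustrative assumptions rather than the patent's actual code.

```cuda
#include <optix.h>

// Per-ray data (assumed layout): direction, accumulated radiance (one spectral band),
// random number state, current depth, and a flag set by the miss program.
struct Prd {
    float3       origin;     // current ray origin, updated to the latest hit point
    float3       direction;  // current ray direction
    float        radiance;   // accumulated radiance of the ray
    unsigned int rngState;   // state of the per-ray random number sequence
    int          depth;      // current reflection depth
    bool         done;       // true once the ray has left the scene
};

extern "C" __global__ void __raygen__renderFrame()
{
    const uint3 idx = optixGetLaunchIndex();              // pixel coordinates of this thread
    const uint3 dim = optixGetLaunchDimensions();         // launch size = camera resolution
    const unsigned int pixelId = idx.y * dim.x + idx.x;   // thread ID of the pixel

    Prd prd;
    prd.radiance = 0.0f;
    prd.depth    = 0;
    prd.done     = false;
    prd.rngState = pixelId * 9781u + 1u;                   // simple per-pixel seed (illustrative)

    // Map the pixel to a primary ray direction (simple pinhole model looking along -z;
    // a real implementation would apply the camera position, resolution and pitch angle).
    const float u = (idx.x + 0.5f) / (float)dim.x - 0.5f;
    const float v = (idx.y + 0.5f) / (float)dim.y - 0.5f;
    const float invLen = rsqrtf(u * u + v * v + 1.0f);
    prd.direction = make_float3(u * invLen, v * invLen, -invLen);
    prd.origin    = make_float3(0.0f, 0.0f, 0.0f);         // placeholder camera position

    // ... each of the pixel's rays is then traced as sketched for steps S1-S3 below ...
}
```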
Aiming at the step 106, the GPU starts ray tracing of the simulation model based on the Optix framework, and accelerates the ray tracing by using an acceleration structure to obtain a ray tracing result, including:
for each ray of each pixel in the imaging plane, performing:
s1, judging whether the ray intersects with the target surface;
if the ray does not intersect, using the background light radiation brightness corresponding to the ray as a return value, updating the Prd data according to the return value, stopping tracking the ray, and executing S3;
if the intersection exists, calling a preset function at the first intersection point, updating Prd data according to the calculation result of the preset function, transmitting shadow rays to the light source by taking the first intersection point as a starting point, and judging whether the shadow rays can reach the light source; if so, updating the light source radiation brightness corresponding to the shadow ray to the Prd data, and executing S2; if not, the Prd data is not updated and S2 is executed; the first intersection point is the intersection point which is closest to the camera in the intersection points of the light rays and the target surface;
s2, adding 1 to the depth value, judging whether the updated depth value is equal to the depth threshold value, if yes, stopping tracing the light, and executing S3; if not, continuing to track the reflected ray generated by the ray at the first intersection point, and returning to execute S1;
s3, the radiance of the light in the current Prd data is used as the tracking result of the light.
In this step, the preset function is a close_hit function. Each time a ray is transmitted, the close_hit function calculates the self-emission radiance of the target and the reflected light energy, emits the shadow ray, and updates the starting point of the ray and the next reflection direction. The energy value of each transmission is accumulated onto the previous one until the tracing ends, and the energy value (i.e. the radiance) in the final Prd data is taken as the radiance of the ray.
During tracing, if the ray or a reflected ray does not intersect the target, the calculated background energy is accumulated into the Prd data and tracing stops; otherwise tracing continues. As the number of transmissions increases, the energy contribution of the ray becomes smaller and smaller, and tracing indefinitely would only increase the computation time without improving the accuracy. Therefore, in this step the depth threshold may be 10, i.e. each ray is traced at most 10 times, and the energy accumulated over these 10 bounces is taken as the final radiance of the ray, which shortens the computation time while ensuring the calculation accuracy.
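The control flow of S1 to S3 on the ray-generation side can be sketched as follows, assuming the Prd and LaunchParams structures from the earlier sketches. optixTrace() is the real OptiX device call; the close_hit and miss programs, which write prd.radiance, prd.origin, prd.direction and prd.done through the payload pointer, are not shown, and the SBT indices are illustrative.

```cuda
#include <optix.h>

extern "C" __constant__ LaunchParams params;   // launch parameters (see the earlier sketch)

// Pack a 64-bit pointer into the two 32-bit payload registers of optixTrace().
static __forceinline__ __device__ void packPointer(void* ptr, unsigned int& u0, unsigned int& u1)
{
    const unsigned long long uptr = reinterpret_cast<unsigned long long>(ptr);
    u0 = static_cast<unsigned int>(uptr >> 32);
    u1 = static_cast<unsigned int>(uptr & 0xffffffffull);
}

// Steps S1-S3 for a single ray of one pixel.
static __device__ float traceOneRay(Prd& prd)
{
    const int depthThreshold = 10;             // at most 10 reflections, as in the text
    unsigned int u0, u1;
    packPointer(&prd, u0, u1);                 // hit/miss programs update prd via this pointer

    while (true) {
        // S1: intersection test against the target, accelerated by the BVH.
        optixTrace(params.traversable,
                   prd.origin, prd.direction,
                   1e-3f,                       // tmin, avoids self-intersection at the hit point
                   1e16f,                       // tmax
                   0.0f,                        // ray time
                   OptixVisibilityMask(255),
                   OPTIX_RAY_FLAG_DISABLE_ANYHIT,
                   0, 1, 0,                     // SBT offset, SBT stride, miss SBT index (illustrative)
                   u0, u1);

        if (prd.done)                           // no intersection: the miss program has added the
            break;                              // background radiance -> stop tracing (S3)

        // Intersection: the close_hit program has added the self-emission and the shadow-ray
        // contribution to prd.radiance and has set prd.origin / prd.direction to the first
        // hit point and the sampled reflection direction.

        prd.depth += 1;                         // S2: depth update
        if (prd.depth == depthThreshold)        // depth threshold reached -> stop tracing (S3)
            break;
    }
    return prd.radiance;                        // S3: tracking result of this ray
}
```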
The shadow ray used in this step is explained below.
In the field of visible-light calculation, most objects have no self-emitted radiance, so the radiance projected toward the camera comes mainly from reflection of the radiance of the light sources in the scene. When sampling over the hemispherical space according to the Lambertian sampling method, only very few reflected rays intersect the light source, so the direct-illumination calculation converges slowly. The invention therefore uses the shadow ray method: after a ray is found to intersect the object, in addition to computing the reflected ray, an extra shadow ray is emitted toward the light source. If this shadow ray does not collide with any other object in the scene, the direct illumination of the light source is considered to reach the intersection point of the ray and the object, and the radiance corresponding to the shadow ray is accumulated into the Prd data. This accelerates the convergence of the illumination calculation.
FIG. 4 is a schematic diagram of the calculation of shadow rays.
In fig. 4, x' is a point on the light source and x is the surface point from which the shadow ray is emitted. Assuming that the area of the surface element containing x' is dA', the integral term of the rendering equation (1-1) becomes:
\int_{A'} f(x \to x',\omega_o)\, L_i(x', x' \to x)\, \frac{\cos\theta_i\,\cos\theta'}{\lvert x - x' \rvert^{2}}\, dA' \qquad (1-12)
where x→x' is the vector pointing from the point x to the point x' on the light source; f(x→x', ω_o) is the bidirectional reflectance distribution function (BRDF) of the object at the surface point x; L_i(x', x'→x) is the radiance projected by the light source toward x; θ_i is the angle between the vector x→x' and the normal vector n at the point x; θ' is the angle between the vector x'→x and the normal vector n' at the point x'; and ω_o is the direction of reflection from the point x toward the camera.
Applying Monte Carlo integration to equation (1-12) again yields:
\frac{A}{N}\sum_{j=1}^{N} f(x \to x'_j,\omega_o)\, L_i(x'_j, x'_j \to x)\, \frac{\cos\theta_i\,\cos\theta'}{\lvert x - x'_j \rvert^{2}}\, v(x, x'_j) \qquad (1-13)
where A is the area of the light source and 1/A is the probability density of sampling a point on the light source; v(x, x') is the visibility function, which equals 1 when the shadow ray is not occluded and 0 otherwise.
Equation (1-13) is the shadow-ray method for calculating the direct contribution of the light source. This is also called importance sampling, i.e. sampling more densely the regions where the integrand has a larger influence in order to accelerate the convergence of the illumination calculation.
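Written out as code, the estimator of equation (1-13) amounts to a visibility-weighted sum over the sampled light-source points. The following device function is a small sketch of that computation with an assumed per-sample data layout; in the method above only one shadow ray (N = 1) is emitted per hit point.

```cuda
// Quantities evaluated for one shadow ray from the hit point x to a sampled point x'
// on the light source (assumed layout).
struct ShadowSample {
    float f;              // BRDF f(x -> x', w_o) at x
    float Li;             // radiance L_i(x', x' -> x) of the light source towards x
    float cosThetaI;      // angle term at x:  cos(theta_i)
    float cosThetaPrime;  // angle term at x': cos(theta')
    float dist2;          // squared distance |x - x'|^2
    float v;              // visibility v(x, x'): 1 if the shadow ray is unoccluded, else 0
};

// Monte Carlo estimate of the direct illumination, Eq. (1-13), with sampling density 1/A.
__device__ float directLightEstimate(const ShadowSample* s, int N, float A)
{
    float sum = 0.0f;
    for (int j = 0; j < N; ++j)
        sum += s[j].f * s[j].Li * s[j].cosThetaI * s[j].cosThetaPrime * s[j].v / s[j].dist2;
    return A * sum / N;   // dividing by the density 1/A multiplies the average by the area A
}
```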
With respect to step 108, the GPU determines the target radiance from the ray tracing result, including:
taking the average value of the radiance of a plurality of light rays in each pixel as the radiance of the corresponding pixel;
and combining the radiance of each pixel to obtain the radiance of the target.
In this embodiment, since each pixel has a plurality of rays, each ray contributes to the radiance of the pixel, and therefore, the average value of the radiance of all the rays can be used as the final radiance of the pixel. After the radiance of each pixel is determined, the radiance of the entire object can be determined.
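The averaging in step 108 can be written as a small per-pixel reduction. In practice the accumulation is usually done directly in the ray-generation program; the standalone kernel below only illustrates the computation, and the buffer layout is an assumption.

```cuda
// Radiance of a pixel = mean of the radiance of its rays (one value per spectral band).
__global__ void averagePixelRadiance(const float* rayRadiance,   // [numPixels * raysPerPixel]
                                     float* pixelRadiance,       // [numPixels]
                                     int numPixels, int raysPerPixel)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= numPixels) return;

    float sum = 0.0f;
    for (int j = 0; j < raysPerPixel; ++j)
        sum += rayRadiance[p * raysPerPixel + j];
    pixelRadiance[p] = sum / raysPerPixel;   // combined over all pixels, this forms the target radiance image
}
```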
Finally, in step 110, the target radiance is sent to the CPU, and the CPU may display or output an image on a screen after receiving the target radiance.
To demonstrate the calculation effect of the method of the embodiment of the invention, the inventors used the method to simulate the radiance distribution of a model under sunlight irradiation. The GPU of the computing device is an RTX 2060, with CUDA version 11.1, graphics driver version 11.3 and Optix version 7.2; the maximum iteration depth is set to 10, the resolution is 6000 × 4800, and the number of photons per pixel is 1000. The specific parameters of the satellite model, the coordinate system arrangement and the light source direction are given in Table 1, fig. 5(a) and fig. 5(b). Images in the red, green and blue wavelength bands and over the whole visible band were calculated according to the different reflectivity data of the three bands, and the calculation results are shown in fig. 6(a) to fig. 6(e).
TABLE 1 calculation of satellite model-related parameters
Model: Satellite model
Number of grids (cells): 34981
Number of materials: 1
Texture used: Yes
Number of texture images: 1
As can be seen from fig. 6(a) to 6(e), the method of the invention calculates the radiance of the target well in each band and outputs a target image based on the calculated radiance. Fig. 6(a) is the image calculated in the red band, fig. 6(b) the image in the green band and fig. 6(c) the image in the blue band; fig. 6(d) places the red, green and blue results in three channels of the visible band, i.e. the image visible to the human eye; and fig. 6(e) places the three colors in the same channel, as a gray image reflecting the total energy distribution.
In addition, the inventor also verifies the acceleration effect of the embodiment of the present invention (in the verification model, the number of grids is 34981, the CPU does not use the acceleration structure, and the GPU uses the BVH acceleration structure), and the verification result is shown in table 2:
TABLE 2 radiance time comparison of CPU and GPU calculation models
As can be seen from Table 2, the speed-up ratio increases with the number of photons; that is, the method of the invention can greatly shorten the calculation time of the target radiance.
As shown in fig. 7 and fig. 8, an embodiment of the present invention provides a target radiance calculation apparatus. The apparatus embodiments may be implemented by software, by hardware, or by a combination of the two. From the hardware perspective, fig. 7 is a hardware architecture diagram of the computing device where the target radiance calculation apparatus according to an embodiment of the present invention is located; besides the processor, memory, network interface and nonvolatile memory shown in fig. 7, the computing device may also include other hardware, such as a forwarding chip responsible for processing messages. Taking a software implementation as an example, as shown in fig. 8, the apparatus is, as a logical device, formed by the CPU of the computing device reading the corresponding computer program from the nonvolatile memory into memory and running it. The present embodiment provides a target radiance calculation apparatus, including:
a first obtaining module 800, configured to obtain a simulation model, where the simulation model includes a target model, a camera model, and a light source model;
a first sending module 802 for transferring the simulation model to the GPU;
a building module 804, configured to build an acceleration structure of the target model;
a second obtaining module 806, configured to start ray tracing of the simulation model based on the Optix framework, and accelerate the ray tracing by using an acceleration structure to obtain a ray tracing result;
a determining module 808, configured to determine target radiance according to a ray tracing result;
and a second sending module 810, configured to send the target radiance to the CPU.
In an embodiment of the present invention, the first obtaining module 800 may be configured to perform step 100 in the above-described method embodiment, the first sending module 802 may be configured to perform step 102, the building module 804 may be configured to perform step 104, the second obtaining module 806 may be configured to perform step 106, the determining module 808 may be configured to perform step 108, and the second sending module 810 may be configured to perform step 110.
In some embodiments, the first sending module 802 is configured to perform:
transferring a target model to a GPU (graphics processing unit) by constructing a Shader Binding Table (SBT), wherein the target model comprises a target position, geometric parameters, textures and physical parameters;
updating the camera model and the light source model to the GPU by updating a starting parameter Launch Params; the camera model includes the position, resolution and pitch angle of the camera, and the light source model includes the geometric parameters, position, orientation and radiance of the light source.
In some embodiments, a creating module 812 is further included, and prior to executing the building module 804, the creating module 812 is configured to perform:
initialize the Optix framework, create contexts, modules, program groups, and program pipelines.
In some embodiments, in building block 804, the acceleration structure is a BVH structure.
In some embodiments, each pixel in the simulation model imaging plane corresponds to a number of rays, and before the second obtaining module 806 is executed the apparatus is further configured to perform:
according to the position, the resolution and the pitch angle of the camera, allocating a thread ID to each pixel in the imaging plane of the simulation model;
according to the thread ID of each pixel, the Prd data of each ray in the pixel is initialized, and the Prd data comprises the direction of the ray, the radiance of the ray and a random number sequence.
In some embodiments, the second obtaining module 806 is configured to perform:
for each ray of each pixel in the imaging plane, performing:
s1, judging whether the ray intersects with the target surface;
if the ray does not intersect, using the background light radiation brightness corresponding to the ray as a return value, updating the Prd data according to the return value, stopping tracking the ray, and executing S3;
if the intersection exists, calling a preset function at the first intersection point, updating Prd data according to the calculation result of the preset function, transmitting shadow rays to the light source by taking the first intersection point as a starting point, and judging whether the shadow rays can reach the light source; if so, updating the light source radiation brightness corresponding to the shadow ray to the Prd data, and executing S2; if not, the Prd data is not updated and S2 is executed; the first intersection point is the intersection point which is closest to the camera in the intersection points of the light rays and the target surface;
s2, adding 1 to the depth value, judging whether the updated depth value is equal to the depth threshold value, if yes, stopping tracing the light, and executing S3; if not, continuing to track the reflected ray generated by the ray at the first intersection point, and returning to execute S1;
s3, the radiance of the light in the current Prd data is used as the tracking result of the light.
In some embodiments, the preset function is a close_hit function.
In some embodiments, the determining module 808 is configured to perform:
taking the average value of the radiance of a plurality of light rays in each pixel as the radiance of the corresponding pixel;
and combining the radiance of each pixel to obtain the radiance of the target.
The embodiment of the invention also provides terminal equipment which comprises a CPU and a GPU and is provided with an Optix framework of a ray tracing application program;
the CPU is used for acquiring a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model, and transferring the simulation model to the GPU;
the GPU is used for constructing an acceleration structure of the target model, starting ray tracing of the simulation model based on the Optix framework, accelerating the ray tracing by using the acceleration structure and obtaining a ray tracing result; and determining the target radiation brightness according to the ray tracing result, and sending the target radiation brightness to the CPU.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, causes the processor to execute a method for calculating target radiance in any embodiment of the present invention.
Specifically, a system or an apparatus equipped with a storage medium on which software program codes that realize the functions of any of the above-described embodiments are stored may be provided, and a computer (or a CPU or MPU) of the system or the apparatus is caused to read out and execute the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer by a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium is written to a memory provided in an expansion board inserted into the computer or to a memory provided in an expansion module connected to the computer, and then causes a CPU or the like mounted on the expansion board or the expansion module to perform part or all of the actual operations based on instructions of the program code, thereby realizing the functions of any of the above-described embodiments.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A target radiance calculation method is applied to a terminal device, wherein the terminal device comprises a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU), and a ray tracing application program Optix framework is configured in the terminal device, and the method comprises the following steps:
the CPU obtains a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model;
transferring the simulation model to the GPU;
the GPU builds an acceleration structure of the target model;
the GPU starts ray tracing of the simulation model based on the Optix framework, accelerates the ray tracing by using the acceleration structure and obtains a ray tracing result;
and the GPU determines the target radiance according to the ray tracing result and sends the target radiance to the CPU.
2. The method of claim 1, wherein transferring the simulation model to the GPU comprises:
transferring the target model to the GPU by constructing a Shader Binding Table (SBT), wherein the target model comprises a position, a geometric parameter, a texture and a physical parameter of a target;
updating the camera model and the light source model to the GPU by updating a starting parameter Launch Params; the camera model comprises the position, resolution and pitch angle of the camera, and the light source model comprises the geometric parameters, position, direction and radiance of the light source.
3. The method of claim 1, further comprising, prior to the GPU building the acceleration structure of the target model:
initialize the Optix framework, create contexts, modules, program groups, and program pipelines.
4. The method of claim 1, wherein the acceleration structure is a BVH structure.
5. The method of claim 4, wherein each pixel in the simulation model imaging plane corresponds to a number of rays;
before the GPU initiates ray tracing based on the Optix framework, the method further comprises the following steps:
according to the position, the resolution and the pitch angle of the camera, distributing a thread ID to each pixel in the simulation model imaging plane;
according to the thread ID of each pixel, initializing Prd data of each ray in the pixel, wherein the Prd data comprises the direction of the ray, the radiance of the ray and a random number sequence.
6. The method of claim 5, wherein the GPU initiates ray tracing of the simulation model based on the Optix framework and accelerates the ray tracing using the acceleration structure to obtain ray tracing results, comprising:
for each ray of each pixel in the imaging plane, performing:
s1, judging whether the ray intersects with the target surface;
if the ray does not intersect, using the background light radiation brightness corresponding to the ray as a return value, updating the Prd data according to the return value, stopping tracking the ray, and executing S3;
if the intersection exists, calling a preset function at a first intersection point, updating the Prd data according to the calculation result of the preset function, transmitting shadow rays to a light source by taking the first intersection point as a starting point, and judging whether the shadow rays can reach the light source; if yes, updating the light source radiation brightness corresponding to the shadow ray to the Prd data, and executing S2; if not, not updating the Prd data and executing S2; the first intersection point is the intersection point which is closest to the camera in the intersection points of the light rays and the target surface;
s2, adding 1 to the depth value, judging whether the updated depth value is equal to the depth threshold value, if yes, stopping tracing the light, and executing S3; if not, continuing to track the reflected ray generated by the ray at the first intersection point, and returning to execute S1;
s3, the radiance of the light in the current Prd data is used as the tracking result of the light.
7. The method of claim 6, wherein the predetermined function is a close_hit function.
8. The method of claim 6, wherein determining, by the GPU, a target radiance from the ray tracing result comprises:
taking the average value of the radiance of a plurality of light rays in each pixel as the radiance of the corresponding pixel;
and combining the radiance of each pixel to obtain the radiance of the target.
9. A target radiance calculation device applied to a terminal device includes:
the system comprises a first acquisition module, a second acquisition module and a control module, wherein the first acquisition module is used for acquiring a simulation model, and the simulation model comprises a target model, a camera model and a light source model;
the first sending module is used for transferring the simulation model to the GPU;
a construction module for constructing an acceleration structure of the target model;
the second obtaining module is used for starting ray tracing of the simulation model based on the Optix framework, accelerating the ray tracing by using the accelerating structure and obtaining a ray tracing result;
the determining module is used for determining the target radiation brightness according to the ray tracing result;
and the second sending module is used for sending the target radiation brightness to the CPU.
10. A terminal device is characterized by comprising a CPU and a GPU, and a ray tracing application program Optix framework is configured;
the CPU is used for acquiring a simulation model, wherein the simulation model comprises a target model, a camera model and a light source model, and transferring the simulation model to the GPU;
the GPU is used for constructing an acceleration structure of the target model, starting ray tracing of the simulation model based on the Optix framework, accelerating the ray tracing by using the acceleration structure and obtaining a ray tracing result; and determining the target radiation brightness according to the ray tracing result, and sending the target radiation brightness to the CPU.
CN202210829013.4A 2022-07-15 2022-07-15 Target radiance calculation method and device and terminal equipment Pending CN115082613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210829013.4A CN115082613A (en) 2022-07-15 2022-07-15 Target radiance calculation method and device and terminal equipment


Publications (1)

Publication Number Publication Date
CN115082613A true CN115082613A (en) 2022-09-20

Family

ID=83258973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210829013.4A Pending CN115082613A (en) 2022-07-15 2022-07-15 Target radiance calculation method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN115082613A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination