CN105118083B - Unbiased photon mapping drawing method - Google Patents


Info

Publication number
CN105118083B
Authority
CN
China
Legal status
Active
Application number
CN201510489335.9A
Other languages
Chinese (zh)
Other versions
CN105118083A (en)
Inventor
侯启明 (Qiming Hou)
秦昊 (Hao Qin)
孙鑫 (Xin Sun)
周昆 (Kun Zhou)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510489335.9A priority Critical patent/CN105118083B/en
Publication of CN105118083A publication Critical patent/CN105118083A/en
Application granted granted Critical
Publication of CN105118083B publication Critical patent/CN105118083B/en


Landscapes

  • Image Generation (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses an unbiased photon mapping drawing method. Photon mapping is recognized as one of the most efficient rendering methods, but previous photon mapping algorithms are biased in their photon-collection computation, so the final rendered result differs from the true result. By directly connecting the line of sight with the light path on which each photon lies, the present invention computes the light energy transported along that path exactly, and estimates without bias the probability of the line of sight collecting the photon at very small additional cost, thereby obtaining an unbiased photon mapping drawing method. In addition, a set of multiple importance sampling weights is developed so that the unbiased photon mapping method can be combined with bidirectional ray tracing. The method is efficient and reliable, and its wide application is expected to reduce the cost of image rendering in animation, film, advertising, and the building industry.

Description

Unbiased photon mapping drawing method
Technical Field
The invention relates to the technical field of graphics rendering, and in particular to an unbiased photon mapping drawing method.
Background
The related research background of the invention is briefly described as follows:
One, photon mapping
The core idea of photon mapping is to estimate local light energy using photon density; see JENSEN, H.W. 2001. Realistic Image Synthesis Using Photon Mapping. A K Peters, Ltd., Natick, MA, USA. The locality of photon collection makes this approach particularly effective for the challenging cases of caustics and multiply reflected indirect illumination. In addition, photon mapping can reuse the light path corresponding to each photon, thereby reducing the cost of sampling. Density estimation effectively reduces noise, but at the same time introduces bias into the final result, so all previous photon mapping methods are biased. Although the recent series of progressive photon mapping algorithms can consistently converge to the correct result as the number of photons goes to infinity (see HACHISUKA, T., OGAKI, S., AND JENSEN, H.W. 2008. Progressive photon mapping. ACM Trans. Graph. 27, 5 (Dec.), 130:1-130:8; KNAUS, C., AND ZWICKER, M. 2011. Progressive photon mapping: a probabilistic approach. ACM Trans. Graph. 30, 3 (May), 25:1-25:13; HACHISUKA, T., AND JENSEN, H.W. 2009. Stochastic progressive photon mapping. ACM Trans. Graph. 28, 5 (Dec.), 141:1-141:8; KAPLANYAN, A.S., AND DACHSBACHER, C. 2013. Adaptive progressive photon mapping. ACM Trans. Graph. 32, 2 (Apr.), 16:1-16:13), without exception these methods produce biased results within any finite time, whereas our method yields an unbiased rendering result throughout. There have also been efforts to combine biased photon mapping with bidirectional ray tracing through multiple importance sampling; see VORBA, J. 2011. Bidirectional photon mapping. In Proc. of the Central European Seminar on Computer Graphics (CESCG '11); TOKUYOSHI, Y. 2009. Photon density estimation using multiple importance sampling. In ACM SIGGRAPH ASIA 2009 Posters, ACM, New York, NY, USA, SIGGRAPH ASIA '09, 37:1-37:1. Our work improves on the previous multiple importance sampling weights and, for the first time, combines unbiased photon mapping with other sampling techniques.
Two, two-way ray tracing and multiple importance sampling
The rendering equation, proposed by Kajiya in 1986, is a mathematical description of the physical laws of light energy propagation; it is a multiple integral that can be solved by the Monte Carlo method. See KAJIYA, J.T. 1986. The rendering equation. SIGGRAPH Comput. Graph. 20, 4 (Aug.), 143-150. Bidirectional ray tracing samples a complete light path by sampling the line of sight and the light ray separately and connecting the two sub-paths. The method is particularly effective for indoor scenes and for light sources and cameras of limited size; see LAFORTUNE, E.P., AND WILLEMS, Y.D. 1993. Bi-directional path tracing. Proceedings of CompuGraphics, Vol. 93, 145-153; VEACH, E., AND GUIBAS, L.J. 1994. Bidirectional estimators for light transport. In Proceedings of the Fifth Eurographics Workshop on Rendering, Eurographics, 147-162.
Multiple importance sampling was later proposed by Veach et al., primarily to combine the different connection strategies for the various light paths in bidirectional ray tracing; see VEACH, E. 1998. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Stanford, CA, USA. AAI9837162. Photon mapping and bidirectional ray tracing both accomplish light path sampling by connecting a line of sight with a light ray. Compared with bidirectional ray tracing, photon mapping has an advantage when sampling complicated caustics and multiply reflected indirect illumination, because the sight end point is very close to the photons; for direct lighting and similar cases, however, bidirectional ray tracing is more effective. The two methods are therefore complementary on different types of light path samples; see HAŠAN, M., KŘIVÁNEK, J., WALTER, B., AND BALA, K. 2009. Virtual spherical lights for many-light rendering of glossy scenes. ACM Trans. Graph. 28, 5 (Dec.), 143:1-143:6; VORBA, J. 2011. Bidirectional photon mapping. In Proc. of the Central European Seminar on Computer Graphics (CESCG '11).
Three, unified sampling
Vertex merging (Vertex Merging) redefines photon mapping as a process in which a light path is bidirectionally connected with a probability determined by the distance between the two points being connected; see GEORGIEV, I., KŘIVÁNEK, J., DAVIDOVIČ, T., AND SLUSALLEK, P. 2012. Light transport simulation with vertex connection and merging. ACM Trans. Graph. 31, 6 (Nov.), 192:1-192:10. Unified path sampling (Unified Path Sampling) is another view of the same problem: it regards bidirectional ray tracing as a photon mapping in which the photon and the sight-line end point coincide completely.
Disclosure of Invention
The invention aims to provide an unbiased photon mapping drawing method that addresses the defects of the prior art: it eliminates the error problem that has long troubled the method while fully exploiting the high efficiency of photon mapping rendering. The method not only breaks through the theoretical limitation of the original photon mapping method, but also has high practical value.
The purpose of the invention is realized by the following technical scheme: an unbiased photon mapping drawing method includes the following steps:
(1) inputting a three-dimensional scene file, and analyzing the three-dimensional scene file; the three-dimensional scene file comprises geometric information, material, a map, lighting information and camera setting of an object;
(2) drawing initialization, and establishing a space acceleration structure for a three-dimensional scene;
(3) starting to draw the image, and starting to execute a sampling cycle according to the sampling number or the drawing time specified by the user; drawing all pixels by one sample in each sampling cycle; the calculation process within each sampling cycle comprises the following sub-steps:
(3.1) sampling of photons: after a light ray is emitted from the light source, it undergoes a series of reflections and refractions in the scene; each time the ray is reflected or refracted, a photon is created at that position, and the energy carried by the ray at that moment and the current number of ray reflections are recorded in the photon; when all rays have finished traversing the scene, a batch of photons is obtained, and a spatial acceleration structure is then built over the photons;
(3.2) sampling of the lines of sight: the number of sight samples is the same as the number of rays and equal to the number of pixels; after a line of sight is emitted it traverses the scene, and each time it is reflected or refracted at an object surface, the photons within a sphere of radius d around the reflection/refraction point are collected; a light path is then established for each photon and its luminous flux is estimated without bias, the unbiased estimation process comprising the following steps:
(3.2.1) error analysis of photon mapping
The light energy transfer function $f_c^{pm}$ of photon mapping is modified to the strict light energy transfer function $f_c^{*}$, with the formula:

$f_c^{*}(\bar{x}_{s',t'}) = f_s(z_{t'-2}\to z_{t'-1}\to z_{t'})\,G(z_{t'-1}\leftrightarrow z_{t'})\,f_s(y_{s'-1}\to y_{s'}(z_{t'})\to z_{t'-1})$

The connection probability $p_c$ of photon mapping is revised to the strict connection probability $P$, with the formula:

$P(\bar{y}_{s'},\bar{z}_{t'}) = \int_{S(y_{s'},d)} p_x(z_{t'-2}\to z_{t'-1}\to z)\,\mathrm{d}z$

wherein $\bar{x}_{s',t'}$ denotes the photon-mapped light path formed by connecting a light sub-path of length s' with a sight sub-path of length t'-1; y and z denote the vertices of the light ray and the line of sight respectively, with the subscript giving the position of the vertex in its sub-path: $y_{s'}$ is the last point on the light ray, i.e. a photon, $y_{s'-1}$ is the point preceding the photon, $z_{t'}$ and $z_{t'-1}$ are the end point of the line of sight and the point before it, and $z_{t'-2}$ is the point one further before the sight-line end point. $f_s$ is the reflection coefficient of the object's surface material, giving the amount of light energy reflected at a surface point for a given incident direction of the light and viewing direction of the sight: $f_s(z_{t'-2}\to z_{t'-1}\to z_{t'})$ is the reflection coefficient at $z_{t'-1}$ for light energy arriving from $z_{t'-2}$ and reflected toward $z_{t'}$, and $f_s(y_{s'-1}\to y_{s'}(z_{t'})\to z_{t'-1})$ is the reflection coefficient at $y_{s'}$ for light energy arriving from $y_{s'-1}$ and emitted along the direction from $z_{t'}$ to $z_{t'-1}$. G is the geometric coefficient between two points; $p_x(z_{t'-2}\to z_{t'-1}\to z)$ is the probability that a ray arriving at $z_{t'-1}$ from $z_{t'-2}$ hits the point z after reflection at $z_{t'-1}$; and the integration range $S(y_{s'},d)$ is the collection range of radius d around the point $y_{s'}$;
(3.2.2) estimation of the inverse of the unbiased connection probability integral: for each light ray, a probe ray is emitted from the point preceding the photon, with exactly the same distribution as an actual ray sample; the probability that such a probe sample is accepted by the collection range around the sight-line end point is $P$. Repeatedly generating probe rays under the same configuration forms a Bernoulli trial process. The unbiased estimate of the inverse of the connection probability integral is given by the index N of the first successful probe ray in the Bernoulli trials, i.e. the number of probe rays generated until one first hits the collection range in which the sight-line end point lies;
(4) and accumulating the light energy of each photon collected by the sight line to a corresponding pixel of the image, outputting the image after the specified sampling cycle is finished, and finishing the drawing.
Further, step (3) further includes angle-limiting the probe rays in the Bernoulli trials of step (3.2.2), specifically as follows:
the collection range is projected onto the unit sphere centered at the starting point of the probe ray, and probe rays are generated only within the projected collection range; the projected collection range is converted into a coordinate-axis-aligned bounding box defined in the sampled random-number space; a sample within the integration range is then obtained by limiting the range of random number generation. The number of angle-limited Bernoulli trials falls from N to $N_b$, and the inverse of the connection probability integral changes accordingly:

$\frac{1}{P} = E\!\left[\frac{N_b}{q_b}\right], \qquad q_b = \int_{\Omega_b} p_x(z_{t'-2}\to z_{t'-1}\to \omega)\,\mathrm{d}\omega$

wherein the integration range $\Omega_b$ is reduced from the whole hemisphere to the projected area of the collection range.
Further, after an unbiased estimate of one light path is obtained through step (3), the estimate is combined with the estimates of other light paths through multiple importance sampling, adopting an exponential weight; the sampling probabilities of the different sampling techniques in the weight are as follows:
the probability $p^{VC}(\bar{x})$ of generating a light path by the bidirectional ray tracing method is:

$p^{VC}(\bar{x}) = p(\bar{y})\,p(\bar{z})$

wherein $p(\bar{y})$ and $p(\bar{z})$ are the generation probability of the light ray and the generation probability of the line of sight, respectively;
the probability $p^{VM}(\bar{x})$ of generating a light path by unbiased photon mapping is:

$p^{VM}(\bar{x}) = p(\bar{y})\,p(\bar{z})\,p_c$

and the connection probability is approximated as:

$p_c \approx \min\!\left(1,\ \pi d^2\, p_x(z_{t'-2}\to z_{t'-1}\to y_{s'})\right)$
the invention has the beneficial effects that: the present invention theoretically analyzes the source of the photon mapping error, completely eliminates the error through innovative bernoulli trial rays, and minimizes the additional cost through the angular limitation of the trial rays. Since photon mapping is transformed into an unbiased rendering technique, multiple importance sampling can be utilized in conjunction with other complementary rendering techniques. The invention comprehensively improves the efficiency of the important drawing technology of photon mapping, not only reduces the burden of adjusting parameters of a user, but also ensures that the user can obtain a satisfactory image in a shorter time. The wide application of the technology can improve the rendering and drawing efficiency of animation, movies, advertisements and the building industry, thereby reducing the production cost.
Drawings
FIG. 1 is a schematic illustration of Bernoulli probe ray and angle limiting in the present invention, where (a) shows Bernoulli probe ray sampling without angle limiting and (b) shows Bernoulli probe ray sampling after angle limiting.
Fig. 2 is the result of a palace scene plotted for 10 minutes using the vertex merging algorithm.
Fig. 3 is the result of a palace scene plotted for 10 minutes using the present invention.
Fig. 4 is the result of a palace scene plotted with bi-directional ray tracing for 24 hours, referenced as a correct result.
FIG. 5 is the result of a staircase scene plotted with two-way ray tracing for 1 hour.
Fig. 6 is the result of a staircase scene plotted for 1 hour with the photon mapping portion of the present invention.
FIG. 7 shows the results of a staircase scene plotted for 1 hour using the photon mapping combined with two-way ray tracing method of the present invention.
In the effect diagrams of FIGS. 2 to 7, a partially enlarged view is added below each original image to better compare the differences in rendering quality. The enlarged local area is indicated in the original image by a box.
Detailed Description
The core technology of the invention is the unbiased estimation of the light energy carried by the light path corresponding to each photon in photon mapping. The photons collected by the line of sight are therefore first connected into complete light paths, the contribution of each light path is then estimated without bias, and finally this sampling technique is combined with other sampling techniques through multiple importance sampling.
1. Inputting a three-dimensional scene, which comprises the geometric information, materials, maps, lighting information, and camera settings of the objects. The method adopts scene files in the Mitsuba format (see https://www.mitsuba-renderer.org/), and all information related to the scene is acquired by reading and parsing the scene file.
2. Drawing initialization: a spatial acceleration structure is built for the scene; we choose the SBVH acceleration structure (see Spatial Splits in Bounding Volume Hierarchies, Martin Stich et al., HPG '09: Proceedings of the Conference on High Performance Graphics 2009, pages 7-13). The acceleration structure indexes the geometric information in space and accelerates the subsequent ray tracing computations.
3. The image is started to be drawn, and a sampling cycle is started to be executed according to the sampling number or the drawing time specified by the user. And drawing one sample for all pixels in each sampling cycle. The calculation process inside each sampling cycle comprises the following sub-steps:
3.1 Sampling of photons. The number of rays sampled in each cycle equals the number of lines of sight, which in turn equals the number of pixels. After a light ray is emitted from the light source, it undergoes a series of reflection and refraction events in the scene. Each time such an event occurs, a photon is created at that location, and the energy currently carried by the ray and the current number of ray reflections are recorded in the photon. When all rays have traversed the scene, a batch of photons is obtained, and a spatial acceleration structure is then built over the photons to speed up the subsequent photon collection.
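As a rough structural sketch only (the scene intersection and BSDF sampling are stubbed out with hypothetical placeholders; none of the helper names come from the patent), the photon sampling pass above stores one photon per bounce:

```python
import random
from dataclasses import dataclass

@dataclass
class Photon:
    position: tuple   # location of the reflection/refraction event
    energy: float     # energy carried by the ray when the photon was created
    bounce: int       # number of reflections the ray has undergone so far

def trace_photons(num_rays, max_bounces, rng):
    """Emit rays from the light source and deposit one photon per bounce."""
    photons = []
    for _ in range(num_rays):
        energy = 1.0
        for bounce in range(1, max_bounces + 1):
            # Stub for the real scene intersection: a fake hit point.
            hit = (rng.random(), rng.random(), rng.random())
            photons.append(Photon(hit, energy, bounce))
            energy *= 0.5           # stub attenuation from the surface BSDF
            if rng.random() > 0.8:  # Russian-roulette termination
                break
    return photons

photons = trace_photons(num_rays=4, max_bounces=3, rng=random.Random(0))
print(len(photons) >= 4)  # at least one photon per emitted ray
```

A real implementation would replace the stubs with actual ray-scene intersection and BSDF sampling, and would then build the spatial acceleration structure over the returned photon list.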
3.2 Sampling of the lines of sight. The number of sight samples is the same as the number of rays and equal to the number of pixels. After emission, a line of sight likewise traverses the scene, and each time it is reflected or refracted at an object surface, photons are collected near the reflection/refraction point. A light path is then established for each photon and its luminous flux is estimated without bias. The unbiased estimation proceeds as follows.
3.2.1 error analysis of photon mapping
First we review path integration and vertex merging, which are the theoretical basis for the unbiased estimation in the present invention.
The Monte Carlo light path integral is an integral over the whole light path space: one end of each light path is the camera, the other end is a light source, and together all light paths form every possible way in which light energy can propagate, so the integral over all light paths is the completely accurate rendering result under the current scene configuration. Since the integral cannot be solved analytically, it is generally estimated without bias by the Monte Carlo method:

$I = \int f(x)\,\mathrm{d}\mu(x) \approx \frac{1}{M}\sum_{i=1}^{M}\frac{f(x_i)}{p(x_i)}$
where x is a particular light path, f(x) is the light energy carried by the light path, p(x) is the probability of the light path being generated, and I is the exact image rendering result. Conventional bidirectional ray tracing samples a line of sight $\bar{z}$ and a light ray $\bar{y}$ separately and then connects the two to form a complete light path; its contribution function factors into a term depending only on the light ray, a term depending only on the line of sight, and a connection term. Since the sub-path terms are not affected by our approach, we focus here on the middle term, i.e. the contribution of the connecting part:

$f_c(\bar{x}_{s,t}) = f_s(y_{s-1}\to y_s\to z_t)\,G(y_s\leftrightarrow z_t)\,f_s(y_s\to z_t\to z_{t-1})$
wherein $\bar{x}_{s',t'}$ denotes the photon-mapped light path formed by connecting a light sub-path of length s' with a sight sub-path of length t'-1; y and z denote the vertices of the light ray and the line of sight respectively, the subscripts giving the position of the point in the respective sub-paths, e.g. $y_{s'}$ is the last point on the light ray, i.e. a photon, and $y_{s'-1}$ the point preceding the photon; likewise $z_{t'}$ and $z_{t'-1}$ are the end point of the line of sight and the point before it. $f_s$ is the reflection coefficient of the object's surface material, representing the reflection of light energy at a surface point for a given incident direction of the light and viewing direction of the sight; e.g. $f_s(z_{t'-2}\to z_{t'-1}\to z_{t'})$ is the reflection coefficient at $z_{t'-1}$ for light energy arriving from $z_{t'-2}$ and reflected toward $z_{t'}$; a special case is $f_s(y_{s'-1}\to y_{s'}(z_{t'})\to z_{t'-1})$, the reflection coefficient at $y_{s'}$ for light energy arriving from $y_{s'-1}$ and emitted along the direction from $z_{t'}$ to $z_{t'-1}$. G is the geometric coefficient between two points. Since bidirectional ray tracing connects with probability 1, the contribution of its connecting part contains only the terms of the light energy function and no probability-related term.
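The unbiased estimator $f(x)/p(x)$ underlying the path integral above can be illustrated with a one-dimensional toy integral (the integrand and sampling density below are illustrative choices, not from the patent):

```python
import math
import random

# Monte Carlo estimate of I = integral of x^2 over [0,1] = 1/3 using a
# non-uniform sampling density p(x) = 2x; the estimator f(x)/p(x) is
# unbiased for any valid p, which is the property the path integral
# estimator relies on.
f = lambda x: x * x
p = lambda x: 2.0 * x
rng = random.Random(1)
M = 100_000
total = 0.0
for _ in range(M):
    x = math.sqrt(rng.random())   # inverse-CDF sampling of p(x) = 2x
    total += f(x) / p(x)
est = total / M
print(round(est, 2))  # ≈ 0.33
```

Bidirectional ray tracing and photon mapping differ only in which density p they use for the same path space; the estimator's form stays the same.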
In photon mapping, the incident light energy at the end of the line of sight is calculated by density estimation, searching for all photons within a distance range; the search distance is typically a preset value d. Under the vertex-merging definition of photon mapping, the line of sight is paired with each collected photon, and the last point of the light ray is dropped at the point where the line of sight and the light ray connect, forming a light path whose contribution function replaces the connection term $f_c$ of the previous formula by

$f_c^{VM}(\bar{x}_{s',t'}) = f_c^{pm}(\bar{x}_{s',t'})\,p_c(\bar{x}_{s',t'})$

which consists of the light energy transfer function $f_c^{pm}$ and the connection probability $p_c$.
Vertex merging describes photon collection as a virtual Russian Roulette process. The photon collection probability $p_c$ corresponds to the acceptance probability of an independent sample generated from the point preceding the photon toward the collection range at the sight-line end point. Acceptance means that the distance between the photon and the sight-line end point is at most d (the collection radius set by the user). On this basis, photon mapping can be described as another bidirectional light path sampling method, and its estimation bias can be identified.
To obtain unbiased photon mapping, comparing $f_c^{VM}$ with $f_c$ shows that we must first ensure that the light energy transfer function $f_c^{pm}$ equals $f_c$, and then require $p_c$ to be the true probability that a photon is collected. In conventional photon mapping and in vertex merging, however, $p_c$ is obtained only by density estimation, so neither requirement is met.
Specifically, in vertex merging the light energy transfer function can be expressed as:

$f_c^{pm}(\bar{x}_{s',t'}) = f_s(z_{t'-2}\to z_{t'-1}\to z_{t'})\,G(z_{t'-1}\leftrightarrow z_{t'})\,f_s(y_{s'-1}\to y_{s'}\to z_{t'-1})$

whereas the strict light energy transfer function $f_c^{*}$ should be:

$f_c^{*}(\bar{x}_{s',t'}) = f_s(z_{t'-2}\to z_{t'-1}\to z_{t'})\,G(z_{t'-1}\leftrightarrow z_{t'})\,f_s(y_{s'-1}\to y_{s'}(z_{t'})\to z_{t'-1})$

Comparing the two, the part containing the BSDF function, as well as the geometry terms it is evaluated with, is approximated.
As for the photon collection probability, the true collection probability $P$ should be an integral:

$P(\bar{y}_{s'},\bar{z}_{t'}) = \int_{S(y_{s'},d)} p_x(z_{t'-2}\to z_{t'-1}\to z)\,\mathrm{d}z$

wherein $p_x(z_{t'-2}\to z_{t'-1}\to z)$ is the probability that a ray arriving at $z_{t'-1}$ from $z_{t'-2}$ hits the point z after reflection at $z_{t'-1}$, and the integration range $S(y_{s'},d)$ is the collection range of radius d around the point $y_{s'}$. In vertex merging, however, this probability is approximated as:

$p_c \approx \pi d^2\, p_x(z_{t'-2}\to z_{t'-1}\to y_{s'})$
it can be seen that on the one hand the vertex merge uses a sampled monte carlo method to estimate the integral, while the integration area is approximated as a circular disk with radius d.
If we adopt the strict formulas for both the light energy transfer function and the probability, we obtain an unbiased photon mapping algorithm whose contribution function is:

$f_c^{U}(\bar{x}_{s',t'}) = f_c^{*}(\bar{x}_{s',t'})\,\frac{\alpha(\bar{y}_{s'},\bar{z}_{t'})}{P(\bar{y}_{s'},\bar{z}_{t'})}$

The α function describes a virtual Russian Roulette process at photon collection time: an independent sample z is generated from the point preceding the photon, in exactly the same way as a normal ray sample; the sample is accepted (α = 1) if it falls within the collection radius of the sight-line end point and rejected (α = 0) otherwise, so the expectation of α is exactly the probability $P$ that the sample falls within the collection range. Note that this is precisely the process by which the two sub-paths, line of sight and light ray, are connected through the photon, so $P$ is the probability that this light path is generated by photon collection.
Note that this connection process differs from the vertex merging algorithm because the condition for unbiased estimation of the whole light path is strictly observed. To implement such an algorithm we must compute each of the above mathematical expressions exactly, so as not to affect the overall unbiasedness. The most important challenge is the computation of the connection probability $P$: it involves not only the material reflection function at the previous point, but also the geometric information between the two points, including their relative distance, the respective outgoing angles at the two points, and visibility, so the connection probability cannot be computed analytically.
We therefore introduce a separate Monte Carlo procedure to estimate this connection probability. However, applying a Monte Carlo estimate directly to this probability integral would introduce bias into the final result, because the probability integral appears in the denominator of the final estimate. Jensen's inequality illustrates this: for a non-negative random estimate X,

$E\!\left[\frac{1}{X}\right] \ge \frac{1}{E[X]}$

with equality only when the estimation noise is zero, and in our problem zero noise is impossible. We therefore need a method that estimates the inverse of the probability integral without bias.
3.2.2 estimation of the inverse of the unbiased connected probability integral
As shown in FIG. 1, we follow the photon collection procedure made explicit above: for each light ray, a probe ray is emitted from the point preceding the photon, with exactly the same distribution as an actual ray sample. As described above, the probability that such a probe sample is accepted by the collection range at the sight-line end point is $P$. Repeatedly generating probe rays under the same configuration yields a Bernoulli trial process. If the first accepted probe ray is the N-th one, N follows the geometric distribution:

$\Pr(N = n) = (1 - P)^{\,n-1}\,P$

The expectation of N is then exactly the inverse of the probability integral we need:

$E[N] = \sum_{n=1}^{\infty} n\,(1-P)^{\,n-1}\,P = \frac{1}{P}$
thus, an unbiased estimate of the reciprocal of the probability integral can be estimated by the number of the first successful probe ray in the bernoulli sample, i.e. the number of probe rays that hit the acquisition range in which the sight-line end is located for the first time.
3.2.3 Angle limitation of probing light
The algorithm above for unbiasedly estimating the reciprocal of the probability integral has a significant problem: the Bernoulli trials can take arbitrarily long to succeed for the first time, at an intolerable cost. For our algorithm to be practically usable, we must avoid obviously impossible connections as much as possible.
To this end, we exploit the size of the collection range relative to the starting point of the probe ray. Specifically, we project the collection range (i.e. a sphere centered at the sight-line end point) onto the unit sphere centered at the starting point of the probe ray, and then generate probe rays only within the projected collection range, as shown in FIG. 1(b). We convert the projected collection range into an AABB (coordinate-axis-aligned bounding box) defined in the sampled random-number space; a sample within the integration range is then obtained directly by limiting the range of random number generation.
The number of Bernoulli trials is thereby effectively reduced; the reduction depends on the distance between the starting point of the probe ray and the sight-line end point, and the number of angle-limited trials $N_b$ satisfies:

$E[N_b] = q_b\,E[N]$

where $q_b$ is the probability that an unrestricted probe ray falls inside the projected range $\Omega_b$.
based on our experiments, this angular limitation can significantly reduce the number of Bernoulli samples, reducing their complexity from O (n ^2) to O (n), n being the number of lines of sight or rays.
After the angle limitation is added, the estimate of the reciprocal of the probability integral also changes correspondingly:

$\frac{1}{P} = E\!\left[\frac{N_b}{q_b}\right], \qquad q_b = \int_{\Omega_b} p_x(z_{t'-2}\to z_{t'-1}\to \omega)\,\mathrm{d}\omega$

wherein the integration range $\Omega_b$ is reduced from the whole hemisphere to the projected area of the collection range. Note that since no visibility issue is involved here, this probability integral $q_b$ can be computed analytically.
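The effect of the restriction can be sketched with a one-dimensional stand-in (the bound fraction q plays the role of the analytically computed integral over Ω_b; all numbers are illustrative, not from the patent):

```python
import random

def restricted_reciprocal_estimate(p_accept, q_bound, rng):
    """Angle-limited Bernoulli trials (1D analogue).

    Unrestricted trials succeed with probability p_accept; we only draw
    samples inside a bounding region that a fraction q_bound of all
    samples would hit (p_accept <= q_bound), so each restricted trial
    succeeds with probability p_accept / q_bound.  Dividing the trial
    count N_b by q_bound keeps the estimate of 1/p_accept unbiased.
    """
    n_b = 1
    while rng.random() >= p_accept / q_bound:
        n_b += 1
    return n_b / q_bound

rng = random.Random(7)
p, q = 0.01, 0.05            # a tight bound: 5x fewer trials on average
runs = 100_000
est = sum(restricted_reciprocal_estimate(p, q, rng) for _ in range(runs)) / runs
print(abs(est - 100.0) < 2.0)  # E[N_b / q] = 1/p = 100
```

The expected trial count drops from 1/p to q/p while the estimate of 1/p is unchanged, mirroring how the angle limitation cuts probe-ray cost without affecting unbiasedness.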
In this section we placed the photon collection problem of photon mapping in the framework of path integration for analysis, and compared it with existing unbiased and biased methods, thereby obtaining an unbiased photon mapping method. Furthermore, we proposed an original probe-ray method to compute the reciprocal of the probability integral, which is the decisive factor making the method realizable. Since a naive implementation of the probe rays causes a significant loss of performance, we further proposed an angle limitation of the probe rays to minimize the additional cost of this component. Combining the above steps, we can finally estimate without bias the correct energy carried by the photons collected by the line of sight.
3.3 Reducing path estimation noise by combining multiple importance sampling with other complementary sampling techniques
After an unbiased estimate of one optical path is obtained, the estimate is combined with other optical path estimates through multiple importance sampling.
The light path integral after adding multiple importance sampling is:

$I = \sum_{i=1}^{m} \frac{1}{n_i} \sum_{j=1}^{n_i} w_i(x_{i,j})\,\frac{f(x_{i,j})}{p_i(x_{i,j})}$

where m denotes the number of sampling techniques and $n_i$ the number of samples taken with the i-th sampling technique.
Furthermore, we use the widely accepted exponential weight:

$w_i(x) = \frac{n_i^{\beta}\,p_i(x)^{\beta}}{\sum_{k} n_k^{\beta}\,p_k(x)^{\beta}}$

with exponent β = 2. Since this weight satisfies the unbiased-estimation condition mentioned earlier, the main challenge of multiple importance sampling is to compute the relative probability densities of the same light path under the different sampling techniques.
Given a light path of length k, the k+1 bidirectional ray tracing sampling techniques and the k-1 unbiased photon mapping sampling techniques can generate exactly the same light path; the difference lies mainly in the connection position. There are more bidirectional ray tracing techniques because they can handle sub-paths of length zero. The probability $p^{VC}(\bar{x})$ of generating a light path by the bidirectional ray tracing method is:

$p^{VC}(\bar{x}) = p(\bar{y})\,p(\bar{z})$

where $p(\bar{y})$ and $p(\bar{z})$ are the generation probability of the light ray and of the line of sight, respectively; since bidirectional ray tracing connects at the junction with probability 1, the generation probability of the final light path is the product of the two.
The probability $p_{s',t'}^{UPG}(\bar{x})$ of generating the same optical path by unbiased photon mapping, on the other hand, is:

$$p_{s',t'}^{UPG}(\bar{x}) = p^{L}(\bar{x}_{s',t'-1})\, p^{c}(\bar{x}_{s',t'-1})\, p^{E}(\bar{x}_{s',t'-1})$$

Compared with bidirectional ray tracing there is one additional factor, the connection probability $p^{c}$. The connection probability, however, cannot be computed analytically, and replacing it with an unbiased estimate would destroy the consistency of the multiple importance sampling weights. We therefore adopt an approximation for the connection probability here. Note that although this introduces an approximation into the weight function of multiple importance sampling, the unbiasedness of the final estimate is not destroyed as long as the weight function still satisfies the unbiasedness condition mentioned before. We approximate the connection probability as:

$$p^{c}(\bar{x}_{s',t'-1}) \approx \min\big(\pi d^2\, p_x(z_{t'-2} \rightarrow z_{t'-1} \rightarrow y_{s'}),\ 1\big)$$
the reason for this minimum truncation is very simple, since pcIs the integral of the probability density function, i.e. a probability, which cannot be larger than 1, so we add a truncation here to improve the quality of the approximation. If this truncation is not applied, a probability of much more than 1 occurs in the actual calculation, so that too much weight is obtained in the weight function, thereby introducing unnecessary noise.
4. Accumulate the light energy of each photon collected by the sight line into the corresponding pixel of the image; after the specified sampling cycles are finished, output the image and complete the drawing.
Embodiments
The inventors implemented the algorithm described above on a computer equipped with a quad-core Intel i7 3.40 GHz central processing unit and 16 GB of memory, together with the other rendering methods most closely related to our algorithm, and generated the rendering results in the appendix. The experimental results show that, compared with traditional biased photon mapping, our method effectively eliminates image error; at the same time, because the collection radius does not need to shrink, it retains the full advantage of photon mapping and produces images with less noise. Combining our method with bidirectional ray tracing and comparing against the vertex connection and merging (VCM) method, our method exploits the advantages of the photon mapping sampling technique more fully while avoiding image error. After combining our method with Markov chain Monte Carlo (MCMC), it significantly improves the efficiency of the original MCMC method on optical paths with many reflections and on metallic reflections.
Figures 2, 3 and 4 compare the rendering results of our method and the vertex merging method. The vertex merging result in Fig. 2 severely blurs the details of the model surface owing to the bias of photon mapping. Our method in Fig. 3 clearly corrects this problem while exhibiting a noise level comparable to that of Fig. 2. The unbiased reference in Fig. 4 is the fully converged result of an extremely long rendering, and our result is completely consistent with it.
Figures 5, 6 and 7 show the effect of combining our unbiased photon mapping method with the bidirectional ray tracing algorithm. The bidirectional ray tracing result in Fig. 5 is very noisy inside the building, although it performs well in the areas directly illuminated by the light source. In Fig. 6, our method significantly reduces the noise inside the room but degrades in the areas directly illuminated by the light source. By combining these two complementary sampling techniques we obtain a more robust rendering method, with the result shown in Fig. 7; the combined method inherits the respective advantages of the two techniques and achieves a rendering quality exceeding that of either technique alone.

Claims (3)

1. An unbiased photon mapping drawing method, characterized by comprising the following steps:
(1) inputting a three-dimensional scene file, and analyzing the three-dimensional scene file; the three-dimensional scene file comprises geometric information, material, a map, lighting information and camera setting of an object;
(2) drawing initialization, and establishing a space acceleration structure for a three-dimensional scene;
(3) starting to draw the image, and starting to execute a sampling cycle according to the sampling number or the drawing time specified by the user; drawing all pixels by one sample in each sampling cycle; the calculation process within each sampling cycle comprises the following sub-steps:
(3.1) sampling of photons: after a light ray is emitted from the light source, it undergoes a series of reflections and refractions in the scene; each time the ray is reflected or refracted, a photon is created at the intersection of the ray with the scene surface, recording the energy carried by the ray at that moment and the current number of ray bounces; when all rays have finished traversing the scene, a batch of photons is obtained, and a spatial acceleration structure is then built over the photons;
(3.2) sampling of sight lines: the number of sight-line samples is the same as the number of light rays and equal to the number of pixels; after emission, a sight line traverses the scene, and at each reflection or refraction on an object surface, photons are collected within a sphere of radius d around the reflection/refraction point; then an optical path is established for each photon and the luminous flux of each photon is estimated without bias, the unbiased estimation comprising the following steps:
(3.2.1) error analysis of photon mapping
The light energy transfer function $f_{VM}^{c}$ used in photon mapping is modified to the strict light energy transfer function $f^{c}$, the formulas being as follows:
$$f_{VM}^{c}(\bar{x}_{s',t'-1}) = f_s\big(y_{s'-1} \rightarrow y_{s'}(z_{t'}) \rightarrow z_{t'-1}\big)\, G\big(z_{t'} \leftrightarrow z_{t'-1}\big)\, f_s\big(z_{t'-2} \rightarrow z_{t'-1} \rightarrow z_{t'}\big)$$

$$f^{c}(\bar{x}_{s',t'-1}) = f_s\big(y_{s'-1} \rightarrow y_{s'} \rightarrow z_{t'-1}\big)\, G\big(y_{s'} \leftrightarrow z_{t'-1}\big)\, f_s\big(z_{t'-2} \rightarrow z_{t'-1} \rightarrow y_{s'}\big)$$
The connection probability $p_{VM}^{c}$ used in photon mapping is revised to the strict connection probability $p^{c}$, the formulas being as follows:
$$p_{VM}^{c}(\bar{x}_{s',t'-1}) = \pi d^2\, p_x\big(z_{t'-2} \rightarrow z_{t'-1} \rightarrow y_{s'}\big)$$

$$p^{c}(\bar{x}_{s',t'-1}) = \int_{S(y_{s'},\,d)} p_x\big(z_{t'-2} \rightarrow z_{t'-1} \rightarrow z\big)\, dz$$
wherein $\bar{x}_{s',t'-1}$ denotes the photon mapping optical path formed by connecting a light sub-path of length $s'$ with a sight-line sub-path of length $t'-1$; $y$ and $z$ denote the vertices of the light sub-path and the sight-line sub-path respectively, with subscripts giving the vertex position within its sub-path: $y_{s'}$ is the last point on the light sub-path, i.e. a photon, and $y_{s'-1}$ is the point before the photon; $z_{t'}$ and $z_{t'-1}$ are the end point of the sight line and the point before it, and $z_{t'-2}$ is the point further before the sight-line end point. $f_s$ is the reflection coefficient of the material on the object surface, giving the amount of light energy reflected at a point for a given incident direction of the light and viewing direction of the sight line: $f_s(z_{t'-2} \rightarrow z_{t'-1} \rightarrow z_{t'})$ is the reflection coefficient at $z_{t'-1}$ for light arriving from $z_{t'-2}$ and reflected toward $z_{t'}$, and $f_s(y_{s'-1} \rightarrow y_{s'}(z_{t'}) \rightarrow z_{t'-1})$ is the reflection coefficient at $y_{s'}$ for light arriving from $y_{s'-1}$ and leaving along the direction from $z_{t'}$ to $z_{t'-1}$. $G$ is the geometric factor between two points; $p_x(z_{t'-2} \rightarrow z_{t'-1} \rightarrow z)$ is the probability of hitting point $z$ after light arriving from $z_{t'-2}$ is reflected at $z_{t'-1}$; and the integration range $S(y_{s'}, d)$ is the collection region of radius $d$ around $y_{s'}$;
(3.2.2) unbiased estimation of the inverse of the connection probability integral: for each photon, a probing ray is emitted from the point before the photon, with the probing rays distributed exactly as in the actual ray sampling; the probability that a probing-ray sample is accepted by the acquisition range of the sight-line end point is $p^{c}(\bar{x}_{s',t'-1})$. Repeatedly generating probing rays in the same configuration forms a Bernoulli sequence; the unbiased estimate of the inverse of the connection probability integral is given by the index $N$ of the first successful probing ray in the Bernoulli sequence, i.e. the number of the probing ray that first hits the acquisition range in which the sight-line end point is located;
(4) and accumulating the light energy of each photon collected by the sight line to a corresponding pixel of the image, outputting the image after the specified sampling cycle is finished, and finishing the drawing.
2. The unbiased photon mapping drawing method according to claim 1, wherein step (3) further comprises angle-limiting the probing rays in the Bernoulli sequence of step (3.2.2), as follows:
project the acquisition range onto the unit sphere centered at the starting point of the probing ray, and generate probing rays only within the projected acquisition range; convert the projected acquisition range into an axis-aligned bounding box in the sampled random-number space; then obtain samples within the integration range by limiting the range of random-number generation. The number of Bernoulli trials is thereby reduced from $N$ to $N^{b}$, and the inverse of the connection probability integral changes accordingly:
$$\frac{1}{p^{c}(\bar{x}_{s',t'-1})} = \frac{E\big[N^{b}(\bar{x}_{s',t'-1})\big]}{p^{b}(\bar{x}_{s',t'-1})}$$

$$p^{b}(\bar{x}_{s',t'-1}) = \int_{\Omega_b} p_x\big(z_{t'-2} \rightarrow z_{t'-1} \rightarrow z\big)\, dz$$
wherein $E[N^{b}(\bar{x}_{s',t'-1})]$ is the expected value of the number of Bernoulli trials after angle limiting, $p^{b}(\bar{x}_{s',t'-1})$ is the probability of generating a probing ray within the angle limit, and the integration range $\Omega_b$ is reduced from the whole hemisphere to the projected area of the acquisition range.
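The angle limiting of claim 2 can be illustrated with a simplified cone restriction: generate probe directions only inside the spherical cap subtended by the acquisition sphere. This sketch samples the cap directly rather than through the axis-aligned random-number bounding box the claim describes, and the function names are illustrative:

```python
import math
import random

def cap_cos_half_angle(d, dist):
    """Cosine of the half-angle of the cone subtended by an acquisition
    sphere of radius d, seen from a probe origin at distance dist > d."""
    return math.sqrt(max(0.0, 1.0 - (d / dist) ** 2))

def sample_direction_in_cap(cos_theta_max, rng=random.random):
    """Uniformly sample a unit direction in the spherical cap around +z
    whose half-angle has cosine cos_theta_max; every such direction
    points into the cone that can hit the acquisition sphere."""
    cos_theta = 1.0 - rng() * (1.0 - cos_theta_max)
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * rng()
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Restricting the samples this way plays the same role as the claim's bounding box: every Bernoulli trial is generated inside a region where it can succeed, so the expected trial count drops from N to the smaller N^b.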
3. The unbiased photon mapping drawing method according to claim 1, wherein after an unbiased estimate of one optical path is obtained through step (3), the estimate is combined with other optical path estimates through multiple importance sampling using exponential weights, the sampling probabilities of the different sampling techniques in the weights being as follows:
the probability $p_{s,t}^{BDPT}(\bar{x})$ of generating one optical path by the bidirectional ray tracing method is:
$$p_{s,t}^{BDPT}(\bar{x}) = p^{L}(\bar{x}_{s,t})\, p^{E}(\bar{x}_{s,t})$$
wherein $p^{L}(\bar{x}_{s,t})$ and $p^{E}(\bar{x}_{s,t})$ are the generation probability of the light sub-path and the generation probability of the sight-line sub-path, respectively;
the probability $p_{s',t'}^{UPG}(\bar{x})$ of generating one optical path by unbiased photon mapping is:
$$p_{s',t'}^{UPG}(\bar{x}) = p^{L}(\bar{x}_{s',t'-1})\, p^{c}(\bar{x}_{s',t'-1})\, p^{E}(\bar{x}_{s',t'-1})$$
wherein $p^{L}(\bar{x}_{s',t'-1})$ and $p^{E}(\bar{x}_{s',t'-1})$ are the generation probability of the light sub-path and the generation probability of the sight-line sub-path, respectively;
the connection probability is approximated as:
$$p^{c}(\bar{x}_{s',t'-1}) \approx \min\big(\pi d^2\, p_x(z_{t'-2} \rightarrow z_{t'-1} \rightarrow y_{s'}),\ 1\big).$$
CN201510489335.9A 2015-08-11 2015-08-11 A kind of Photon Mapping method for drafting of unbiased Active CN105118083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510489335.9A CN105118083B (en) 2015-08-11 2015-08-11 A kind of Photon Mapping method for drafting of unbiased


Publications (2)

Publication Number Publication Date
CN105118083A CN105118083A (en) 2015-12-02
CN105118083B true CN105118083B (en) 2018-03-16

Family

ID=54666056



Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017024511A1 (en) * 2015-08-11 2017-02-16 浙江大学 Unbiased photon mapping and tracing method
CN105869204B (en) * 2016-03-28 2018-07-17 浙江大学 A kind of unbiased Photon Mapping method for drafting in participating media

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103295188A (en) * 2012-02-28 2013-09-11 上海联影医疗科技有限公司 Path integral method for X-ray Monte Carlo simulation
US8638331B1 (en) * 2011-09-16 2014-01-28 Disney Enterprises, Inc. Image processing using iterative generation of intermediate images using photon beams of varying parameters
CN104699952A (en) * 2015-01-29 2015-06-10 北京航空航天大学 BRDF Monte Carlo model of wetland aquatic vegetation canopies


Non-Patent Citations (2)

Title
Claude Knaus et al. Progressive Photon Mapping: A Probabilistic Approach. ACM Transactions on Graphics, 2011, vol. 30, no. 3, pp. 1-13. *
Sun Xin et al. Real-time global illumination rendering with dynamic materials. Journal of Software, 2008, vol. 19, no. 4, pp. 1004-1015. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant