CN101968880A - Method for producing image bokeh effect
Description
Technical field
The invention belongs to the field of computer graphics, and specifically relates to a photorealistic bokeh-effect simulation method. The technique can simulate the bokeh effects caused by the aperture stop and the vignetting stops of a camera lens.
Background art
In photography, the bokeh effect is the phenomenon in which the out-of-focus highlighted parts of a photograph (including point light sources) are gradually blurred; it is also called out-of-focus imaging. Bokeh often occurs with lenses that have a shallow depth of field, such as large-aperture lenses, macro lenses, or telephoto lenses. Different lenses produce different bokeh effects in the foreground or background of a photo; because the optical characteristics of the lenses differ, these effects show various circle-of-confusion shapes and light distributions. Photographic lenses generate bokeh in the defocused, blurred regions to enhance the artistic effect of a photo. Many lens manufacturers (such as Nikon and Canon) have specially designed lenses that help photographers capture bokeh more easily. With these lenses, photographers can add various bokeh effects to their artistic work to emphasize a particular part of a photo, attract the viewer's attention, or strengthen the photo's artistic feel. Adding bokeh to computer-generated images can enhance the realism of the images and improve depth perception and the user's understanding of the scene.
In recent years, many research results on rendering optical imaging effects, such as glare, depth of field, and bokeh, have appeared in the field of computer graphics. Glare is a class of wave-optics phenomena related to diffraction; rendering algorithms for this class of effects are based mainly on the diffraction theory of wave optics and, combined with the internal structure of the camera or the human eye, render pearlescent glare effects. Depth of field is a geometric-optics phenomenon related to the aperture size, focal length, and object distance of the lens; rendering algorithms for this class of effects are based on geometric optics and, combined with the pinhole-camera model or the thin-lens model, render images with a blurred background or foreground. Bokeh is similar to depth of field, but the optical principles behind its formation are more complex: it is related not only to the aperture size, focal length, and object distance, but also to the internal structure of the lens, chiefly the aperture stop and the vignetting stops.
At present, methods for rendering bokeh can be divided into two classes. The first class consists of image-based rendering methods. These apply filtering techniques (such as gathering methods based on spatial convolution and scattering methods based on the circle of confusion) to an image produced by a standard rendering algorithm (or captured by a real camera) in order to render the bokeh effect. The advantage of these methods is that no three-dimensional scene needs to be built, which avoids a tedious modeling process, saves a great deal of time, and renders quickly. Combined with layer-composition techniques and hardware acceleration, image-based rendering methods can render bokeh in real time. However, these methods use a pinhole-camera model or a single-lens model with a finite aperture; they cannot accurately simulate the imaging process of a camera lens and thus cannot render the bokeh produced jointly by the aperture shape and vignetting. The second class consists of rendering methods based on distributed ray tracing. These first build a three-dimensional scene and then sample it with the distributed ray-tracing technique to render the bokeh effect. These methods can render bokeh fairly accurately, but because they adopt the pinhole-camera model, they likewise cannot accurately simulate the imaging process of a camera lens and cannot render the bokeh produced jointly by the aperture shape and vignetting.
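The image-based methods described above derive the per-pixel blur from the circle of confusion predicted by the thin-lens model. As an illustration of that prior-art step (the function name and argument order are mine, not from the patent), the circle-of-confusion diameter can be sketched as:

```python
def coc_diameter(z, focus_dist, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter for an object at depth z.

    All distances are measured from the lens, in the same units.
    An object exactly at the focus distance has a zero-size circle."""
    return aperture_diam * focal_len * abs(z - focus_dist) / (
        z * (focus_dist - focal_len))

# In focus: no blur.  Behind the focus plane: a finite blur circle.
in_focus = coc_diameter(2.0, 2.0, 0.05, 0.025)
behind = coc_diameter(4.0, 2.0, 0.05, 0.025)
```

A scattering method splats each pixel over a disk of this diameter, while a gathering method averages the pixels inside it.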
Summary of the invention
In view of the shortcomings of the existing methods, the object of the present invention is to provide a method for generating an image bokeh effect. The invention generates images on the basis of distributed ray tracing and an accurate camera-lens model, and is a bokeh-generation method grounded in geometric optics. The method builds an accurate lens model with sequential ray tracing in order to simulate the influence of differently shaped aperture stops and of vignetting on the bokeh effect, and it uses geometric optics together with sequential ray tracing to accurately compute the position and size of the exit pupil, thereby improving ray-tracing efficiency.
The technical scheme of the present invention is as follows:
A method for generating an image bokeh effect, comprising the steps of:
1) according to the structural information of the optical elements in the lens, use the ray-tracing method to compute the position of the exit pupil and the aperture of the exit pupil;
2) add the computed exit-pupil position and aperture to the structural information of the lens;
3) take out the structural information of the optical elements in the lens one by one, in order; compute the intersection of each camera ray with the current element and the ray direction after that element, obtaining the exit rays that can pass through the lens;
4) the ray-tracing rendering module uses the exit rays from step 3) to generate the bokeh effect in the image captured by this camera.
Further, the structural information of each optical element comprises the element's radius, thickness, refractive index, and aperture.
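A minimal sketch of such a per-element record (the field names are my own; the patent only lists the four quantities):

```python
from dataclasses import dataclass

@dataclass
class LensElement:
    radius: float     # signed radius of curvature of the surface
    thickness: float  # axial distance to the next element
    ior: float        # refractive index of the medium behind the surface
    aperture: float   # clear-aperture diameter of the element

# A lens is then simply an ordered list of LensElement records, which is the
# "data structure storing all optical elements" used by Algorithms 1 and 2.
```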
Further, the position of the exit pupil and the aperture of the exit pupil are computed as follows:
1) initialize point P0 to the center of the image plane of the lens, point Pmin to the center of the rear lens element, and point Pmax to a point on the edge of the rear element; initialize ray Rmin as the ray from P0 to Pmin, ray Rmax as the ray from P0 to Pmax, and ray R as Rmax;
2) compute the direction cosines of rays Rmin and Rmax; if the direction cosines of Rmin and Rmax differ by more than a threshold H and the iteration count does not exceed a preset threshold T, perform step 3), otherwise perform step 4);
3) trace ray R backward through the lens; if R can pass through the lens, set Rmin = R, otherwise set Rmax = R; then compute R = (Rmin + Rmax)/2 and return to step 2);
4) ray R is taken as the marginal ray of the exit pupil, and the last optical element E that ray R can pass through is taken as the aperture stop;
5) initialize point P0 to the center of the aperture stop and point P3 to a paraxial point on the optical element behind the aperture stop; initialize ray R3 as the ray from P0 to P3 and trace R3 forward;
6) the intersection of ray R3 with the optical axis after it passes through the lens determines the position P of the exit pupil, and the aperture D of the exit pupil is determined from the position P and the marginal ray R.
Further, if the aperture stop determined in step 4) is not circular, first obtain the smallest circle that contains the non-circular aperture stop, and then substitute that circle for the aperture stop.
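For an aperture stop given as a polygon of blade vertices centered on the optical axis, the smallest axis-centered enclosing circle simply has the radius of the farthest vertex. A minimal sketch (the polygonal vertex representation is my assumption; the patent does not specify how non-circular stops are stored):

```python
import math

def enclosing_circle_radius(blade_vertices):
    """Radius of the smallest circle centered on the optical axis that
    contains a polygonal aperture stop given as (x, y) vertices."""
    return max(math.hypot(x, y) for x, y in blade_vertices)

# Regular hexagonal stop whose vertices sit at distance 1.0 from the axis:
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
           for k in range(6)]
```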
Further, the exit rays are computed as follows:
1) randomly sample a point P1 on the imaging plane of the lens and a point P2 on the exit pupil, generating a camera ray R;
2) take out the information of the optical elements in the lens one by one, in order; for each optical element: compute the intersection P0 of ray R with the current element; if P0 lies within the clear aperture of the element, compute the element's normal N at P0 and the ray T produced by refraction or reflection of R at the element, and update R = T; if P0 lies outside the clear aperture of the element, regenerate ray R and recompute the ray T produced by refraction or reflection of R at the element;
3) a ray R that can pass through the last optical element in the lens is an exit ray of the lens.
Further, the exit rays are computed in parallel on a multi-core processor.
Compared with the prior art, the beneficial effects of the present invention are:
Compared with previous bokeh rendering methods, the invention has the following advantages: 1) it is based on an accurate camera-lens model rather than an idealized model (such as the pinhole-camera or perspective-camera model), so the rendered result is more faithful and accurate; 2) it is based on sequential ray tracing and is easy to integrate into any renderer that supports ray tracing; when rendering a complex scene, the time spent on ray tracing inside the lens is almost negligible compared with the time spent on ray tracing in the scene, and the exit-pupil-based ray-sampling algorithm further reduces the time spent on in-lens ray tracing; 3) it can simulate the influence of an aperture of any shape on the bokeh effect, at the extra cost of only the ray/aperture-stop intersection tests; 4) it can simulate the influence of vignetting on the bokeh effect, at the extra cost of only the intersection tests between the rays and each lens rim inside the lens.
Description of drawings
Fig. 1 is the main flowchart of the method of the invention;
Fig. 2 is the sub-flowchart of Algorithm 2.
Embodiments
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
(1) Computation of the exit pupil
When performing ray tracing inside the lens, the most direct ray-sampling method is to sample between the image plane and the rear lens element (the element closest to the image plane). However, the ray-tracing efficiency of this sampling method is very low, because many rays that pass through the rear element are blocked by the stops inside the lens and cannot traverse the whole lens.
From optical imaging theory, the aperture stop, the exit pupil, and the entrance pupil are conjugate: a ray emitted from an object point that passes through the entrance pupil must also pass through the aperture stop and the exit pupil, and hence through the whole lens; a ray that cannot pass through the entrance pupil likewise cannot pass through the aperture stop or the exit pupil. Therefore, sampling rays between the image plane and the exit pupil can greatly improve the efficiency of ray tracing, especially when the aperture-stop diameter is relatively small; this is verified in the ray-tracing efficiency experiment presented later.
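Sampling a point uniformly on the exit-pupil disk is the key operation of this strategy. A minimal sketch using the standard square-root radius transform, with the pupil position and radius assumed to be already known from Algorithm 1 (the function name is mine):

```python
import math
import random

def sample_exit_pupil(pupil_z, pupil_radius, rng=random):
    """Uniformly sample a point on the exit-pupil disk, which lies
    perpendicular to the optical axis at z = pupil_z.

    Taking sqrt of the uniform variate makes the density uniform in
    area rather than in radius."""
    r = pupil_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    return (r * math.cos(theta), r * math.sin(theta), pupil_z)
```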
The exit pupil is the image of the aperture stop and does not physically exist. Before the exit pupil can be used for ray sampling, its position (on the optical axis) and aperture (radius or diameter) must first be computed. The exit-pupil algorithm proposed here first uses the ray-tracing method to accurately compute the position of the exit pupil, and then uses Gaussian-optics theory together with ray tracing to determine the diameter of the exit pupil. The detailed algorithm is as follows:
Algorithm 1. Compute the position and size of the exit pupil
Input: a data structure storing all the optical elements of the lens (each element's radius, thickness, refractive index, and aperture)
Output: the position and size of the exit pupil
Step 1. Initialize point P0 to the center of the image plane, point Pmin to the center of the rear lens element, and point Pmax to a point on the edge of the rear element;
Step 2. Initialize ray Rmin as the ray from P0 to Pmin, ray Rmax as the ray from P0 to Pmax, and ray R as Rmax;
Step 3. If the direction cosines of rays Rmin and Rmax differ by more than a minimum H and the iteration count does not exceed a predefined maximum T, proceed to the next step; otherwise go to Step 6;
Step 4. Trace ray R backward through the lens, and let E be the last optical element that ray R can pass through;
Step 5. If ray R can pass through the whole lens, set Rmin = R; otherwise set Rmax = R; then set R = (Rmin + Rmax)/2 and return to Step 3;
Step 6. Ray R is the marginal ray of the exit pupil, and element E is the aperture stop;
Step 7. Initialize point P0 to the center of the aperture stop and point P3 to a paraxial point on the optical element behind the aperture stop;
Step 8. Initialize ray R3 as the ray from P0 to P3 and trace R3 forward;
Step 9. The position P of the exit pupil is the intersection of ray R3 with the optical axis after R3 passes through the lens, and the aperture D of the exit pupil is determined from the position P and the marginal ray R. The algorithm ends.
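Steps 3-5 of Algorithm 1 are a bisection between a ray known to survive the lens and one known to be blocked. The sketch below shows that loop, parameterized for brevity by the radial offset of the ray's target point rather than by direction cosines; `passes_through` stands in for the backward sequential ray trace through the lens, which here is stubbed against a known cutoff radius purely for illustration:

```python
def marginal_radius(passes_through, r_min, r_max, tol=1e-6, max_iter=100):
    """Bisection for the marginal ray of Algorithm 1.

    passes_through(r) reports whether the ray aimed at radial offset r
    on the rear element survives all stops when traced backward."""
    r = r_max
    iterations = 0
    while (r_max - r_min) > tol and iterations < max_iter:
        if passes_through(r):
            r_min = r          # ray survived: marginal ray lies further out
        else:
            r_max = r          # ray blocked: marginal ray lies further in
        r = 0.5 * (r_min + r_max)
        iterations += 1
    return r

# Illustration only: pretend the lens blocks rays beyond radius 3.7 mm.
edge = marginal_radius(lambda r: r <= 3.7, r_min=0.0, r_max=10.0)
```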
It should be noted that if the aperture stop is not circular, the smallest circle containing the non-circular stop is first obtained and substituted for the stop in order to solve for the exit pupil. The exit pupil obtained in this way is used only for ray sampling; during the sequential ray tracing inside the lens, the actual aperture-stop shape is used.
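During the in-lens trace, the actual stop shape can be honored by testing whether the ray's hit point on the stop plane lies inside the (possibly polygonal) aperture. A standard even-odd point-in-polygon test is one way to do this; the polygonal blade representation is my assumption, not specified by the patent:

```python
def inside_polygon(px, py, vertices):
    """Even-odd rule: count edge crossings of a ray from (px, py) toward +x."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        if (y0 > py) != (y1 > py):
            # x-coordinate where this edge crosses the horizontal line y = py
            x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if px < x_cross:
                inside = not inside
    return inside

# A square stop of half-width 1 centered on the optical axis:
square_stop = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
```

This test is the only extra cost of supporting arbitrary aperture shapes, matching advantage 3) listed above.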
(2) Sequential ray tracing inside the lens
The optical elements in a lens are arranged in an ordered fashion. Unlike the general ray-tracing method, ray tracing inside the lens does not need to search for the ray's nearest intersection, which avoids a large amount of sorting and intersection testing; the sequential ray-tracing method is therefore efficient, and when integrated with a general ray-tracing renderer it can build an accurate lens model without noticeably reducing the renderer's performance. When rendering a complex three-dimensional scene, ray tracing in the scene takes the overwhelming majority of the computation time, and the time taken by sequential ray tracing inside the lens is almost negligible. The basic idea of the sequential ray-tracing method is as follows: first store all the optical elements of the lens in a data structure; take the elements of the lens out one by one, in order; use each element's information to compute the intersection of the ray with that element and the refracted ray direction after the element; finally, hand the rays that pass through the lens to a general ray-tracing renderer for ray tracing in the three-dimensional scene (that is, the ray-tracing rendering module uses the rays that can pass through the lens to generate the bokeh effect in the image), so that the rendering module processes the rays supplied by the invention at a greatly improved speed. The detailed algorithm is as follows:
Algorithm 2. Sequentially trace rays through the lens
Input: a data structure storing all the optical elements of the lens (each element's radius, thickness, refractive index, and aperture)
Output: the exit rays of the lens
Step 1. Randomly sample a point P1 on the imaging plane and a point P2 on the exit pupil, and generate the camera ray R = Ray(P1, P1 → P2);
Step 2. Traverse the optical elements of the lens in order; if elements remain, proceed to the next step, otherwise go to Step 6;
Step 3. Compute the intersection P0 of ray R with the current optical element;
Step 4. If P0 lies outside the clear aperture of the element, ray R cannot pass through the element and is blocked; return to Step 1 and choose a new ray R. Otherwise ray R can pass through the element; go to Step 5;
Step 5. Compute the element's normal N at the intersection and the ray T produced by refraction or reflection of R at the element, update R (R = T), and return to Step 2;
Step 6. R is an exit ray of the lens. The algorithm ends.
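The core of Algorithm 2 is the intersection/refraction loop of Steps 3-5. The sketch below traces a ray through a list of spherical surfaces; the flattened element record (sphere-center position, radius, index ratio, aperture radius) is my own simplification of the patent's radius/thickness/index/aperture structure, and total internal reflection simply drops the ray:

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at unit normal n (n opposing d);
    eta = n_incident / n_transmitted.  None on total internal reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    s = eta * cos_i - math.sqrt(k)
    return (eta * d[0] + s * n[0], eta * d[1] + s * n[1], eta * d[2] + s * n[2])

def trace_through_lens(origin, direction, elements):
    """Steps 2-6 of Algorithm 2.  elements: ordered (center_z, radius,
    ior_ratio, aperture_radius) spherical surfaces.  Returns the exit
    (point, direction), or None if a stop blocks the ray (Step 4)."""
    o, d = origin, direction
    for cz, rad, eta, ap in elements:
        # Step 3: intersect ray o + t*d with the sphere |p - (0,0,cz)| = rad
        oc = (o[0], o[1], o[2] - cz)
        b = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2]
        c = oc[0] ** 2 + oc[1] ** 2 + oc[2] ** 2 - rad * rad
        disc = b * b - c                    # d is assumed unit length
        if disc < 0.0:
            return None
        t = -b - math.sqrt(disc)            # nearest crossing along the ray
        if t <= 1e-9:
            t = -b + math.sqrt(disc)
        p = (o[0] + t * d[0], o[1] + t * d[1], o[2] + t * d[2])
        # Step 4: block the ray if it misses the clear aperture
        if p[0] ** 2 + p[1] ** 2 > ap * ap:
            return None
        # Step 5: surface normal at p, flipped to oppose the ray, then refract
        n = (p[0] / rad, p[1] / rad, (p[2] - cz) / rad)
        if n[0] * d[0] + n[1] * d[1] + n[2] * d[2] > 0.0:
            n = (-n[0], -n[1], -n[2])
        t_dir = refract(d, n, eta)
        if t_dir is None:
            return None
        o, d = p, t_dir                     # R = T; continue to next element
    return o, d                             # Step 6: exit ray

# An axial ray must pass straight through a single surface undeviated:
hit, out_dir = trace_through_lens((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                                  [(5.0, 2.0, 1.0 / 1.5, 1.0)])
```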
Each ray passing through the lens is independent of the others, so the in-lens ray-tracing method can be executed in parallel to make full use of the multiple cores of mainstream CPUs and improve tracing efficiency.
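Because each sampled ray is independent, the per-ray trace can be dispatched across workers. A structural sketch using Python's thread pool, where `trace_one` is a stub standing in for Algorithm 2 (a production renderer would parallelize with native threads, since pure-Python threads do not speed up CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def trace_one(sample):
    """Stub for tracing one sampled ray through the lens (Algorithm 2):
    returns the sample if the ray survives the stops, else None."""
    radial_offset, stop_radius = sample
    return sample if radial_offset <= stop_radius else None

# 64 hypothetical ray samples against a stop of radius 3.0.
samples = [(r * 0.5, 3.0) for r in range(64)]
with ThreadPoolExecutor(max_workers=4) as pool:
    exit_rays = [s for s in pool.map(trace_one, samples) if s is not None]
```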
(3) Finally, the ray-tracing rendering module (reference: Pharr M, Humphreys G. Physically Based Rendering: From Theory to Implementation [M]. San Francisco: Morgan Kaufmann, 2004) uses the rays that can pass through the lens to generate the bokeh effect.
Combining Algorithm 2 with the ray-tracing rendering module, rays are traced from the camera into the three-dimensional scene to generate the bokeh-effect image.
Ray-tracing efficiency comparison
The preceding analysis argued theoretically that the exit-pupil-based ray-sampling method is more efficient than the rear-element-based one. This section performs the sequential in-lens ray tracing with each of the two sampling methods and compares their ray-tracing efficiency. In the experiment, the optical parameters of the DGAUSS (double-Gauss) lens are as shown in Table 1 (radius, thickness, and aperture in mm), the image-plane size is 35 mm, the resolution is 512*512, and 16 rays are sampled per pixel. The ray-tracing results for the two sampling methods are shown in Table 2.
Table 1. DGAUSS lens optical parameters
Table 2. Ray-tracing efficiency comparison
In Table 2, the total number of sampled rays in each ray-tracing experiment is 4.261*10^6. When the F-number (the aperture ratio of the camera) is 2.0, ray tracing with the rear-element-based sampling method yields 2.556*10^6 effective rays (21.4 seconds), while the exit-pupil-based sampling method yields 3.076*10^6 effective rays (23.5 seconds), an improvement in tracing efficiency of about 20%; similarly, at F-number 2.8 the improvement is about 235%, and at F-number 4.0 about 374%. The exit-pupil-based ray-sampling method is thus clearly superior to the rear-element-based one, and as the F-number increases, the efficiency of the exit-pupil-based method grows and its advantage becomes more pronounced. As for the running time: as the F-number increases, the time taken by the rear-element-based sampling method decreases, because its ray-sampling efficiency gradually drops; the time taken by the exit-pupil-based method increases, because its ray-sampling efficiency gradually improves, so more rays must be traced to completion, which costs more time. As Table 2 shows, although ray tracing with the exit-pupil-based sampling method is very efficient, some rays still fail to pass through the lens, owing to vignetting. As the F-number decreases, the aperture stop grows larger (its size is inversely proportional to the F-number) and vignetting becomes more and more pronounced, causing the ray-tracing efficiency to drop gradually.
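The efficiency figures reported at F-number 2.0 can be checked directly from the ray counts in the text; a quick sketch:

```python
total = 4.261e6                  # rays sampled per experiment
rear, pupil = 2.556e6, 3.076e6   # effective rays at F-number 2.0

rear_efficiency = rear / total   # fraction of rear-element samples that survive
pupil_efficiency = pupil / total
improvement = pupil / rear - 1.0  # the "about 20%" quoted in the text
```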