CN103873773B - Primary-auxiliary synergy double light path design-based omnidirectional imaging method

Info

Publication number: CN103873773B (application published as CN103873773A)
Application number: CN201410076397.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘煜, 张茂军, 王炜, 熊志辉, 尹晓晴
Assignee: National University of Defense Technology (application filed by National University of Defense Technology)
Priority/filing date: 2014-03-04
Publication dates: CN103873773A, 2014-06-18; CN103873773B, 2015-04-15
Legal status: Active (granted)

Abstract

The invention belongs to the field of image information processing and addresses the defocus blur problem of catadioptric omnidirectional imaging by providing an omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design. First, a primary-auxiliary synergy double-light-path imaging system uses a light-splitting device to image light from the same scene on an ordinary-aperture camera and a coded-aperture camera respectively. A coded-aperture scheme suited to the catadioptric imaging system is then designed, so that the system can identify the blur scale of the omnidirectional image more accurately. Finally, a deconvolution method tailored to the imaging characteristics of omnidirectional images restores the defocused blurred image and yields a fully focused omnidirectional image. The method tightly combines an improved imaging structure with an improved restoration method, effectively improves the removal of defocus blur in catadioptric omnidirectional imaging, and is of significance for improving image quality and promoting applications in related fields.

Description

Omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design
Technical field
The invention belongs to the field of image information processing and relates in particular to omnidirectional imaging methods.
Background art
Omnidirectional images and video cover wide-angle scenes on a large scale. Thanks to their 360-degree field of view, they have attracted many researchers from computer vision, optical instrumentation and related fields, and in recent years work has expanded in many directions, including basic theory, product design and application development. Panoramic imaging techniques fall roughly into several classes: single-camera rotation [1], fisheye imaging [2], mosaic imaging [3] and catadioptric imaging [4]. Among them, catadioptric imaging, which combines a reflective mirror with refractive lenses, has received the most concentrated recent attention. Its principle is to use a curved mirror to collect light from objects over the full 360-degree range of three-dimensional space and reflect it into an optical imaging system. The raw catadioptric image suffers from severe concentric annular distortion and does not intuitively reflect the real scene, so in practice it is usually unwrapped into a cylindrical panoramic image that better matches human visual habits, as shown in Figure 2.
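For concreteness, the sketch below shows one common way to perform the cylindrical unwrapping mentioned above, assuming a circular field of view with a known centre and known inner and outer ring radii; the function and parameter names are illustrative and are not taken from the patent.

```python
import numpy as np
import cv2  # OpenCV is assumed here only for the remapping/interpolation step

def unwrap_omni_to_cylinder(omni, center, r_in, r_out, out_w=1440, out_h=None):
    """Unwrap a ring-shaped catadioptric image into a cylindrical panorama.

    omni   : H x W (x3) catadioptric image
    center : (cx, cy) pixel coordinates of the mirror axis
    r_in   : inner radius of the usable ring, in pixels
    r_out  : outer radius of the usable ring, in pixels
    """
    if out_h is None:
        out_h = int(r_out - r_in)
    cx, cy = center
    # Each panorama column is an azimuth angle, each row a radius on the ring.
    theta = 2.0 * np.pi * np.arange(out_w) / out_w
    rho = np.linspace(r_out, r_in, out_h)          # top row = outer ring
    theta_g, rho_g = np.meshgrid(theta, rho)
    map_x = (cx + rho_g * np.cos(theta_g)).astype(np.float32)
    map_y = (cy + rho_g * np.sin(theta_g)).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)
```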
However, catadioptric omnidirectional imaging has inherent problems such as low spatial resolution and non-uniform resolution, and defocus blur has increasingly become the main factor limiting the sharpness of omnidirectional images, seriously restricting the development of catadioptric imaging technology. Whichever region the catadioptric system focuses on (the inner ring, the middle ring or the outer ring), the other regions are defocused, as shown in Figure 3: in Fig. 3(a) the inner ring is in focus, in Fig. 3(b) the middle ring, and in Fig. 3(c) the outer ring. The technical difficulties of the catadioptric defocus blur problem lie in two aspects: (1) the defocus blur caused by the curved optical structure of the catadioptric system is hard to characterize, so its blur scale is difficult to identify accurately; (2) because catadioptric images are formed differently from ordinary images, traditional image-restoration deconvolution algorithms alone give poor results.
At present the most widely adopted and relatively effective remedy is coded-aperture technology [5]. As shown in Figure 4, this technique modifies the imaging device with a specially designed mask, encoding the light beam before it reaches the image sensor in a targeted way and thereby providing more useful information for processing after image capture. From an image captured through a coded aperture, the blur scale of each region can be identified more accurately, and the point-spread-function scale and a deconvolution algorithm can then restore the image more precisely. However, the single coded image captured by a coded-aperture camera loses information at certain frequencies because of mask occlusion, which degrades the deblurring result [6].
Summary of the invention
To solve the defocus blur problem of catadioptric omnidirectional imaging, the present invention provides an omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design. By exploiting the complementary advantages of an ordinary aperture and a coded aperture, the coded aperture assists in identifying the defocus blur scale, and the image collected through the ordinary aperture, which has no information loss, is then deconvolved to obtain a fully focused omnidirectional image, further improving the imaging result.
To achieve the above technical effect, the invention adopts the following technical scheme.
The invention provides an omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design, comprising the following steps:
Step 1: split the light from the same scene into a main optical path used for image restoration and an auxiliary optical path used for coded-aperture blur-scale identification, and image them on an ordinary-aperture camera and a coded-aperture camera respectively, the light energy of the main path being greater than that of the auxiliary path.
Step 2: construct the catadioptric defocus-blur mathematical model and select a coded-aperture scheme, then identify the defocus blur scale of each region of the catadioptric image.
Step 3: restore the defocused blurred image to obtain a fully focused omnidirectional image:
1. first divide the defocused blurred image into multiple annular sections;
2. restore the defocused image of each annular section with the proposed omnidirectional total-variation minimization restoration method;
3. stitch the restored sharp images of all annular sections together to obtain the fully focused omnidirectional image.
The omnidirectional total-variation minimization restoration method works as follows: each pixel of the omnidirectional image is back-projected into the cylindrical panoramic image, its neighbouring pixels in the horizontal and vertical directions are determined according to the gradient calculation rule in the cylindrical panorama, and these are then forward-projected back into the omnidirectional image to find the corresponding pixel positions and compute the gradient value at that pixel.
Specifically, the catadioptric defocus-blur mathematical model in step 2 is constructed as follows. Establish an ROZ coordinate system in which the origin O lies directly above the optical-centre plane at distance C from it, the Z axis is perpendicular to the optical-centre plane, and the R axis is parallel to it. Given a three-dimensional scene point w and a point m_r(α, β) on the mirror surface, where α and β are mirror-surface parameters, let e_i(α, β) be the unit direction vector of the incident ray w m_r, e_r(α, β) the unit direction vector of the ray reflected at m_r(α, β), and n_r(α, β) the unit normal vector at m_r(α, β). From the known mirror-surface parameters, the unit normal n_r(α, β) and the position of the virtual image w_c(α, β) formed by the object point w at the mirror point m_r(α, β) are obtained; the known Jacobian method then gives the distance r_c from the virtual image w_c(α, β) to the point m_r(α, β). When the mirror point m_r(α, β) lies at the intersection m_0 of the principal incident ray wf with the mirror surface, let S_d denote the object-point depth at the virtual image w_c(α, β); then
S_d = (d_c + m_0 p_0) sin μ   (1)
where d_c is the distance between point w_c and point m_0. The catadioptric defocus model follows:
Y = X * F_S + η   (2)
where Y is the captured blurred image, X is the fully focused sharp image, F_S is the convolution matrix formed from the convolution kernel shape F and the convolution scale S, the convolution scale S corresponds one-to-one with the depth S_d, and η is image noise. From this defocus model and the coded-aperture convolution kernel shape F described below, a set of blur criteria, one per convolution scale S, can be established; the blur scale of a blurred image is then analysed against these criteria and the matching convolution scale is used in the deconvolution that restores the image.
Specifically, the coded aperture is selected as follows: plot the residual-sum-of-squares curves for several candidate coded-aperture designs, select the curve with the sharpest, most discriminative shape, and take the coded-aperture design corresponding to that curve as the optimal scheme.
Because a good coded-aperture kernel shape F makes different convolution scales S relatively easy to distinguish, the quality of the coded-aperture design determines whether the blur scale of the image can be determined accurately, and therefore largely determines the quality of the restoration. To determine the optimal coded-aperture scheme for omnidirectional images, deconvolution is performed for every possible blur scale of the blurred image, a power-spectrum statistical model is used to compute the residual sum of squares, and the blur scale of the result image with the smallest residual sum of squares is taken as the correct blur scale of the blurred image. The power-spectrum statistical model is established as follows.
If noise is ignored, the defocus model in the frequency domain becomes
|Y| = |X F| = |X| |F|   (3)
where Y is the blurred image, X is the fully focused sharp image, and F is the convolution kernel shape.
First transform the spectrum F(u, v) into F(ω, φ) in polar coordinates and integrate over the φ coordinate; the power-spectrum model is
S_ω(|F|) = (1/360) Σ_{φ=1}^{360} |F(ω, φ)| ≈ A / ω^γ,  ω = 1, 2, ..., M   (4)
where A and γ are constants and M = 127; because the high frequencies are strongly affected by noise, the high-frequency part is not processed. Taking the logarithm of formula (4) gives the linear equation
log(S_ω(|F|)) = log(A) − γ log(ω) = log(A) − γ x   (5)
where x = log(ω).
A least-squares fit based on formula (5) yields the residual sum of squares.
The omnidirectional total-variation minimization restoration method in step 3 is specifically as follows. Let the point I(i, j) in the omnidirectional image back-project to the point P(u, v) in the cylindrical panoramic image. First find the neighbouring points P(u+1, v) and P(u, v+1) in the horizontal and vertical directions of the cylindrical panorama, then find the corresponding points I(i+s1, j+t1) and I(i+s2, j+t2) in the omnidirectional image by forward projection. The gradient magnitude at point I(i, j) of the omnidirectional image is then
|∇I(i, j)| = sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (6)
The omnidirectional total variation J of the omnidirectional image I is defined as
J(I) = Σ_{(i,j)} |∇I(i, j)| = Σ_{(i,j)} sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (7)
Using the Lagrangian formulation, the omnidirectional total-variation minimization restoration problem is converted into the following minimization problem:
f* = argmin[ ||h*f − g||^2 + λ J(f) ]   (8)
In formula (8), h is the blur kernel and g is the blurred image; λ is a constant that controls the convergence of the algorithm. Total-variation minimization suppresses noise without forcing the solution to be smooth; it preserves image contour details and, while improving image sharpness, suppresses the ringing artifacts introduced by restoration. The term λJ(f) is the constraint, i.e. the image prior. Minimizing the reconstruction term ||h*f − g||^2 yields the restored sharp image f*.
In sum, the method for the invention selects the code aperture scheme being applicable to catadioptric imaging system, finally designs the image recovery method for omni-directional image imaging characteristics.The present invention effectively can overcome the deficiency that conventional method exists, and improves omni-directional image and goes defocusing blurring effect, improve the image quality of refractive-reflective all imaging system.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention;
Fig. 2(a)-(d) shows, respectively, the catadioptric imaging device, the imaging process, the omnidirectional image, and the unwrapped cylindrical panoramic image;
Fig. 3 shows actual defocus blur of catadioptric imaging under three different focus settings;
Fig. 4 shows the machining process of the coded-aperture lens;
Fig. 5 is a schematic diagram of the coded-aperture-assisted catadioptric all-in-focus imaging technique;
Fig. 6 is a schematic diagram of the computation of the image-side virtual-image distance;
Fig. 7(a) is a schematic diagram of the cylindrical back-projection;
Fig. 7(b) is a schematic diagram of the omnidirectional total-variation computation;
Fig. 8 shows the multiple coded-aperture schemes used for comparative analysis;
Fig. 9 shows the coded-aperture scheme adopted by the present system;
Fig. 10 shows a blurred omnidirectional image;
Fig. 11 shows the sharp omnidirectional image obtained after processing by the present system.
Detailed description of the embodiments
The invention is further described below with reference to the drawings and specific embodiments.
The invention provides an omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design which, as shown in Figure 1, comprises the following steps.
Step 1: build the primary-auxiliary synergy double-light-path imaging system, which images the light of the same scene on an ordinary-aperture camera and a coded-aperture camera through a light-splitting device. The imaging system comprises a catadioptric device, a beam splitter, an ordinary-aperture camera and a coded-aperture camera.
As shown in Figure 5, incident light from the environment is reflected by the catadioptric device and then split by the light-splitting device into a main optical path and an auxiliary optical path. The main path is imaged on the ordinary-aperture camera and yields an image with complete information; the auxiliary path is imaged on the coded-aperture camera, and the coded image is used to analyse the blur scale of each region of the omnidirectional image. A non-uniform splitting ratio is adopted: the main path is used for the subsequent image restoration, and an under-exposed image directly degrades the restoration result, so it needs the larger share of the light energy; the remaining light energy goes to the auxiliary path, which only needs to be bright enough for the coded aperture to identify the blur scale. The exact light-intensity ratio can be determined by repeated trials, comparison and optimization.
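As a rough illustration of why the non-uniform split matters, the following sketch simulates the two captures under assumed numbers (a 70/30 split, simple Poisson plus read noise, and stand-in point spread functions); none of these values come from the patent, which determines the ratio experimentally.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def capture(scene, psf, light_share, read_noise=2.0, full_well=1000.0):
    """Simulate one camera of the dual-path system at a given share of the split light."""
    blurred = fftconvolve(scene, psf / psf.sum(), mode="same")
    photons = np.clip(blurred * light_share * full_well, 0, None)
    noisy = rng.poisson(photons) + rng.normal(0.0, read_noise, scene.shape)
    return noisy / (light_share * full_well)       # back to normalized units

scene = rng.random((256, 256))
open_psf = np.ones((9, 9))                             # ordinary (open) aperture
coded_psf = (rng.random((9, 9)) > 0.5).astype(float)   # stand-in coded mask

main_img = capture(scene, open_psf, 0.7)   # main path: more light, higher SNR, used for restoration
aux_img = capture(scene, coded_psf, 0.3)   # auxiliary path: only needs to support blur-scale identification
```

The main image ends up with the lower relative noise, which is what the subsequent deconvolution relies on; the coded image tolerates the lower light level because it is only analysed for blur scale.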
Step 2: build the catadioptric defocus-blur mathematical model, select a coded-aperture scheme suitable for the catadioptric imaging system, and accurately identify the defocus blur scale of each region of the catadioptric image.
The catadioptric defocus-blur mathematical model is built from the defocus principle of the catadioptric imaging system. Establish the ROZ coordinate system shown in Figure 6. Given a three-dimensional scene point w and a point m_r(α, β) on the mirror surface, with α and β the mirror-surface parameters, let e_i(α, β) be the unit direction vector of the incident ray w m_r, e_r(α, β) the unit direction vector of the ray reflected at m_r(α, β), and n_r(α, β) the unit normal vector at m_r(α, β). Since the mirror-surface parameters are known, n_r(α, β) can be computed, and so can the position of the virtual image w_c(α, β) formed by the object point w at the mirror point m_r(α, β). The known Jacobian method gives the distance r_c from the virtual image w_c(α, β) to the point m_r(α, β). When the point m_r(α, β) lies at the intersection m_0 of the principal incident ray wf with the mirror surface, let S_d be the object-point depth at the virtual image; then
S_d = (d_c + m_0 p_0) sin μ   (1)
where d_c is the distance between point w_c and point m_0. The catadioptric defocus model follows:
Y = X * F_S + η   (2)
where Y is the captured blurred image, X is the fully focused sharp image, F_S is the convolution matrix formed from the convolution kernel shape F and the convolution scale S, S corresponds one-to-one with the depth S_d, and η is image noise. From this defocus model and the coded-aperture kernel shape F described below, a set of blur criteria, one per convolution scale S, is established; the blur scale of a blurred image is analysed against these criteria and the matching convolution scale is used in the deconvolution that restores the image.
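A minimal sketch of formulas (1) and (2) follows: it computes the virtual-image depth S_d, maps depth to a convolution scale through an assumed calibrated lookup table, and produces the blurred observation by convolving the sharp image with the coded-aperture shape F resized to that scale. The depth-to-scale table and the resizing of F are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np
import cv2

def virtual_image_depth(d_c, m0_p0, mu):
    """Formula (1): S_d = (d_c + m0_p0) * sin(mu)."""
    return (d_c + m0_p0) * np.sin(mu)

def kernel_at_scale(F, S):
    """Realize F_S by resizing the coded-aperture shape F to S x S pixels (assumed scheme)."""
    k = cv2.resize(F.astype(np.float32), (S, S), interpolation=cv2.INTER_AREA)
    return k / k.sum()

def blur_observation(X, F, S, sigma_noise=0.01):
    """Formula (2): Y = X * F_S + eta."""
    F_S = kernel_at_scale(F, S)
    Y = cv2.filter2D(X.astype(np.float32), -1, F_S)
    return Y + np.random.normal(0.0, sigma_noise, X.shape).astype(np.float32)

def scale_from_depth(S_d, depth_grid, scale_grid):
    """One-to-one depth-to-scale correspondence, here a nearest-neighbour lookup
    over a calibration table measured for the particular mirror and lens."""
    idx = int(np.argmin(np.abs(np.asarray(depth_grid) - S_d)))
    return scale_grid[idx]
```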
A good coded-aperture kernel F makes different convolution scales S relatively easy to distinguish, so the quality of the coded-aperture design determines whether the blur scale of the image can be identified accurately, and therefore largely determines the quality of the restoration. To determine the optimal coded-aperture scheme for omnidirectional images, the known power-spectrum statistical model [7] is used for further screening.
According to the basic theory of the power-spectrum statistical model, the blurrier an image is, the more scattered the statistical points of its power spectrum are, and the larger the corresponding residual sum of squares. Under the power-spectrum model, the residual sum of squares therefore provides a good measure of the blur scale of an omnidirectional image. For an omnidirectional image of unknown blur scale, only deconvolution with the correct blur scale produces the result image with the smallest residual sum of squares under the power-spectrum distribution model. Deconvolution is therefore performed for every possible blur scale of the blurred image, and the blur scale of the result image with the smallest residual sum of squares is taken as the correct blur scale. The power-spectrum statistical model is established as follows.
If noise is ignored, the defocus model in the frequency domain becomes
|Y| = |X F| = |X| |F|   (3)
where Y is the blurred image, X is the fully focused sharp image, and F is the convolution kernel shape.
First transform the spectrum F(u, v) into F(ω, φ) in polar coordinates and integrate over the φ coordinate; the power-spectrum model is
S_ω(|F|) = (1/360) Σ_{φ=1}^{360} |F(ω, φ)| ≈ A / ω^γ,  ω = 1, 2, ..., M   (4)
where A and γ are constants and M = 127; because the high frequencies are strongly affected by noise, the high-frequency part is not processed. Taking the logarithm of formula (4) gives the linear equation
log(S_ω(|F|)) = log(A) − γ log(ω) = log(A) − γ x   (5)
where x = log(ω).
A least-squares fit based on formula (5) yields the residual sum of squares.
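The screening procedure described above can be sketched as follows: deconvolve the coded image with each candidate scale (a Wiener filter is used here purely as a stand-in for the deconvolution step), radially average the magnitude spectrum as in formula (4), fit the log-log line of formula (5) by least squares, and keep the scale whose result has the smallest residual sum of squares. The Wiener step and all names are assumptions for illustration.

```python
import numpy as np

def radial_power_spectrum(img, M=127):
    """Formulas (3)-(4): average |FFT| over the angle phi for radii omega = 1..M."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = F.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    return np.array([F[r == omega].mean() for omega in range(1, M + 1)])

def residual_sum_of_squares(img, M=127):
    """Formula (5): least-squares fit of log S_omega = log A - gamma * log omega."""
    S = radial_power_spectrum(img, M)
    x = np.log(np.arange(1, M + 1))
    y = np.log(S + 1e-12)
    A = np.vstack([np.ones_like(x), -x]).T        # unknowns: [log A, gamma]
    coeffs, rss, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return rss[0] if rss.size else float(np.sum((A @ coeffs - y) ** 2))

def wiener_deconv(Y, k, nsr=1e-2):
    """Stand-in frequency-domain deconvolution used only for scale screening."""
    K = np.fft.fft2(k, s=Y.shape)
    X = np.fft.fft2(Y) * np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

def identify_blur_scale(Y, F, candidate_scales, kernel_at_scale):
    """Deconvolve with every candidate scale; the smallest RSS marks the correct one."""
    rss = {S: residual_sum_of_squares(wiener_deconv(Y, kernel_at_scale(F, S)))
           for S in candidate_scales}
    return min(rss, key=rss.get)
```

The same residual-sum-of-squares curves, plotted per candidate aperture design as in Figure 8, are what the next paragraph uses to pick the optimal coded aperture.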
To select the optimal coded aperture, the coded aperture is determined as follows: plot the residual-sum-of-squares curves for several candidate coded-aperture designs, as shown in Figure 8, select the curve with the sharpest shape, and take the coded-aperture design corresponding to that curve as the optimal scheme, as shown in Figure 9.
Step 3: first perform ring partitioning, dividing the defocused blurred image into multiple annular sections; then restore the defocused image of each annular section with the omnidirectional total-variation minimization restoration method; finally stitch the restored sharp images together to obtain the fully focused omnidirectional image.
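A short sketch of the ring partitioning and per-ring restoration follows; the ring boundaries and the per-ring restore callback are placeholders, since the patent leaves the number of rings and their radii to the particular system.

```python
import numpy as np

def annular_masks(shape, center, radii):
    """Split the omnidirectional image into rings bounded by consecutive radii."""
    h, w = shape
    cy, cx = center
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx)
    return [(r >= r0) & (r < r1) for r0, r1 in zip(radii[:-1], radii[1:])]

def restore_by_rings(blurred, center, radii, restore_ring):
    """restore_ring(image, mask) -> deblurred image for that ring;
    stitching the rings back together is a masked composite."""
    out = np.zeros_like(blurred, dtype=np.float32)
    for mask in annular_masks(blurred.shape[:2], center, radii):
        out[mask] = restore_ring(blurred, mask)[mask]
    return out
```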
To obtain a more effective image prior and make full use of the imaging characteristics of catadioptric omnidirectional images, the proposed omnidirectional total-variation minimization restoration method achieves better defocus deblurring. Concretely, each pixel of the omnidirectional image is back-projected into the cylindrical panoramic image, its neighbouring pixels in the horizontal and vertical directions are determined according to the gradient calculation rule in the cylindrical panorama, and these are then forward-projected back into the omnidirectional image to find the corresponding pixel positions and compute the gradient value at that pixel.
The omnidirectional total-variation minimization restoration method is specifically as follows. As shown in Figure 7, the point I(i, j) in the omnidirectional image back-projects to the point P(u, v) in the cylindrical panoramic image. First find the neighbouring points P(u+1, v) and P(u, v+1) in the horizontal and vertical directions of the cylindrical panorama, then find the corresponding points I(i+s1, j+t1) and I(i+s2, j+t2) in the omnidirectional image by forward projection. The gradient magnitude at point I(i, j) in the omnidirectional image is then
|∇I(i, j)| = sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (6)
The omnidirectional total variation J of the omnidirectional image I is defined as
J(I) = Σ_{(i,j)} |∇I(i, j)| = Σ_{(i,j)} sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (7)
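A direct sketch of formulas (6) and (7) follows. It assumes that the back-projection from the omnidirectional image to the cylinder and the forward projection back are available as callables derived from the unwrapping geometry (their names are illustrative); each omni pixel's two cylindrical neighbours are mapped back to give the offsets (s1, t1) and (s2, t2).

```python
import numpy as np

def omni_total_variation(I, omni_to_cyl, cyl_to_omni):
    """Formulas (6)-(7): omnidirectional total variation of the omni image I,
    given back-projection omni_to_cyl(i, j) -> (u, v) and forward projection
    cyl_to_omni(u, v) -> (i, j), both returning integer pixel coordinates."""
    h, w = I.shape
    J = 0.0
    for i in range(h):
        for j in range(w):
            u, v = omni_to_cyl(i, j)             # P(u, v)
            i1, j1 = cyl_to_omni(u + 1, v)       # horizontal neighbour -> I(i+s1, j+t1)
            i2, j2 = cyl_to_omni(u, v + 1)       # vertical neighbour   -> I(i+s2, j+t2)
            if 0 <= i1 < h and 0 <= j1 < w and 0 <= i2 < h and 0 <= j2 < w:
                gx = I[i1, j1] - I[i, j]
                gy = I[i2, j2] - I[i, j]
                J += np.sqrt(gx * gx + gy * gy)  # formula (6): |grad I(i, j)|
    return J                                     # formula (7): sum over all (i, j)
```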
Using the Lagrangian formulation, the omnidirectional total-variation minimization restoration problem is converted into the following minimization problem:
f* = argmin[ ||h*f − g||^2 + λ J(f) ]   (8)
In formula (8), h is the blur kernel, g is the blurred image, and λ is a constant that controls the convergence of the algorithm. Total-variation minimization suppresses noise without forcing the solution to be smooth; it preserves image contour details and, while improving image sharpness, suppresses the ringing artifacts introduced by restoration. The term λJ(f) is the constraint, i.e. the image prior. Minimizing the reconstruction term ||h*f − g||^2 yields the restored sharp image f*.
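As one possible way to carry out the minimization in formula (8), the sketch below runs plain gradient descent on the data term plus a smoothed total-variation term. The step size, iteration count and epsilon smoothing are assumptions; ordinary forward differences stand in for the projection-based neighbours of formula (6), which would be substituted via the (s1, t1)/(s2, t2) offset maps in the omnidirectional version.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_deconvolve(g, h, lam=0.01, step=0.2, iters=200, eps=1e-3):
    """Minimize ||h*f - g||^2 + lam * J(f) by gradient descent (formula (8)),
    with J approximated by a smoothed total variation."""
    f = g.astype(np.float64).copy()
    h_flip = h[::-1, ::-1]                       # adjoint of convolution with h
    for _ in range(iters):
        resid = fftconvolve(f, h, mode="same") - g
        grad_data = 2.0 * fftconvolve(resid, h_flip, mode="same")
        gx = np.roll(f, -1, axis=1) - f          # forward differences
        gy = np.roll(f, -1, axis=0) - f
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = (gx / mag - np.roll(gx / mag, 1, axis=1)
               + gy / mag - np.roll(gy / mag, 1, axis=0))
        grad_tv = -div                           # gradient of the smoothed TV term
        f -= step * (grad_data + lam * grad_tv)
    return f
```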
The sharp image obtained with the above method is shown in Figure 11; the removal of defocus blur from the omnidirectional image is improved, and so is the imaging quality of the catadioptric omnidirectional imaging system.
List of references
[1] O. Romain, T. Ea, C. Gastaud and P. Garda. An omnidirectional stereoscopic sensor: spherical color image acquisition. IEEE International Conference on Image Processing, Volume 2, pp. 209-212, 2002.
[2] Z. Hall and E. Cao. Omnidirectional Viewing using a Fish Eye Lens. SPIE Optics, Illumination and Image Sensing for Machine Vision, Volume 0728, pp. 250-256, 1987.
[3] M. Doi and T. Yamamoto. PanoVi: A Multi-Camera Panoramic Movie System by Using Client-Side Image Mosaicing. International Conference on Modeling and Simulation, pp. 503-508, 2003.
[4] S. Baker and S. K. Nayar. A Theory of Single-Viewpoint Catadioptric Image Formation. International Journal of Computer Vision, Volume 35, Issue 2, pp. 1-22, 1999.
[5] A. Levin, R. Fergus, F. Durand and W. T. Freeman. Image and Depth from a Conventional Camera with a Coded Aperture. ACM SIGGRAPH, Volume 26, Issue 3, pp. 1-9, 2007.
[6] C. Zhou, S. Lin and S. K. Nayar. Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring. International Journal of Computer Vision, Volume 93, Issue 1, pp. 53-72, 2011.
[7] A. van der Schaaf and J. H. van Hateren. Modeling the power spectra of natural images: Statistics and information. Vision Research, Volume 36, Issue 17, pp. 2759-2770, 1996.
[8] S. Kullback and R. Leibler. On Information and Sufficiency. The Annals of Mathematical Statistics, Volume 22, Issue 1, pp. 79-86, 1951.

Claims (5)

1. An omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design, characterized by comprising the following steps:
Step 1: split the light of the same scene into a main optical path used for image restoration and an auxiliary optical path used for coded-aperture blur-scale identification, and image them on an ordinary-aperture camera and a coded-aperture camera respectively, the light energy of the main path being greater than that of the auxiliary path;
Step 2: construct the catadioptric defocus-blur mathematical model, select a coded-aperture scheme and, combined with the structure of the catadioptric system, identify the defocus blur scale of each region of the catadioptric image;
Step 3: restore the defocused blurred image to obtain a fully focused omnidirectional image:
1) first divide the defocused blurred image into multiple annular sections;
2) restore the defocused image of each annular section with the proposed omnidirectional total-variation minimization restoration method;
3) stitch the restored sharp images of all annular sections together to obtain the fully focused omnidirectional image;
wherein the omnidirectional total-variation minimization restoration method is: back-project each pixel of the omnidirectional image into the cylindrical panoramic image, determine its neighbouring pixels in the horizontal and vertical directions according to the gradient calculation rule in the cylindrical panorama, then forward-project back into the omnidirectional image to find the corresponding pixel positions and compute the gradient value at that pixel.
2. The omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design according to claim 1, characterized in that the catadioptric defocus-blur mathematical model in step 2 is constructed as follows: establish an ROZ coordinate system in which the origin O lies directly above the optical-centre plane at distance C from it, the Z axis is perpendicular to the optical-centre plane, and the R axis is parallel to it; given a three-dimensional scene point w and a point m_r(α, β) on the mirror surface, where α and β are mirror-surface parameters, let e_i(α, β) be the unit direction vector of the incident ray w m_r, e_r(α, β) the unit direction vector of the ray reflected at m_r(α, β), and n_r(α, β) the unit normal vector at m_r(α, β); from the known mirror-surface parameters, obtain the unit normal n_r(α, β) and the position of the virtual image w_c(α, β) formed by the object point w at the mirror point m_r(α, β); then use the known Jacobian method to obtain the distance r_c from the virtual image w_c(α, β) to the point m_r(α, β); when the mirror point m_r(α, β) lies at the intersection m_0 of the principal incident ray wf with the mirror surface, let S_d be the object-point depth at the virtual image w_c(α, β); then:
S_d = (d_c + m_0 p_0) sin μ   (1)
where d_c is the distance between point w_c and point m_0; the catadioptric defocus model follows:
Y = X * F_S + η   (2)
where Y is the captured blurred image, X is the fully focused sharp image, F_S is the convolution matrix formed from the convolution kernel shape F and the convolution scale S, the convolution scale S corresponds one-to-one with the depth S_d, and η is image noise; from this catadioptric defocus model and the coded-aperture convolution kernel shape F, establish a set of blur criteria, one per convolution scale S, analyse the blur scale of the blurred image against these criteria, and use the matching convolution scale in the deconvolution that restores the image.
3. The omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design according to claim 1, characterized in that the coded aperture is selected as follows: plot the residual-sum-of-squares curves for several candidate coded-aperture designs, select the curve with the sharpest shape, and take the coded-aperture design corresponding to that residual-sum-of-squares curve as the optimal scheme.
4. The omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design according to claim 1, characterized in that deconvolution is performed for every possible blur scale of the blurred image, a power-spectrum statistical model is used to compute the residual sum of squares for each, and the blur scale used for the result image with the smallest residual sum of squares is the correct blur scale of the blurred image; the power-spectrum statistical model is established as follows:
if noise is ignored, the defocus model in the frequency domain becomes
|Y| = |X F| = |X| |F|   (3)
where Y is the blurred image, X is the fully focused sharp image, and F is the convolution kernel shape;
first transform the spectrum F(u, v) into F(ω, φ) in polar coordinates and integrate over the φ coordinate; the power-spectrum model is
S_ω(|F|) = (1/360) Σ_{φ=1}^{360} |F(ω, φ)| ≈ A / ω^γ,  ω = 1, 2, ..., M   (4)
where A and γ are constants and M = 127; because the high frequencies are strongly affected by noise, the high-frequency part is not processed; taking the logarithm of formula (4) gives the linear equation
log(S_ω(|F|)) = log(A) − γ log(ω) = log(A) − γ x   (5)
where x = log(ω);
a least-squares fit based on formula (5) yields the residual sum of squares.
5. The omnidirectional imaging method based on a primary-auxiliary synergy double-light-path design according to any one of claims 1-4, characterized in that the omnidirectional total-variation minimization restoration method in step 3 is specifically: let the point I(i, j) in the omnidirectional image back-project to the point P(u, v) in the cylindrical panoramic image; first find the neighbouring points P(u+1, v) and P(u, v+1) in the horizontal and vertical directions of the cylindrical panorama, then find the corresponding points I(i+s1, j+t1) and I(i+s2, j+t2) in the omnidirectional image by forward projection; the gradient magnitude at point I(i, j) in the omnidirectional image is then
|∇I(i, j)| = sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (6)
the omnidirectional total variation J of the omnidirectional image I is defined as
J(I) = Σ_{(i,j)} |∇I(i, j)| = Σ_{(i,j)} sqrt[ (I(i+s1, j+t1) − I(i, j))^2 + (I(i+s2, j+t2) − I(i, j))^2 ]   (7)
using the Lagrangian formulation, the omnidirectional total-variation minimization restoration problem is converted into the following minimization problem:
f* = argmin[ ||h*f − g||^2 + λ J(f) ]   (8)
in formula (8), h is the blur kernel, g is the blurred image, and λ is a constant that controls the convergence of the algorithm; λJ(f) is the constraint, i.e. the image prior; minimizing the reconstruction term ||h*f − g||^2 yields the restored sharp image f*.