WO2009108050A1 - Image reconstructor - Google Patents
Image reconstructor
- Publication number
- WO2009108050A1 (PCT/NL2009/050084)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- defocus
- optical
- document
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0075—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10148—Varying focus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.
- phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a defocused image; refer to, for example, US 6771422 and US 2004/0052426.
- US2004/0052426 describes non-iterative techniques for phase retrieval for estimating errors of an optical system, and includes capturing a sharp image of an object at a focal point and combining this image with a number of, intentionally, blurred unfocused images of the same object.
- This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate optical errors of the optics employed in said imaging.
- This technique requires at least one focused image at a first focal point in combination with multiple unfocused images. These images are then used to calculate wave-front errors.
- the present invention differs from US 2004/0052426 in that the present invention does not require a focused image, i.e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with an unknown degree of defocus relative to the object.
- US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the defocus status of each sensor, and means to produce an enhanced final image.
- the defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic and coherent light wave.
- the present invention differs from US6771422 in that it does not intend to solve the transport equation.
- the present invention is based on the known a priori information on the incoherent optical transfer function (OTF) of the optical system to predict the evolution of intensity distribution for different image planes and, thus, the degree of defocus by direct calculations with non-iterative algorithms.
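A minimal numerical sketch, assuming a circular pupil and a quadratic defocus phase (both illustrative choices, not taken from the patent text), of how such an a priori known defocused OTF can be computed: the incoherent OTF is the Fourier transform of the point-spread function, which is itself the squared modulus of the Fourier transform of the defocused pupil.

```python
# Sketch of an a priori OTF model H(wx, wy, phi); grid size, pupil radius and
# the quadratic defocus model are illustrative assumptions.
import numpy as np

def defocused_otf(n=256, pupil_radius=0.25, phi=0.0):
    """Incoherent OTF of a circular pupil with defocus coefficient phi
    (radians of quadratic phase at the pupil edge)."""
    f = np.fft.fftfreq(n)                     # pupil-plane coordinates
    xx, yy = np.meshgrid(f, f)
    r2 = (xx**2 + yy**2) / pupil_radius**2    # normalized squared radius
    pupil = np.where(r2 <= 1.0, np.exp(1j * phi * r2), 0.0)
    psf = np.abs(np.fft.ifft2(pupil))**2      # incoherent point-spread function
    otf = np.fft.fft2(psf)
    return otf / otf[0, 0]                    # normalize so that H(0, 0) = 1
```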
- WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed such that it allows determination, by an array of microlenses, of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called "plenoptic" camera.
- the sharp images of the object points at different distances from the camera can be recalculated (for example, by ray-tracing).
- the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
- the present invention described in this document differs from WO2006/039486 and related documents in that the present invention does not explicitly use such information on angle of incidence obtained with an array of microlenses, for example a Shack-Hartmann wave-front sensor; instead, the respective light-ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them. Additionally, the intermediate phase-diverse images described in this document can also be used for determining the angle and intensity of individual rays and to compose an EDF image by ray-tracing.
- the present invention relates to imaging techniques. From the single invention a number of applications can be derived:
- the invention provides a method for estimation of defocus in the optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate defocused images, and,
- EDF images can be reconstructed by either combining images from various focal planes (for example "image stacking"), or by combining in-focus sub-images (for example "image stitching"), or, alternatively, by correction of wave-fronts, or, alternatively, by ray-tracing to project an image in a plane of choice.
- Fifthly, the invention provides methods to calculate the speed and distance of an object by analysing subsequent images of the object, including speed in all directions (X, Y and Z), based on multiple intermediate images and, consequently, on the acquired information on focal planes, and, sixthly, it can be used to estimate the shape of a wave-front by reconstruction of the tilt of individual rays, calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them, and,
- ninthly, it can be adapted to many non-optical applications, for example tomography, for digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. an intermediate image degradation can be attributed to a convolution with the system response function), of which the response function is known a priori, the relative degree of blurring of any intermediate image compared to other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.
- a focused final image of an object is derived, by digital reconstruction, from at least two defocused intermediate images having an unknown degree of defocus compared to an ideal focal plane (or, alternatively, an unknown distance from the object to the principal planes of the imaging system), but having a precisely known degree of defocus of each intermediate image compared to any other intermediate image.
- the method starts with at least two defocused, i.e. phase-diverse, intermediate images, from which a final in-focus image can be reconstructed by a non-iterative algorithm and an optical system having an a priori known optical transfer function.
- each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the object, but the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known.
- a generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed. Said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent of the degree of defocus of at least one intermediate image compared to the in-focus image plane.
- This adjustment can take the form of adjustment of coefficients or adjustment of functional dependencies, or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as linear, non-linear or functional, depending on the intended application. The final in-focus image is reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.
- An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.
- Such a method includes an equation based on the generating function/functional satisfying:
- $\Delta\varphi_n$ (known a priori from the system configuration) is the diversity defocus between the n-th intermediate image plane and a chosen reference image plane.
- $I_o(\omega_x,\omega_y)$ is the spatial spectrum of the object, i.e. of the final image
- x and y are the transverse coordinates in the object plane.
- $I_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\,\varphi_0 + \delta\varphi - \Delta\varphi_n)\, I_o(\omega_x,\omega_y)$
- $H(\omega_x,\omega_y,\varphi)$ denotes the defocused incoherent optical transfer function (OTF) of the optical system
- the unknown defocus $\varphi$ is substituted by the sum of the defocus estimate $\varphi_0$ and the deviation $\delta\varphi = \varphi - \varphi_0$, with $|\delta\varphi| \ll 1$: $\varphi = \varphi_0 + \delta\varphi$.
- the generating function/functional $\Phi$ is chosen to have zero first- and higher-order derivatives, up to the $K$-th order, with respect to the unknown $\delta\varphi$:
- An important example of the generating function is a linear combination of the spatial spectra $I_n(\omega_x,\omega_y)$ of the intermediate phase-diverse images
- the analytical expression for the system OTF $H(\omega_x,\omega_y,\varphi)$ can be found in many ways, including fitting of the calculated OTF; general formulas are given, for example, by Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996).
- $\mathrm{MSE} = \iint \left| \hat{I}_0(\omega_x,\omega_y) - I_0(\omega_x,\omega_y) \right|^2 d\omega_x\, d\omega_y$
- Eq. 10 describes the non-iterative algorithm for the object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
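A hedged sketch in the spirit of Eq. 10, whose exact form is not reproduced in this extract: the spatial spectra of the phase-diverse images are combined linearly and divided by the matching combination of a priori known OTFs. The weights and the regularization constant are illustrative assumptions, not the patent's own coefficients.

```python
# Generic non-iterative phase-diversity restoration; a_n and eps are assumed.
import numpy as np

def reconstruct(images, otfs, weights, eps=1e-3):
    """images: list of 2-D intermediate images; otfs: list of matching
    defocused OTFs H(wx, wy, phi0 + dphi_n); weights: coefficients a_n."""
    num = sum(a * np.fft.fft2(img) for a, img in zip(weights, images))
    den = sum(a * H for a, H in zip(weights, otfs))
    spectrum = num * np.conj(den) / (np.abs(den)**2 + eps)  # regularized inversion
    return np.real(np.fft.ifft2(spectrum))
```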
- the defocus estimate $\varphi_0$ can be found in many ways, for example from a pair of phase-diverse images. If $I_1(\omega_x,\omega_y)$ is the spatial spectrum of the first image, characterized by the unknown defocus $\varphi$, and $I_2(\omega_x,\omega_y)$ is the spatial spectrum of the second image with defocus $\varphi + \Delta\varphi$, $\Delta\varphi$ being the difference in defocus predetermined by the system configuration, then the estimate of defocus is given by an appropriate expression:
- the estimate $\varphi_0$ of the unknown defocus $\varphi$ can be found from three consecutive phase-diverse images: $I_1(\omega_x,\omega_y)$ with defocus $\varphi - \Delta\varphi_1$, $I_2(\omega_x,\omega_y)$ with defocus $\varphi$, and $I_3(\omega_x,\omega_y)$ with defocus $\varphi + \Delta\varphi_2$ ($\Delta\varphi_1$ and $\Delta\varphi_2$ are specified by the system arrangement):
- this coefficient is the ratio of the image spectra
- Although an estimate of defocus ($\varphi_0$ in Eq. 1) is necessary to start these computations, the estimate is automatically provided by the formulas specifying the reconstruction algorithm above.
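A hedged stand-in for the closed-form estimates of Eqs. 14 and 16, whose exact expressions are not reproduced in this extract: it uses the same consistency relation $I_1 H(\varphi+\Delta\varphi) = I_2 H(\varphi)$ on which the analytical formulas are built, but evaluates it over a candidate grid; `otf` may be any OTF model, for example the `defocused_otf` sketch above.

```python
# Direct-evaluation defocus estimate from a pair of phase-diverse images.
import numpy as np

def estimate_defocus(img1, img2, dphi, otf, candidates):
    """Return the candidate defocus phi_0 most consistent with the image pair."""
    I1, I2 = np.fft.fft2(img1), np.fft.fft2(img2)
    residuals = []
    for phi in candidates:
        H1, H2 = otf(phi=phi), otf(phi=phi + dphi)
        residuals.append(np.sum(np.abs(I1 * H2 - I2 * H1)**2))  # consistency residual
    return candidates[int(np.argmin(residuals))]
```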
- Such an estimate can also be provided by other analytical methods, for example, by determining the first zero-crossing in the spatial spectrum of the defocused image as described by I. Raveh et al. (I. Raveh, et al., Optical Engineering 38(10), pp. 1620-1626, 1999).
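A minimal sketch of this zero-crossing estimate, assuming a square image: the magnitude spectrum of one defocused image is radially averaged and the radius of the first near-zero dip is returned; translating that radius into a defocus value depends on the optical model and is left to the caller.

```python
import numpy as np

def first_spectral_zero(image, nbins=128, threshold=0.05):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = (s // 2 for s in spec.shape)
    y, x = np.indices(spec.shape)
    r = np.hypot(x - cx, y - cy)
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), edges)
    flat = spec.ravel()
    profile = np.array([flat[idx == i].mean() if np.any(idx == i) else 0.0
                        for i in range(1, nbins + 1)])   # radial average
    profile /= profile[0]                                # normalize by the DC bin
    for i in range(1, nbins - 1):                        # first near-zero local minimum
        if profile[i] < threshold and profile[i] <= profile[i - 1] \
                and profile[i] <= profile[i + 1]:
            return edges[i + 1]
    return None
```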
- calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus to determine the degree of defocus with at least two photo-sensors having only one photo-sensitive spot, for example photo-diodes or photo-resistors.
- a construction for such an apparatus likely includes not only a photo-sensor but also an amplitude mask, focusing optics and processing means adapted to calculate the degree of defocus of at least one intermediate image.
- the advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplification of Eq. 16 to a derivative of Parseval's theorem, for example:
- U(x,y) defines the amplitude mask in one or multiple image planes.
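A small numerical check of the Parseval identity underlying this simplification: the energy integrated by a photodiode behind an amplitude mask U(x, y) in the image plane equals the energy of the corresponding spatial spectrum, so spectrum-based defocus criteria can be evaluated without computing a Fourier transform at read-out time. The test field and mask below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))                   # stand-in image intensity
yy, xx = np.indices((64, 64))
mask = (np.hypot(xx - 32, yy - 32) < 16).astype(float)  # amplitude mask U(x, y)

reading = np.sum((field * mask)**2)                     # photodiode reading
spectral = np.sum(np.abs(np.fft.fft2(field * mask))**2) / field.size
assert np.isclose(reading, spectral)                    # Parseval's theorem
```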
- photo-diodes and photo-resistors are significantly less expensive compared to photo-sensor arrays and are more easily assembled.
- a Fourier transformation can be achieved by the digital processing methods described above, but also by optical means, for example an additional optical element between the beam splitter and the imaging photo-sensor. Using such an optical Fourier transformation will significantly reduce digital processing time, which might be advantageous for specific applications.
- Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter, camera component or distance meter.
- Such an apparatus differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods.
- once the degree of defocus is known, the distance of the object to the camera can be estimated via a simple optical calculation, so the methods can be applied in a distance metering device.
- the speed and direction of an object in the X, Y and Z directions (i.e. in 3D space) can be estimated with additional computation means and information on at least two subsequent final images and the time between the captures of the intermediate images for these final images.
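A minimal sketch, assuming the reconstruction already yields one 3-D object position per exposure (lateral position from the image, axial position from the estimated defocus): speed and direction follow from two timed positions.

```python
import numpy as np

def velocity(p1, p2, t1, t2):
    """p1, p2: (x, y, z) positions in metres; t1, t2: capture times in seconds."""
    v = (np.asarray(p2, float) - np.asarray(p1, float)) / (t2 - t1)
    speed = float(np.linalg.norm(v))
    direction = v / speed if speed > 0 else v   # unit vector of travel
    return speed, direction
```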
- Such an inexpensive component for solid-state image reconstruction will broaden consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing functions) and technical applications.
- the estimate can be obtained by an additional device, for example, an optical or ultrasound distance measuring system.
- the estimate is provided by the algorithm itself without the aid of any additional measuring device.
- the invention also provides an apparatus for providing at least two phase-diverse intermediate images of said object, wherein each of the intermediate images has a different degree of defocus compared to an ideal focal plane (i.e. an image plane of the same system with no defocus error), but a precisely known degree of defocus compared to any other intermediate image.
- the apparatus includes processing means for reconstructing a focused image of the object by an algorithm expressed by Eq. 6.
- each phase-diverse image is characterized by an unknown absolute magnitude of an aberration but a known a priori difference in the aberration magnitude relative to any other phase-diverse image
- the processing functions mentioned above can be applied to any set of images or signals which are blurred but of which the transfer (blurring) function is known.
- the processing function can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to said out-of-focus blur.
- an additional generating function is provided here to yield the degree of defocus of at least one of said intermediate images compared to the in-focus image plane, and the degree of defocus can be calculated by additional processing in an apparatus.
- An improved estimate for the unknown defocus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm according to Eq. 18.
- $I_n(\omega_x,\omega_y)$ being the unshifted spectrum.
- the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and the defocused OTF $H(\omega_x,\omega_y,\varphi)$ is real-valued.
- the formulas above give an example of a method for correcting the lateral image shift; there are also other methods to obtain shift-corrected spectra, for example correlation techniques or analysis of moments of the intensity distribution.
- the degree of defocus of the intermediate image compared to the in-focus image plane can be included in the non-iterative algorithm, and the processing means of an apparatus for such image reconstruction adapted accordingly.
- At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing higher quality of restoration and weaker sensitivity to the initial defocus estimate $\varphi_0$, since the generating function $\Phi$ gives the $(M-1)$-th order approximation to the quantity $\tilde I_0(\omega_x,\omega_y,\Delta\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])$ defined by Eq. 1 with respect to the unknown value $\delta\varphi$.
- the resolution and overall quality of the final image will increase with an increasing number $M$ of intermediate images, at the expense of implementing a larger number of photo-sensors or an increasingly complex optical/mechanical arrangement, and of increased computation time. Reconstruction via three intermediate images is used as an example in this document.
- While the degrees of defocus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no defocus error) may remain unknown, the difference in degree of defocus $\Delta\varphi_n$ of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This imposes no problems in practice, because the relative difference in defocus is specified in the design of the camera and its optics. Note that these relative differences vary between different camera designs, the type of photo-sensor(s) used and the intended applications of the image reconstructor.
- the differences in defocus $\Delta\varphi_n$ can be found and accounted for in further computations by performing calibration measurements with well-defined objects.
- the degree of defocus of the image can be estimated by non-iterative calculations using the fixed and straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations have low computational cost and provide stable and precise results. Furthermore, such non-iterative calculations can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention.
- the reconstruction of a final sharply focused image is independent from the degree of defocus of any of the intermediate images relative to the object.
- the precision of the measurement of the absolute defocus (and, therefore, the precision of the range which is calculated from defocus values) is fundamentally limited by the combination of the entrance aperture (D) of the primary optics and the distance (z) from the primary optics to an object of interest.
- a high precision for defocus and range estimates requires, by definition, a large aperture of the optical system. This can be achieved by fitting, for example, a very large lens to the apparatus. However, such a lens may require a diameter of one meter, a size likely not practical for the majority of applications, which require small camera units.
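A hedged order-of-magnitude relation, added here as an assumption rather than taken from the patent text, makes this limit concrete: with numerical aperture $\mathrm{NA} \approx D/(2z)$, the diffraction-limited axial resolution, and hence the attainable range precision derived from defocus, scales as

```latex
\delta z \;\sim\; \frac{\lambda}{\mathrm{NA}^{2}} \;\approx\; \frac{4\,\lambda\,z^{2}}{D^{2}}
```

With $\lambda \approx 0.5\,\mu\mathrm{m}$, metre-scale range precision at kilometre distance indeed pushes $D$ toward the order of one meter, consistent with the statement above.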
- An effectively large aperture can also be obtained by optically combining light signals from multiple, at least two, optical elements, for example relatively small reflective or refractive elements positioned away from the optical axis. Such optical elements are typically positioned in the direction perpendicular to the optical axis, but not necessarily so.
- the theoretical depth of focus, i.e. the axial resolution, corresponds to the resolution of the whole refractive surface, whose dimension is characterized by the distance between the optical elements.
- the optical elements can be regarded as small individual sectors at the periphery of a large refractive surface.
- the total light intensity received by an image sensor depends on the combined apertures of the multiple optical elements.
- Such a system with multiple apertures can be made flat and, in the case of only two light sources, also linear.
- an apparatus for, for example, range-finding applications can be constructed which combines at least two light signals from at least two optical elements positioned opposite each other at a distance perpendicular to the optical axis.
- the final image can be restored digitally by subsequent processing of at least one spatially modulated intermediate image according to existing and well-known decoding algorithms or, alternatively, by algorithms adapted from the procedures described in this document; the adaptations to the formulas above will be set forth below.
- Said modulations preferably include defocus, but not necessarily so.
- Such wave-front encoding can be achieved by, for example, including, at least one, phase mask, or, at least one, amplitude mask, or a combination of any number of phase and amplitude masks having a precisely known modulation function.
- the system embodiment implies that at least one phase and/or amplitude mask is located in the exit pupils of the imaging system.
- In Eq. 26 the function $P_n^{(0)}(\xi,\eta) \in \mathbb{R}$ is the amplitude transmission function corresponding to the $n$-th amplitude mask, and $\vartheta_n(\xi,\eta)$ is the phase function representing the $n$-th phase mask in the exit pupil.
- for pure diversity defocus the phase function is quadratic: $\vartheta_n(\xi,\eta) = \Delta\varphi_n\,(\xi^2 + \eta^2)$
- a new generating function/functional $\Phi'$ can be constructed by properly combining $\vartheta_n(\xi,\eta)$ and/or $P_n^{(0)}(\xi,\eta)$ to retain only linear terms in $\delta\varphi$ on the right-hand side of Eq. 19.
- The unknown defocus $\varphi$ can subsequently be found from Eq. 21 by substituting $\Phi'$.
- an imaging apparatus can be designed which includes, in addition to the basic image forming optics described elsewhere in this document, at least one, optical mask to spatially modulate the incoming light signal.
- Either the phase or the intensity of said signal of, at least one, intermediate image can be modulated.
- Both phase and intensity of, at least one, intermediate image can be spatially modulated by, at least one, phase mask, or separate phase masks are included for separate and independent modulation functions.
- the resulting modulation yields at least one spatially modulated light signal which can subsequently be reconstructed, in accordance with the method described above, by digital means, to diminish the sensitivity of the imaging apparatus to at least one selected optical aberration, which can be the defocus aberration.
- Image reconstruction: an example with three intermediate images. At least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point for such a reconstruction.
- To illustrate the reconstruction algorithm set forth in the present document, we now consider an example with three intermediate images. Assume that the spatial spectra of three consecutive phase-diverse images are
- the optimum difference in defocus $\Delta\varphi$ between the intermediate images is related to the specific dynamic range of the image photo-sensors, i.e. their pixel depth, as well as to the optical features of the object of interest. Depending on the defocus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in the intensity of the images.
- Various embodiments of a device can be designed, which include, but which are not restricted to, various embodiments described below.
- a preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each of the depicted objects having a different degree of focus in each of the intermediate images, and before the execution of said method one of those objects is selected.
- the image reconstructor, with its means for providing intermediate images, must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.
- a preferred embodiment of the providing means includes one image photo-sensor which can move mechanically; for example, the device can be designed with optics that form an image on one sensor, where the image photo-sensor or, alternatively, the whole camera assembly moves a predetermined and precise distance along the optical axis between the subsequent intermediate exposures.
- the simplicity of such a device lies in the need for only one photo-sensor; the complexity lies in the mechanics needed for precise movement. Such precise movement is most easily achieved for only two images, because only two alternating stopping positions of the device are needed.
- another embodiment with mechanically moving parts is a system with optics and one sensor, but with a spinning disc carrying stepwise sectors of different optical thickness. An image is taken each time a sector of different and known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately and, thus, a set of intermediate images can be provided for subsequent reconstruction by the image reconstruction means.
- a solid state device (with no mechanical parts/movement) can be employed.
- the optics can be designed such that at least two independent intermediate images are provided to one fixed image photosensor. These images can be, for example, two large distinct sub-areas each covering approximately half of the photo-sensor and the required diversity defocus can be provided by, for example, a planar mask.
- the device can be designed with optics forming an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam, each with a light path which is precisely known and which represents a known degree of defocus compared to at least one other intermediate image.
- Such a design can, for example, use mirror optics analogous to the optics of a Fabry-Perot interferometer.
- a scanning device can provide the intermediate images.
- a line scanning arrangement is applied.
- Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor.
- the image can be sensed by a linear sensor scanning in the image plane.
- Such sensors, even at high pixel depth, are inexpensive, and mechanical means to move such sensors are well known from a myriad of applications.
- disadvantages of this embodiment are complex mechanics and increased time to capture intermediate images because scanning takes time.
- a scanner configuration employing several line photo-sensors positioned in the intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.
- the intermediate images can be produced by different light frequency ranges.
- Pixels of the sensor can be fitted alternately with a red, blue or green filter in a pattern, for example the well-known Bayer pattern.
- Such image photo-sensors are commonplace in technical and consumer cameras.
- the colour split provides a delay and subsequent difference in defocus of the pixel groups.
- a disadvantage of this approach is that only grey-scale images will result as a final image.
- the colour split is applied to the final image, and the intermediate images for the different colours are reconstructed separately prior to stacking of such images.
- Arrangements for coloured images are well known, for example Bayer pattern filters for the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, which disc is synchronized with the image capture process.
- red (R), blue (B) and green (G) spectral bands (“RGB”), or any other combination of spectral bands can also be separated by prismatic methods, as is common in professional imaging systems.
- a spatial light modulator, for example a liquid crystal device or an adaptive mirror, can also be used to provide the diversity defocus.
- the adaptive mirror can be of a most simple design, because only defocus alteration is required, which greatly reduces the number of actuators in such a mirror.
- Such a modulator can be of a planar design, i.e. a "piston" phase filter, just to lengthen the path of the light, or it can have any other phase-modulating shape, for example a cubic filter. Using cubic filters allows for combinations of the methods described in this document with wave-front coding/decoding technologies, to which references can be found in this document.
- an image reconstructor adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images can be constructed for EDF and wave-front applications.
- Such a reconstructor has at least one image photo-sensor (for an image/measuring light intensity) or multiple image photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independently of the other sub-sensors, by projecting intermediate images on the sensor with, for example, a segmented input lens or a segmented input lens array.
- small sub-areas of at least two intermediate images can be distributed over the photo-sensors in a pattern.
- the sensor can be fitted with a device or optical layer including optical steps, which delays the incoming wave-front differently for sub-areas in the pattern of, for example, lines or dots.
- the sub-areas can have the size of one photo-sensor pixel.
- the sub-areas must, of course, be digitally read out separately to produce at least two intermediate images with different but known degrees of defocus (phase shift).
- the final image quality is dependent on the number of pixels representing an intermediate image. From at least two adjacent final sub-images a composite final image can be made, for example, for EDF applications.
- An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimation of the local defocus or, alternatively, estimate tilts per sub-image based on comparison of the spatial spectra of neighbouring images.
- the apparatus should therefore include processing means to reconstruct a wave-front by combining defocus curvatures of, at least two, intermediate sub-images.
- the method which determines defocus for a total final image, or a total object, can be extended to a system which estimates the degree of defocus in multiple sub-intermediate images (henceforth: sub-images) based on at least two intermediate full images.
- the local curvature can be approximated by the defocus curvature (degree of defocus), and over small sub-images any aberration of order 2 or higher can be approximated by a local curvature, i.e. a degree of defocus. Consequently, the wave-front can be reconstructed from the local curvatures determined for the small sub-images, and the image reconstruction device effectively becomes a wave-front sensor.
- This approach is, albeit using local curvatures and not tilts, in essence an analogue to the workings of a Shack-Hartmann sensor which uses local tilt within each local sub-aperture to estimate the shape of a wave-front.
- Here, local curvatures are used for the same purpose.
- the well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts.
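A hedged sketch of such a curvature-based adaptation: if every small sub-image yields a local defocus, i.e. a local curvature (Laplacian) c(x, y) of the wave-front W, then W follows non-iteratively from a single FFT-based Poisson solve; periodic boundaries and unit sampling are illustrative assumptions.

```python
import numpy as np

def wavefront_from_curvature(curv):
    """curv: 2-D map of local curvatures, one value per sub-image."""
    ny, nx = curv.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    k2 = kx[None, :]**2 + ky[:, None]**2
    k2[0, 0] = 1.0                         # avoid division by zero at DC
    spec = np.fft.fft2(curv) / (-k2)       # invert the Laplacian in Fourier space
    spec[0, 0] = 0.0                       # piston term is undetermined; drop it
    return np.real(np.fft.ifft2(spec))
```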
- the sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and application. For example, scanning the intermediate image with a linear photo-sensor, i.e. line scanning, can produce line-shaped sub-images.
- Applications for wave-front sensors are numerous and will only increase as wave-front sensors become less expensive.
- the intermediate images can also be used to estimate the angulation of light rays compared to the optical axis (from the lateral displacements of sub-images) by comparison of the spatial spectra of neighbouring intermediate images, and then to reconstruct the shape of the wave-front by applying methods developed for the analysis of so-called Hartmanngrams.
- the apparatus should therefore include means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.
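A minimal sketch of such a lateral-shift measurement, assuming two phase-diverse sub-images separated by a known axial distance dz: the relative shift is found by phase correlation of their spectra (one of several possible shift estimators), and in the geometric-optics limit the local ray tilt is the shift divided by dz.

```python
import numpy as np

def local_tilt(sub1, sub2, dz, pixel_pitch=1.0):
    F1, F2 = np.fft.fft2(sub1), np.fft.fft2(sub2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # phase correlation
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shift = np.array([(p + s // 2) % s - s // 2           # wrap to signed shift
                      for p, s in zip(peak, corr.shape)])
    return shift * pixel_pitch / dz       # (tilt_y, tilt_x) of the local ray
```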
- a new image of the object can be calculated as it is projected on a plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing.
- the spatial spectrum of the first image is $I_1(\omega_x,\omega_y)$, and the spectrum of the second image, taken in the plane displaced by $\Delta z$ along the Z-axis, is $I_2(\omega_x,\omega_y)$.
- a lateral shift of the second image by $\Delta x$ and $\Delta y$ results in the following change in the spatial spectrum of the second image, $I_2(\omega_x,\omega_y)$ being the unshifted spectrum:
- the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and, thus, the defocused OTF $H(\omega_x,\omega_y,\varphi)$ is real-valued.
- $H(\omega_x,\omega_y,z)$ is the system OTF with defocus expressed in terms of the displacement $z$ with respect to the exit pupil plane.
- the intermediate images specified by $I_1(\omega_x,\omega_y)$ and $I_2(\omega_x,\omega_y)$ are supposed to be displaced longitudinally by a small distance.
- The integration in Eq. 45 is performed over the image/sub-image area.
- the procedure described above can be applied to each sub-area separately, resulting in a final image as it is projected on the image plane at any given distance from the exit pupil and having the number of "pixels" equal to the number of sub-areas.
- This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof as well as Ng Ren et al, 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods), but the essential difference is that the method described in the present document does not require an additional array of microlenses.
- the information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate defocused images. It should be noted that the estimated computational cost of the described method is significantly lower than that given in WO2006/039486; in other words, the described method can provide real-time capability.
- Images with EDF can be obtained by correction of a single wave-front in a single final image.
- the non-iterative computation methods described in this document will allow for rapid computations on, for example, dedicated electronic circuits. Extended computation time on powerful computers has been a drawback of various EDF imaging techniques to date.
- EDF images can also be obtained by dividing a total image into far fewer sub-images than the wave-front application described above, which likely requires thousands of sub-images.
- the degree of defocus is determined per sub-image (this can be a small number of sub-images, say only a dozen or so per total image, or a very large number with each sub-image represented by only a few pixels; the desired number of sub-images depends on the required accuracy, the specifications of the device and its application), and the sub-images are corrected accordingly, followed by reconstruction of a final image by combination of the corrected sub-images. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
- EDF images can also be obtained by stacking at least two final images, each reconstructed to correct for defocus at at least one focal plane of the same objects in cubic space. Such digital stacking procedures are well known.
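A hedged sketch of this stacking route to EDF: several final images, each reconstructed for a different focal plane, are merged per pixel by picking the locally sharpest plane; local variance as the sharpness measure and the window size are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stack_edf(images, window=9):
    sharpness = []
    for img in images:
        mean = uniform_filter(img, window)
        sharpness.append(uniform_filter(img**2, window) - mean**2)  # local variance
    choice = np.argmax(np.stack(sharpness), axis=0)   # sharpest plane per pixel
    stacked = np.stack(images)
    return np.take_along_axis(stacked, choice[None], axis=0)[0]
```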
- Non-iterative computation is simplest and saves computing time. In our prototypes we reach reconstruction times of ~50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the estimate of defocus in selected cases and improve image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with re-iteration of the calculations may be preferred by the user over three intermediate images combined with non-iterative calculations. The embodiments and methods of reconstruction depend on the intended application.
- Image scanning is a well known technology and can hereby be extended for camera applications.
- images with an EDF can be reconstructed by dividing the intermediate images into multiple sub-sectors. For each sub-sector the degree of defocus can be determined and, consequently, the optical sharpness of the sub-sector reconstructed. Thus, the final image will be composed of multiple optically focused sub-images and will have an EDF, even at full-aperture camera settings.
- Linear scanning can be employed to define such linear sub-areas.
- Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including defocus.
- This invention provides a single sharp image of the object from single exposures, as well as additional information on speed, distance and angle of travel from multiple exposures.
- Applications can be military tracking and targeting systems, but also medical, for example endoscopy with added information on distances.
- Methods described in this document are sensitive to wavelength. This phenomenon can be employed to split images at varying image depths when light sources of different wavelengths are employed. For example, focusing at different layer depths in multilayer CD/DVD discs can be achieved simultaneously with lasers of different wavelengths. A multilayer DVD pick-up optical system which reads different layers simultaneously can thus be designed.
- Other applications involve consumer and technical cameras insensitive to defocus error, iris scanning cameras insensitive to the distance of the eye to the optics, and a multiple of homeland security camera applications.
- automotive cameras can be designed which are not only insensitive to defocus but also, for example, calculate the distance and speed of chosen objects (as parking aids), as can wave-front sensors for numerous military and medical applications. The availability of inexpensive wave-front sensors will only increase the number of applications.
- the reconstruction method described above is highly dependent on the wavelength of the light forming the image. The methods can therefore be adapted to determine the wavelength of light when the defocus is known precisely. Consequently, the image reconstructor can, alternatively, be designed as a spectrometer.
- Figure 1 shows a sequence of defocused intermediate images from the image side of the optical system from which intermediate images the final image can be reconstructed.
- An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description of the invention in this document) with three intermediate images, 2, 3, 4, along the optical axis, 5; these images have precisely known distances, 6, 7, 8, compared to the exit pupil, 1, and, alternatively, precisely known distances compared to each other, 9, 10.
- a precisely known distance of a photo-sensor/image plane compared to the principal plane in such a system translates, via standard and traditional optical formulas, into a precisely known difference of defocus compared to each other.
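A minimal sketch of this translation, assuming a thin lens and the standard quadratic defocus error $W_{20} = (a^2/2)(1/z_{\text{sensor}} - 1/z_{\text{image}})$; all numerical values below are illustrative.

```python
import numpy as np

def defocus_phase(f, z_obj, z_sensor, aperture_d, wavelength):
    """Defocus phase (radians at the pupil edge) for a sensor at z_sensor."""
    z_img = 1.0 / (1.0 / f - 1.0 / z_obj)    # thin-lens in-focus image distance
    a = aperture_d / 2.0
    w20 = 0.5 * a**2 * (1.0 / z_sensor - 1.0 / z_img)  # defocus wavefront error [m]
    return 2.0 * np.pi * w20 / wavelength

# Two sensors 0.2 mm apart behind a 50 mm f/2 lens, object at 2 m:
phi1 = defocus_phase(0.05, 2.0, 0.0514, 0.025, 550e-9)
phi2 = defocus_phase(0.05, 2.0, 0.0516, 0.025, 550e-9)
diversity = phi2 - phi1      # the a priori known diversity defocus
```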
- the reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14-16 bit/pixel. Note that all defocused images are distinctly unreadable, to a degree that even the mathematical integral sign cannot be recognized in any of the intermediate images.
- the reconstruction was carried out on intermediate images with digitally simulated defocus, and a dynamic range of 14 bit/pixel.
- the reconstruction was carried out on intermediate images with digitally simulated defocus.
- the final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step defocus correction, with a final defocus deviation from the exact value within 0.8.
- Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image.
- Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a beam splitter, 26, into two light signals.
- the light signals are finally detected by two photo-sensors, 27 and 28, positioned in the image planes shifted, one with respect to another, by a specified distance along the optical axis.
- Photo-sensors 27 and 28 provide simultaneously two intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
- Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image.
- Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a first beam splitter, 26, into two light signals.
- the reflected part of light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28.
- the light signals from the beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in the image planes shifted, one with respect to another, and relative to the image plane of the sensor 27.
- Photo-sensors 27, 29 and 30 provide simultaneously three intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
- Figure 7 illustrates the method, in this example for two intermediate images, to calculate an object image in an arbitrary image plane, i.e. at an arbitrary defocus, based on the local ray-vector and intensity determined for a small sub-area of the whole (defocused) image.
- Two consecutive phase-diverse images, 2 and 3, with a predetermined defocus, or alternatively displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images.
- Comparison of the spatial spectra calculated for a selected image area, 31, on phase- diverse images allows evaluating the ray-vector direction, 32, which characterizes light propagation, in geometric optics limit, along the optical axis 5.
- a corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing.
- the distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.
Abstract
A method and apparatus to reconstruct a sharp image from multiple phase-diverse intermediate images is described. The degree of defocus of all intermediate images is unknown, but the diversity defocus is known. Images can be processed in real time because of intrinsically non-iterative algorithms. Such an apparatus is insensitive to defocus and can be included in imaging systems for extended depth of field (EDF) imaging, range finding and 3D imaging. Additionally, wave-front sensors can be constructed by processing sub-areas of images. Applications include digital imaging; distance, speed and direction measurement; and wave-front sensing, which functions can be combined with the camera function.
Description
Image reconstructor
Background of the invention
The present invention relates to imaging and metering techniques. Firstly, the invention provides methods, systems and embodiments of these for estimating aberration errors of an image and reconstruction of said image based on a set of multiple intermediate images by non-iterative algorithms and, secondly, provides methods to reconstruct wave-fronts. An apparatus based on the invention can be either a dedicated camera or wave-front sensor, or these functions can be combined.
The invention has a broad scope of embodiments and applications including, image reconstruction for one or more focal distances, image reconstruction for EDF, speed, distance and direction measurement device and wave-front sensors for various applications. Reconstruction of images independent from the defocus aberration has most practical applications. Therefore, the device or derivates thereof can be applied for digital imaging insensitive to defocus (in cameras), digital imaging for extended depth of field ("EDF", in cameras), as optical distance, speed and direction measurement device (in measuring and metering devices). Camera units and wave-front sensors according to the methods and embodiments set forth in this document can be designed to be entirely solid state, with no moving parts, to be constructed from only very few components, for example, in a basic embodiment: simple optics, for selected application even only one lens, one beam splitter (or other beam splitting element, for example, phase grating) and two sensors and to be combined with dedicated data processing units/processing chips, with all these components in, for example, one solid polymer assembly.
In this document "intermediate image" refers to a phase-diverse intermediate image which has an unknown defocus compared to the in-focus image plane but a known a priori diversity defocus in respect of any other intermediate image in multiple intermediate images. The "in-focus image" plane is a plane optically conjugate to an object plane and thus having zero defocus error.
The terms "object" and "image" conform to the notations of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc.,
New York, 1996, Chap. 6). The object is positioned in the "object plane" and the corresponding image is positioned in the "image plane". "EDF" is an abbreviation for Extended Depth of Field.
The term "in-focus" refers to in focus/optical sharpness/in optimal focus, and the term "defocus" to defocus/optical un-sharpness/blurring. An image is meant to be in-focus when the image plane is optically conjugate to the corresponding object plane.
This document merely, by way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in-focus image, because defocus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct for any aberration of any order or combination of aberrations of different orders. A man skilled in the arts will conclude that the concepts set forth in this document can be extended to other aberrations as well by adaptation of the formulas presented for the applications above.
This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes. Colour imaging can be achieved by splitting white light into narrow spectral bands. White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras such as RGB Bayer pattern filters, providing the computation means with adaptations for at least three approximately monochromatic spectra and combining the resulting images. The invention can also be applied to infrared (IR) spectra. X-rays produced by an incandescent cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.
For ultrasound and coherent radio frequency signals the formulas can be adapted for the coherent amplitude transfer function of the corresponding system. A man skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document.
This document describes methods to obtain sharp, focused images in planes (slices) along the optical axis, as well as optical sharpness in three-dimensional space, and EDF imaging in which all objects in the intended cubic space are sharp and in focus. The traditional focusing process, i.e. changing the distance between the imaging optics and the image on film or photo-detector or, otherwise, changing the focal distance of the optics, takes time, requires additional, generally mechanically moving, components in the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis. Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i.e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the diameter of the aperture reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades the image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by computational methods are therefore of considerable interest in imaging systems and clearly preferable to such traditional optical/mechanical methods.
Furthermore, a method to achieve this with no moving parts (as a solid state system) is generally preferable for both manufacturer and end-user because of low cost of equipment and ease of use.
Several methods have been proposed for digital reconstruction of in-focus images, some of which will be summarized below in the context of the present invention described in this document.
Optical digital technologies regarding defocus correction and EDF started with a publication by Hausler (Optics Communications 6(1), pp. 38-42, 1972), which described the combination of multiple images into a single image in such a way that the final image results in EDF. This method does not reconstruct the final image from the set of defocused images but combines various in-focus areas of different images. The present invention differs from this approach because it reconstructs the final image from intermediate, defocused images that may not contain in-focus areas at all and, automatically, combines these images into a sharp final EDF image.
Later, methods based on phase coding/decoding were proposed, which include an optical mask in the optical system designed such that the incoherent optical transfer function remains unchanged within a range of defocus values. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm to reconstruct the final EDF image from the phase-encoded intermediate image.
The present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.
Also, various phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a defocused image; refer to, for example, US6771422 and US2004/0052426.
US2004/0052426 describes non-iterative techniques of phase retrieval for estimating errors of an optical system, and includes capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object. This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate optical errors of the optics employed in said imaging. This technique requires at least one focused image at a first focal point in combination with multiple unfocused images. These images are then used to calculate wave-front errors.
The present invention differs from US2004/0052426 in that the present invention does not require a focused image, i.e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with an unknown degree of defocus relative to the object.
US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the defocus status of each sensor, and the production of an enhanced final image. The defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic and coherent light wave.
The present invention differs from US6771422 in that it does not intend to solve the transport equation. The present invention is based on the a priori known information on the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution for different image planes and, thus, the degree of defocus, by direct calculations with non-iterative algorithms.
Other methods to reconstruct images based on a plurality of intermediate images/intensity distributions taken at different and known degrees of defocus employ iterative phase diversity algorithms (see, for example, J.J. Dolne et al., Applied Optics 42(26), pp. 5284-5289, 2003). Such iteration can take considerable computational power and computing time, which is unlikely to be feasible in real time. The present invention described in this document differs from the standard phase diversity algorithms in that it is an essentially non-iterative method.
WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed such that it allows determination of the intensity and angle of propagation of the light at different locations on the sensor plane by an array of micro-lenses, resulting in a so-called "plenoptic" camera. Sharp images of object points at different distances from the camera can then be recalculated (for example, by ray-tracing). It must be noted that with the method described in the present document the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can also be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
The present invention described in this document differs from WO2006/039486 and related documents in that the present invention does not explicitly use such information on the angle of incidence obtained with an array of microlenses, for example, a Shack-Hartmann wave-front sensor; instead, the respective light ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them.
Additionally, the intermediate phase-diverse images described in this document can also be used for determining the angle and intensity of individual rays and for composing an EDF image by ray-tracing.
All documents mentioned in the sections above are included in this document by reference.
Description of the invention
The present invention relates to imaging techniques. From the single invention a number of applications can be derived:
Firstly, the invention provides a method for estimation of defocus in the optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate defocused images, and,
Secondly, provides means to digitally reconstruct a final in-focus image of the object based on digital processing of multiple intermediate defocused images, and,
Thirdly, can be used for wave-front sensing by analyzing local curvature of sub-images from which an estimated wave-front can be reconstructed, and,
Fourthly, can reconstruct EDF images by either combining images from various focal planes (for example "image stacking"), or, by combining in-focus sub-images (for example "image stitching"), or, alternatively, by correction of wave-fronts, or, alternatively, by ray-tracing to project an image in a plane of choice.
Fifthly, provides methods to calculate the speed and distance of an object by analyzing subsequent images of the object, including speed in all directions, X, Y and Z, based on multiple intermediate images and, consequently, the acquired information on focal planes, and,
Sixthly, can be used to estimate the shape of a wave-front by reconstruction of the tilt of individual rays by calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them, and,
Seventhly, provides methods to calculate by ray-tracing a new image of an extended object in any image plane of the optical system (for example, approximating a "digital lens" device), and,
Eighthly, can be adapted to determine the wavelength of light when defocus is known precisely, providing the basis for a spectrometer, and,
Ninthly, can be adapted to many non-optical applications, for example, tomography, for digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. intermediate image degradation can be attributed to a convolution with the system response function), where the response function is known a priori, the relative degree of blurring of any intermediate image compared to the other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.
With the methods described in this document, a focused final image of an object is derived by digital reconstruction from at least two defocused intermediate images having an unknown degree of defocus compared to the ideal focal plane (or, alternatively, an unknown distance from the object to the principal planes of the imaging system), but having a precisely known degree of defocus of each intermediate image compared to any other intermediate image.
Firstly, a method of reconstruction, which can be included in an apparatus, will be described. The method starts with at least two defocused, i.e. phase-diverse, intermediate images from which a final in-focus image can be reconstructed by a non-iterative algorithm, given an optical system having an a priori known optical transfer function. Note that each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the object, but the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known.
To digitally process the images obtained above, the method includes the following steps. First, a generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed. Second, said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent of the degree of defocus of at least one intermediate image compared to the in-focus image plane. (This adjustment can take the form of adjustment of coefficients or adjustment of functional dependencies or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as linear, non-linear or functional, depending on the intended application.) Third, the final in-focus image is reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.
An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.
Such a method includes an equation based on the generating function/functional satisfying
$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \equiv \Psi[H(\omega_x,\omega_y,\varphi-\Delta\varphi_1)\,I_0(\omega_x,\omega_y),\ldots,H(\omega_x,\omega_y,\varphi-\Delta\varphi_M)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p, \tag{1}$$
where
$$I(\omega_x,\omega_y,\varphi-\Delta\varphi_n) = I_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x,y)\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy \tag{2}$$

is the spatial spectrum of the n-th intermediate phase-diverse image, 1 ≤ n ≤ M; x and y are the transverse coordinates in the intermediate image plane; M is the total number of intermediate images, M ≥ 2. The value Δφ_n (known a priori from the system configuration) is the diversity defocus between the n-th intermediate image plane and a
chosen reference image plane. Analogously, the spatial spectrum of the object (i.e. of the final image) is
$$I_0(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_0(x',y')\exp[-i(\omega_x x'+\omega_y y')]\,dx'\,dy', \tag{3}$$
where x' and y' are the transverse coordinates in the object plane. In the right-hand side of Eq. 1 the spatial spectra of the phase-diverse images are substituted by I_n(ω_x,ω_y) = H(ω_x,ω_y,φ₀ + δφ − Δφ_n) I₀(ω_x,ω_y), where H(ω_x,ω_y,φ) denotes the defocused incoherent optical transfer function (OTF) of the optical system; the unknown defocus φ is substituted by the sum of the defocus estimate φ₀ and the deviation δφ ≡ φ − φ₀, with |δφ/φ₀| ≪ 1: φ = φ₀ + δφ. The series coefficients B_p(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M,[I₀(ω_x,ω_y)]), functionally dependent on the spatial spectrum of the object I₀(ω_x,ω_y), can be found from Ψ by decomposing the defocused OTFs H(ω_x,ω_y,φ₀ + δφ − Δφ_n) into Taylor series in δφ.
The generating function/functional Ψ is chosen to have zero first- and higher-order derivatives, up to the K-th order, with respect to the unknown δφ:

$$\frac{\partial^i \Psi}{\partial(\delta\varphi)^i} = 0, \quad i = 1,\ldots,K. \tag{4}$$
Thus, B_i(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M,[I₀(ω_x,ω_y)]) = 0 for i = 1,…,K, and Eq. 1 simplifies to

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]) + O(\delta\varphi^{K+1}). \tag{5}$$
Finally, neglecting the residual term O(δφ^{K+1}) in Eq. 5, the object spatial spectrum I₀(ω_x,ω_y) can be found by solving the approximate equation

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]). \tag{6}$$
So, having two or more intermediate images I_n(ω_x,ω_y), n = 1, 2, …, and knowing a priori the system optical transfer function H(ω_x,ω_y,φ), a generating function Ψ according to Eq. 1, independent of the unknown defocus φ (or δφ) as required by Eq. 4, can be composed by an appropriate choice of the functional relation between the spectra I_n(ω_x,ω_y) and, correspondingly, between the OTFs H(ω_x,ω_y,φ₀ + δφ − Δφ_n) corresponding to said spatial spectra. Reconstruction of the object spectrum I₀(ω_x,ω_y), which is the basis for the final in-focus image or in-focus picture, then proceeds by a non-iterative algorithm based on Eq. 6, which includes, on the one hand, the combination of the spatial spectra I_n(ω_x,ω_y) and, on the other hand, the combination of the incoherent OTFs H(ω_x,ω_y,φ₀ + δφ − Δφ_n), substituted by their corresponding Taylor expansions in δφ.
An important example of the generating function is a linear combination of the spatial spectra I_n(ω_x,ω_y) of the intermediate phase-diverse images:

$$\Psi = \sum_{n=1}^{M} q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,\delta\varphi^p, \tag{7}$$

where the coefficients q_n(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M), with n = 1,…,M, are chosen to comply with Eq. 4. In this case Eq. 5 results in
$$\Psi = \sum_{n=1}^{M} q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\,\{B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M) + O(\delta\varphi^M)\}. \tag{8}$$
The coefficients q_n(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M) can be found from Eq. 8 by making the substitutions I_n(ω_x,ω_y) = H(ω_x,ω_y,φ₀ + δφ − Δφ_n) I₀(ω_x,ω_y), where an explicit expression for the incoherent optical transfer function (OTF) H(ω_x,ω_y,φ) of the optical system is used. In this way, the q_n(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M) are a priori known functions depending only on the optical system configuration. The analytical expression for the system OTF H(ω_x,ω_y,φ) can be found in many ways, including fitting of the calculated OTF; general formulas are given, for example, by Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996).
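By way of illustration only, and not as part of the claimed method, the defocused OTF H(ω_x,ω_y,φ) and its Taylor coefficients in δφ (used throughout the formulas above and in Eq. 34 below) can be computed numerically as the normalized autocorrelation of the defocused pupil function. In the following Python sketch the grid size, pupil fill factor and all function names are illustrative assumptions:

```python
import numpy as np

def defocused_otf(phi, n=256, fill=0.5):
    """Incoherent OTF of a circular pupil with defocus parameter phi,
    computed as the normalized autocorrelation of the defocused pupil
    function (cf. Goodman; Hopkins). 'fill' is the pupil radius as a
    fraction of the grid half-width -- an illustrative assumption."""
    xi = np.linspace(-1.0, 1.0, n)
    XI, ETA = np.meshgrid(xi, xi)
    r2 = XI**2 + ETA**2
    pupil = np.where(r2 <= fill**2, np.exp(1j * phi * r2), 0.0)
    # Autocorrelation via the Wiener-Khinchin relation
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2))
    otf = acf / acf[n // 2, n // 2]   # normalize so that H(0, 0) = 1
    return otf.real                    # real-valued for a symmetric pupil

def otf_taylor_coeffs(phi0, order=4, step=0.05, **kw):
    """Taylor coefficients h_0..h_order of H(omega, phi0 + dphi) in dphi,
    fitted per spatial frequency from OTFs sampled around phi0."""
    dphis = np.arange(-order, order + 1) * step
    samples = np.stack([defocused_otf(phi0 + d, **kw) for d in dphis])
    flat = samples.reshape(len(dphis), -1)
    coeffs = np.polynomial.polynomial.polyfit(dphis, flat, order)
    return [c.reshape(samples.shape[1:]) for c in coeffs]
```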
The "least-mean-square" solution Z0(GDx, GD ) of Eq. 8 that minimizes the mean square error (MSE)
MSE = J J 1 Z0 (ωx , ωy ) - Z0 (ωx , ωy ) |2 dωxdωy (9)
takes the form
where the constant ε ! , by analogy with the least-mean-square-error filter (Wiener filter), denotes the signal-to-noise ratio. "Noise" in the algorithm is caused by the residual term O(δφκ+ι) in Eq. 8 depending on δφ . When | B0 \ has no zeros within the spatial frequency range of interest Ω , the constant 8 can be defined from Eq. 8 as follows: ε = min \ B0 *(ωx,ω φ0, ACp1K ,ΔφM)x0(δφM)| . (11)
So, Eq. 10 describes the non-iterative algorithm for the object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
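Purely as an illustrative sketch of this non-iterative step (not the patent's prescribed implementation), and assuming the coefficient maps q_n and B₀ have been precomputed from the system OTF, the reconstruction of Eq. 10 reduces to one weighted sum of spectra and one Wiener-type division:

```python
import numpy as np

def reconstruct_object(images, q, B0, eps):
    """Non-iterative reconstruction sketch (Eqs. 7 and 10).
    'images': list of M intermediate images (2-D arrays);
    'q', 'B0': precomputed frequency-domain coefficient maps;
    'eps': regularization constant playing the role of epsilon."""
    # Spatial spectra of the intermediate images (Eq. 2, up to scaling)
    spectra = [np.fft.fftshift(np.fft.fft2(im)) for im in images]
    Psi = sum(qn * In for qn, In in zip(q, spectra))        # Eq. 7
    I0_hat = np.conj(B0) * Psi / (np.abs(B0)**2 + eps)      # Eq. 10
    # Back to the spatial domain; the image intensity is real-valued
    return np.real(np.fft.ifft2(np.fft.ifftshift(I0_hat)))
```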
The defocus estimate φ₀ can be found in many ways, for example, from a pair of phase-diverse images. If I₁(ω_x,ω_y) is the spatial spectrum of the first image, characterized by the unknown defocus φ, and I₂(ω_x,ω_y) is the spatial spectrum of the second image with defocus φ + Δφ, Δφ being the difference in defocus predetermined by the system configuration, then the estimate of defocus is given by

$$\varphi_0 = \frac{\gamma_0\,\langle A \rangle}{2\,\gamma_1\,\Delta\varphi} - \frac{\Delta\varphi}{2}, \tag{12}$$

where the OTF expansion H(ω_x,ω_y,φ) = γ₀ + γ₁φ² + … in the vicinity of φ = 0, valid at |ω_x² + ω_y²| ≪ 1, is used. The coefficient A denotes the ratio

$$A(\omega_x,\omega_y) = \frac{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)}{I_1(\omega_x,\omega_y)}, \tag{13}$$

and the averaging ⟨A⟩ is carried out over low spatial frequencies |ω_x² + ω_y²| ≪ 1. In addition, the estimate φ₀ of the unknown defocus φ can be found from three consecutive phase-diverse images: I₁(ω_x,ω_y) with defocus φ − Δφ₁, I₂(ω_x,ω_y) with defocus φ, and I₃(ω_x,ω_y) with defocus φ + Δφ₂ (Δφ₁ and Δφ₂ are specified by the system arrangement):
$$\varphi_0 = \frac{1}{2}\,\frac{\chi\,\Delta\varphi_1^2 + \Delta\varphi_2^2}{\chi\,\Delta\varphi_1 - \Delta\varphi_2}. \tag{14}$$
The coefficient χ is the ratio of the image spectra

$$\chi = \left\langle \frac{I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)}{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)} \right\rangle, \tag{15}$$

averaged over low spatial frequencies |ω_x² + ω_y²| ≪ 1. Note that in practice the best estimates of defocus according to Eq. 14 were achieved when the numerator and the denominator in Eq. 15 were averaged independently, i.e.

$$\chi = \frac{\langle I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y) \rangle}{\langle I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y) \rangle}. \tag{16}$$
Note that an estimate of the defocus (φ₀ in Eq. 1) is necessary to start these computations; this estimate is automatically provided by the formulas specifying the reconstruction algorithm above. Such an estimate can also be provided by other analytical methods, for example, by determining the first zero-crossing in the spatial spectrum of the defocused image as described by I. Raveh et al. (I. Raveh et al., Optical Engineering 38(10), pp. 1620-1626, 1999).
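As an illustration, the defocus estimate of Eqs. 14 and 16 can be sketched as follows, assuming three phase-diverse images with known diversity defocuses Δφ₁ and Δφ₂ are available as arrays; the low-frequency cutoff is an assumed tuning parameter, not a value from this document:

```python
import numpy as np

def estimate_defocus(I1, I2, I3, dphi1, dphi2, cutoff=0.1):
    """Defocus estimate phi0 (Eqs. 14 and 16 sketch) from square images
    with defocus phi - dphi1, phi and phi + dphi2. Only low spatial
    frequencies (here: below 'cutoff' of the Nyquist limit) are used."""
    S1, S2, S3 = (np.fft.fftshift(np.fft.fft2(im)) for im in (I1, I2, I3))
    f = np.fft.fftshift(np.fft.fftfreq(I1.shape[0]))
    FX, FY = np.meshgrid(f, f)
    low = (FX**2 + FY**2) < (0.5 * cutoff)**2       # low-frequency mask
    # Eq. 16: numerator and denominator averaged independently
    chi = (np.mean(S3[low] - S2[low]) / np.mean(S2[low] - S1[low])).real
    # Eq. 14
    return 0.5 * (chi * dphi1**2 + dphi2**2) / (chi * dphi1 - dphi2)
```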
In practice, calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus to determine the degree of defocus with at least two photo-sensors having only one photo-sensitive spot, for example photo-diodes or photo-resistors. A construction for such an apparatus likely includes not only the photo-sensors but also an amplitude mask, focusing optics and processing means which are adapted to calculate the degree of defocus of at least one intermediate image. The advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplification of Eq. 16 to a derivate of Parseval's theorem, for example:
$$\chi = \frac{\int\!\!\int U(x,y)\,\{I_3(x,y) - I_2(x,y)\}\,dx\,dy}{\int\!\!\int U(x,y)\,\{I_2(x,y) - I_1(x,y)\}\,dx\,dy}, \tag{17}$$

where U(x,y) defines the amplitude mask in one or multiple image planes.
Also, photo-diodes and photo-resistors are significantly less expensive compared to photo-sensor arrays and are more easily assembled.
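For illustration, the Fourier-free form of Eq. 17 reduces to two weighted sums of raw intensities, which is why single-spot detectors such as photo-diodes suffice; in the sketch below the mask U and the intensities are assumed to be sampled on a common grid:

```python
import numpy as np

def chi_without_fourier(I1, I2, I3, U):
    """Estimate of chi according to Eq. 17: weighted integrals of the
    intensity differences replace the spectral averages of Eq. 16,
    so no Fourier transform is needed."""
    return np.sum(U * (I3 - I2)) / np.sum(U * (I2 - I1))
```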
Note that a Fourier transformation can be achieved by digital processing methods as described above, but can also be achieved by optical means, for example, by an additional optical element between the beam splitter and the imaging photo-sensor. Using such an optical Fourier transformation will significantly reduce digital processing time, which might be advantageous for specific applications.
Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter, camera component or distance meter. It differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods. The distance of the object to the camera can be estimated from the degree of defocus via a simple optical calculation, so the methods can be applied to a distance metering device. Also, the speed and direction of an object in the X, Y and Z directions (i.e. in 3D space) can be estimated with additional computation means, given at least two subsequent final images and the time between the captures of the intermediate images for these final images. Such an inexpensive component for solid state image reconstruction will increase consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing functions) and technical applications.
As an alternative, the estimate can be obtained by an additional device, for example, an optical or ultrasound distance measuring system. However, most simply, in the embodiments described in this document, the estimate is provided by the algorithm
itself without the aid of any additional measuring device.
Note that an estimate of the precision of the degree of defocus can also be obtained by Cramer-Rao analysis as described in D.J. Lee et al., J. Opt. Soc. Am. A 16(5), pp. 1005-1015, 1999, which document is incorporated in this document by reference.
Apart from the method described above, the invention also provides an apparatus for providing at least two phase-diverse intermediate images of said object, wherein each of the intermediate images has a different degree of defocus compared to an ideal focal plane (i.e. an image plane of the same system with no defocus error), but a precisely known degree of defocus compared to any other intermediate image. The apparatus includes processing means for reconstructing a focused image of the object by an algorithm expressed by Eq. 6.
Note that a man skilled in the art will conclude that: (a) said Fourier-based processing with spatial spectra of images can also be carried out by processing the corresponding amplitudes of wavelets to the same effect; (b) the described method of image restoration can be adapted to optical wave-front aberrations different from defocus, in which case each phase-diverse image is characterized by an unknown absolute magnitude of an aberration but an a priori known difference in the aberration magnitude relative to any other phase-diverse image; and (c) the processing functions mentioned above can be applied to any set of images or signals which are blurred but of which the transfer (blurring) function is known. For example, the processing function can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to said out-of-focus blur.
Secondly, an additional generating function is provided here to yield the degree of defocus of at least one of said intermediate images compared to the in-focus image plane; this degree of defocus can be calculated by additional processing in an apparatus. An improved estimate for the unknown defocus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm according to:
$$\delta\varphi \cong \frac{\Psi' - B_0'}{B_1'}, \tag{18}$$
and thus an improved estimate becomes φ = φ0 + δφ . The generating function Ψ' in this case obeys
$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = \sum_{p\ge 0} B_p'(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \tag{19}$$

and

$$\frac{\partial^i \Psi'}{\partial(\delta\varphi)^i} = 0, \quad i = 2,\ldots,K. \tag{20}$$
In compliance with Eq. 20, B_i'(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M,[I₀(ω_x,ω_y)]) = 0 for i = 2,…,K, and Eq. 19 reduces to

$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0' + B_1'\,\delta\varphi + O(\delta\varphi^{K+1}). \tag{21}$$
The latter formula directly yields Eq. 18. Note that the coefficients B_p' are, in general, functionally dependent on the object spectrum I₀(ω_x,ω_y), which, in turn, can be found from Eq. 6.
It can be necessary to correct the spatial spectrum of at least one of said intermediate images for a lateral shift of said image compared to any other intermediate image, because the image reconstruction as described in this document is sensitive to lateral shifts. A method for such correction is given below, which method can be included in the processing means of an apparatus carrying out such image reconstruction. The general algorithm according to Eq. 6 requires a set of defocused images as input data. However, due to, for example, mis-alignments of the optical system, some intermediate images can be shifted in the plane perpendicular to the optical axis, resulting in incorrect restoration of the final image.
Using the shift-invariance property of the Fourier power spectra and the Hermitian redundancy in the image spectrum (image intensity is a real value), in combination with the Hermitian symmetry of the OTF, i.e. H(ω_x,ω_y,φ) = H*(−ω_x,−ω_y,φ), the spectrum of the shifted intermediate image can be recalculated to exclude the unknown shift. An example of the method for excluding the shift dependence is described below. Assuming that the n-th intermediate image is shifted by Δx, Δy, in compliance with Eq. 3 its spectrum becomes
$$\tilde I_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x-\Delta x,\,y-\Delta y)\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_n(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \tag{22}$$
I_n(ω_x,ω_y) being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example, a square or a circle, and the defocused OTF H(ω_x,ω_y,φ) ∈ ℝ (real value). For two intermediate images, one of which is supposed to be unshifted, we have, in agreement with Eq. 22,

$$\tilde I_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_n)\,I_0(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \qquad I_l(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_l)\,I_0(\omega_x,\omega_y). \tag{23}$$
From Eq. 23, [Ĩ_n(ω_x,ω_y)/I_l(ω_x,ω_y)]² ∝ exp[−2i(ω_xΔx + ω_yΔy)] ≡ exp(−2iϑ), where i = √−1, and the shift-dependent factor can obviously be excluded from Ĩ_n(ω_x,ω_y). Thus, the shift-corrected spectrum takes the form I_n(ω_x,ω_y) = Ĩ_n(ω_x,ω_y) exp(iϑ), and it can be further used in the calculations according to Eq. 6. Note that the formulas above give one example of a method for correcting the lateral image shift; there are also other methods to obtain shift-corrected spectra, for example, correlation techniques or analysis of moments in the intensity distribution.
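A hedged sketch of this shift-exclusion step (assuming, as above, a real-valued defocused OTF for a symmetric exit pupil; the guard constant in the division is an implementation assumption, and phase wrapping is ignored for small shifts):

```python
import numpy as np

def shift_corrected_spectrum(In_shifted, Il):
    """Exclude an unknown lateral shift from the spectrum of the n-th
    intermediate image (Eqs. 22-23 sketch). The phase of the squared
    spectral ratio equals -2*theta with theta = wx*dx + wy*dy; squaring
    removes the sign changes of the real-valued OTF ratio."""
    Sn = np.fft.fft2(In_shifted)
    Sl = np.fft.fft2(Il)
    ratio2 = (Sn / (Sl + 1e-12))**2     # ~ exp(-2i*theta), real factor > 0
    theta = -0.5 * np.angle(ratio2)     # theta per spatial frequency
    return Sn * np.exp(1j * theta)      # shift-corrected spectrum
```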
The quality of the reconstruction of an object I₀(ω_x,ω_y) according to the non-iterative algorithm given by Eq. 6 can thus be significantly improved by replacing the initial defocus estimate φ₀ with the improved estimate φ = φ₀ + δφ, where δφ is provided by Eq. 21. The degree of defocus of the intermediate image compared to the in-focus image plane can be included in the non-iterative algorithm, and the processing means of an apparatus for such image reconstruction adapted accordingly.
At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing higher quality of restoration and weaker sensitivity to the initial defocus estimate φ₀, since the generating function Ψ gives the (M − 1)-th order approximation to B₀(ω_x,ω_y,φ₀,Δφ₁,…,Δφ_M,[I₀(ω_x,ω_y)]) defined by Eq. 1 with respect to the unknown value δφ. The resolution and overall quality of the final image increase with the number M of intermediate images, at the expense of a larger number of photo-sensors or an increasingly complex optical/mechanical arrangement, and increased computation time. Reconstruction from three intermediate images is used as an example in this document.
The degrees of defocus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no defocus error) differ. In Eq. 1 the defocus of the n-th intermediate image, φ_n = φ − Δφ_n (n = 1,…,M, with M the total number of intermediate images), is unknown prior to provision of the intermediate images. However, as mentioned earlier, the difference in degree of defocus Δφ_n of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This imposes no problems in practice, because the relative difference in defocus is specified in the design of the camera and its optics. Note that these relative differences vary with different camera designs, the type of photo-sensor(s) used and the intended applications of the image reconstructor. Moreover, the differences in defocus Δφ_n can be found and accounted for in further computations by performing calibration measurements with well-defined objects.
The degree of defocus of the image can be estimated by non-iterative calculations using the fixed and straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations are of low computational cost and provide stable and precise results. Furthermore, such non-iterative calculations can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention. Thus, the reconstruction of a final sharply focused image is independent of the degree of defocus of any of the intermediate images relative to the object.
The precision of the measurement of the absolute defocus (and, therefore, the precision of the range which is calculated from defocus values) is fundamentally limited by the combination of the entrance aperture D of the primary optics and the distance z from the primary optics to an object of interest. In the case when a diffraction-limited spot defines the "circle of confusion" of an optical system, the depth of field becomes ~ (z/D)² and represents the defocus uncertainty. For a high-aperture aplanatic lens, an explicit expression was derived by Sheppard (C.J.R. Sheppard, J. Microsc. 149, 73-75, 1988).
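As a purely illustrative numerical example of this scaling (assuming the (z/D)² depth of field is expressed in units of the wavelength, which is not stated explicitly above):

```python
# Illustrative numbers only: defocus (range) uncertainty ~ (z/D)^2,
# taken here in units of the wavelength for a diffraction-limited system.
wavelength = 0.5e-6      # 500 nm
D = 0.05                 # 5 cm entrance aperture
z = 10.0                 # 10 m object distance
dof = (z / D)**2 * wavelength
print(f"range uncertainty ~ {dof * 1e3:.0f} mm")   # ~20 mm at 10 m
```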
So, high precision for defocus and range estimates requires, by definition, a large aperture of the optical system. This can be achieved by fitting, for example, a very large lens to the apparatus. However, such a lens may require a diameter of one meter, a size likely not practical for the majority of applications, which require small camera units. An effectively large aperture can also be obtained by optically combining light signals from multiple, at least two, optical elements, for example, relatively small reflective or refractive elements positioned outside of the optical axis. Such optical elements are preferably, but not necessarily, positioned in the direction perpendicular to the optical axis. The theoretical depth of focus, i.e. axial resolution, corresponds to the resolution of the whole refractive surface, of which the dimension is characterized by the distance between the optical elements. The optical elements can be regarded as small individual sectors at the periphery of a large refractive surface. Clearly, the total light intensity received by an image sensor depends on the combined apertures of the multiple optical elements. Such a system with multiple apertures can be made flat and, in the case of only two light sources, also linear.
So, according to the above, an apparatus for, for example, range-finding applications can be constructed which combines at least two light signals from at least two optical elements positioned opposite each other at a distance perpendicular to the optical axis.
The procedures described in this document so far require that the distances between the image planes are known precisely, because the generating function or functional (see Ψ in Eq. 1) combines spatial spectra of intermediate images with a priori known diversity defocus. However, a man skilled in the art may conclude that, alternatively, such a procedure can be adapted to process intermediate images that are spatially modulated by a priori known phase and amplitude masks. Such masks spatially modulate the phase and/or amplitude of the light waves on their way to the image sensors, and result in spatially phase- and/or amplitude-modulated intermediate images. The final image can be restored digitally by subsequent processing of at least one spatially modulated intermediate image according to existing and well-known decoding algorithms, or, alternatively, by algorithms adapted from the procedures described in this document; the corresponding adaptations to the formulas above are set forth below. Said modulations preferably include defocus, but not necessarily so. Such wave-front encoding can be achieved by including, for example, at least one phase mask, or at least one amplitude mask, or a combination of any number of phase and amplitude masks having a precisely known modulation function. The system embodiment implies that at least one phase and/or amplitude mask is located in the exit pupil of the imaging system.
For a set of intermediate images I_n(ω_x,ω_y), 1 ≤ n ≤ M, obtained with phase and/or amplitude masks, Eq. 1 can be rewritten as

$$\Psi[I_1(\omega_x,\omega_y),\ldots,I_M(\omega_x,\omega_y)] \equiv \Psi[H_1(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y),\ldots,H_M(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge 0} B_p\,\delta\varphi^p, \tag{24}$$

where the OTFs are (see, for example, H.H. Hopkins, Proc. Roy. Soc. of London, A231, 91-103, 1955)
$$H_n(\omega_x,\omega_y) = \frac{1}{\Omega_n}\int\!\!\int P_n\!\left(\xi+\frac{\omega_x}{2},\,\eta+\frac{\omega_y}{2}\right) P_n^{*}\!\left(\xi-\frac{\omega_x}{2},\,\eta-\frac{\omega_y}{2}\right) d\xi\,d\eta, \tag{25}$$

with Ω_n = ∫∫ |P_n(ξ,η)|² dξ dη being the area of the n-th pupil in canonical coordinates (ξ, η), and the pupil function given by
$$P_n(\xi,\eta) = P_n^{(0)}(\xi,\eta)\exp[i\,\vartheta_n(\xi,\eta)]. \tag{26}$$

In Eq. 26, the function P_n^{(0)}(ξ,η), with P_n^{(0)}(ξ,η) ∈ ℝ, is the amplitude transmission function corresponding to the n-th amplitude mask, and ϑ_n(ξ,η) is the phase function representing the n-th phase mask in the exit pupil. In the case of defocus φ_n, for example, ϑ_n(ξ,η) = φ_n(ξ² + η²). It is important to note that Eq. 25, through the phase function in Eq. 26, implicitly contains the unknown defocus φ, which alternatively can be expressed as φ = φ₀ + δφ (φ₀ being the defocus estimate).
Consider now the reconstruction of the object spectrum I₀(ω_x,ω_y) from Eq. 24. The objective of the method is to properly choose combinations of ϑ_n(ξ,η) and/or P_n^{(0)}(ξ,η) for all intermediate images, which combinations ensure the validity of Eq. 4. Finally, the object spectrum I₀(ω_x,ω_y) can be recalculated from Eq. 6 with an alternative generating function/functional Ψ given by Eq. 24 and invariant to the defocus φ (up to terms ~ δφ^{K+1}).
Analogously to Eqs. 19-20, a new generating function/functional Ψ' can be constructed by properly combining ϑ_n(ξ,η) and/or P_n^{(0)}(ξ,η) to retain only linear terms in δφ on the right-hand side of Eq. 19. The unknown defocus φ can subsequently be found from Eq. 21 by substituting Ψ'.
So, an imaging apparatus can be designed which includes, in addition to the basic image-forming optics described elsewhere in this document, at least one optical mask to spatially modulate the incoming light signal. Either the phase or the intensity of said signal for at least one intermediate image can be modulated. Alternatively, both phase and intensity of at least one intermediate image can be spatially modulated by at least one mask, or separate masks can be included for separate and independent modulation functions. The resulting modulation yields at least one spatially modulated light signal from which the final image can subsequently be reconstructed, in accordance with the method described above, by digital means, diminishing the sensitivity of the imaging apparatus to at least one selected optical aberration, which can be the defocus aberration.
Image reconstruction: an example with three intermediate images
At least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point for such a reconstruction. As an illustration of the reconstruction algorithm set forth in the present document, we now consider an example with three intermediate images. Assume that the spatial spectra of three consecutive phase-diverse images are I₁(ω_x,ω_y), I₂(ω_x,ω_y) and I₃(ω_x,ω_y), and that their defocuses are φ − Δφ, φ and φ + Δφ, respectively. In agreement with Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6), the reduced magnitude of defocus is specified as

$$\varphi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{z_1} - \frac{1}{z_a}\right), \tag{27}$$

where D is the exit pupil size, λ is the wavelength, z₁ is the position of the ideal image plane along the optical axis, and z_a is the position of the shifted image plane. The defocus estimate (for the second image) can be found from Eq. 14:
$$\varphi_0 = \frac{\Delta\varphi}{2}\,\frac{\chi + 1}{\chi - 1}, \tag{28}$$

where, in agreement with Eq. 16,

$$\chi = \frac{\int\!\!\int [I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)]\,d\omega_x\,d\omega_y}{\int\!\!\int [I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)]\,d\omega_x\,d\omega_y}, \tag{29}$$
and the integration is performed over low spatial frequencies |ω_x² + ω_y²| ≪ 1. With φ₀ in hand, and following Eq. 7, the generating function satisfying Eq. 4 becomes
$$\Psi \cong I_0(\omega_x,\omega_y)\,\{h_0 + \nu\,(h_1 + h_3\Delta\varphi^2) + 2\mu\,(h_2 + h_4\Delta\varphi^2) + O(\delta\varphi^3)\}, \tag{30}$$

and

$$B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi) = h_0 + \nu\,(h_1 + h_3\Delta\varphi^2) + 2\mu\,(h_2 + h_4\Delta\varphi^2). \tag{31}$$
The coefficients ν and μ are

$$\nu = \frac{h_2 h_3 - 2 h_1 h_4}{4 h_2 h_4 - 3 h_3^2 + 8 h_4^2 \Delta\varphi^2}, \tag{32}$$

$$\mu = \frac{3 h_1 h_3 - 2 h_2^2 - 4 h_2 h_4 \Delta\varphi^2}{6\,(4 h_2 h_4 - 3 h_3^2 + 8 h_4^2 \Delta\varphi^2)}, \tag{33}$$

and h_i (i = 0,…,4) are the Taylor series coefficients of the defocused OTF H(ω_x,ω_y,φ = φ₀ + δφ) in the neighbourhood of φ₀, i.e.
$$H(\omega_x,\omega_y,\varphi_0+\delta\varphi) = h_0 + h_1\,\delta\varphi + h_2\,\delta\varphi^2 + h_3\,\delta\varphi^3 + h_4\,\delta\varphi^4 + O(\delta\varphi^5). \tag{34}$$
Finally, the spectrum of the reconstructed image, in concordance with Eq. 10, can be rewritten as

$$\hat I_0(\omega_x,\omega_y) = \frac{B_0^{*}(\omega_x,\omega_y,\varphi_0,\Delta\varphi)}{|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi)|^2 + \varepsilon}\left\{ I_2 + \nu\,\frac{I_3 - I_1}{2\Delta\varphi} + \mu\,\frac{I_3 - 2 I_2 + I_1}{\Delta\varphi^2} \right\}. \tag{35}$$
An improved estimate of defocus φ = φ₀ + δφ complies with Eq. 18 for the generating function specified by Eq. 7:

$$\delta\varphi \cong \frac{B_0\,\Psi' - B_0'\,\Psi}{B_1'\,\Psi}, \tag{36}$$
where B₀ is given by Eq. 31 and
$$B_0' = h_0 + \tau\,(h_1 + h_3\Delta\varphi^2) + 2\sigma\,(h_2 + h_4\Delta\varphi^2), \tag{37}$$
$$B_1' = h_1 + 2\tau\,h_2 + 6\sigma\,h_3 + 4\tau\,h_4\Delta\varphi^2, \tag{38}$$
with the generating function Ψ' taken, analogously to Eq. 35, as the finite-difference combination

$$\Psi' = I_2 + \tau\,\frac{I_3 - I_1}{2\Delta\varphi} + \sigma\,\frac{I_3 - 2 I_2 + I_1}{\Delta\varphi^2}. \tag{39}$$

The coefficients ν and μ are specified by Eqs. 32-33; the coefficients τ and σ in Eqs. 37-39 satisfy, in compliance with Eq. 20, the equations

$$h_2 + 3\tau\,h_3 + 12\sigma\,h_4 = 0, \qquad h_3 + 4\tau\,h_4 = 0. \tag{40}$$
The optimum difference in defocus Δφ between the intermediate images is related to the specific dynamic range of the image photo-sensors, i.e. their pixel depth, as well as to the optical features of the object of interest. Depending on the defocus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in intensity between the images. The right-hand terms in Eqs. 35 and 39 are, in fact, finite-difference approximations of the corresponding derivatives of the defocus-dependent image spectrum I(ω_x,ω_y,φ) = H(ω_x,ω_y,φ) I₀(ω_x,ω_y) with respect to the defocus φ. By reducing the difference in defocus between the intermediate images or, in other words, by reducing the distance between the intermediate image planes, the precision of the approximation can be increased. A high pixel depth or, alternatively, a high dynamic range allows for sensing small intensity variations; thus, a small difference in defocus between the intermediate images can be implemented, which results in increased quality of the final image.
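Purely as an illustration, the three-image procedure of Eqs. 28-38 can be strung together as follows, reusing the hypothetical otf_taylor_coeffs() and estimate_defocus() sketches given earlier; the calibration of the OTF frequency grid against the image-spectrum grid is omitted, and eps plays the role of ε in Eq. 35:

```python
import numpy as np

def reconstruct_three_images(I1, I2, I3, dphi, eps=1e-3):
    """Sketch of the three-image reconstruction: images with defocus
    phi - dphi, phi, phi + dphi; dphi is the known diversity defocus."""
    # Step 1: defocus estimate phi0 (Eqs. 28-29)
    phi0 = estimate_defocus(I1, I2, I3, dphi, dphi)
    # Step 2: Taylor coefficients h0..h4 of the OTF around phi0 (Eq. 34)
    h0, h1, h2, h3, h4 = otf_taylor_coeffs(phi0, order=4, n=I1.shape[0])
    # Step 3: coefficients nu and mu (Eqs. 32-33)
    den = 4*h2*h4 - 3*h3**2 + 8*h4**2*dphi**2
    nu = (h2*h3 - 2*h1*h4) / den
    mu = (3*h1*h3 - 2*h2**2 - 4*h2*h4*dphi**2) / (6*den)
    # Step 4: generating function as finite differences (Eqs. 30 and 35)
    S1, S2, S3 = (np.fft.fftshift(np.fft.fft2(im)) for im in (I1, I2, I3))
    Psi = S2 + nu*(S3 - S1)/(2*dphi) + mu*(S3 - 2*S2 + S1)/dphi**2
    # Step 5: Wiener-type division by B0 (Eqs. 31 and 35)
    B0 = h0 + nu*(h1 + h3*dphi**2) + 2*mu*(h2 + h4*dphi**2)
    I0_hat = np.conj(B0)*Psi / (np.abs(B0)**2 + eps)
    return np.real(np.fft.ifft2(np.fft.ifftshift(I0_hat))), phi0
```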
Various embodiments of a device can be designed, which include, but are not restricted to, the embodiments described below.
Apart from the method and apparatus which are adapted to provide an image wherein a single object is depicted in-focus, a preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each of the depicted objects having a different degree of focus in each of the intermediate images, and before the execution of said method one of those objects is selected.
Clearly, the image reconstructor with its providing means of intermediate images must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.
Firstly, a preferred embodiment of the providing means includes one image photo-sensor which can move mechanically. For example, the device can be designed with optics that form an image on one sensor, where the image photo-sensor or, alternatively, the whole camera assembly moves a predetermined and precise distance along the optical axis between the subsequent intermediate exposures. The simplicity of such a device lies in the need for only one photo-sensor; the complexity lies in the mechanics needed for precise movement. Such precise movement is most effectively realized for only two images, because only two alternative stopping positions of the device are needed. Alternatively, another embodiment with mechanically moving parts is a system with optics and one sensor, but with a spinning disc with stepwise sectors of different optical thickness. An image is taken each time a sector of different and known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately and, thus, a set of intermediate images can be provided for subsequent reconstruction by the image reconstruction means.
Secondly, a solid state device (with no mechanical parts/movement) can be employed. In a preferred embodiment of the providing means, the optics can be designed such that at least two independent intermediate images are provided to one fixed image photo-sensor. These images can be, for example, two large distinct sub-areas, each covering approximately half of the photo-sensor, and the required diversity defocus can be provided by, for example, a planar mask.
Also, at least two independent image photo-sensors can be used (for example, three in the example set forth throughout this document), each producing a separate intermediate image, likely, but not strictly necessarily, simultaneously. The device can be designed with optics forming an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam, each with a precisely known light path representing a known degree of defocus compared to at least one other intermediate image. Such a design (for example, with mirror optics analogous to the optics of a Fabry-Perot interferometer) has, for example, beam splitters to which a large number of sensors, or independent sectors on one sensor, for example three, can be added. The simplicity of such a device lies in the absence of mechanical movement and its proven construction for other applications, for example said interferometer.
Thirdly, a scanning device can provide the intermediate images. Preferably, a line scanning arrangement is applied. Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor. The image can be sensed by a linear sensor scanning in the image plane. Such sensors, even at high pixel depth, are inexpensive and mechanical means to move such sensors are well known from a myriad of applications. Clearly, disadvantages of this embodiment are complex mechanics and increased time to capture intermediate images because scanning takes time. Alternatively, a scanner configuration employing several line photo-sensors positioned in the intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.
Fourthly, the intermediate images can be produced from different light frequency ranges. Pixels of the sensor can be fitted alternately with a red, blue or green filter in a pattern, for example the well-known Bayer pattern. Such image photo-sensors are commonplace in technical and consumer cameras. The colour split provides a delay and a subsequent difference in defocus between the pixel groups. A disadvantage of this approach is that only grey-scale images will result as the final image. Alternatively, for colour images the colour split is applied to the final image, with the intermediate images for the different colours reconstructed separately prior to stacking of such images.
Arrangements for coloured images are well known, for example, Bayer pattern filters on the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, which disc is synchronized with the image capture process. Alternatively, red (R), blue (B) and green (G) spectral bands ("RGB"), or any other combination of spectral bands, can be separated by prismatic methods, as is common in professional imaging systems.
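A short sketch of the per-band colour flow described above, assuming the intermediate images arrive as RGB arrays and reusing the hypothetical reconstruct_three_images() sketch:

```python
import numpy as np

def reconstruct_colour(images_rgb, dphi):
    """Split each intermediate image into R, G and B planes, reconstruct
    each approximately monochromatic band separately, then restack."""
    bands = []
    for c in range(3):                                  # R, G, B
        I1, I2, I3 = (im[..., c] for im in images_rgb)
        final, _ = reconstruct_three_images(I1, I2, I3, dphi)
        bands.append(final)
    return np.stack(bands, axis=-1)
```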
Fifthly, a spatial light modulator, for example a liquid crystal device or an adaptive mirror, can be included in the light path of at least one sensor to modulate the light between the captures of the intermediate images. Note that the adaptive mirror can be of a most simple design, because only defocus alteration is required, which greatly reduces the number of actuators in such a mirror. Such a modulator can be of a planar design, i.e. a "piston" phase filter, just to lengthen the path of the light, or it can have any other phase-modulating shape, for example a cubic filter.
Using cubic filters allows for combinations of methods described in this document with wave-front coding/decoding technologies, to which references can be found in this document.
Lastly, an image reconstructor adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images can be constructed for EDF and wave-front applications. Such a reconstructor has at least one image photo-sensor (for imaging/measuring light intensity) or multiple image photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independent of the other sub-sensors, the intermediate images being projected onto the sensor by, for example, a segmented input lens or a segmented input lens array.
It should be noted that increasing the number of intermediate images, with a consequently decreasing sensor area per intermediate image, increases the precision of the estimate of defocus but decreases the image quality/resolution per intermediate image. So, for example, for an application requiring high image quality the number of sub-sensors should be reduced, whereas for applications requiring precise estimation of distance and speed the number of sub-sensors should be increased. Methods for calculating the optimum for such segmented lenses and lens arrays are known and summarized in, for example, Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, and in technologies and methods related to Shack-Hartmann lens arrays. A man skilled in the art will recognize that undesired effects, such as parallax between the intermediate images on the sub-sensors, can be corrected for by calibration of the device during manufacturing, or, alternatively, digitally during image reconstruction, or by increasing the number of sub-sensors and adapting their distribution on the photo-sensor.
Alternatively, small sub-areas of at least two intermediate images can be distributed over the photo-sensors in a pattern. For example, the sensor can be fitted with a device or optical layer including optical steps, which delays the incoming wave-front differently for sub-areas in the pattern of, for example, lines or dots. Theoretically, the sub-areas can have the size of one photo-sensor pixel. The sub-areas must, of course, be digitally read out separately to produce at least two intermediate images with different but known degrees of defocus (phase shift). Clearly, the final image quality is dependent on the number of pixels representing an intermediate image. From at least two adjacent final sub-images a composite final image can be made, for
example, for EDF applications.
An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimation of the local defocus or, alternatively, estimate tilts per sub-image based on comparison of the spatial spectra of neighbouring images. The apparatus should therefore include processing means to reconstruct a wave-front by combining the defocus curvatures of at least two intermediate sub-images.
For wave-front sensing applications, the method which determines defocus for a total final image, or a total object, can be extended to a system which estimates the degree of defocus in a multiple of intermediate sub-images (henceforth: sub-images) based on at least two intermediate full images. For small areas the local curvature can be approximated by the defocus curvature (degree of defocus), and for small sub-images any aberration of order higher than or equal to 2 can be approximated by the local curvature, i.e. the degree of defocus. Consequently, the wave-front can be reconstructed from the local curvatures determined for the small sub-images, and the image reconstruction device effectively becomes a wave-front sensor. This approach is, albeit using local curvatures rather than tilts, in essence analogous to the workings of a Shack-Hartmann sensor, which uses the local tilt within each local sub-aperture to estimate the shape of a wave-front; in the method described in this document, local curvatures are used for the same purpose. The well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts. The sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and application. For example, scanning the intermediate image with a linear photo-sensor produces line-shaped sub-images. Applications for wave-front sensors are numerous and will only increase as wave-front sensors become less expensive.
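A hedged sketch of this curvature-based wave-front sensing, tiling the intermediate images into square sub-images and reusing the hypothetical estimate_defocus() sketch; the tile count is an assumption, and the conversion from the local-defocus map to a wave-front surface (the adapted Shack-Hartmann step) is left out:

```python
import numpy as np

def local_curvature_map(I1, I2, I3, dphi, tiles=16):
    """Estimate the local defocus (local wave-front curvature) per
    sub-image from three phase-diverse intermediate images."""
    step = I1.shape[0] // tiles
    curv = np.zeros((tiles, tiles))
    for i in range(tiles):
        for j in range(tiles):
            sl = np.s_[i*step:(i+1)*step, j*step:(j+1)*step]
            curv[i, j] = estimate_defocus(I1[sl], I2[sl], I3[sl],
                                          dphi, dphi)
    return curv   # input to an adapted Shack-Hartmann reconstruction
```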
However, the intermediate images can also be used to estimate the angulation of light rays relative to the optical axis (from lateral displacements of sub-images) by comparison of the spatial spectra of neighbouring intermediate images, and then to reconstruct the shape of the wave-front by applying methods developed for the analysis of so-called hartmanngrams. The apparatus should therefore include means adapted to reconstruct a wave-front by combining the lateral shifts of at least two intermediate sub-images.
Moreover, a new image of the object can be calculated as it is projected on a plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing. Assume, for example, that in an optical system using two intermediate images the spatial spectrum of the first image is I₁(ω_x,ω_y) and the spectrum of the second image, taken in a plane displaced by Δz along the Z-axis, is I₂(ω_x,ω_y). A lateral shift of the second image by Δx and Δy, in conformity with Eq. 22, results in the following change in the spatial spectrum of the second image:

$$\tilde I_2(\omega_x,\omega_y) = I_2(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x + \omega_y\Delta y)], \tag{42}$$
I₂(ω_x,ω_y) being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example, a square or a circle, and, thus, the defocused OTF H(ω_x,ω_y,φ) ∈ ℝ (real value). For two intermediate images, one of which is supposed to be unshifted, we have, by analogy with Eq. 23,

$$\tilde I_2(\omega_x,\omega_y) = H(\omega_x,\omega_y,z+\Delta z)\,I_0(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \qquad I_1(\omega_x,\omega_y) = H(\omega_x,\omega_y,z)\,I_0(\omega_x,\omega_y), \tag{43}$$
where H(ω_x,ω_y,z) is the system OTF with the defocus expressed in terms of the displacement z with respect to the exit pupil plane. The intermediate images specified by I₁(ω_x,ω_y) and Ĩ₂(ω_x,ω_y) are supposed to be displaced longitudinally by a small distance |Δz| ≪ z to prevent a significant lateral shift of Ĩ₂(ω_x,ω_y). From Eq. 43, it follows that
$$[\tilde I_2(\omega_x,\omega_y)\,/\,I_1(\omega_x,\omega_y)]^2 \propto \exp[-2i(\omega_x\Delta x + \omega_y\Delta y)] \equiv \exp(-2i\vartheta), \tag{44}$$

where ϑ = ω_xΔx + ω_yΔy. The lateral shifts Δx and Δy can obviously be found from Eq. 44.
Note that other mathematical methods applicable to the Fourier transforms of the images and/or their intensity distributions can be implemented to obtain information on the lateral displacements Δx and Δy, for example, the correlation method or analysis of moments in the intensity distributions. From the formulas above, the ray vector characterizing the whole image specified by the spatial spectrum I₁(ω_x,ω_y) becomes v = {Δx, Δy, Δz}, and a new image (rather, a point of the image) in any displaced plane with coordinate z perpendicular to the optical axis Z can be conveniently calculated by ray-tracing (for example, D. Malacara and M. Malacara, Handbook of Optical Design, Marcel Dekker, Inc., New York, 2004). Note that the ray intensity I_v is given by the integral intensity of the whole image/sub-image:

$$I_v = \int\!\!\int_{(x,y)\,\in\,\text{sub-image}} I(x,y)\,dx\,dy. \tag{45}$$
The integration in Eq. 45 is performed over the image/sub-image area. By splitting the images into a large number of non-overlapping or even overlapping sub-areas, depending on the application requirements, the procedure described above can be applied to each sub-area separately, resulting in a final image as it is projected on the image plane at any given distance from the exit pupil, having a number of "pixels" equal to the number of sub-areas. This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods), but the essential difference is that the method described in the present document does not require an additional array of microlenses. The information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate defocused images. It should be noted that the estimated computational cost of the described method is significantly lower than that given in WO2006/039486; in other words, the described method can provide real-time capability.
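For illustration, the per-sub-image ray estimate of Eqs. 44-45 can be sketched as a least-squares fit of the spectral phase plane; the low-frequency mask and guard constant are assumptions, and phase wrapping is ignored for small shifts:

```python
import numpy as np

def ray_from_subimages(sub1, sub2, dz):
    """Estimate the ray vector {dx, dy, dz} and ray intensity (Eq. 45)
    for one sub-image pair taken in planes separated by dz."""
    S1 = np.fft.fftshift(np.fft.fft2(sub1))
    S2 = np.fft.fftshift(np.fft.fft2(sub2))
    n = sub1.shape[0]
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n))  # rad per pixel
    WX, WY = np.meshgrid(w, w)
    # Eq. 44: phase of the squared spectral ratio is -2*(wx*dx + wy*dy)
    theta = -0.5 * np.angle((S2 / (S1 + 1e-12))**2)
    low = (WX**2 + WY**2) < (0.2 * np.pi)**2            # low frequencies
    A = np.column_stack([WX[low], WY[low]])
    dx, dy = np.linalg.lstsq(A, theta[low], rcond=None)[0]
    intensity = float(sub2.sum())                       # Eq. 45
    return np.array([dx, dy, dz]), intensity
```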
Images with EDF can be obtained by correction of a single wave-front in a single final image. The non-iterative computation methods described in this document allow for rapid computations on, for example, dedicated electronic circuits. Extended computation time on powerful computers has been a drawback of various EDF imaging techniques to date. EDF images can also be obtained by dividing a total image into sub-images of a much smaller number than in the wave-front application described above, which likely requires thousands of sub-images. The degree of defocus is determined per sub-image (this can be a small number of sub-images, say only a dozen or so per total image, or very large numbers with each sub-image represented by only a few pixels; the desired number of sub-images depends on the required accuracy, the specifications of the device and its application), and the sub-images are corrected accordingly, followed by reconstruction of a final image by combination of the corrected sub-images. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
EDF images can also be obtained by stacking at least two final images each reconstructed to correct for defocus for at least one focal plane of the same objects in cubic space. Such digital stacking procedures are well known.
The list of embodiments above includes examples for possible embodiments, and other designs to the same effect can be implemented, albeit of likely increasing complexity. The choice of embodiment clearly depends on the specifics of the application.
It should be noted that the preferred methods above imply a non-iterative method of image reconstruction. A non-iterative approach is simplest and saves computing time; in our prototypes we reach reconstruction times of ~50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the estimate of defocus in selected cases and improve the image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with re-iteration of the calculations can be preferred by the user over three intermediate images combined with non-iterative calculations. The embodiments and methods of reconstruction depend on the intended application.
Devices employing image reconstruction as described in this document can be applied in nearly any optical camera system; the applications are too numerous to list in full. Some, but not all, are listed below.
Scanning is an important application. Image scanning is a well-known technology and can hereby be extended to camera applications. Note that images with an EDF can be reconstructed by dividing the intermediate images into a plurality of sub-sectors. For each sub-area the degree of defocus can be determined and, consequently, the optical sharpness of the sub-sector reconstructed. The final image will thus be composed of a plurality of optically focused images and have an EDF, even at full-aperture camera settings. Linear scanning can be employed to define such linear sub-areas.
Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including defocus. This invention provides a single sharp image of the object from single exposures, as well as additional information on speed, distance and angle of travel from multiple exposures. Applications include military tracking and targeting systems, but also medical ones, for example endoscopy with added distance information.
Methods described in this document are sensitive to wavelength. This phenomenon can be employed to split images at varying image depth when light sources of different wavelengths are employed. For example, focusing at different layer depths in multilayer CD/DVD discs can be achieved simultaneously with lasers of different wavelengths; a multilayer DVD pick-up optical system which reads different layers simultaneously can thus be designed. Other applications involve consumer and technical cameras insensitive to defocus error, iris-scanning cameras insensitive to the distance of the eye to the optics, and a multitude of homeland-security camera applications. Also, automotive cameras can be designed which are not only insensitive to defocus but also, for example, calculate distance and speed of chosen objects or serve as parking aids, as well as wave-front sensors for numerous military and medical applications. Availability of inexpensive wave-front sensors will only increase the number of applications.
As pointed out, the reconstruction method described above is highly dependent on the wavelength of the light forming the image. The methods can therefore be adapted to determine the wavelength of light when the defocus is known precisely. Consequently, the image reconstructor can, alternatively, be designed as a spectrometer.
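A hedged illustration of that spectrometer mode, assuming (as is common for dimensionless defocus parameters) that φ scales inversely with wavelength for a fixed geometry; the exact relation depends on the conventions used:

```python
# Illustrative inversion only: given a calibration (phi_ref measured at
# lambda_ref) and an assumed phi ∝ 1/lambda scaling, recover the wavelength
# from a newly measured defocus value.
def wavelength_from_defocus(phi_measured, phi_ref, lambda_ref):
    return lambda_ref * phi_ref / phi_measured
```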
Figure 1 shows a sequence of defocused intermediate images on the image side of the optical system, from which intermediate images the final image can be reconstructed. An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description of the invention in this document) with three intermediate images, 2, 3, 4, along the optical axis, 5. These images have precisely known distances, 6, 7, 8, to the exit pupil, 1, and, alternatively, precisely known distances to each other, 9, 10. Note that a precisely known distance of a photo-sensor/image plane to the principal plane in such a system translates, via standard optical formulas, into a precisely known difference of defocus between the images.
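One textbook form of this translation (a hedged example; the prefactor depends on the pupil convention used) relates an image-plane shift $\Delta z$ to the peak defocus path-length error and to the dimensionless defocus:

$$W_{20} \approx \frac{\Delta z}{8N^2}, \qquad \varphi = \frac{2\pi}{\lambda}\,W_{20} \approx \frac{\pi\,\Delta z}{4\,\lambda\,N^2},$$

where $N$ is the working f-number; so the known sensor separations 9 and 10 fix the defocus differences between the intermediate images once $\lambda$ and $N$ are known.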
Figure 2 shows a reconstructed image of a page from a textbook, 11, reconstructed into one final image from three intermediate images: one, 12, defocused at the dimensionless value of φ = 40, a second, 13, defocused at φ = 45, and another, 14, defocused at φ = 50. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14-16 bit/pixel. Note that all defocused images are distinctly unreadable, to a degree that even the mathematical integral sign cannot be recognized in any of the intermediate images.
Figure 3 shows a reconstructed image of a scene with a building, 15, reconstructed into one final image from three intermediate images: one, 16, defocused at the dimensionless value of φ = 50, a second, 17, defocused at φ = 55, and another, 18, defocused at φ = 60. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14 bit/pixel.
Figure 4 shows a reconstructed image of the letters "PSF" on a page, 19, reconstructed into one final image from three intermediate images: one, 20, defocused at the dimensionless value of φ = 95, a second, 21, defocused at φ = 100, and another, 22, defocused at φ = 105. The reconstruction was carried out on intermediate images with digitally simulated defocus. The final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step defocus correction, with a final defocus deviation from the exact value of δφ ≈ 0.8.
Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a beam splitter, 26, into two light signals. The light signals are finally detected by two photo-sensors, 27 and 28, positioned in the image planes shifted, one
with respect to another, by a specified distance along the optical axis. Photo-sensors 27 and 28 provide simultaneously two intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a first beam splitter, 26, into two light signals. The reflected part of light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28. The light signals from the beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in the image planes shifted, one with respect to another, and relative to the image plane of the sensor 27. Photo-sensors 27, 29 and 30 provide simultaneously three intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 7 illustrates the method, in this example for two intermediate images, to calculate an object image in an arbitrary image plane, i.e. at an arbitrary defocus, based on the local ray-vector and intensity determined for a small sub-area of the whole (defocused) image. Two consecutive phase-diverse images, 2 and 3, with a predetermined defocus, or alternatively displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images. Comparison of the spatial spectra calculated for a selected image area, 31, on the phase-diverse images allows evaluation of the ray-vector direction, 32, which characterizes light propagation, in the geometric-optics limit, along the optical axis 5. Using the integral intensity over the area 31 as a ray intensity, in combination with the ray-vector, a corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing. In the calculations, the distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.
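The final ray-tracing step of Figure 7 reduces, in this illustrative sketch (which assumes the ray slope recovered by the spectra comparison above), to a linear propagation of each sub-area's ray to the new plane:

```python
# Project one sub-area's image point onto an arbitrary plane located at
# axial distance dz from the intermediate image plane (geometric optics).
def project_point(center_xy, ray_xy, intensity, dz):
    x, y = center_xy
    sx, sy = ray_xy                  # transverse ray slope (dx/dz, dy/dz)
    return (x + sx * dz, y + sy * dz, intensity)
```

Applying this to every sub-area yields the intensity distribution in the chosen plane, i.e. the image refocused at an arbitrary defocus.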
New York, 1996, Chap. 6). The object is positioned in the "object plane" and the corresponding image is positioned in the "image plane". "EDF" is an abbreviation for Extended Depth of Field.
The term "in-focus" refers to in focus/optical sharpness/in optimal focus, and the term "defocus" to defocus/optical un-sharpness/blurring. An image is meant to be in-focus when the image plane is optically conjugate to the corresponding object plane.
This document merely, by way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in-focus image, because defocus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct for any aberration of any order, or combination of aberrations of different orders. A person skilled in the art will conclude that the concepts set forth in this document can be extended to other aberrations as well, by adaptation of the formulas presented for the applications above.
This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes. Colour imaging can be achieved by splitting white light into narrow spectral bands. White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras such as RGB Bayer-pattern filters, providing the computation means with adaptations for at least three approximately monochromatic spectra and combining the resulting images. The invention can also be applied to infrared (IR) spectra. X-rays produced by an incandescent cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.
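A sketch of such colour handling under the stated assumptions (`reconstruct` is a hypothetical single-channel reconstruction routine, and the nominal band-centre wavelengths are illustrative):

```python
# Split an RGB frame into three approximately monochromatic channels,
# reconstruct each with a wavelength-appropriate call, and recombine.
import numpy as np

def reconstruct_rgb(rgb, reconstruct,
                    wavelengths=(630e-9, 530e-9, 465e-9)):  # nominal R, G, B
    channels = [reconstruct(rgb[..., c].astype(float), wavelengths[c])
                for c in range(3)]
    return np.stack(channels, axis=-1)
```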
For ultrasound and coherent radio-frequency signals the formulas can be adapted for the coherent amplitude transfer function of the corresponding system. A person skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document.
This document describes methods to obtain sharp, focused images in planes (slices) along the optical axis, as well as optical sharpness in three-dimensional space and EDF imaging in which all objects in the intended cubic space are sharp and in-focus. The traditional focusing process, i.e. changing the distance between the imaging optics and the image on film or photo-detector, or otherwise changing the focal distance of the optics, takes time, requires additional, generally mechanically moving, components in the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis. Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i.e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the diameter of the aperture reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades the image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by computational methods are therefore of considerable interest in imaging systems and clearly preferable to such traditional optical/mechanical methods.
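The aperture trade-off can be made quantitative with two textbook approximations (stated here for illustration; exact prefactors depend on the sharpness criterion):

$$\mathrm{DOF} \approx \frac{\lambda}{\mathrm{NA}^2}, \qquad \delta_{\min} \approx \frac{\lambda}{2\,\mathrm{NA}},$$

so halving the NA quadruples the depth of field but doubles the smallest resolvable detail and collects only a quarter of the light; this is precisely the trade-off that full-aperture computational focusing avoids.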
Furthermore, a method to achieve this with no moving parts (as a solid state system) is generally preferable for both manufacturer and end-user because of low cost of equipment and ease of use.
Several methods have been proposed for digital reconstruction of in-focus images, some of which are summarized below in the context of the present invention described in this document.
Optical digital technologies regarding defocus correction and EDF started with a publication by Hausler (Optics Communications 6(1), pp. 38-42, 1972), which described a combination of multiple images into a single image in such a way that the final image has EDF. This method does not reconstruct the final image from a set of defocused images but combines the various in-focus areas of different images. The present invention differs from this approach because it reconstructs the final image from intermediate, defocused images that may contain no in-focus areas at all and automatically combines these images into a sharp final EDF image.
Later methods, based on phase coding/decoding, include an optical mask in the optical system designed such that the incoherent optical transfer function remains unchanged within a range of defocus values. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm to reconstruct the final EDF image from the phase-encoded intermediate image.
The present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.
Also, various phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a defocused image; refer to, for example, US6771422 and US2004/0052426.
US2004/0052426 describes non-iterative techniques for phase retrieval for estimating errors of an optical system, which include capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object. This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand, or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate the optical errors of the optics employed in said imaging. This technique requires at least one focused image at a first focal point in combination with multiple unfocused images; these images are then used to calculate wave-front errors.
The present invention differs from US2004/0052426 in that the present invention does not require a focused image, i.e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with an unknown degree of defocus relative to the object.
US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the defocus status of each sensor, and production of an enhanced final image. The defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic, coherent light wave.
The present invention differs from US6771422 in that it does not solve the transport equation. The present invention is based on a priori knowledge of the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution for different image planes and, thus, the degree of defocus, by direct calculation with non-iterative algorithms.
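The object-independence that makes this possible can be sketched as follows: for two intermediate spectra $G_i(f) = H(\varphi_i, f)\,O(f)$ of the same object, the ratio $G_1/G_2$ cancels the unknown object spectrum $O(f)$, so the absolute defocus can be read off by direct evaluation against a precomputed OTF family. Below is a hedged sketch; `otf` is an assumed model, `dphi` is the known defocus difference between the two planes, and the grid evaluation stands in for the document's direct calculation:

```python
# Estimate absolute defocus from two intermediate images whose spectra ratio
# depends only on the two OTFs, not on the (unknown) object.
import numpy as np

def estimate_absolute_defocus(img1, img2, otf, dphi, candidates):
    g1 = np.fft.fft2(img1.astype(float))
    g2 = np.fft.fft2(img2.astype(float))
    ratio = g1 / (g2 + 1e-12)
    best_phi, best_err = None, np.inf
    for phi in candidates:           # direct evaluation, no image iteration
        model = otf(phi, img1.shape) / (otf(phi + dphi, img1.shape) + 1e-12)
        err = np.mean(np.abs(ratio - model) ** 2)
        if err < best_err:
            best_phi, best_err = phi, err
    return best_phi
```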
Other methods to reconstruct images based on a plurality of intermediate images/intensity distributions taken at different and known degrees of defocus employ iterative phase-diversity algorithms (see, for example, J.J. Dolne et al., Applied Optics 42(26), pp. 5284-5289, 2003). Such iteration can take considerable computational power and computing time and is unlikely to be carried out in real time. The present invention described in this document differs from the standard phase-diversity algorithms in that it is an essentially non-iterative method.
WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed such that it allows determination, by an array of microlenses, of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called "plenoptic" camera. Sharp images of object points at different distances from the camera can then be recalculated (for example, by ray-tracing). It must be noted that with the method described in the present document the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can also be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
The present invention described in this document differs from WO2006/039486 and related documents in that the present invention does not explicitly use such information on angle of incidence obtained with an array of microlenses, for example a Shack-Hartmann wave-front sensor; instead, the respective light-ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them.
Applications Claiming Priority (6)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP08152017 | 2008-02-27 | | |
| EP08152017.3 | 2008-02-27 | | |
| NL2001777 | 2008-07-07 | | |
| NL2001777A (NL2001777C2) | 2008-02-27 | 2008-07-07 | Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions |
| NL2002094 | 2008-10-14 | | |
| NL2002094 | 2008-10-14 | | |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| WO2009108050A1 | 2009-09-03 |
| WO2009108050A9 | 2010-09-30 |

Family ID: 40786556

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/NL2009/050084 (WO2009108050A1) | Image reconstructor | 2008-02-27 | 2009-02-25 |

Country Status (1)

| Country | Link |
|---|---|
| WO | WO2009108050A1 |
Non-Patent Citations (8)

- Dolne, J.J. et al., "Practical issues in wave-front sensing by use of phase diversity", Applied Optics 42(26), pp. 5284-5289, 10 September 2003, ISSN 0003-6935.
- Gonsalves, R.A., "Phase retrieval and diversity in adaptive optics", Optical Engineering 21(5), pp. 829-832, 1 September 1982, ISSN 0091-3286.
- Gonsalves, R.A., "Small-phase solution to the phase-retrieval problem", Optics Letters 26(10), pp. 684-685, 15 May 2001, ISSN 0146-9592.
- Hausler, G., "A method to increase the depth of focus by two step image processing", Optics Communications 6(1), pp. 38-42, 1 September 1972, ISSN 0030-4018.
- Kendrick, R.L. et al., "Phase-diversity wave-front sensor for imaging systems", Applied Optics 33(27), pp. 6533-6546, 20 September 1994, ISSN 0003-6935.
- Landesman, B.T. et al., "Non-iterative methodology for obtaining a wavefront directly from phase diversity measurements", Proc. SPIE 4850 (IR Space Telescopes and Instruments, Waikoloa, HI), pp. 461-468, 2003, ISSN 0277-786X.
- Levoy, M. et al., "Light field rendering", Computer Graphics Proceedings (SIGGRAPH), New Orleans, pp. 31-42, 4 August 1996.
- Ohneda, Y. et al., "Multiresolution approach to image reconstruction with phase-diversity technique", Optical Review 8(1), pp. 32-36, 1 January 2001, ISSN 1349-9432.
Cited By (20)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2011139150A1 | 2010-05-03 | 2011-11-10 | Asmr Holding B.V. | Improved optical rangefinding and imaging apparatus |
| US8817128B2 | 2010-06-09 | 2014-08-26 | International Business Machines Corporation | Real-time adjustment of illumination color temperature for digital imaging applications |
| US8466984B2 | 2010-06-09 | 2013-06-18 | International Business Machines Corporation | Calibrating color for an image |
| US8797450B2 | 2010-06-09 | 2014-08-05 | International Business Machines Corporation | Real-time adjustment of illumination color temperature for digital imaging applications |
| US9538090B2 | 2012-06-28 | 2017-01-03 | International Business Machines Corporation | Digital image capture under conditions of varying light intensity |
| US9046738B2 | 2012-06-28 | 2015-06-02 | International Business Machines Corporation | Digital image capture under conditions of varying light intensity |
| US9046739B2 | 2012-06-28 | 2015-06-02 | International Business Machines Corporation | Digital image capture under conditions of varying light intensity |
| DE102012106584B4 | 2012-07-20 | 2021-01-07 | Carl Zeiss Ag | Method and device for image reconstruction |
| US9516242B2 | 2012-07-20 | 2016-12-06 | Carl Zeiss Ag | Method and apparatus for image reconstruction |
| DE102012106584A1 | 2012-07-20 | 2014-01-23 | Carl Zeiss Ag | Method and apparatus for image reconstruction |
| WO2014083574A3 | 2012-11-30 | 2015-12-17 | L&T Technology Services Limited | A method and system for extended depth of field calculation for microscopic images |
| US9897792B2 | 2012-11-30 | 2018-02-20 | L&T Technology Services Limited | Method and system for extended depth of field calculation for microscopic images |
| WO2016036364A1 | 2014-09-03 | 2016-03-10 | Apple Inc. | Plenoptic cameras in manufacturing systems |
| WO2017093227A1 | 2015-12-02 | 2017-06-08 | Carl Zeiss Ag | Method and device for image correction |
| US10748252B2 | 2015-12-02 | 2020-08-18 | Carl Zeiss Ag | Method and device for image correction |
| US11145033B2 | 2017-06-07 | 2021-10-12 | Carl Zeiss Ag | Method and device for image correction |
| CN112164001A | 2020-09-29 | 2021-01-01 | 南京理工大学智能计算成像研究院有限公司 | Digital microscope image rapid splicing and fusing method |
| CN112164001B | 2020-09-29 | 2024-06-07 | 南京理工大学智能计算成像研究院有限公司 | Digital microscope image rapid splicing and fusion method |
| WO2023208496A1 | 2022-04-27 | 2023-11-02 | ASML Netherlands B.V. | System and method for improving image quality during inspection |
| CN115308129A | 2022-07-01 | 2022-11-08 | 江苏诺鬲生物科技有限公司 | Method and device for automatically determining focusing position of fluorescent dark field camera |
Also Published As

| Publication number | Publication date |
|---|---|
| WO2009108050A9 | 2010-09-30 |
Similar Documents

| Publication | Title |
|---|---|
| WO2009108050A1 | Image reconstructor |
| US7705970B2 | Method and system for optical imaging and ranging |
| US7646549B2 | Imaging system and method for providing extended depth of focus, range extraction and super resolved imaging |
| JP5328165B2 | Apparatus and method for acquiring a 4D light field of a scene |
| JP6091176B2 | Image processing method, image processing program, image processing apparatus, and imaging apparatus |
| CN103363924B | Compressive three-dimensional computational ghost imaging system and method |
| US9530213B2 | Single-sensor system for extracting depth information from image blur |
| US7889903B2 | Systems and methods for minimizing aberrating effects in imaging systems |
| US8305485B2 | Digital camera with coded aperture rangefinder |
| JP2017005380A | Control device, imaging device, control method, program and storage medium |
| KR20030028553A | Method and apparatus for image mosaicing |
| JP2012514749A | Optical distance meter and imaging device with chiral optical system |
| JP2017208641A | Imaging device using compressed sensing, imaging method, and imaging program |
| US20120044320A1 | High resolution 3-D holographic camera |
| Chen et al. | Light field compressed sensing over a disparity-aware dictionary |
| CN115546285B | Large-depth-of-field fringe-projection three-dimensional measurement method based on point-spread-function calculation |
| US5350911A | Wavefront error estimation derived from observation of arbitrary unknown extended scenes |
| KR20210124041A | Apparatus, method and system for generating a three-dimensional image using a coded phase mask |
| Amin et al. | Active depth from defocus system using coherent illumination and a no moving parts camera |
| JP2017208642A | Imaging device using compressed sensing, imaging method, and imaging program |
| WO2021099761A1 | Imaging apparatus |
| NL2001777C2 | Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions |
| Neuner et al. | Digital adaptive optical imaging for oceanic turbulence mitigation |
| CN114208145A | Image pickup apparatus and method |
| KR20110042936A | Apparatus and method for processing image using light field data |
Legal Events

| Code | Title | Description |
|---|---|---|
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 09716082; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 09716082; Country of ref document: EP; Kind code of ref document: A1 |