WO2009108050A9 - Image reconstructor - Google Patents

Image reconstructor

Info

Publication number: WO2009108050A9
Authority: WIPO (PCT)
Prior art keywords: image, images, defocus, degree, focus
Application number: PCT/NL2009/050084
Other languages: French (fr)
Other versions: WO2009108050A1 (en)
Inventor: Aleksey Nikolaevich Simonov, Michiel Christiaan Rombach
Original Assignee: Aleksey Nikolaevich Simonov, Michiel Christiaan Rombach
Priority claimed from NL2001777A (NL2001777C2)
Application filed by Aleksey Nikolaevich Simonov and Michiel Christiaan Rombach
Publication of WO2009108050A1
Publication of WO2009108050A9

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/0075 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00, with means for altering, e.g. increasing, the depth of field or depth of focus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73 - Deblurring; Sharpening
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/67 - Focus control based on electronic image sensor signals
    • H04N 23/672 - Focus control based on electronic image sensor signals based on the phase difference signals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10141 - Special mode during image acquisition
    • G06T 2207/10148 - Varying focus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Description

Background of the invention

  • The present invention relates to imaging and metering techniques. Firstly, the invention provides methods, systems and embodiments of these for estimating aberration errors of an image and reconstructing said image from a set of multiple intermediate images by non-iterative algorithms; secondly, it provides methods to reconstruct wave-fronts.
  • An apparatus based on the invention can be either a dedicated camera or wave-front sensor, or these functions can be combined.
  • The invention has a broad scope of embodiments and applications, including image reconstruction for one or more focal distances, image reconstruction for EDF, speed, distance and direction measurement devices, and wave-front sensors for various applications. Reconstruction of images independent of the defocus aberration has the most practical applications; the device or derivatives thereof can therefore be applied for digital imaging insensitive to defocus (in cameras), digital imaging for extended depth of field ("EDF", in cameras), and optical distance, speed and direction measurement (in measuring and metering devices).
  • Camera units and wave-front sensors according to the methods and embodiments set forth in this document can be designed to be entirely solid state, with no moving parts, and constructed from only very few components. A basic embodiment requires simple optics (for selected applications even a single lens), one beam splitter (or other beam-splitting element, for example a phase grating) and two sensors, combined with dedicated data-processing units/chips, with all components in, for example, one solid polymer assembly.
  • In this document "intermediate image" refers to a phase-diverse intermediate image which has an unknown defocus relative to the in-focus image plane but an a priori known diversity defocus with respect to any other intermediate image among the multiple intermediate images.
  • The "in-focus image" plane is a plane optically conjugate to an object plane, thus having zero defocus error.
  • The terms "object" and "image" conform to the notation of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6).
  • The object is positioned in the "object plane" and the corresponding image is positioned in the "image plane".
  • EDF is an abbreviation for Extended Depth of Field.
  • The term "in-focus" refers to in focus/optical sharpness/optimal focus, and "defocus" to defocus/optical un-sharpness/blurring. An image is in focus when the image plane is optically conjugate to the corresponding object plane.
  • This document merely, by way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in-focus image, because defocus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct any aberration of any order, or combination of aberrations of different orders; a person skilled in the art will conclude that the concepts set forth in this document can be extended to other aberrations by adaptation of the formulas presented.
  • This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes.
  • Colour imaging can be achieved by splitting white light into narrow spectral bands.
  • White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras such as RGB Bayer-pattern filters, with the computation means adapted for at least three approximately monochromatic spectra and the resulting images combined.
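As an illustration of this channel-splitting step, the following minimal sketch separates a Bayer-mosaiced raw frame into three approximately monochromatic channels that could each be fed to the reconstruction independently; the RGGB layout and function name are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a Bayer-mosaiced frame (RGGB layout assumed) into three
    half-resolution, approximately monochromatic channels, each of which
    can be reconstructed independently and recombined afterwards."""
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])   # average the two green sites
    b = raw[1::2, 1::2]
    return r, g, b
```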
  • The invention can also be applied to infrared (IR) spectra.
  • X-rays produced by an incandescent cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.
  • For ultrasound and coherent radio-frequency signals the formulas can be adapted to the coherent amplitude transfer function of the corresponding system. A person skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document. This document describes methods to obtain sharp, focused images in planes (slices) along the optical axis, optical sharpness in three-dimensional space, and EDF imaging in which all objects in the intended cubic space are sharp and in focus. The traditional focusing process, i.e. changing the distance between the imaging optics and the image on film or photo-detector, or otherwise changing the focal distance of the optics, takes time, requires additional, generally mechanically moving, components in the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis. Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i.e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the aperture diameter reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by computational methods are therefore of considerable interest in imaging systems and clearly preferable to such traditional optical/mechanical methods.
  • Furthermore, a method achieving this with no moving parts (a solid-state system) is generally preferable for both manufacturer and end-user because of low equipment cost and ease of use.
  • Several methods have been proposed for digital reconstruction of in-focus images, some of which are summarized below in the context of the present invention.
  • Optical digital technologies regarding defocus correction and EDF started with a publication by Hausler (Optics Communications 6(1), pp. 38-42, 1972), which described combining multiple images into a single image in such a way that the final image exhibits EDF.
  • This method does not reconstruct the final image from the set of defocused images but combines various in-focus areas of different images.
  • The present invention differs from this approach because it reconstructs the final image from intermediate, defocused images that may not contain in-focus areas at all, and automatically combines these images into a sharp final EDF image.
  • Later methods are based on phase coding/decoding and include an optical mask designed such that the incoherent optical transfer function remains unchanged within a range of defocus. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm that reconstructs the final EDF image from the phase-encoded intermediate image.
  • The present invention includes neither coding of wave-fronts nor the use of phase filters.
  • Various phase-diversity methods determine the phase of an object by comparing a precisely focused image with a defocused image; refer to, for example, US6771422 and US2004/0052426.
  • US2004/0052426 describes non-iterative phase-retrieval techniques for estimating errors of an optical system; it includes capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object.
  • This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand (or, alternatively, the camera must be focused on the object) and, secondly, the method is designed to estimate optical errors of the optics employed in said imaging.
  • This technique requires at least one focused image at a first focal point in combination with multiple unfocused images; these images are then used to calculate wave-front errors.
  • The present invention differs from US2004/0052426 in that it does not require a focused image, i.e. knowledge of the distance from the object to the first principal plane of the optical system, prior to capture of the intermediate images; it uses only a set of unfocused intermediate images with unknown degree of defocus relative to the object.
  • US6771422 describes a tracking system with EDF that includes a plurality of photo-sensors, a way of determining the defocus status of each sensor, and the production of an enhanced final image.
  • There, the defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic, coherent light wave.
  • The present invention differs from US6771422 in that it does not solve the transport equation.
  • The present invention uses a priori known information on the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution over different image planes and, thus, the degree of defocus, by direct calculation with non-iterative algorithms.
  • WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, which explains the methods) uses an optical system designed such that an array of micro-lenses allows determination of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called "plenoptic" camera.
  • From these data, sharp images of object points at different distances from the camera can be recalculated (for example, by ray-tracing).
  • Likewise, the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
  • The present invention differs from WO2006/039486 and related documents in that it does not explicitly use such information on the angle of incidence obtained with an array of micro-lenses (for example, a Shack-Hartmann wave-front sensor); instead, the respective light-ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them. Additionally, the intermediate phase-diverse images described in this document can also be used to determine the angle and intensity of individual rays and to compose an EDF image by ray-tracing.
  • The present invention relates to imaging techniques. From the single invention a number of applications can be derived:
  • the invention provides a method for estimating defocus in an optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate defocused images, and,
  • further, it can reconstruct EDF images by combining images from various focal planes (for example "image stacking"), or by combining in-focus sub-images (for example "image stitching"), or by correction of wave-fronts, or by ray-tracing to project an image in a plane of choice, and,
  • Fifthly, it provides methods to calculate the speed and distance of an object, including speed in all directions (X, Y and Z), by analyzing subsequent images of the object based on multiple intermediate images and the resulting information on focal planes, and, Sixthly, it can be used to estimate the shape of a wave-front by reconstructing the tilt of individual rays, i.e. by calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them, and,
  • Ninthly, it can be adapted to many non-optical applications, for example tomography: digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. intermediate-image degradation attributable to convolution with the system response function), where the response function is known a priori, the relative degree of blurring of any intermediate image compared to the other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.
  • In the method, a focused final image of an object is derived, by digital reconstruction, from at least two defocused intermediate images having an unknown degree of defocus relative to the ideal focal plane (or, equivalently, an unknown distance from the object to the principal planes of the imaging system), but a precisely known degree of defocus of each intermediate image relative to any other intermediate image.
  • The method starts with at least two defocused, i.e. phase-diverse, intermediate images, from which a final in-focus image can be reconstructed by a non-iterative algorithm, given an optical system whose optical transfer function is a priori known.
  • Each intermediate image has a different, a priori unknown degree of defocus relative to the in-focus image plane of the object, but the degree of defocus of any intermediate image relative to any other intermediate image is a priori known.
  • A generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed. Said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent of the degree of defocus of at least one intermediate image relative to the in-focus image plane.
  • This adjustment can take the form of adjusting coefficients, adjusting functional dependencies, or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as linear, non-linear or functional, depending on the intended application. The final in-focus image is then reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.
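The generating-function formulation itself is specified by the patent's Eqs. 1-10, which are not reproduced on this page. As a rough sketch of the flavour of such a non-iterative reconstruction, the following code merges M phase-diverse spectra with model defocused OTFs in a single regularized least-squares step; the pupil model, function names and regularization constant are assumptions for illustration, not the patent's actual formulas.

```python
import numpy as np

def defocused_otf(shape, phi, cutoff=0.5):
    """Incoherent OTF of a circular-pupil system with defocus phi, modelled
    as the Fourier transform of the PSF of a pupil carrying a quadratic
    phase (a textbook stand-in, not the patent's own H)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho2 = (fx**2 + fy**2) / cutoff**2
    pupil = (rho2 <= 1.0) * np.exp(1j * phi * rho2)   # defocus = quadratic phase
    psf = np.abs(np.fft.ifft2(pupil))**2
    return np.fft.fft2(psf / psf.sum())

def reconstruct(images, dphis, phi0, eps=1e-3):
    """Single-pass (non-iterative) combination of M phase-diverse images:
    images[n] has unknown absolute defocus but a priori known diversity
    defocus dphis[n]; phi0 is the defocus estimate.  A regularized
    least-squares merge of the spectra."""
    num, den = 0.0, eps
    for img, dphi in zip(images, dphis):
        H = defocused_otf(img.shape, phi0 + dphi)
        num = num + np.conj(H) * np.fft.fft2(img)
        den = den + np.abs(H)**2
    return np.real(np.fft.ifft2(num / den))
```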
  • An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.
  • Such a method is based on an equation for the generating function/functional (Eq. 1), in which Δφ_n (known a priori from the system configuration) is the diversity defocus between the n-th intermediate image plane and a chosen reference image plane.
  • I₀(ω_x, ω_y) is the spatial spectrum of the object, i.e. of the final image: I₀(ω_x, ω_y) = ∫∫ i₀(x, y) exp[−i(ω_x x + ω_y y)] dx dy, where x and y are the transverse coordinates in the object plane and the integration runs over the object plane.
  • I_n(ω_x, ω_y) = H(ω_x, ω_y, φ₀ + δφ − Δφ_n) I₀(ω_x, ω_y), where H(ω_x, ω_y, φ) denotes the defocused incoherent optical transfer function (OTF) of the optical system; the unknown defocus φ is substituted by the sum of the defocus estimate φ₀ and a small deviation δφ ≡ φ − φ₀, with |δφ| ≪ 1, so that φ = φ₀ + δφ.
  • The series coefficients B_p(ω_x, ω_y, φ₀, Δφ₁, …, Δφ_M, [I₀(ω_x, ω_y)]), functionally dependent on the spatial spectrum of the object I₀(ω_x, ω_y), can be found by decomposing the defocused OTFs H(ω_x, ω_y, φ₀ + δφ − Δφ_n) into Taylor series in δφ.
  • The generating function/functional G is chosen to have zero first- and higher-order derivatives, up to order K, with respect to the unknown deviation δφ.
  • An important example of the generating function is a linear combination of the spatial spectra I_n(ω_x, ω_y) of the intermediate phase-diverse images.
  • The reconstruction error can be quantified by the mean-square error MSE = ∫∫ |Î₀(ω_x, ω_y) − I₀(ω_x, ω_y)|² dω_x dω_y (Eq. 9), where Î₀ denotes the reconstructed spectrum.
  • Eq. 10 describes the non-iterative algorithm for object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
  • The defocus estimate φ₀ can be found in many ways, for example from a pair of phase-diverse images. If I₁(ω_x, ω_y) is the spatial spectrum of the first image, characterized by unknown defocus φ, and I₂(ω_x, ω_y) is the spatial spectrum of the second image with defocus φ + Δφ, with Δφ the difference in defocus predetermined by the system configuration, then the estimate of defocus is given by an appropriate closed-form expression (Eq. 14).
  • Alternatively, the estimate φ₀ of the unknown defocus φ can be found from three consecutive phase-diverse images: I₁(ω_x, ω_y) with defocus φ − Δφ₁, I₂(ω_x, ω_y) with defocus φ, and I₃(ω_x, ω_y) with defocus φ + Δφ₂, where Δφ₁ and Δφ₂ are specified by the system arrangement (Eq. 16).
  • The coefficient appearing in these expressions is the ratio of the image spectra.
  • An estimate of defocus (φ₀ in Eq. 1) is necessary to start these computations; the estimate is automatically provided by the formulas specifying the reconstruction algorithm above.
  • Such an estimate can also be provided by other analytical methods, for example by determining the first zero-crossing in the spatial spectrum of the defocused image, as described by I. Raveh et al. (Optical Engineering 38(10), pp. 1620-1626, 1999).
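The closed-form estimates of Eqs. 14 and 16 are not reproduced here. A brute-force stand-in that uses the same a priori information (the known diversity defocus Δφ and a model OTF such as `defocused_otf` from the sketch above) is to scan candidate defocus values and keep the one making the two spectra most consistent with a single underlying object:

```python
import numpy as np

def estimate_defocus(img1, img2, dphi, candidates):
    """Grid-search stand-in for the closed-form estimates of Eqs. 14/16.
    For the true defocus phi, H(phi + dphi)*I1 equals H(phi)*I2, because
    both spectra stem from the same object spectrum.  Reuses
    defocused_otf() from the previous sketch."""
    I1, I2 = np.fft.fft2(img1), np.fft.fft2(img2)

    def mismatch(phi):
        H1 = defocused_otf(img1.shape, phi)          # model OTF of image 1
        H2 = defocused_otf(img2.shape, phi + dphi)   # ... and of image 2
        return np.sum(np.abs(H2 * I1 - H1 * I2) ** 2)

    return min(candidates, key=mismatch)

# e.g. phi0 = estimate_defocus(img1, img2, dphi=1.5,
#                              candidates=np.linspace(0.0, 10.0, 101))
```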
  • Calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus that determines the degree of defocus with at least two photo-sensors having only one photo-sensitive spot each, for example photo-diodes or photo-resistors.
  • A construction for such an apparatus likely includes not only photo-sensors but also an amplitude mask, focusing optics and processing means adapted to calculate the degree of defocus of at least one intermediate image.
  • The advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplifying Eq. 16 by means of Parseval's theorem.
  • Here U(x, y) defines the amplitude mask in one or multiple image planes.
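Parseval's theorem is what makes a transform-free implementation possible: a spectral energy integral can be traded for a plain intensity sum, which is exactly what a photo-diode behind an amplitude mask U(x, y) delivers. A toy numerical check of this identity (mask and normalization conventions are illustrative assumptions):

```python
import numpy as np

def masked_energy_spatial(img, mask):
    """Energy of the masked image measured directly in the spatial domain,
    as a single photo-diode behind an amplitude mask U(x, y) would report."""
    return np.sum(np.abs(mask * img) ** 2)

def masked_energy_spectral(img, mask):
    """The same quantity evaluated in the frequency domain; Parseval's
    theorem guarantees equality up to the DFT normalization 1/N."""
    F = np.fft.fft2(mask * img)
    return np.sum(np.abs(F) ** 2) / F.size

# rng = np.random.default_rng(0); img = rng.random((64, 64)); mask = np.ones_like(img)
# np.allclose(masked_energy_spatial(img, mask), masked_energy_spectral(img, mask))  # True
```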
  • Photo-diodes and photo-resistors are significantly less expensive than photo-sensor arrays and are more easily assembled.
  • A Fourier transformation can be achieved by the processing methods described above, but also by optical means, for example an additional optical element between the beam splitter and the imaging photo-sensor. Such an optical Fourier transformation significantly reduces digital processing time, which may be advantageous for specific applications.
  • Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter or as a camera component.
  • It differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods.
  • The distance from the object to the camera can be estimated, once the degree of defocus is known, via a simple optical calculation, so the methods can be applied in a distance-metering device.
  • The speed and direction of an object in the X, Y and Z directions (i.e. in 3D space) can be estimated with additional computation means, using at least two subsequent final images and the time between capture of the intermediate images underlying those final images.
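Turning an estimated defocus into range and velocity is elementary thin-lens bookkeeping. The sketch below is a hypothetical illustration: the defocus estimate is assumed to have already been converted to an image-plane distance, and 3D positions are differenced over the known capture interval.

```python
def object_distance(f, z_image):
    """Gaussian lens formula 1/f = 1/z_obj + 1/z_img solved for the object
    distance; z_image is the image-plane distance implied by the estimated
    defocus (all distances positive, object beyond the focal length)."""
    return 1.0 / (1.0 / f - 1.0 / z_image)

def velocity_3d(p1, p2, dt):
    """Velocity vector and speed from two (x, y, z) positions dt seconds
    apart: z from the defocus estimate, x and y from image coordinates."""
    v = tuple((b - a) / dt for a, b in zip(p1, p2))
    speed = sum(c * c for c in v) ** 0.5
    return v, speed
```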
  • Such an inexpensive component for solid-state image reconstruction will broaden consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing) and technical applications.
  • The estimate can be obtained by an additional device, for example an optical or ultrasound distance-measuring system. Preferably, however, the estimate is provided by the algorithm itself, without the aid of any additional measuring device.
  • The invention also provides an apparatus for providing at least two phase-diverse intermediate images of said object, wherein each intermediate image has a different degree of defocus relative to an ideal focal plane (i.e. an image plane of the same system with no defocus error) but a precisely known degree of defocus relative to any other intermediate image.
  • The apparatus includes processing means for reconstructing a focused image of the object by the algorithm expressed by Eq. 6.
  • Each phase-diverse image is characterized by an unknown absolute magnitude of an aberration but an a priori known difference in aberration magnitude relative to any other phase-diverse image.
  • The processing functions mentioned above can be applied to any set of images or signals which are blurred but whose transfer (blurring) function is known.
  • For example, the processing can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to the out-of-focus blur discussed here.
  • An additional generating function provides the degree of defocus of at least one of said intermediate images relative to the in-focus image plane; this degree of defocus can be calculated by additional processing in an apparatus.
  • An improved estimate for the unknown defocus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm (Eq. 18), with I_n(ω_x, ω_y) being the unshifted spectrum.
  • When the exit pupil of the optical system is a symmetric region, for example a square or a circle, the defocused OTF H(ω_x, ω_y, φ) is real-valued.
  • The formulas above give one example of a method for correcting the lateral image shift; other methods to obtain shift-corrected spectra exist as well, for example correlation techniques or analysis of the moments of the intensity distribution.
  • The degree of defocus of the intermediate image relative to the in-focus image plane can be included in the non-iterative algorithm, with the processing means of the image-reconstruction apparatus adapted accordingly.
  • At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing higher restoration quality and weaker sensitivity to the initial defocus estimate φ₀, since the generating function G gives the (M − 1)-th order approximation to B₀(ω_x, ω_y, φ₀, Δφ₁, …, Δφ_M, [I₀(ω_x, ω_y)]) defined by Eq. 1 with respect to the unknown deviation δφ.
  • The resolution and overall quality of the final image increase with the number M of intermediate images, at the expense of a larger number of photo-sensors, an increasingly complex optical/mechanical arrangement and longer computation time. Reconstruction from three intermediate images is used as an example in this document.
  • The degrees of defocus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no defocus error) are unknown, but the difference in degree of defocus Δφ_n of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This poses no problem in practice, because the relative difference in defocus is specified in the design of the camera and its optics. Note that these relative differences vary with camera design, the type of photo-sensor(s) used and the intended application of the image reconstructor.
  • Alternatively, the differences in defocus Δφ_n can be found, and accounted for in further computations, by performing calibration measurements with well-defined objects.
  • The degree of defocus of the image can be estimated by non-iterative calculations using the fixed, straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations have low computational cost and provide stable, precise results. Furthermore, they can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention.
  • The reconstruction of a final, sharply focused image is independent of the degree of defocus of any of the intermediate images relative to the object.
  • The precision of the measurement of the absolute defocus (and therefore the precision of the range calculated from defocus values) is fundamentally limited by the combination of the entrance aperture D of the primary optics and the distance z from the primary optics to the object of interest. When a diffraction-limited spot defines the "circle of confusion" of the optical system, the depth of field scales as λ(z/D)², up to a constant factor of order unity.
  • High precision for defocus and range estimates therefore requires, by definition, a large aperture. This can be achieved by fitting, for example, a very large lens to the apparatus; however, such a lens may require a diameter of one meter, a size not practical for the majority of applications, which require small camera units.
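With the scaling above, the aperture needed for a target range resolution follows directly; a back-of-envelope sketch, with the order-unity constant an assumption:

```python
def aperture_for_range_resolution(wavelength, z, dz):
    """Entrance aperture D for which the diffraction-limited depth of field,
    taken here as ~ 4 * wavelength * (z / D)**2, shrinks to the desired
    range resolution dz (the factor 4 is an order-unity assumption)."""
    return z * (4.0 * wavelength / dz) ** 0.5

# 0.5 um light, z = 100 m, dz = 0.1 m  ->  D of roughly 0.45 m
# print(aperture_for_range_resolution(0.5e-6, 100.0, 0.1))
```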
  • An effectively large aperture can also be obtained by optically combining light signals from multiple (at least two) optical elements, for example relatively small reflective or refractive elements positioned off the optical axis, typically in a direction perpendicular to it.
  • The theoretical depth of focus, i.e. the axial resolution, then corresponds to the resolution of a whole refractive surface whose dimension is characterized by the distance between the optical elements.
  • The optical elements can be regarded as small individual sectors at the periphery of a large refractive surface.
  • The total light intensity received by an image sensor depends on the combined apertures of the multiple optical elements.
  • Such a system with multiple apertures can be made flat and, in the case of only two light sources, also linear.
  • Thus an apparatus for, for example, range-finding applications can be constructed which combines at least two light signals from at least two optical elements positioned opposite each other at a distance perpendicular to the optical axis.
  • The final image can be restored digitally by subsequent processing of at least one spatially modulated intermediate image according to existing, well-known decoding algorithms or, alternatively, by algorithms adapted from the procedures described in this document; the corresponding adaptations of the formulas above are set forth below.
  • Said modulations preferably include defocus, but not necessarily so.
  • Such wave-front encoding can be achieved by including, for example, at least one phase mask, or at least one amplitude mask, or a combination of any number of phase and amplitude masks having precisely known modulation functions.
  • This system embodiment implies that at least one phase and/or amplitude mask is located in the exit pupil of the imaging system.
  • The generalized pupil function of the n-th imaging channel can be written as P_n(ξ, η) = P_n⁽⁰⁾(ξ, η) exp[i θ_n(ξ, η)], where P_n⁽⁰⁾(ξ, η) ∈ ℝ is the amplitude transmission function corresponding to the n-th amplitude mask and θ_n(ξ, η) is the phase function representing the n-th phase mask in the exit pupil. For a defocus-type modulation, θ_n(ξ, η) ∝ (ξ² + η²).
  • A new generating function/functional G′ can be constructed by properly combining θ_n(ξ, η) and/or P_n⁽⁰⁾(ξ, η) so as to retain only terms linear in δφ on the right-hand side of Eq. 19.
  • The unknown defocus φ can subsequently be found from Eq. 21 by substituting G′.
  • Accordingly, an imaging apparatus can be designed which includes, in addition to the basic image-forming optics described elsewhere in this document, at least one optical mask to spatially modulate the incoming light signal.
  • Either the phase or the intensity of the signal of at least one intermediate image can be modulated.
  • Alternatively, both phase and intensity of at least one intermediate image can be spatially modulated by at least one mask, or separate masks can be included for separate, independent modulation functions.
  • The resulting modulation yields at least one spatially modulated light signal which can subsequently be reconstructed digitally, in accordance with the method described above, to diminish the sensitivity of the imaging apparatus to at least one selected optical aberration, which can be the defocus aberration.
  • Image reconstruction, an example with three intermediate images: at least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point.
  • To illustrate the reconstruction algorithm set forth in the present document, we now consider an example with three consecutive phase-diverse images and their spatial spectra.
  • The optimum difference in defocus Δφ between the intermediate images is related to the dynamic range of the image photo-sensors, i.e. their pixel depth, as well as to optical features of the object of interest. Depending on the defocus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in image intensity.
  • Various embodiments of the device can be designed, including, but not restricted to, those described below.
  • A preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each depicted object having a different degree of focus in each intermediate image; one of those objects is selected before the method is executed.
  • The image reconstructor, with its means for providing intermediate images, must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.
  • A preferred embodiment of the providing means includes one image photo-sensor which moves mechanically: the device can be designed with optics that form an image on one sensor, and this photo-sensor or, alternatively, the whole camera assembly moves a predetermined, precise distance along the optical axis between subsequent intermediate exposures.
  • The simplicity of such a device lies in needing only one photo-sensor; its complexity lies in the mechanics for precise movement. Such precise movement is most easily achieved for only two images, because only two alternative stopping positions of the device are needed.
  • Another embodiment with mechanically moving parts is a system with optics and one sensor, plus a spinning disc with stepwise sectors of different optical thickness. An image is taken each time a sector of different, known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately, and thus a set of intermediate images can be provided for subsequent reconstruction by the image-reconstruction means.
  • Alternatively, a solid-state device (with no mechanical parts/movement) can be employed.
  • For example, the optics can be designed such that at least two independent intermediate images are projected onto one fixed image photo-sensor. These images can, for example, occupy two large, distinct sub-areas, each covering approximately half of the photo-sensor, with the required diversity defocus provided by, for example, a planar mask.
  • Alternatively, the device can include optics forming an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam; each light path is precisely known and represents a known degree of defocus relative to at least one other intermediate image.
  • Such a design can, for example, use mirror optics analogous to the optics of a Fabry-Perot interferometer.
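For a fixed split-beam design, the diversity defocus follows from the sensor displacement through standard defocus bookkeeping. A sketch assuming the common small-defocus relation W20 = Δz / (8 F#²) for the peak wave-front error at the pupil edge:

```python
import math

def diversity_defocus(dz, wavelength, f_number):
    """Defocus phase at the pupil edge (radians) produced by displacing one
    sensor by dz along the optical axis, using the small-defocus relation
    W20 = dz / (8 * f_number**2) for the peak wave-front error."""
    w20 = dz / (8.0 * f_number ** 2)
    return 2.0 * math.pi * w20 / wavelength
```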
  • Alternatively, a scanning device can provide the intermediate images, for example a line-scanning arrangement.
  • Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor: the image is sensed by a linear sensor scanning in the image plane.
  • Such sensors, even at high pixel depth, are inexpensive, and mechanical means to move them are well known from a myriad of applications.
  • Disadvantages of this embodiment are the complex mechanics and the increased time to capture the intermediate images, because scanning takes time.
  • Alternatively, a scanner configuration employing several line photo-sensors positioned in intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.
  • The intermediate images can also be produced by different light-frequency ranges.
  • Pixels of the sensor can be fitted alternately with red, blue or green filters in a pattern, for example the well-known Bayer pattern; such image photo-sensors are commonplace in technical and consumer cameras.
  • The colour split provides a delay and hence a difference in defocus between the pixel groups.
  • A disadvantage of this approach is that only a grey-scale final image results.
  • Alternatively, the colour split is applied to the final image, with the intermediate images for the different colours reconstructed separately prior to stacking.
  • Arrangements for colour images are well known, for example Bayer-pattern filters on the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, synchronized with the image-capture process.
  • Red (R), blue (B) and green (G) spectral bands ("RGB"), or any other combination of spectral bands, can also be separated by prismatic methods, as is common in professional imaging systems.
  • A spatial light modulator, for example a liquid-crystal device or an adaptive mirror, can also provide the diversity defocus.
  • The adaptive mirror can be of the simplest design, because only defocus alteration is required, which greatly reduces the number of actuators in such a mirror.
  • Such a modulator can be of planar design, i.e. a "piston" phase filter that merely lengthens the light path, or it can have any other phase-modulating shape, for example a cubic filter. Using cubic filters allows the methods described in this document to be combined with the wave-front coding/decoding technologies referenced elsewhere in this document.
  • An image reconstructor adapted to process intermediate sub-images, from corresponding sub-areas of at least two intermediate images, into at least two final in-focus sub-images can be constructed for EDF and wave-front applications.
  • Such a reconstructor has at least one image photo-sensor (for imaging/measuring light intensity) or multiple photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independent of the other sub-sensors; the intermediate images are projected onto the sensor by, for example, a segmented input lens or a segmented input-lens array.
  • Alternatively, small sub-areas of at least two intermediate images can be distributed over the photo-sensor in a pattern.
  • The sensor can be fitted with a device or optical layer including optical steps which delay the incoming wave-front differently for the sub-areas in the pattern, for example in lines or dots.
  • The sub-areas can be as small as one photo-sensor pixel.
  • The sub-areas must, of course, be read out digitally and separately to produce at least two intermediate images with different but known degrees of defocus (phase shift).
  • The final image quality depends on the number of pixels representing an intermediate image. From at least two adjacent final sub-images a composite final image can be made, for example for EDF applications.
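One possible readout of such a patterned sensor, sketched below, assumes even and odd rows see two different, a priori known optical path lengths; any other known pattern would be de-interleaved the same way.

```python
def split_line_pattern(frame):
    """De-interleave a patterned sensor frame (2-D numpy array) whose even
    and odd rows carry two different, a priori known diversity defocuses,
    yielding two half-height intermediate images."""
    return frame[0::2, :], frame[1::2, :]
```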
  • An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimating the local defocus or, alternatively, estimate tilts per sub-image by comparing the spatial spectra of neighbouring images.
  • The apparatus should therefore include processing means to reconstruct a wave-front by combining the defocus curvatures of at least two intermediate sub-images.
  • The method which determines defocus for a total final image, or a total object, can thus be extended to a system which estimates the degree of defocus in multiple sub-intermediate-images (henceforth: sub-images) based on at least two intermediate full images.
  • For small sub-images, the local curvature of the wave-front can be approximated by the defocus curvature (degree of defocus); any aberration of order two or higher can locally be approximated in this way. Consequently, the wave-front can be reconstructed from the local curvatures determined for the small sub-images, and the image-reconstruction device effectively becomes a wave-front sensor.
  • This approach is in essence analogous to the working of a Shack-Hartmann sensor, which uses the local tilt within each sub-aperture to estimate the shape of a wave-front; here, local curvatures are used for the same purpose.
  • The well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts.
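Reconstructing a wave-front from per-sub-image curvatures amounts to inverting a Laplacian. The following minimal FFT-based Poisson solve illustrates that step, under the simplifying assumptions of periodic boundaries and unit spacing between curvature samples:

```python
import numpy as np

def wavefront_from_curvature(curv):
    """Recover the wave-front W from its Laplacian (the per-sub-image
    defocus is proportional to the local curvature) by solving Poisson's
    equation with FFTs; periodic boundaries, unit sample spacing."""
    ny, nx = curv.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    k2 = -(2.0 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    k2[0, 0] = 1.0                       # piston term is unobservable
    W = np.real(np.fft.ifft2(np.fft.fft2(curv) / k2))
    return W - W.mean()                  # remove the arbitrary piston
```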
  • The sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and the application. For example, scanning the intermediate image with a linear photo-sensor produces line-shaped sub-images.
  • Applications for wave-front sensors are numerous and will only increase as wave-front sensors become less expensive.
  • The intermediate images can also be used to estimate the angulation of light rays relative to the optical axis (from the lateral displacements of sub-images) by comparing the spatial spectra of neighbouring intermediate images, after which the shape of the wave-front is reconstructed by methods developed for the analysis of so-called Hartmanngrams.
  • The apparatus should therefore include means adapted to reconstruct a wave-front by combining the lateral shifts of at least two intermediate sub-images.
  • In addition, a new image of the object can be calculated as if projected on a plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing.
  • Let the spatial spectrum of the first image be I₁(ω_x, ω_y) and the spectrum of the second image, taken in a plane displaced by Δz along the optical axis, be I₂(ω_x, ω_y).
  • A lateral shift of the second image by Δx and Δy multiplies its spatial spectrum by the linear phase factor of the Fourier shift theorem, exp[−i(ω_x Δx + ω_y Δy)].
  • H(ω_x, ω_y, z) is the system OTF with defocus expressed in terms of the displacement z with respect to the exit pupil plane.
  • The intermediate images specified by I₁(ω_x, ω_y) and I₂(ω_x, ω_y) are assumed to be displaced longitudinally by the small distance Δz.
  • The integration in Eq. 45 is performed over the image/sub-image area.
  • The procedure described above can be applied to each sub-area separately, resulting in a final image as projected on an image plane at any given distance from the exit pupil, with the number of "pixels" equal to the number of sub-areas.
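A sketch of the per-sub-area procedure: estimate each tile's lateral displacement between the two defocused images (phase correlation is used here as a stand-in for the spectral comparison), convert the shift into a ray direction using the known displacement Δz, and propagate the tile's intensity to the chosen plane.

```python
import numpy as np

def tile_shift(t1, t2):
    """Lateral displacement (dy, dx) of tile t2 relative to t1, estimated
    by phase correlation of the two sub-images."""
    cross = np.fft.fft2(t1) * np.conj(np.fft.fft2(t2))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy   # unwrap sign
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return dy, dx

def project_point(x, y, dy, dx, dz, z_new):
    """Ray-trace one tile to a plane z_new away: in the geometric-optics
    limit the ray slope is the tile shift divided by the known dz."""
    return x + dx / dz * z_new, y + dy / dz * z_new
```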
  • This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, which explains the methods), but the essential difference is that the method described in the present document does not require an additional array of micro-lenses.
  • Here, the information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate defocused images. The estimated computational cost of the described method is significantly lower than that of WO2006/039486; in other words, the described method can provide real-time capability.
  • Images with EDF can be obtained by correction of a single wave-front in a single final image.
  • The non-iterative computation methods described in this document allow rapid computation on, for example, dedicated electronic circuits; extended computation time on powerful computers has been a drawback of various EDF imaging techniques to date.
  • EDF images can also be obtained by dividing the total image into far fewer sub-images than the wave-front application described above, which likely requires thousands of sub-images.
  • The degree of defocus is determined per sub-image (this can be a small number of sub-images, say only a dozen per total image, or a very large number with each sub-image represented by only a few pixels; the desired number depends on the required accuracy, the specifications of the device and its application), and the sub-images are corrected accordingly, followed by reconstruction of the final image from the corrected sub-images. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
  • EDF images can also be obtained by stacking at least two final images, each reconstructed to correct defocus for at least one focal plane of the same objects in cubic space; such digital stacking procedures are well known.
  • Non-iterative processing is simplest and saves computing time; in our prototypes we reach reconstruction times of the order of 50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the defocus estimate in selected cases and thereby the image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with re-iteration of the calculations may be preferred by the user over three intermediate images with non-iterative calculations. The embodiments and methods of reconstruction depend on the intended application.
  • Image scanning is a well-known technology and can thus be extended to camera applications.
  • Alternatively, images with an EDF can be reconstructed by dividing the intermediate images into multiple sub-sectors. For each sub-area the degree of defocus can be determined and, consequently, the optical sharpness of that sub-sector reconstructed; the final image is then composed of multiple optically focused sub-images and has an EDF, even at full-aperture camera settings.
  • Linear scanning can be employed to define such linear sub-areas.
  • Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including defocus.
  • This invention provides a single sharp image of the object from single exposures, as well as additional information on speed, distance and direction of travel from multiple exposures.
  • Applications include military tracking and targeting systems, but also medical ones, for example endoscopy with added distance information.
  • Methods described in this document are sensitive to wavelength. This can be exploited to image different depths simultaneously when light sources of different wavelengths are employed: for example, focusing at different layer depths in multilayer CD/DVD discs can be achieved simultaneously with lasers of different wavelengths, so a multilayer DVD pick-up optical system which reads different layers simultaneously can be designed.
  • Other applications involve consumer and technical cameras insensitive to defocus error, iris-scanning cameras insensitive to the distance from the eye to the optics, and a multitude of homeland-security camera applications.
  • Automotive cameras can be designed which are not only insensitive to defocus but also, for example, calculate the distance and speed of chosen objects; further applications include parking aids and wave-front sensors for numerous military and medical uses. The availability of inexpensive wave-front sensors will only increase the number of applications.
  • The reconstruction method described above is highly dependent on the wavelength of the light forming the image, so the methods can be adapted to determine the wavelength of light when the defocus is known precisely. Consequently, the image reconstructor can alternatively be designed as a spectrometer.
  • Figure 1 shows a sequence of defocused intermediate images on the image side of the optical system, from which intermediate images the final image can be reconstructed.
  • An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description) with three intermediate images, 2, 3, 4, along the optical axis, 5; the images have precisely known distances, 6, 7, 8, from the exit pupil, 1, and, equivalently, precisely known distances, 9, 10, from each other.
  • A precisely known distance of a photo-sensor/image plane from the principal plane in such a system translates, via standard optical formulas, into a precisely known difference in defocus between the images.
  • The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14-16 bit/pixel. Note that all defocused images are distinctly unreadable, to the degree that not even the mathematical integral sign can be recognized in any of the intermediate images.
  • The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14 bit/pixel.
  • The reconstruction was carried out on intermediate images with digitally simulated defocus.
  • The final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step defocus correction, the final defocus deviation from the exact value being approximately 0.8.
  • Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image.
  • Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a beam splitter, 26, into two light signals.
  • the light signals are finally detected by two photo-sensors, 27 and 28, positioned in the image planes shifted, one with respect to another, by a specified distance along the optical axis.
  • Photo-sensors 27 and 28 provide simultaneously two intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
  • Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image.
  • Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and then is divided by a first beam splitter, 26, into two light signals.
  • the reflected part of light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28.
  • the light signals from the beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in the image planes shifted, one with respect to another, and relative to the image plane of the sensor 27.
  • Photo-sensors 27, 29 and 30 provide simultaneously three intermediate, for example, phase-diverse, images for the reconstruction algorithm set forth in this document.
  • Figure 7 illustrates the method, in this example for two intermediate images, of calculating an object image in an arbitrary image plane, i.e. at arbitrary defocus, based on the local ray-vector and intensity determined for a small sub-area of the whole (defocused) image.
  • Two consecutive phase-diverse images, 2 and 3, with a predetermined defocus or, alternatively, displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images.
  • Comparison of the spatial spectra calculated for a selected image area, 31, of the phase-diverse images allows evaluation of the ray-vector direction, 32, which characterizes light propagation, in the geometric-optics limit, along the optical axis 5.
  • A corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing.
  • The distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.

Abstract

A method and apparatus to reconstruct a sharp image from multiple phase-diverse intermediate images are described. The degree of defocus of the intermediate images is unknown, but the diversity defocus between them is known. Images can be processed in real time thanks to intrinsically non-iterative algorithms. Such an apparatus is insensitive to defocus and can be included in imaging systems for extended-depth-of-field (EDF) imaging, range finding and 3D imaging. Additionally, wave-front sensors can be constructed by processing sub-areas of images. Applications include digital imaging; distance, speed and direction measurement; and wave-front sensing, which can be combined with the camera function.

Description

Image reconstructor
Background of the invention
The present invention relates to imaging and metering techniques. Firstly, the invention provides methods, systems and embodiments of these for estimating aberration errors of an image and reconstruction of said image based on a set of multiple intermediate images by non-iterative algorithms and, secondly, provides methods to reconstruct wave-fronts. An apparatus based on the invention can be either a dedicated camera or wave-front sensor, or these functions can be combined.
The invention has a broad scope of embodiments and applications including, image reconstruction for one or more focal distances, image reconstruction for EDF, speed, distance and direction measurement device and wave- front sensors for various applications. Reconstruction of images independent from the defocus aberration has most practical applications. Therefore, the device or derivates thereof can be applied for digital imaging insensitive to defocus (in cameras), digital imaging for extended depth of field ("EDF", in cameras), as optical distance, speed and direction measurement device (in measuring and metering devices). Camera units and wave-front sensors according to the methods and embodiments set forth in this document can be designed to be entirely solid state, with no moving parts, to be constructed from only very few components, for example, in a basic embodiment: simple optics, for selected application even only one lens, one beam splitter (or other beam splitting element, for example, phase grating) and two sensors and to be combined with dedicated data processing units/processing chips, with all these components in, for example, one solid polymer assembly.
In this document "intermediate image" refers to a phase-diverse intermediate image which has an unknown defocus compared to the in-focus image plane but a known a priori diversity defocus in respect of any other intermediate image in multiple intermediate images. The "in-focus image" plane is a plane optically conjugate to an object plane and thus having zero defocus error.
The terms "object" and "image" conform to the notations of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6). The object is positioned in the "object plane" and the corresponding image is positioned in the "image plane". "EDF" is an abbreviation for Extended Depth of Field.
The term "in- focus" refers to in focus/optical sharpness/in optimal focus, and the term "defocus" to defocus/optical un-sharpness/blurring. An image is meant to be in- focus when the image plane is optically conjugate to the corresponding object plane.
This document merely, by the way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in- focus image because defocus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct for any aberration of any order or combination of aberrations of different orders. A man skilled in the arts will conclude that the concepts set forth in this document can be extended to other aberrations as well by adaptation of formulas presented as for the applications above.
This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes. Colour imaging can be achieved by splitting white light into narrow spectral bands. White, visible light can be imaged when separated in, for example, red (R), blue (B) and green (G) spectral bands, e .g. by common filters for colour cameras, for example, RGB Bayer pattern filters providing the computation means with adaptations for, at least three, approximately monochromatic spectra and combining images. The invention can be applied to infrared (IR) spectra. X-rays produced by an incandescent cathode tube are, by definition, not coherent and not monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromacity.
For ultrasound and coherent radio frequency signals the formulas can be adapted for the coherent amplitude transfer function of the corresponding system. A man skilled in the arts will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document. This document describes methods to obtain sharp, focused images in planes, slices, along the optical axis as well as optical sharpness in three-dimensional space, and EDF imaging in which all objects in the intended cubic space are sharp and in-focus. The traditional focusing process, i. e. changing the distance between imaging optics and image on film or photo-detector or, otherwise, changing the focal distance of the optics takes time, requires additional, generally mechanically moving components to the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis. Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i. e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the diameter of the aperture reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades the image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by using computational methods present a considerable interest in imaging systems and is clearly preferable to such traditional optical/mechanical methods.
Furthermore, a method to achieve this with no moving parts (as a solid state system) is generally preferable for both manufacturer and end-user because of low cost of equipment and ease of use.
Several methods have been proposed for digital reconstruction of in-focus images some of which will be summarized below in the context of the present invention described in this document.
Optical digital technologies for defocus correction and EDF started with a publication by Hausler (Optics Communications 6(1), pp. 38-42, 1972), which described combining multiple images into a single image such that the final image exhibits an extended depth of field. This method does not reconstruct the final image from the set of defocused images but combines various in-focus areas of different images. The present invention differs from this approach because it reconstructs the final image from intermediate, defocused images that may not contain in-focus areas at all, and automatically combines these images into a sharp final EDF image.
Later, methods were proposed based on phase coding/decoding, which include an optical mask in the optical system designed such that the incoherent optical transfer function remains unchanged within a range of defocuses. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm to reconstruct the final EDF image from the phase-encoded intermediate image.
The present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.
Also, various phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a defocused image; refer to, for example, US6771422 and US2004/0052426.
US2004/0052426 describes non-iterative phase-retrieval techniques for estimating errors of an optical system, which include capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object. This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate optical errors of the optics employed in said imaging. This technique requires at least one focused image at a first focal point in combination with multiple unfocused images. These images are then used to calculate wave-front errors.
The present invention differs from US 2004/0052426 in that the present invention does not require a focused image, i. e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with unknown degree of defocus relative to the object.
US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the defocus status of each sensor, and a way of producing an enhanced final image. The defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic and coherent light wave. The present invention differs from US6771422 in that it does not intend to solve the transport equation. The present invention is based on a priori knowledge of the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution over different image planes and, thus, the degree of defocus, by direct calculation with non-iterative algorithms.
Other methods to reconstruct images based on a plurality of intermediate images/intensity distributions taken at different and known degrees of defocus employ iterative phase-diversity algorithms (see, for example, J.J. Dolne et al., Applied Optics 42(26), pp. 5284-5289, 2003). Such iteration can take considerable computational power and computing time, and is unlikely to be carried out in real-time. The present invention described in this document differs from the standard phase-diversity algorithms in that it is an essentially non-iterative method.
WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed to allow determination, by an array of micro lenses, of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called "plenoptic" camera. Sharp images of object points at different distances from the camera can then be recalculated (for example, by ray-tracing). It must be noted that with the method described in the present document the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can also be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
The present invention described in this document differs from WO2006039486 and related documents in that the present invention does not explicitly use such information on the angle of incidence obtained with an array of microlenses, for example a Shack-Hartmann wave-front sensor; instead, the respective light ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them. Additionally, the intermediate phase-diverse images described in this document can also be used for determining the angle and intensity of individual rays and for composing an EDF image by ray-tracing.
All documents mentioned in the sections above are included in this document by reference.
Description of the invention
The present invention relates to imaging techniques. From the single invention a number of applications can be derived:
Firstly, the invention provides a method for estimation of defocus in the optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate defocused images, and,
Secondly, provides means to digitally reconstruct a final in-focus image of the object based on digital processing of multiple intermediate defocused images, and,
Thirdly, can be used for wave-front sensing by analyzing local curvature of sub-images from which an estimated wave-front can be reconstructed, and,
Fourthly, can reconstruct EDF images by either combining images from various focal planes (for example "image stacking"), or, by combining in-focus sub-images (for example "image stitching"), or, alternatively, by correction of wave-fronts, or, alternatively, by ray-tracing to project an image in a plane of choice.
Fifthly, provides methods to calculate the speed and distance of an object by analyzing subsequent images of the object, including speed in all directions (X, Y and Z), based on multiple intermediate images and, consequently, the acquired information on focal planes, and,

Sixthly, can be used to estimate the shape of a wave-front by reconstruction of the tilt of individual rays, by calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them, and,
Seventhly, provides methods to calculate by ray-tracing a new image of an extended object in any image plane of the optical system (for example, approximating a "digital lens" device), and,
Eighthly, can be adapted to determine the wavelength of light when defocus is known precisely, providing the basis for a spectrometer, and,
Ninthly, can be adapted to many non-optical applications, for example tomography, for digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. the intermediate image degradation can be attributed to a convolution with the system response function), where the response function is known a priori, the relative degree of blurring of any intermediate image compared to the other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.
With the methods described in this document a focused final image of an object is derived, by digital reconstruction, from at least two defocused intermediate images having an unknown degree of defocus compared to an ideal focal plane (or, alternatively, an unknown distance from the object to the principal planes of the imaging system), but having a precisely known degree of defocus of each intermediate image compared to any other intermediate image.
Firstly, a method of reconstruction, which can be included in an apparatus, will be described. The method starts with at least two defocused, i.e. phase-diverse, intermediate images, from which a final in-focus image can be reconstructed by a non-iterative algorithm, and an optical system having an a priori known optical transfer function. Note that each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the object, but the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known.
To digitally process the images obtained above, the method includes the following steps:
- a generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed;
- said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent of the degree of defocus of at least one intermediate image relative to the in-focus image plane (this adjustment can take the form of adjustment of coefficients, adjustment of functional dependencies, or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as linear, non-linear or functional, depending on the intended application);
- the final in-focus image is reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.
An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.
Such method includes an equation based on the generating function/functional satisfying

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\dots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \equiv \Psi[H(\omega_x,\omega_y,\varphi-\Delta\varphi_1)\,I_0(\omega_x,\omega_y),\dots,H(\omega_x,\omega_y,\varphi-\Delta\varphi_M)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge0}B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \quad (1)$$
where
$$I(\omega_x,\omega_y,\varphi-\Delta\varphi_n)\equiv I_n(\omega_x,\omega_y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x,y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy \quad (2)$$
is the spatial spectrum of the n-th intermediate phase-diverse image, 1 ≤ n ≤ M; x and y are the transverse coordinates in the intermediate image plane; M is the total number of intermediate images, M ≥ 2. The value Δφ_n (known a priori from the system configuration) is the diversity defocus between the n-th intermediate image plane and a chosen reference image plane. Analogously, the spatial spectrum of the object (i.e. the final image) is

$$I_0(\omega_x,\omega_y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_0(x',y')\,\exp[-i(\omega_x x'+\omega_y y')]\,dx'\,dy' \quad (3)$$

where x' and y' are the transverse coordinates in the object plane. In the right-hand side of Eq. 1 the spatial spectra of the phase-diverse images are substituted with I_n(ω_x, ω_y) = H(ω_x, ω_y, φ_0 + δφ − Δφ_n) I_0(ω_x, ω_y), where H(ω_x, ω_y, φ) denotes the defocused incoherent optical transfer function (OTF) of the optical system; the unknown defocus φ is substituted by a sum of the defocus estimate φ_0 and the deviation δφ ≡ φ − φ_0, |δφ/φ_0| ≪ 1: φ = φ_0 + δφ. The series coefficients B_p(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M, [I_0(ω_x, ω_y)]), functionally dependent on the spatial spectrum of the object I_0(ω_x, ω_y), can be found from Ψ by decomposing the defocused OTFs H(ω_x, ω_y, φ_0 + δφ − Δφ_n) into Taylor series in δφ.
The generating function/functional Ψ is chosen to have zero first- and higher-order derivatives up to the K-th order with respect to the unknown δφ:

$$\frac{\partial^i\Psi}{\partial(\delta\varphi)^i}=0,\qquad i=1,\dots,K. \quad (4)$$
Thus, B_i(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M, [I_0(ω_x, ω_y)]) = 0 for i = 1, …, K and Eq. 1 simplifies to

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\dots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])+O(\delta\varphi^{K+1}). \quad (5)$$
Finally, neglecting the residual term O(δφ^(K+1)) in Eq. 5, the object spatial spectrum I_0(ω_x, ω_y) can be found by solving the approximate equation

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\dots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \cong B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]). \quad (6)$$
So, having two or more intermediate images I_n(ω_x, ω_y), n = 1, 2, …, and knowing a priori the system optical transfer function H(ω_x, ω_y, φ), a generating function Ψ according to Eq. 1, independent of the unknown defocus φ (or δφ) as required by Eq. 4, can be composed by an appropriate choice of the functional relation between the I_n(ω_x, ω_y) and, subsequently, between the H(ω_x, ω_y, φ_0 + δφ − Δφ_n) corresponding to said spatial spectra I_n(ω_x, ω_y). The object spectrum I_0(ω_x, ω_y), which is the basis for the final in-focus image or picture, is then reconstructed by a non-iterative algorithm based on Eq. 6, which includes, on the one hand, the combination of the spatial spectra I_n(ω_x, ω_y) and, on the other hand, the combination of the incoherent OTFs H(ω_x, ω_y, φ_0 + δφ − Δφ_n), the latter being substituted by the corresponding Taylor expansions in δφ.
An important example of the generating function is a linear combination of the spatial spectra I_n(ω_x, ω_y) of the intermediate phase-diverse images

$$\Psi=\sum_{n=1}^{M} g_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\sum_{p\ge0}B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)\,\delta\varphi^p, \quad (7)$$
where the coefficients g_n(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M) with n = 1, …, M are chosen to comply with Eq. 4. In this case Eq. 5 results in
$$\sum_{n=1}^{M} g_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\times\left\{B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)+O(\delta\varphi^{K+1})\right\}. \quad (8)$$
The coefficients g_n(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M) can be found from Eq. 8 by making the substitutions I_n(ω_x, ω_y) = H(ω_x, ω_y, φ_0 + δφ − Δφ_n) I_0(ω_x, ω_y), where an explicit expression for the incoherent optical transfer function (OTF) H(ω_x, ω_y, φ) of the optical system is used. In this way, the g_n(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M) are a priori known functions depending only on the optical system configuration. The analytical expression for the system OTF H(ω_x, ω_y, φ) can be found in many ways, including fitting of the calculated OTF; general formulas are given, for example, by Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996). The "least-mean-square" solution Ĩ_0(ω_x, ω_y) of Eq. 8 that minimizes the mean square error (MSE)
$$\mathrm{MSE}=\iint\left|I_0(\omega_x,\omega_y)-\tilde I_0(\omega_x,\omega_y)\right|^2 d\omega_x\,d\omega_y \quad (9)$$
takes the form

$$\tilde I_0(\omega_x,\omega_y)=\frac{\Psi\;B_0^{*}(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)}{\left|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)\right|^2+\varepsilon} \quad (10)$$
where the constant ε⁻¹, by analogy with the least-mean-square-error (Wiener) filter, denotes the signal-to-noise ratio. "Noise" in the algorithm is caused by the residual term O(δφ^(K+1)) in Eq. 8, which depends on δφ. When |B_0| has no zeros within the spatial frequency range of interest Ω, the constant ε can be defined from Eq. 8 as follows:

$$\varepsilon=\min_{\Omega}\left|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M)\times O(\delta\varphi^{K+1})\right|. \quad (11)$$
So, Eq. 10 describes the non-iterative algorithm for the object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
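By way of illustration only, the reconstruction step of Eq. 10 can be sketched in a few lines of Python/NumPy. This is a minimal sketch, assuming the coefficient arrays g_n and B_0 have already been derived for the optical system at hand; the function name and arguments are illustrative and not part of the invention as claimed.

```python
import numpy as np

def reconstruct_object_spectrum(images, g, B0, eps):
    """Least-mean-square object reconstruction according to Eq. 10.

    images : list of M intermediate images (2-D intensity arrays)
    g      : list of M precomputed coefficient arrays g_n(wx, wy)
    B0     : precomputed coefficient array B_0(wx, wy)
    eps    : regularization constant epsilon (Eq. 11)
    """
    # Spatial spectra of the intermediate images (Eq. 2).
    spectra = [np.fft.fft2(im) for im in images]
    # Generating function: linear combination of the spectra (Eq. 7).
    psi = sum(gn * In for gn, In in zip(g, spectra))
    # Wiener-type division by B_0 (Eq. 10).
    I0 = psi * np.conj(B0) / (np.abs(B0) ** 2 + eps)
    # Back to the spatial domain; image intensities are real valued.
    return np.real(np.fft.ifft2(I0))
```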
The defocus estimate φ_0 can be found in many ways, for example from a pair of phase-diverse images. If I_1(ω_x, ω_y) is the spatial spectrum of the first image, characterized by the unknown defocus φ, and I_2(ω_x, ω_y) is the spatial spectrum of the second image with defocus φ + Δφ, Δφ being the difference in defocus predetermined by the system configuration, then the estimate of defocus is given by

$$\varphi_0=\frac{\gamma_0}{2\gamma_1\,\Delta\varphi}\,A-\frac{\Delta\varphi}{2} \quad (12)$$

where the OTF expansion H(ω_x, ω_y, φ) = γ_0 + γ_1 φ² + … in the vicinity of φ = 0, valid at |ω_x² + ω_y²| ≪ 1, is used. The coefficient A denotes the ratio

$$A=\left\langle\frac{I_2(\omega_x,\omega_y)-I_1(\omega_x,\omega_y)}{I_1(\omega_x,\omega_y)}\right\rangle \quad (13)$$
and the averaging is carried out over low spatial frequencies |ω_x² + ω_y²| ≪ 1. In addition, the estimate φ_0 of the unknown defocus φ can be found from three consecutive phase-diverse images: I_1(ω_x, ω_y) with defocus φ − Δφ_1, I_2(ω_x, ω_y) with defocus φ, and I_3(ω_x, ω_y) with defocus φ + Δφ_2 (Δφ_1 and Δφ_2 are specified by the system arrangement):

$$\varphi_0=\frac{\Delta\varphi_1^2+\chi\,\Delta\varphi_2^2}{2\,(\Delta\varphi_1-\chi\,\Delta\varphi_2)} \quad (14)$$
The coefficient χ is the ratio of the image spectra

$$\chi=\left\langle\frac{I_2(\omega_x,\omega_y)-I_1(\omega_x,\omega_y)}{I_3(\omega_x,\omega_y)-I_2(\omega_x,\omega_y)}\right\rangle \quad (15)$$
averaged over low spatial frequencies |ω_x² + ω_y²| ≪ 1. Note that in practice the best estimates of defocus according to Eq. 14 were achieved when the numerator and the denominator in Eq. 15 were averaged independently, i.e.
$$\chi=\frac{\left\langle I_2(\omega_x,\omega_y)-I_1(\omega_x,\omega_y)\right\rangle}{\left\langle I_3(\omega_x,\omega_y)-I_2(\omega_x,\omega_y)\right\rangle}. \quad (16)$$
Note that an estimate of defocus (φ_0 in Eq. 1) is necessary to start these computations; the estimate is automatically provided by the formulas specifying the reconstruction algorithm above. Such an estimate can also be provided by other analytical methods, for example by determining the first zero-crossing in the spatial spectrum of the defocused image as described by I. Raveh et al. (I. Raveh et al., Optical Engineering 38(10), pp. 1620-1626, 1999).
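A minimal numerical sketch of the defocus estimator of Eqs. 14 and 16 as reconstructed above, assuming three phase-diverse images with a priori known diversity defocuses Δφ_1 and Δφ_2; the size of the low-frequency averaging region and the helper name estimate_defocus are illustrative assumptions.

```python
import numpy as np

def estimate_defocus(im1, im2, im3, dphi1, dphi2, radius=4):
    """Defocus estimate phi_0 from three phase-diverse images (Eqs. 14, 16)."""
    I1, I2, I3 = (np.fft.fft2(im) for im in (im1, im2, im3))
    # Low spatial frequencies |omega|^2 << 1: a small disc around the origin.
    wy, wx = np.meshgrid(np.fft.fftfreq(im1.shape[0]),
                         np.fft.fftfreq(im1.shape[1]), indexing='ij')
    low = wx ** 2 + wy ** 2 < (radius / max(im1.shape)) ** 2
    # Numerator and denominator averaged independently (Eq. 16).
    chi = (np.sum((I2 - I1)[low]) / np.sum((I3 - I2)[low])).real
    # Non-iterative defocus estimate (Eq. 14).
    return (dphi1 ** 2 + chi * dphi2 ** 2) / (2 * (dphi1 - chi * dphi2))
```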
In practice, calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus to determine the degree of defocus with at least two photo-sensors having only one photo-sensitive spot each, for example photo-diodes or photo-resistors. A construction for such an apparatus likely includes not only the photo-sensors but also an amplitude mask, focusing optics and processing means adapted to calculate the degree of defocus of at least one intermediate image. The advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplification of Eq. 16 to a derivative of Parseval's theorem, for example:
$$\chi=\frac{\displaystyle\iint U(x,y)\left\{I_2(x,y)-I_1(x,y)\right\}dx\,dy}{\displaystyle\iint U(x,y)\left\{I_3(x,y)-I_2(x,y)\right\}dx\,dy} \quad (17)$$

where U(x, y) defines the amplitude mask in one or multiple image planes.
Also, photo-diodes and photo-resistors are significantly less expensive compared to photo-sensor arrays and are more easily assembled.
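As a sketch of how Eq. 17 could be evaluated by such an apparatus: the spectral ratio χ reduces to two masked intensity integrals that single-spot detectors behind an amplitude mask could in principle deliver directly, with no Fourier transform. The mask and the function name are illustrative assumptions.

```python
import numpy as np

def chi_from_masked_intensities(i1, i2, i3, mask):
    """Eq. 17: chi evaluated in the image domain through an amplitude
    mask U(x, y); each integral is what a single photo-diode behind the
    mask would measure (up to calibration)."""
    num = np.sum(mask * (i2 - i1))  # integral of U * (I2 - I1)
    den = np.sum(mask * (i3 - i2))  # integral of U * (I3 - I2)
    return num / den
```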
Note that a Fourier transformation can be achieved by the processing methods described above, but can also be achieved by optical means, for example by an additional optical element between the beam splitter and the imaging photo-sensor. Using such an optical Fourier transformation will significantly reduce digital processing time, which might be advantageous for specific applications.
Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter, camera component or distance meter. It differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods. The distance of the object to the camera can be estimated, once the degree of defocus is known, via a simple optical calculation, so the methods can be applied to a distance metering device. Also, the speed and direction of an object in the X, Y and Z directions (i.e. in 3D space) can be estimated with additional computation means and information on at least two subsequent final images and the time between capture of the intermediate images for these final images. Such an inexpensive component for solid state image reconstruction will increase consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing functions) and technical applications.
As an alternative, the estimate can be obtained by an additional device, for example, an optical or ultrasound distance measuring system. However, most simply, in the embodiments described in this document, the estimate is provided by the algorithm itself without the aid of any additional measuring device.
Note that an estimate of the precision of the degree of defocus can also be obtained by Cramer-Rao analysis as described in D.J. Lee et al., J. Opt. Soc. Am. A 16(5), pp. 1005-1015, 1999, which document is included in this document by reference.
Apart from the method described above the invention also provides an apparatus for providing at least two, phase-diverse intermediate images of said object wherein each of the intermediate images has a different degree of defocus compared to an ideal focal plane (i. e. an image plane of the same system with no defocus error), but having a precisely known degree of defocus of each intermediate image compared to any other intermediate image. The apparatus includes processing means for reconstructing a focused image of the object by an algorithm expressed by Eq. 6.
Note that a man skilled in the art will conclude that: (a) - said Fourier-based processing with spatial spectra of images can also be carried out by processing the corresponding amplitudes of wavelets to the same effect; (b) - the described method of image restoration can be adapted to optical wave-front aberrations other than defocus, in which case each phase-diverse image is characterized by an unknown absolute magnitude of the aberration but an a priori known difference in aberration magnitude relative to any other phase-diverse image; and (c) - the processing functions mentioned above can be applied to any set of images or signals which are blurred, but of which the transfer (blurring) function is known. For example, the processing function can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to said out-of-focus blur.
Secondly, an additional generating function to provide the degree of defocus of at least one of said intermediate images compared to the in-focus image plane is provided here, and the degree of defocus can be calculated by additional processing by an apparatus. An improved estimate for the unknown defocus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm according to:

$$\delta\varphi\cong\frac{\Psi'-B_0'}{B_1'}, \quad (18)$$
and thus an improved estimate becomes φ = φ_0 + δφ. The generating function Ψ′ in this case obeys

$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\dots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)]=\sum_{p\ge0}B_p'(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\dots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \quad (19)$$
and

$$\frac{\partial^i\Psi'}{\partial(\delta\varphi)^i}=0,\qquad i=2,\dots,K. \quad (20)$$
In compliance with Eq. 20, B_i′(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M, [I_0(ω_x, ω_y)]) = 0 for i = 2, …, K and Eq. 19 reduces to
$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\dots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)]=B_0'+B_1'\,\delta\varphi+O(\delta\varphi^{K+1}). \quad (21)$$
The latter formula yields Eq. 18 directly. Note that the coefficients B_p′ are, in general, functionally dependent on the object spectrum I_0(ω_x, ω_y), which, in turn, can be found from Eq. 6.
It can be necessary to correct the spatial spectrum of at least one of said intermediate images for a lateral shift of said image compared to any other intermediate image, because the image reconstruction described in this document is sensitive to lateral shifts. A method for such a correction is given below, which method can be included in the processing means of an apparatus carrying out such image reconstruction. The general algorithm according to Eq. 6 requires a set of defocused images as input data. However, due to, for example, mis-alignments of the optical system, some intermediate images can be shifted in the plane perpendicular to the optical axis, resulting in incorrect restoration of the final image.
Using the shift-invariance property of the Fourier power spectra and the Hermitian redundancy in the image spectrum, i.e. the image intensity is a real value, in combination with the Hermitian symmetry of the OTF, i.e. H(ω_x, ω_y, φ) = H*(−ω_x, −ω_y, φ), the spectrum of a shifted intermediate image can be recalculated to exclude the unknown shift. An example of the method for excluding the shift dependence is described below. Assuming that the n-th intermediate image is shifted by Δx, Δy, in compliance with Eq. 3 its spectrum becomes
$$\tilde I_n(\omega_x,\omega_y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x-\Delta x,\,y-\Delta y)\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_n(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \quad (22)$$
I_n(ω_x, ω_y) being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and the defocused OTF H(ω_x, ω_y, φ) ∈ ℝ (real valued). For two intermediate images, one of which is supposed to be unshifted, we have, in agreement with Eq. 22,
$$\tilde I_n(\omega_x,\omega_y)=H(\omega_x,\omega_y,\varphi_n)\,I_0(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)],\qquad I_l(\omega_x,\omega_y)=H(\omega_x,\omega_y,\varphi_l)\,I_0(\omega_x,\omega_y). \quad (23)$$
From Eq. 23, [Ĩ_n(ω_x, ω_y)/I_l(ω_x, ω_y)]² ~ exp[−2i(ω_xΔx + ω_yΔy)] ≡ exp(−2iϑ), where i = √−1, and the shift-dependent factor can obviously be excluded from Ĩ_n(ω_x, ω_y). Thus, the shift-corrected spectrum takes the form I_n(ω_x, ω_y) = Ĩ_n(ω_x, ω_y) exp(iϑ), and it can be further used in the calculations according to Eq. 6. Note that the formulas above give one example of a method for correcting the lateral image shift; there are also other methods to obtain shift-corrected spectra, for example correlation techniques and the analysis of moments of the intensity distribution.
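A sketch of this shift-correction step of Eqs. 22-23, assuming a real-valued defocused OTF (symmetric exit pupil) and shifts small enough that the half-phase of the squared spectral ratio is unambiguous; names are illustrative.

```python
import numpy as np

def shift_corrected_spectrum(In_shifted, I_ref):
    """Remove an unknown lateral shift from an intermediate spectrum.

    Squaring the spectral ratio removes the sign ambiguity of the real
    OTF ratio; half its phase is theta = wx*dx + wy*dy (modulo pi, so
    the shift must be small for an unambiguous correction).
    """
    ratio = In_shifted / (I_ref + 1e-12)      # per-frequency ratio, Eq. 23
    theta = -0.5 * np.angle(ratio ** 2)       # theta ~ wx*dx + wy*dy
    return In_shifted * np.exp(1j * theta)    # shift-corrected spectrum
```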
The quality of the reconstruction of an object I_0(ω_x, ω_y) according to the non-iterative algorithm given by Eq. 6 can thus be significantly improved by replacing the initial defocus estimate φ_0 with the improved estimate φ = φ_0 + δφ, where δφ is provided by Eq. 21. The degree of defocus of the intermediate image compared to the in-focus image plane can be included in the non-iterative algorithm and the processing means of an apparatus for such image reconstruction adapted accordingly.
At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing a higher quality of restoration and a weaker sensitivity to the initial defocus estimate φ_0, since the generating function Ψ gives the (M−1)-th order approximation to B_0(ω_x, ω_y, φ_0, Δφ_1, …, Δφ_M, [I_0(ω_x, ω_y)]) defined by Eq. 1 with respect to the unknown value δφ. The resolution and overall quality of the final image will increase with an increasing number M of intermediate images, at the expense of a larger number of photo-sensors or an increasingly complex optical/mechanical arrangement, and increased computation time. Reconstruction via three intermediate images is used as an example in this document.
The degrees of defocus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no defocus error) differ. In Eq. 1 the defocus of the n-th intermediate image, φ_n = φ − Δφ_n (n = 1, …, M, with M the total number of intermediate images), is unknown prior to provision of the intermediate images. However, as mentioned earlier, the difference in degree of defocus Δφ_n of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This imposes no problems in practice, because the relative difference in defocus is specified in the design of the camera and its optics. Note that these relative differences vary with different camera designs, the type of photo-sensor(s) used and the intended applications of the image reconstructor. Moreover, the differences in defocus Δφ_n can be found and accounted for in further computations by performing calibration measurements with well-defined objects.
The degree of defocus of the image can be estimated by non-iterative calculations using the fixed and straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations are of low computational cost and provide stable and precise results. Furthermore, such non-iterative calculations can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention. Thus, the reconstruction of a final sharply focused image is independent of the degree of defocus of any of the intermediate images relative to the object. The precision of the measurement of the absolute defocus (and, therefore, the precision of the range which is calculated from defocus values) is fundamentally limited by the combination of the entrance aperture D of the primary optics and the distance z from the primary optics to an object of interest. In the case when a diffraction-limited spot defines the "circle of confusion" of an optical system, the depth of field becomes ~ (z/D)² and represents the defocus uncertainty. For a high-aperture aplanatic lens, an explicit expression was derived by Sheppard (C.J.R. Sheppard, J. Microsc. 149, pp. 73-75, 1988).
So, a high precision for defocus and range estimates requires, by definition, a large aperture of the optical system. This can be achieved by fitting, for example, a very large lens to the apparatus. However, such a lens may require a diameter of one meter, a size not practical for the majority of applications, which require small camera units. An effectively large aperture can also be obtained by optically combining light signals from multiple, at least two, optical elements, for example relatively small reflective or refractive elements positioned outside the optical axis. Such optical elements are typically positioned in the direction perpendicular to the optical axis, but not necessarily so. The theoretical depth of focus, i.e. the axial resolution, corresponds to the resolution of a whole refractive surface whose dimension is characterized by the distance between the optical elements. The optical elements can be regarded as small individual sectors at the periphery of a large refractive surface. Clearly, the total light intensity received by an image sensor depends on the combined apertures of the multiple optical elements. Such a system with multiple apertures can be made flat and, in the case of only two light sources, also linear.
So, according to the above, an apparatus for, for example, range finding applications can be constructed which combines at least two light signals from at least two optical elements positioned opposite each other at a distance perpendicular to the optical axis.
The procedures described in this document so far require that the distances between the image planes are known precisely, because the generating function or functional (see Ψ in Eq. 1) combines spatial spectra of intermediate images with a priori known diversity defocus. However, a man skilled in the art may conclude that, alternatively, such a procedure can be adapted to process intermediate images that are spatially modulated by a priori known phase and amplitude masks. Such masks spatially modulate the phase and/or amplitude of the light waves on their way to the image sensors, and result in spatially phase- and/or amplitude-modulated intermediate images. The final image can be restored digitally by subsequent processing of at least one spatially modulated intermediate image according to existing and well-known decoding algorithms or, alternatively, by algorithms adapted from the procedures described in this document, which adaptations of the formulas above are set forth below. Said modulations preferably include defocus, but not necessarily so. Such wave-front encoding can be achieved by including, for example, at least one phase mask, or at least one amplitude mask, or a combination of any number of phase and amplitude masks having a precisely known modulation function. The system embodiment implies that at least one phase and/or amplitude mask is located in the exit pupils of the imaging system.
For a set of intermediate images I_n(ω_x, ω_y), 1 ≤ n ≤ M, obtained with phase and/or amplitude masks, Eq. 1 can be rewritten as

$$\Psi[I_1(\omega_x,\omega_y),\dots,I_M(\omega_x,\omega_y)]\equiv\Psi[H_1(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y),\dots,H_M(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y)]=\sum_{p\ge0}B_p(\omega_x,\omega_y,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \quad (24)$$

where the OTFs are (see, for example, H.H. Hopkins, Proc. Roy. Soc. of London, A231, pp. 91-103, 1955)
$$H_n(\omega_x,\omega_y)=\frac{1}{\Omega_n}\iint P_n\!\left(\xi+\frac{\omega_x}{2},\,\eta+\frac{\omega_y}{2}\right)P_n^{*}\!\left(\xi-\frac{\omega_x}{2},\,\eta-\frac{\omega_y}{2}\right)d\xi\,d\eta, \quad (25)$$
with Ω_n = ∫∫ |P_n(ξ, η)|² dξ dη being the area of the n-th pupil in canonical coordinates (ξ, η), and the pupil function given by

$$P_n(\xi,\eta)=P_n^{(0)}(\xi,\eta)\exp[i\vartheta_n(\xi,\eta)]. \quad (26)$$
In Eq. 26 the function P_n^(0)(ξ, η), with P_n^(0)(ξ, η) ∈ ℝ, is the amplitude transmission function corresponding to the n-th amplitude mask, and ϑ_n(ξ, η) is the phase function representing the n-th phase mask in the exit pupil. In the case of defocus φ_n, for example, ϑ_n(ξ, η) = φ_n(ξ² + η²). It is important to note that Eq. 25, through the phase function of Eq. 26, implicitly contains the unknown defocus φ, which alternatively can be expressed as φ = φ_0 + δφ (with φ_0 the defocus estimate).
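For reference, the defocused OTF of Eqs. 25-26 can be evaluated numerically as the normalized autocorrelation of the pupil function. The circular pupil and purely quadratic defocus phase below are illustrative assumptions, not the prescribed optics of the invention.

```python
import numpy as np

def defocused_otf(n=256, phi=5.0, fill=0.5):
    """Incoherent OTF for a circular pupil with defocus phase
    theta(xi, eta) = phi * (xi^2 + eta^2), per Eqs. 25-26."""
    xi = np.linspace(-1.0, 1.0, n)
    XI, ETA = np.meshgrid(xi, xi)
    r2 = XI ** 2 + ETA ** 2
    # Pupil function: unit amplitude inside the aperture, defocus phase.
    pupil = np.where(r2 <= fill ** 2, np.exp(1j * phi * r2), 0.0)
    # Incoherent PSF = |FFT(pupil)|^2; OTF = FFT of the incoherent PSF,
    # i.e. the pupil autocorrelation, normalized to 1 at zero frequency.
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    otf = np.fft.fft2(psf)
    return otf / otf.flat[0]
```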
Consider now the reconstruction of the object spectrum I_0(ω_x, ω_y) from Eq. 24. The objective of the method is to properly choose the combinations of ϑ_n(ξ, η) and/or P_n^(0)(ξ, η) for all intermediate images, which combinations ensure the validity of Eq. 4. Finally, the object spectrum I_0(ω_x, ω_y) can be recalculated from Eq. 6 with the alternative generating function/functional Ψ given by Eq. 24, which is invariant to the defocus φ (up to terms ~ δφ^(K+1)).
Analogously to Eqs. 19-20, a new generating function/functional Ψ′ can be constructed by properly combining ϑ_n(ξ, η) and/or P_n^(0)(ξ, η) so as to retain only linear terms in δφ on the right-hand side of Eq. 19. The unknown defocus φ can subsequently be found from Eq. 21 by substituting Ψ′.
So, an imaging apparatus can be designed which includes, in addition to the basic image forming optics described elsewhere in this document, at least one optical mask to spatially modulate the incoming light signal. Either the phase or the intensity of said signal of at least one intermediate image can be modulated. Both the phase and the intensity of at least one intermediate image can be spatially modulated by at least one mask, or separate masks can be included for separate and independent modulation functions. The resulting modulation produces at least one spatially modulated light signal which can subsequently be reconstructed, in accordance with the method described above, by digital means to diminish the sensitivity of the imaging apparatus to at least one selected optical aberration, which can be the defocus aberration.
Image reconstruction: an example with three intermediate images

At least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point for such a reconstruction. As an illustration of the reconstruction algorithm set forth in the present document, we now consider an example with three intermediate images. Assume that the spatial spectra of three consecutive phase-diverse images are
I_1(ω_x, ω_y), I_2(ω_x, ω_y) and I_3(ω_x, ω_y), and that their defocuses are φ − Δφ, φ and φ + Δφ, respectively. In agreement with Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6), the reduced magnitude of defocus is specified as

$$\varphi=\frac{\pi D^2}{4\lambda}\left(\frac{1}{z_i}-\frac{1}{z_a}\right) \quad (27)$$
where D is the exit pupil size, λ is the wavelength, z_i is the position of the ideal image plane along the optical axis and z_a is the position of the shifted image plane. The defocus estimate (for the second image) can be found from Eq. 14:

$$\varphi_0=\frac{\Delta\varphi\,(1+\chi)}{2\,(1-\chi)} \quad (28)$$

where, in agreement with Eq. 16,

$$\chi=\frac{\displaystyle\iint\left[I_2(\omega_x,\omega_y)-I_1(\omega_x,\omega_y)\right]d\omega_x\,d\omega_y}{\displaystyle\iint\left[I_3(\omega_x,\omega_y)-I_2(\omega_x,\omega_y)\right]d\omega_x\,d\omega_y} \quad (29)$$
and the integration is performed over low spatial frequencies |ω_x² + ω_y²| ≪ 1. With φ_0 in hand, and following Eq. 7, the generating function satisfying Eq. 4 becomes

$$\Psi \cong I_0(\omega_x,\omega_y)\times\left\{h_0+\nu\,(h_1+h_3\Delta\varphi^2)+2\mu\,(h_2+h_4\Delta\varphi^2)+O(\delta\varphi^3)\right\}, \quad (30)$$
and

$$B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi)=h_0+\nu\,(h_1+h_3\Delta\varphi^2)+2\mu\,(h_2+h_4\Delta\varphi^2). \quad (31)$$
The coefficients ν and μ are

$$\nu=\frac{h_2h_3-2h_1h_4}{4h_2h_4-3h_3^2+8h_4^2\Delta\varphi^2}, \quad (32)$$

$$\mu=\frac{3h_1h_3-2h_2^2-4h_2h_4\Delta\varphi^2}{6\left(4h_2h_4-3h_3^2+8h_4^2\Delta\varphi^2\right)}, \quad (33)$$
and h_i (i = 0, …, 4) are the Taylor series coefficients of the defocused OTF H(ω_x, ω_y, φ = φ_0 + δφ) in the neighbourhood of φ_0, i.e.

$$H(\omega_x,\omega_y,\varphi_0+\delta\varphi)=h_0+h_1\,\delta\varphi+h_2\,\delta\varphi^2+h_3\,\delta\varphi^3+h_4\,\delta\varphi^4+O(\delta\varphi^5). \quad (34)$$
Finally, the spectrum of the reconstructed image, in concordance with Eq. 10, can be rewritten as

$$\tilde I_0(\omega_x,\omega_y)=\left\{I_2+\nu\,\frac{I_3-I_1}{2\Delta\varphi}+\mu\,\frac{I_3-2I_2+I_1}{\Delta\varphi^2}\right\}\frac{B_0^{*}(\omega_x,\omega_y,\varphi_0,\Delta\varphi)}{\left|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi)\right|^2+\varepsilon}. \quad (35)$$
An improved estimate of defocus φ = φ_0 + δφ complies with Eq. 18 for the generating function specified by Eq. 7:

$$\delta\varphi=\frac{\Psi'-B_0'\,\tilde I_0(\omega_x,\omega_y)}{B_1'\,\tilde I_0(\omega_x,\omega_y)}, \quad (36)$$
where B_0 is given by Eq. 31 and

$$B_0'=h_0+\tau\,(h_1+h_3\Delta\varphi^2)+2\sigma\,(h_2+h_4\Delta\varphi^2), \quad (37)$$

$$B_1'=h_1+2\tau\,h_2+6\sigma\,h_3+4\tau\,h_4\Delta\varphi^2, \quad (38)$$
$$\Psi'=I_2+\tau\,\frac{I_3-I_1}{2\Delta\varphi}+\sigma\,\frac{I_3-2I_2+I_1}{\Delta\varphi^2}. \quad (39)$$

The coefficients ν and μ are specified by Eqs. 32-33; the coefficients τ and σ in Eqs. 37-39 satisfy the following equations:
$$\tau=-\frac{h_3}{4h_4}, \quad (40)$$

$$\sigma=-\frac{4h_2h_4-3h_3^2}{48\,h_4^2}. \quad (41)$$
The optimum difference in defocus Δφ between the intermediate images is related to the specific dynamic range of the image photo-sensors, i.e. their pixel depth, as well as the optical features of the object of interest. Depending on the defocus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in intensity between the images. The right-hand terms in Eqs. 35 and 39 are, in fact, finite-difference approximations of the corresponding derivatives of the defocus-dependent image spectrum I(ω_x, ω_y, φ) = H(ω_x, ω_y, φ)I_0(ω_x, ω_y) with respect to the defocus φ. By reducing the difference in defocus between the intermediate images or, in other words, by reducing the distance between the intermediate image planes, the precision of the approximation can be increased. A high pixel depth or, alternatively, a high dynamic range allows for sensing small intensity variations; thus a small difference in defocus between the intermediate images can be implemented, which results in an increased quality of the final image.
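Putting the three-image example together, a hedged end-to-end sketch: the OTF model below is a simple stand-in (a calibrated H(ω_x, ω_y, φ) must come from the actual optics), the Taylor coefficients h_i of Eq. 34 are obtained by finite differences in φ, and Eqs. 31-33 and 35 are applied as reconstructed above. All names are illustrative.

```python
import numpy as np

def otf_model(wx, wy, phi):
    # Illustrative stand-in for the system OTF; replace with the real H.
    return np.exp(-(wx ** 2 + wy ** 2) * (1.0 + 0.05 * phi ** 2))

def reconstruct_three(im1, im2, im3, dphi, phi0, eps=1e-3):
    ny, nx = im2.shape
    wy, wx = np.meshgrid(2 * np.pi * np.fft.fftfreq(ny),
                         2 * np.pi * np.fft.fftfreq(nx), indexing='ij')
    I1, I2, I3 = (np.fft.fft2(im) for im in (im1, im2, im3))
    # Taylor coefficients h_i = H^(i)(phi0)/i! (Eq. 34) by central differences.
    e = 1e-2
    Hm2, Hm1, H0, Hp1, Hp2 = (otf_model(wx, wy, phi0 + k * e)
                              for k in (-2, -1, 0, 1, 2))
    h0 = H0
    h1 = (Hp1 - Hm1) / (2 * e)
    h2 = (Hp1 - 2 * H0 + Hm1) / (2 * e ** 2)
    h3 = (Hp2 - 2 * Hp1 + 2 * Hm1 - Hm2) / (12 * e ** 3)
    h4 = (Hp2 - 4 * Hp1 + 6 * H0 - 4 * Hm1 + Hm2) / (24 * e ** 4)
    den = 4 * h2 * h4 - 3 * h3 ** 2 + 8 * h4 ** 2 * dphi ** 2
    den = np.where(np.abs(den) < 1e-12, 1e-12, den)            # guard zeros
    nu = (h2 * h3 - 2 * h1 * h4) / den                         # Eq. 32
    mu = (3 * h1 * h3 - 2 * h2 ** 2
          - 4 * h2 * h4 * dphi ** 2) / (6 * den)               # Eq. 33
    B0 = (h0 + nu * (h1 + h3 * dphi ** 2)
          + 2 * mu * (h2 + h4 * dphi ** 2))                    # Eq. 31
    psi = I2 + nu * (I3 - I1) / (2 * dphi) \
             + mu * (I3 - 2 * I2 + I1) / dphi ** 2
    I0 = psi * np.conj(B0) / (np.abs(B0) ** 2 + eps)           # Eq. 35
    return np.real(np.fft.ifft2(I0))
```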
Various embodiments of a device can be designed, including, but not restricted to, the embodiments described below.
Apart from the method and apparatus which are adapted to provide an image wherein a single object is depicted in-focus, a preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each of the depicted objects having a different degree of focus in each of the intermediate images, and before the execution of said method one of those objects is selected.
Clearly, the image reconstructor with its means for providing intermediate images must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.
Firstly, a preferred embodiment for the providing means includes one image photo-sensor which can move mechanically; for example, the device can be designed with optics to form an image on one sensor, where the image photo-sensor or, alternatively, the whole camera assembly moves a predetermined and precise distance along the optical axis between the subsequent intermediate exposures. The simplicity of such a device lies in the need for only one photo-sensor; the complexity lies in the mechanics required for precise movement. Such precise movement is most effectively realised for only two images, because only two alternative stopping positions of the device are then needed. Alternatively, another embodiment with mechanically moving parts is a system with optics and one sensor, but with a spinning disc carrying stepwise sectors of different optical thickness. An image is taken each time a sector of different and known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately and, thus, a set of intermediate images can be provided for subsequent reconstruction by the image reconstruction means.
Secondly, a solid state device (with no mechanical parts/movement) can be employed. In a preferred embodiment of the providing means, the optics can be designed such that at least two independent intermediate images are provided to one fixed image photo-sensor. These images can be, for example, two large distinct sub-areas, each covering approximately half of the photo-sensor, and the required diversity defocus can be provided by, for example, a planar mask.
Also, at least two independent image photo-sensors can be used (for example, three in the example set forth throughout this document), each producing a separate intermediate image, likely, but not strictly necessarily, simultaneously. The device can be designed with optics to form an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam, each with a light path which is precisely known and which represents a known degree of defocus compared to at least one other intermediate image. Such a design (for example, with mirror optics analogous to the optics of a Fabry-Perot interferometer) has, for example, beam splitters to which a large number of sensors or independent sectors on one sensor, for example three, can be added. The simplicity of such a device is the absence of mechanical movement and its proven construction for other applications, for example said interferometer. Thirdly, a scanning device can provide the intermediate images. Preferably, a line scanning arrangement is applied. Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor. The image can be sensed by a linear sensor scanning in the image plane. Such sensors, even at high pixel depth, are inexpensive, and mechanical means to move such sensors are well known from a myriad of applications. Clearly, disadvantages of this embodiment are the complex mechanics and the increased time to capture the intermediate images, because scanning takes time. Alternatively, a scanner configuration employing several line photo-sensors positioned in intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.
Fourthly, the intermediate images can be produced by different light frequency ranges. Pixels of the sensor can be fitted alternately with a red, blue or green filter in a pattern, for example the well-known Bayer pattern. Such image photo-sensors are commonplace in technical and consumer cameras. The colour split provides a delay and a subsequent difference in defocus between the pixel groups. A disadvantage of this approach is that only a grey-scale image will result as the final image. Alternatively, for colour images, the colour split is applied to the final image, and the intermediate images for the different colours are reconstructed separately prior to stacking of such images.
Arrangements for colour images are well known, for example Bayer pattern filters on the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, which disc is synchronized with the image capture process. Alternatively, red (R), blue (B) and green (G) spectral bands ("RGB"), or any other combination of spectral bands, can be separated by prismatic methods, as is common in professional imaging systems.
Fifthly, a spatial light modulator, for example a liquid crystal device or an adaptive mirror, can be included in the light path of at least one sensor, to modulate the light between the taking of the intermediate images. Note that the adaptive mirror can be of a most simple design, because only defocus alteration is required, which greatly reduces the number of actuators in such a mirror. Such a modulator can be of a planar design, i.e. a "piston" phase filter, just to lengthen the path of the light, or it can have any other phase modulating shape, for example a cubic filter. Using cubic filters allows for combinations of the methods described in this document with wave-front coding/decoding technologies, to which references can be found in this document.
Lastly, an image reconstructor adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images can be constructed for EDF and wave-front applications. Such a reconstructor has at least one image photo-sensor (for an image/measuring light intensity) or multiple image photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independent of the other sub-sensors, by projecting intermediate images on the sensor by, for example, a segmented input lens or a segmented input lens array.
It should be noted that increasing the number of intermediate images, with a consequently decreasing sensor area per intermediate image, increases the precision of the estimate of defocus but decreases the image quality/resolution per intermediate image. So, for example, for an application requiring high image quality the number of sub-sensors should be reduced, whereas for applications requiring precise estimation of distance and speed the number of sub-sensors should be increased. Methods for calculating the optimum for such segmented lenses and lens arrays are known and summarized in, for example, Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, and in technologies and methods related to Shack-Hartmann lens arrays. A man skilled in the art will recognize that undesired effects, such as parallax between the intermediate images on the sub-sensors, can be corrected for by calibration of the device during manufacturing, or, alternatively, digitally during image reconstruction, or by increasing the number of sub-sensors and their distribution on the photo-sensor.
Alternatively, small sub-areas of at least two intermediate images can be distributed over the photo-sensors in a pattern. For example, the sensor can be fitted with a device or optical layer including optical steps, which delays the incoming wave-front differently for sub-areas in the pattern of, for example, lines or dots. Theoretically, the sub-areas can have the size of one photo-sensor pixel. The sub-areas must, of course, be digitally read out separately to produce at least two intermediate images with different but known degrees of defocus (phase shift). Clearly, the final image quality is dependent on the number of pixels representing an intermediate image. From at least two adjacent final sub-images a composite final image can be made, for example, for EDF applications.
An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimation of the local defocus or, alternatively, estimate tilts per sub-image based on comparison of the spatial spectra of neighbouring images. The apparatus should therefore include processing means to reconstruct a wave-front by combining the defocus curvatures of at least two intermediate sub-images.
For wave-front sensing applications, the method which determines defocus for a total final image, or a total object, can be extended to a system which estimates the degree of defocus in a multiple of sub-intermediate-images (henceforth: sub-images) based on at least two intermediate full images. For small areas the local curvature can be approximated by the defocus curvature (degree of defocus), and for small sub-images any aberration of order two or higher can be approximated by the local curvature, i.e. the degree of defocus. Consequently, the wave-front can be reconstructed based on the local curvatures determined for the small sub-images, and the image reconstruction device effectively becomes a wave-front sensor. This approach is, albeit using local curvatures and not tilts, in essence analogous to the workings of a Shack-Hartmann sensor, which uses the local tilt within each local sub-aperture to estimate the shape of a wave-front; in the method described in this document local curvatures are used for the same purpose. The well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts. The sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and application. For example, scanning the intermediate image with a linear photo-sensor can produce line-shaped sub-images. Applications for wave-front sensors are numerous and will only increase with less expensive wave-front sensors.
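A minimal sketch of the sub-image curvature map described here, reusing the hypothetical estimate_defocus() helper from the earlier sketch; the tile size and equal diversity defocuses are illustrative assumptions.

```python
import numpy as np

def local_curvature_map(im1, im2, im3, tile=32, dphi=1.0):
    """Estimate the local degree of defocus (wave-front curvature) per tile
    from three phase-diverse intermediate images."""
    rows, cols = im1.shape[0] // tile, im1.shape[1] // tile
    curv = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = (slice(r * tile, (r + 1) * tile),
                   slice(c * tile, (c + 1) * tile))
            # The local defocus approximates the wave-front curvature over
            # this sub-aperture (aberrations of order >= 2).
            curv[r, c] = estimate_defocus(im1[win], im2[win], im3[win],
                                          dphi, dphi)
    return curv
```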
However, the intermediate images can also be used to estimate the angulation of light rays compared to the optical axis (from lateral displacements of sub-images) by comparison of the spatial spectra of neighbouring intermediate images, and then to reconstruct the shape of the wave-front by applying methods developed for the analysis of so-called hartmanngrams. The apparatus should therefore include means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.
Moreover, a new image of the object can be calculated as it is projected on a plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing. Assume, for example, that in an optical system using two intermediate images, the spatial spectrum of the first image is I_1(ω_x, ω_y) and the spectrum of the second image, taken in a plane displaced by Δz along the Z-axis, is I_2(ω_x, ω_y). A lateral shift of the second image by Δx and Δy, in conformity with Eq. 22, results in the following change in the spatial spectrum of the second image:
$$\tilde I_2(\omega_x,\omega_y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_2(x-\Delta x,\,y-\Delta y)\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_2(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \quad (42)$$
I_2(ω_x, ω_y) being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and thus the defocused OTF H(ω_x, ω_y, φ) ∈ ℝ (real valued). For two intermediate images, one of which is supposed to be unshifted, we have, by analogy with Eq. 23,
$$\tilde I_2(\omega_x,\omega_y)=H(\omega_x,\omega_y,z+\Delta z)\,I_0(\omega_x,\omega_y)\exp[-i(\omega_x\Delta x+\omega_y\Delta y)],\qquad I_1(\omega_x,\omega_y)=H(\omega_x,\omega_y,z)\,I_0(\omega_x,\omega_y), \quad (43)$$
where H(ω_x, ω_y, z) is the system OTF with the defocus expressed in terms of the displacement z with respect to the exit pupil plane. The intermediate images specified by I_1(ω_x, ω_y) and Ĩ_2(ω_x, ω_y) are supposed to be displaced longitudinally by a small distance |Δz| ≪ z to prevent a significant lateral shift of Ĩ_2(ω_x, ω_y). From Eq. 43, it follows that
$$\left[\tilde I_2(\omega_x,\omega_y)/I_1(\omega_x,\omega_y)\right]^2\sim\exp[-2i(\omega_x\Delta x+\omega_y\Delta y)]\equiv\exp(-2i\vartheta), \quad (44)$$

where ϑ = ω_xΔx + ω_yΔy. The lateral shifts Δx and Δy can obviously be found from
Eq. 44. Note that other mathematical methods applicable to the Fourier transforms of the images and/or their intensity distributions can be implemented to obtain information on the lateral displacements Δx and Δy, for example the correlation method or the analysis of moments of the intensity distributions. From the formulas above, the ray vector characterizing the whole image specified by the spatial spectrum I_1(ω_x, ω_y) becomes v = (Δx, Δy, Δz), and a new image (rather, a point of the image) in any displaced plane with coordinate z perpendicular to the optical axis Z can be conveniently calculated by ray-tracing (see, for example, D. Malacara and M. Malacara, Handbook of Optical Design, Marcel Dekker, Inc., New York, 2004). Note that the ray intensity I_v is given by the integral intensity of the whole image/sub-image
$$I_v=\iint_{(x,y)\in\text{sub-image}} I(x,y)\,dx\,dy. \quad (45)$$
The integration in Eq. 45 is performed over the image/sub-image area. By splitting the images into a large number of non-overlapping or even overlapping sub-areas, depending on the application requirements, the procedure described above can be applied to each sub-area separately, resulting in a final image as projected on the image plane at any given distance from the exit pupil and having a number of "pixels" equal to the number of sub-areas. This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods), but the essential difference is that the method described in the present document does not require an additional array of microlenses. The information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate defocused images. It should be noted that the estimated computational cost of the described method is significantly lower than that of WO2006/039486; in other words, the described method can provide real-time capability.
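The lateral-shift (ray direction) recovery of Eq. 44 can be sketched as a least-squares fit of the phase slope of the spectral ratio at low spatial frequencies; this assumes small shifts (no phase wrapping) and a real-valued OTF ratio, and the names are illustrative.

```python
import numpy as np

def ray_vector(im1, im2, dz, radius=3):
    """Estimate the ray vector v = (dx, dy, dz) of a (sub-)image pair
    (Eq. 44) and the ray intensity (Eq. 45)."""
    I1, I2 = np.fft.fft2(im1), np.fft.fft2(im2)
    ny, nx = im1.shape
    wy, wx = np.meshgrid(2 * np.pi * np.fft.fftfreq(ny),
                         2 * np.pi * np.fft.fftfreq(nx), indexing='ij')
    # Restrict to low frequencies, where the OTF ratio is safely real.
    low = wx ** 2 + wy ** 2 < (2 * np.pi * radius / max(nx, ny)) ** 2
    ratio = I2[low] / (I1[low] + 1e-12)
    theta = -0.5 * np.angle(ratio ** 2)          # theta = wx*dx + wy*dy
    # Least-squares fit of the phase slope gives the lateral shift.
    A = np.column_stack([wx[low], wy[low]])
    (dx, dy), *_ = np.linalg.lstsq(A, theta, rcond=None)
    return (dx, dy, dz), np.sum(im1)             # ray vector and intensity
```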
Images with EDF can be obtained by correction of a single wave-front in a single final image. The non-iterative computation methods described in this document allow for rapid computations on, for example, dedicated electronic circuits. Extended computation time on powerful computers has been a drawback of various EDF imaging techniques to date. EDF images can also be obtained by dividing a total image into sub-images of a much smaller number than for the wave-front application described above, which likely requires thousands of sub-images. The degree of defocus is determined per sub-image (this can be a small number of sub-images, say only a dozen or so per total image, or very large numbers with each sub-image represented by only a few pixels; the desired number of sub-images depends on the required accuracy, the specifications of the device and its application), and the sub-images are corrected accordingly, followed by reconstruction of a final image by combination of the corrected sub-images. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
EDF images can also be obtained by stacking at least two final images each reconstructed to correct for defocus for at least one focal plane of the same objects in cubic space. Such digital stacking procedures are well known.
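A naive focus-stacking sketch, given a set of final images already reconstructed for different focal planes (this is one of many well-known stacking procedures and not specific to the invention): per pixel, keep the value from the image with the greatest local sharpness.

```python
import numpy as np
from scipy import ndimage

def stack_edf(finals):
    """Combine reconstructed final images into one EDF image by picking,
    per pixel, the image with the highest local sharpness."""
    imgs = np.stack([f.astype(float) for f in finals])        # (M, ny, nx)
    # Local sharpness: smoothed squared Laplacian (a common focus measure).
    sharp = np.stack([ndimage.uniform_filter(ndimage.laplace(f) ** 2, 9)
                      for f in imgs])
    best = np.argmax(sharp, axis=0)                           # (ny, nx)
    return np.take_along_axis(imgs, best[None], axis=0)[0]
```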
The list of embodiments above includes examples of possible embodiments; other designs to the same effect can be implemented, albeit likely of increasing complexity. The choice of embodiment clearly depends on the specifics of the application.
It should be noted that the preferred methods above imply a non-iterative method for image reconstruction. Non-iterative processing is simplest and saves computing time; in our prototypes we reach reconstruction times of ~50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the estimate of defocus in selected cases and improve the image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with re-iteration of the calculations can be preferred by the user over three intermediate images combined with non-iterative calculations. The embodiments and methods of reconstruction are dependent on the intended application.
Applications of devices employing image reconstruction as described in this document are found in nearly any optical camera system and are too numerous to list in full. Some, but not all, applications are listed below.
Scanning is an important application. Image scanning is a well-known technology and can hereby be extended to camera applications. Note that images with an EDF can be reconstructed by dividing the intermediate images into a multiple of sub-areas. For each sub-area the degree of defocus can be determined and, consequently, the optical sharpness of the sub-area reconstructed. So, the final image will be composed of a multiple of optically focused sub-images and have an EDF, even at full-aperture camera settings. Linear scanning can be employed to define such linear sub-areas.
Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including defocus. This invention provides a single sharp image of the object from a single exposure, as well as additional information on speed, distance and angle of travel from multiple exposures. Applications can be military tracking and targeting systems, but also medical, for example endoscopy with added information on distances.
Methods described in this document are sensitive to wavelength. This phenomenon can be employed to split images at varying image depths when light sources of different wavelengths are employed. For example, focusing at different layer depths in multilayer CD/DVD discs can be achieved simultaneously with lasers of different wavelengths. A multilayer DVD pick-up optical system which reads different layers simultaneously can thus be designed. Other applications involve consumer and technical cameras insensitive to defocus error, iris scanning cameras insensitive to the distance of the eye to the optics, and a multiple of homeland security camera applications. Also, automotive cameras can be designed which are not only insensitive to defocus but also, for example, calculate the distance and speed of chosen objects, as well as parking aids and wave-front sensors for numerous military and medical applications. The availability of inexpensive wave-front sensors will only increase the number of applications.
As pointed out, the reconstruction method described above is highly dependent on the wavelength of the light forming the image. The methods can therefore be adapted to determine the wavelength of light when the defocus is known precisely; consequently, the image reconstructor can, alternatively, be designed as a spectrometer.
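Under one common convention for the dimensionless defocus of a circular pupil of radius R, focal length f, object distance s_o and image distance s_i (our notation; the document itself does not fix one), φ = (π R²/λ)(1/f − 1/s_o − 1/s_i), which for fixed geometry scales as 1/λ. A single calibration pair (λ_ref, φ_ref) then converts a measured defocus φ into a wavelength estimate, λ = λ_ref · φ_ref/φ.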
Figure 1 shows a sequence of defocused intermediate images on the image side of the optical system, from which intermediate images the final image can be reconstructed. An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description of the invention in this document) with three intermediate images, 2, 3, 4, on the optical axis, 5, which images have precisely known distances, 6, 7, 8, to the exit pupil, 1, and, alternatively, precisely known distances to each other, 9, 10. Note that a precisely known distance of a photo-sensor/image plane to the principal plane in such a system translates, via standard and traditional optical formulas, into a precisely known difference of defocus of the images compared to each other.
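With the same illustrative defocus convention as above, a small displacement δz of an image plane located at distance s_i from the pupil changes the defocus by approximately δφ ≈ (π R²/λ) · δz/s_i², so the precisely known distances 9 and 10 between the planes do indeed fix the differences of defocus between the intermediate images.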
Figure 2 shows a reconstructed image of a page from a textbook, 11, obtained by reconstruction of one final image from three intermediate images: a first image, 12, defocused at the dimensionless value φ = 40, a second image, 13, defocused at φ = 45, and a third, 14, defocused at φ = 50. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14-16 bit/pixel. Note that all defocused images are distinctly unreadable, to a degree that not even the mathematical integral sign can be recognized in any of the intermediate images.
Figure 3 shows a reconstructed image of a scene with a building, 15, obtained by reconstruction of one final image from three intermediate images: a first image, 16, defocused at the dimensionless value φ = 50, a second image, 17, defocused at φ = 55, and a third, 18, defocused at φ = 60. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14 bit/pixel.
Figure 4 shows a reconstructed image of the letters "PSF" on a page, 19, obtained by reconstruction of one final image from three intermediate images: a first image, 20, defocused at the dimensionless value φ = 95, a second image, 21, defocused at φ = 100, and a third, 22, defocused at φ = 105. The reconstruction was carried out on intermediate images with digitally simulated defocus. The final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step defocus correction, with a final defocus deviation from the exact value of δφ ≈ 0.8.
Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit-pupil configuration, 25, and then divided by a beam splitter, 26, into two light signals. The light signals are then detected by two photo-sensors, 27 and 28, positioned in image planes shifted with respect to one another by a specified distance along the optical axis. Photo-sensors 27 and 28 simultaneously provide two intermediate, for example phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit-pupil configuration, 25, and then divided by a first beam splitter, 26, into two light signals. The reflected part of the light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28. The light signals from beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in image planes shifted with respect to one another and relative to the image plane of sensor 27. Photo-sensors 27, 29 and 30 simultaneously provide three intermediate, for example phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 7 illustrates the method, in this example for two intermediate images, of calculating an object image in an arbitrary image plane, i.e. at an arbitrary defocus, based on the local ray-vector and intensity determined for a small sub-area of the whole (defocused) image. Two consecutive phase-diverse images, 2 and 3, with a predetermined defocus, or alternatively displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images. Comparison of the spatial spectra calculated for a selected image area, 31, on the phase-diverse images allows evaluating the ray-vector direction, 32, which characterizes light propagation, in the geometric-optics limit, along the optical axis 5. Using the integral intensity over the area 31 as the ray intensity, in combination with the ray-vector, a corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing. In the calculations, the distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.
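A compact sketch of this geometric procedure (our own code and naming, with an integer-pixel shift estimate kept deliberately simple): the displacement of a sub-area between the two planes gives the local ray slope, and the ray, weighted by the sub-area's integral intensity, is propagated to the target plane.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def tile_shift(a, b):
    """Integer-pixel lateral shift of tile `a` relative to tile `b`, from
    the peak of their circular cross-correlation."""
    xc = np.real(ifft2(fft2(a) * np.conj(fft2(b))))
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    dy -= a.shape[0] * (dy > a.shape[0] // 2)   # unwrap wrapped-around
    dx -= a.shape[1] * (dx > a.shape[1] // 2)   # (negative) shifts
    return np.array([dy, dx], dtype=float)

def trace_point(center_yx, tile1, tile2, dz12, dz_target):
    """Local ray slope from the displacement of a sub-area between two
    image planes dz12 apart; the ray is then traced to a plane dz_target
    beyond the first plane, returning the new point and its intensity."""
    slope = tile_shift(tile2, tile1) / dz12      # pixels per unit of z
    point = np.asarray(center_yx, dtype=float) + slope * dz_target
    intensity = float(np.mean(tile1))            # integral intensity as weight
    return point, intensity
```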

Claims
1. Method for providing at least one final in-focus image of at least one object in a plane, the method comprising the following steps:
- providing at least two phase-diverse intermediate images of the object with an optical system having an optical transfer function, wherein each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the at least one object, but wherein the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known, and wherein the optical transfer function of the optical system is a priori known;
- providing a generating function which combines spatial spectra of at least two intermediate images and, independently from said combination of spatial spectra, combines the optical transfer functions corresponding to said spatial spectra;
- adapting said combinations of spatial spectra and optical transfer functions such that the generating function becomes independent of the degree of defocus of at least one intermediate image compared to the in-focus image plane;
- reconstructing the at least one final in-focus image by a non-iterative algorithm based on said generating function.
2. Method according to claim 1, characterized in that it is preceded by additional processing to adapt the spatial spectrum of at least one of said intermediate images to a lateral shift of said image compared to any other intermediate image.
3. Method according to claim 1 or 2, characterized in that it includes additional processing based on an additional generating function to provide the degree of defocus of at least one of said intermediate images compared to the in-focus image plane.
4. Method according to claim 3, characterized in that the degree of defocus of the intermediate image compared to the in-focus image plane is included in the non-iterative algorithm.
5. Apparatus for providing at least one final in-focus image of at least one object in a plane, the apparatus comprising: imaging means adapted to provide at least two phase-diverse intermediate images of the object, wherein each intermediate image has a different and a priori unknown degree of defocus compared to the in-focus image plane of the object, but wherein the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known; at least one optical system adapted to depict the object on the imaging means, wherein the optical transfer function of the at least one optical system is a priori known; processing means,
- adapted to provide a generating function which combines spatial spectra of at least two intermediate images and, independently from said combination of spatial spectra, combines the optical transfer functions corresponding to said spatial spectra;
- adapted to combine said spatial spectra and optical transfer functions such that the generating function becomes independent of the degree of defocus of at least one intermediate image compared to the in-focus image plane;
- adapted to apply a non-iterative algorithm including said generating function to obtain a reconstruction of the final image.
6. Apparatus according to claim 5, characterized in that the processing means are adapted to adapt the spatial spectrum of at least one intermediate image to the lateral shift of said intermediate image compared to any other intermediate image.
7. Apparatus according to claim 5 or 6, characterized in that the processing means are adapted to determine an additional generating function providing the degree of defocus of at least one intermediate image compared to the in-focus image plane.
8. Apparatus according to claim 7, characterized in that the processing means are adapted to adapt the non-iterative algorithm to the degree of defocus of the intermediate image compared to the in-focus image plane.
9. Apparatus as claimed in claim 5, characterized in that the imaging means comprise at least one image photo-sensor.
10. Apparatus according to claim 9, characterized in that the apparatus is adapted to execute at least two exposures on the same imaging photo-sensor which is adapted to provide phase-diverse intermediate images.
11. Apparatus according to claim 9, characterized in that the apparatus is adapted to execute at least two subsequent exposures on the same imaging photo-sensor which is adapted to assume at least two predetermined different positions along the optical axis for subsequent exposures.
12. Apparatus as claimed in claim 9, characterized in that the apparatus is adapted to execute at least two intermediate exposures on at least two independent areas of at least two imaging photo-sensors.
13. Apparatus as claimed in any of the claims 5-12, characterized in that the processing means are adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images.
14. Apparatus according to claim 13, characterized in that the apparatus includes processing means to compose one final image by combination of at least two final in-focus sub-images.
15. Apparatus according to claim 13, characterized in that it includes processing means to reconstruct a wave-front by combining defocus curvatures of at least two intermediate sub-images.
16. Apparatus according to claim 6, characterized in that the apparatus includes processing means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.
17. Apparatus according to claims 6 and 13, characterized in that the apparatus includes processing means to reconstruct by ray-tracing at least one final in-focus image.
18. Apparatus according to claim 5, characterized in that the apparatus includes at least two sensors, each comprising a photo-sensor with a single photo-sensitive spot.
19. Apparatus according to claim 18, characterized in that the apparatus includes at least two amplitude masks each combined with a focusing optical element.
20. Apparatus according to claim 19, characterized in that the processing means are adapted for range finding.
PCT/NL2009/050084 2008-02-27 2009-02-25 Image reconstructor WO2009108050A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
EP08152017.3 2008-02-27
EP08152017 2008-02-27
NL2001777A NL2001777C2 (en) 2008-02-27 2008-07-07 Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions
NL2001777 2008-07-07
NL2002094 2008-10-14
NL2002094 2008-10-14

Publications (2)

Publication Number Publication Date
WO2009108050A1 (en) 2009-09-03
WO2009108050A9 (en) 2010-09-30

Family

ID=40786556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2009/050084 WO2009108050A1 (en) 2008-02-27 2009-02-25 Image reconstructor

Country Status (1)

Country Link
WO (1) WO2009108050A1 (en)




Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 09716082; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 09716082; Country of ref document: EP; Kind code of ref document: A1)