NL2001777C2 - Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions


Info

Publication number: NL2001777C2
Authority: NL (Netherlands)
Prior art keywords: image, images, focus, degree, sharp
Application number: NL2001777A
Other languages: Dutch (nl)
Inventors: Michiel Christiaan Rombach, Aleksey Nikolaevich Simonov
Original assignees: Michiel Christiaan Rombach, Aleksey Nikolaevich Simonov
Application filed by Michiel Christiaan Rombach and Aleksey Nikolaevich Simonov
Priority to PCT/NL2009/050084 (WO2009108050A1)
Application granted
Publication of NL2001777C2


Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/73 — Deblurring; Sharpening
    • G02B 27/0075 — Optical systems or apparatus with means for altering, e.g. increasing, the depth of field or depth of focus
    • G02B 27/58 — Optics for apodization or superresolution; Optical synthetic aperture systems
    • G02B 7/38 — Systems for automatic generation of focusing signals using image sharpness techniques, measured at different points on the optical axis, e.g. focussing on two or more planes and comparing image data
    • G06T 2207/10148 — Image acquisition modality: varying focus
    • G06T 2207/20221 — Image fusion; Image merging


Abstract

The method involves generating a combination of spatial spectra of intermediate images (2-4) and a combination of corresponding optical transfer functions, and adjusting the spatial spectra and optical transfer functions such that a generating function becomes independent of the degree of de-focus of an intermediate image compared to an in-focus image plane. A final in-focus image is reconstructed by a non-iterative algorithm based on the combinations of spatial spectra and optical transfer functions. (Incomplete specification, abstract based on available information published by the patent office.)

Description

Image restorator

Background of the invention

The present invention relates to imaging and metering techniques. Firstly, the invention provides methods, systems and embodiments of these for estimating aberration errors of an image and reconstruction of said image based on a set of multiple intermediate images by non-iterative algorithms and, secondly, provides methods to reconstruct wave-fronts. An apparatus based on the invention can be either a dedicated camera or wave-front sensor, or these functions can be combined.
The invention has a broad scope of embodiments and applications, including image reconstruction for one or more focal distances, image reconstruction for EDF, speed, distance and direction measurement devices, and wave-front sensors for various applications.
Reconstruction of images independent of the de-focus aberration has the most practical applications. Therefore, the device or derivatives thereof can be applied for digital imaging insensitive to de-focus (in cameras), digital imaging for extended depth of field (“EDF”, in cameras), and as an optical distance, speed and direction measurement device (in measuring and metering devices).
Camera units and wave-front sensors according to the methods and embodiments set forth in this document can be designed to be entirely solid state, with no moving parts, and to be constructed from only very few components; for example, in a basic embodiment: simple optics (for selected applications even only one lens), one beam splitter (or other beam splitting element, for example a phase grating) and two sensors, combined with dedicated data processing units/processing chips, with all these components in, for example, one solid polymer assembly.
In this document “intermediate image” refers to a phase-diverse intermediate image which has an unknown de-focus compared to the in-focus image plane but a known a priori diversity de-focus with respect to any other intermediate image among the multiple intermediate images. The “in-focus image” plane is a plane optically conjugate to an object plane and thus having zero de-focus error.
The terms “object” and “image” conform to the notations of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6). The object is positioned in the “object plane” and the corresponding image is positioned in the “image plane”. “EDF” is an abbreviation for Extended Depth of Field.
The term “in-focus” refers to in focus/optical sharpness/in optimal focus, and the term “de-focus” to defocus/optical un-sharpness/blurring. An image is meant to be in-focus when the image plane is optically conjugate to the corresponding object plane.
This document merely, by way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in-focus image, because de-focus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct for any aberration of any order, or combination of aberrations of different orders. A man skilled in the art will conclude that the concepts set forth in this document can be extended to other aberrations as well by adaptation of the formulas presented for the applications above.
This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes. Colour imaging can be achieved by splitting white light into narrow spectral bands. White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras such as RGB Bayer pattern filters, providing the computation means with adaptations for at least three approximately monochromatic spectra and combining the images.
The invention can be applied to infrared (IR) spectra. X-rays produced by an incandescent cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.
For ultrasound and coherent radio frequency signals the formulas can be adapted for the coherent amplitude transfer function of the corresponding system.
A man skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document.
This document describes methods to obtain sharp, focused images in planes (slices) along the optical axis, as well as optical sharpness in three-dimensional space, and EDF imaging in which all objects in the intended cubic space are sharp and in-focus.
The traditional focusing process, i.e. changing the distance between the imaging optics and the image on film or photo-detector or, otherwise, changing the focal distance of the optics, takes time, requires additional, generally mechanically moving, components in the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis.
Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i.e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the diameter of the aperture reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades the image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by computational methods are of considerable interest in imaging systems and clearly preferable to such traditional optical/mechanical methods. Furthermore, a method to achieve this with no moving parts (as a solid state system) is generally preferable for both manufacturer and end-user because of the low cost of equipment and ease of use.
Several methods have been proposed for digital reconstruction of in-focus images, some of which will be summarized below in the context of the present invention described in this document.
Optical digital technologies regarding de-focus correction and EDF started with a publication of Hausler (Optics Communications 6(1), pp. 38-42, 1972), which described a combination of multiple images into a single image in such a way that the final image results in EDF. This method does not reconstruct the final image from the set of de-focused images but combines various in-focus areas of different images.
The present invention differs from this approach because it reconstructs the final image from intermediate, de-focused images that may not contain in-focus areas at all, and, automatically, combines these images into a sharp final EDF image.
Later, methods were developed based on phase coding/decoding, which include an optical mask in the optical system designed such that the incoherent optical transfer function remains unchanged within a range of de-focuses. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm to reconstruct the final EDF image from the phase-encoded intermediate image.
The present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.
Also, various phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a de-focused image; refer to, for example, US6771422 and US2004/0052426.
US2004/0052426 describes non-iterative techniques of phase retrieval for estimating errors of an optical system, and includes capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object. This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate optical errors of the optics employed in said imaging. This technique requires at least one focused image at a first focal point in combination with multiple unfocused images. These images are then used to calculate wave-front errors.
The present invention differs from US2004/0052426 in that the present invention does not require a focused image, i.e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with an unknown degree of de-focus relative to the object.
US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the de-focus status of each sensor, and the production of an enhanced final image. The de-focus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic and coherent light wave.
The present invention differs from US6771422 in that it does not intend to solve the transport equation. The present invention is based on the known a priori information on the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution for different image planes and, thus, the degree of de-focus, by direct calculations with non-iterative algorithms.
Other methods to reconstruct images based on a plurality of intermediate images/intensity distributions taken at different and known degrees of de-focus employ iterative phase-diversity algorithms (see, for example, J.J. Dolne et al., Applied Optics 42(26), pp. 5284-5289, 2003). Such iteration can take considerable computational power and computing time, and is therefore unlikely to be carried out in real-time.
The present invention described in this document differs from the standard phase diversity algorithms in that it is an essentially non-iterative method.
WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed such that it allows determination, by an array of microlenses, of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called “plenoptic” camera. The sharp images of the object points at different distances from the camera can be recalculated (for example, by ray-tracing). It must be noted that with the method described in the present document the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can also be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.
The present invention described in this document differs from WO2006/039486 and related documents in that the present invention does not explicitly use such information on the angle of incidence obtained with an array of micro-lenses, for example a Shack-Hartmann wave-front sensor; instead, the respective light ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known de-focus distance between them.
Additionally, the intermediate phase-diverse images described in this document can also be used for determining the angle and intensity of individual rays and to compose an EDF image by ray-tracing.
All documents mentioned in the sections above are included in this document by reference.
Description of the invention

The present invention relates to imaging techniques. From the single invention a number of applications can be derived:
Firstly, the invention provides a method for estimation of de-focus in the optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate de-focused images, and,
Secondly, provides means to digitally reconstruct a final in-focus image of the object based on digital processing of multiple intermediate de-focused images, and,

Thirdly, can be used for wave-front sensing by analyzing local curvature of sub-images from which an estimated wave-front can be reconstructed, and,
Fourthly, can reconstruct EDF images by either combining images from various focal planes (for example “image stacking”), or by combining in-focus sub-images (for example “image stitching”), or, alternatively, by correction of wave-fronts, or, alternatively, by ray-tracing to project an image in a plane of choice.
Fifthly, provides methods to calculate the speed and distance of an object by analyzing subsequent images of the object, including speed in all directions, X, Y and Z, based on multiple intermediate images and consequently the acquired information on focal planes, and,
Sixthly, can be used to estimate the shape of a wave-front by reconstruction of the tilt of individual rays, by calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known de-focus distance between them, and,
Seventhly, provides methods to calculate by ray-tracing a new image of an extended object in any image plane of the optical system (for example, approximating a “digital lens” device), and,
Eighthly, can be adapted to determine the wavelength of light when de-focus is known precisely, providing the basis for a spectrometer, and,
Ninthly, can be adapted to many non-optical applications, for example tomography, for digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. an intermediate image degradation can be attributed to a convolution with the system response function), of which the response function is known a priori, the relative degree of blurring of any intermediate image compared to the other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.
With the methods described in this document a focused final image of an object is derived, by digital reconstruction, from at least two de-focused intermediate images having an unknown degree of de-focus compared to an ideal focal plane (or, alternatively, an unknown distance from the object to the principal planes of the imaging system), but having a precisely known degree of de-focus of each intermediate image compared to any other intermediate image.
Firstly, a method of reconstruction, which can be included in an apparatus, will be described. The method starts with at least two de-focused, i.e. phase-diverse, intermediate images from which a final in-focus image can be reconstructed by a non-iterative algorithm and an optical system having an a priori known optical transfer function. Note that each intermediate image has a different and a priori unknown degree of de-focus in relation to the in-focus image plane of the object, but the degree of de-focus of any intermediate image in relation to any other intermediate image is a priori known.
To digitally process the images obtained above, the method includes the following steps:
- a generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed,
- said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent of the degree of de-focus of at least one intermediate image compared to the in-focus image plane (this adjustment can take the form of adjustment of coefficients or adjustment of functional dependencies or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as linear, non-linear or functional, depending on the intended application), and
- the final in-focus image is reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.
An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.
Such a method includes an equation based on the generating function/functional satisfying

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \equiv \Psi[H(\omega_x,\omega_y,\varphi-\Delta\varphi_1)\,I_0(\omega_x,\omega_y),\ldots,H(\omega_x,\omega_y,\varphi-\Delta\varphi_M)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \quad (1)$$

where

$$I(\omega_x,\omega_y,\varphi-\Delta\varphi_n) \equiv I_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x,y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy \quad (2)$$

is the spatial spectrum of the n-th intermediate phase-diverse image, $1 \le n \le M$; $x$ and $y$ are the transverse coordinates in the intermediate image plane; $M$ is the total number of intermediate images, $M \ge 2$. The value $\Delta\varphi_n$ (known a priori from the system configuration) is the diversity de-focus between the n-th intermediate image plane and a chosen reference image plane. Analogously, the spatial spectrum of the object (i.e. of the final image) is

$$I_0(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_0(x',y')\,\exp[-i(\omega_x x'+\omega_y y')]\,dx'\,dy' \quad (3)$$

where $x'$ and $y'$ are the transverse coordinates in the object plane.
In the right-hand side of Eq. 1 the spatial spectra of the phase-diverse images are substituted by $I_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_0+\delta\varphi-\Delta\varphi_n)\,I_0(\omega_x,\omega_y)$, where $H(\omega_x,\omega_y,\varphi)$ denotes the de-focused incoherent optical transfer function (OTF) of the optical system; the unknown de-focus $\varphi$ is substituted by the sum of the de-focus estimate $\varphi_0$ and the deviation $\delta\varphi \equiv \varphi - \varphi_0$, $|\delta\varphi/\varphi_0| \ll 1$: $\varphi = \varphi_0 + \delta\varphi$.
The series coefficients $B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])$, functionally dependent on the spatial spectrum of the object $I_0(\omega_x,\omega_y)$, can be found from $\Psi$ by decomposing the de-focused OTFs $H(\omega_x,\omega_y,\varphi_0+\delta\varphi-\Delta\varphi_n)$ into Taylor series in $\delta\varphi$.
The generating function/functional $\Psi$ is chosen to have zero first- and higher-order derivatives, up to the K-th order, with respect to the unknown $\delta\varphi$:

$$\frac{\partial^i \Psi}{\partial(\delta\varphi)^i} = 0, \quad i = 1,\ldots,K. \quad (4)$$
Thus, $B_i(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]) = 0$ for $i = 1,\ldots,K$ and Eq. 1 simplifies to

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]) + O(\delta\varphi^{K+1}). \quad (5)$$
Finally, neglecting the residual term $O(\delta\varphi^{K+1})$ in Eq. 5, the object spatial spectrum $I_0(\omega_x,\omega_y)$ can be found by solving the approximate equation

$$\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]). \quad (6)$$
So, having two or more intermediate images $I_n(\omega_x,\omega_y)$, $n = 1, 2,\ldots$, and knowing a priori the system optical transfer function $H(\omega_x,\omega_y,\varphi)$, a generating function $\Psi$ according to Eq. 1, independent of the unknown de-focus $\varphi$ (or $\delta\varphi$) as required by Eq. 4, can be composed by an appropriate choice of the functional relation between the $I_n(\omega_x,\omega_y)$ and, subsequently, between the $H(\omega_x,\omega_y,\varphi_0+\delta\varphi-\Delta\varphi_n)$ corresponding to said spatial spectra $I_n(\omega_x,\omega_y)$. The object spectrum $I_0(\omega_x,\omega_y)$, which is the basis for the final in-focus image or in-focus picture, is then reconstructed by a non-iterative algorithm based on Eq. 6, which includes, on the one hand, the combination of the spatial spectra $I_n(\omega_x,\omega_y)$ and, on the other hand, the combination of the incoherent OTFs $H(\omega_x,\omega_y,\varphi_0+\delta\varphi-\Delta\varphi_n)$, which are substituted by the corresponding Taylor expansions in $\delta\varphi$.
An important example of the generating function is a linear combination of the spatial spectra $I_n(\omega_x,\omega_y)$ of the intermediate phase-diverse images:

$$\Psi = \sum_{n=1}^{M} q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,\delta\varphi^p \quad (7)$$

where the coefficients $q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)$ with $n = 1,\ldots,M$ are chosen to comply with Eq. 4. In this case Eq. 5 results in

$$\Psi = \sum_{n=1}^{M} q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\times\{B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M) + O(\delta\varphi^{K+1})\}. \quad (8)$$
The coefficients $q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)$ can be found from Eq. 8 by making the substitutions $I_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_0+\delta\varphi-\Delta\varphi_n)\,I_0(\omega_x,\omega_y)$, where an explicit expression for the incoherent optical transfer function (OTF) $H(\omega_x,\omega_y,\varphi)$ of the optical system is used. In this way, the $q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)$ are a priori known functions depending only on the optical system configuration. The analytical expression for the system OTF $H(\omega_x,\omega_y,\varphi)$ can be found in many ways, including fitting of the calculated OTF; general formulas are given, for example, by Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996).
The “least-mean-square” solution $\tilde{I}_0(\omega_x,\omega_y)$ of Eq. 8 that minimizes the mean square error (MSE)

$$\mathrm{MSE} = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} |\tilde{I}_0(\omega_x,\omega_y) - I_0(\omega_x,\omega_y)|^2\,d\omega_x\,d\omega_y \quad (9)$$

takes the form

$$\tilde{I}_0(\omega_x,\omega_y) = \frac{B_0^*(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)}{|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)|^2 + \varepsilon} \times \sum_{n=1}^{M} q_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y), \quad (10)$$

where the constant $\varepsilon$, by analogy with the least-mean-square-error filter (Wiener filter), denotes the signal-to-noise ratio. “Noise” in the algorithm is caused by the residual term $O(\delta\varphi^{K+1})$ in Eq. 8, which depends on $\delta\varphi$. When $|B_0|$ has no zeros within the spatial frequency range of interest $\Omega$, the constant $\varepsilon$ can be defined from Eq. 8 as follows:

$$\varepsilon = \min_{\Omega} |B_0^*(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M) \times O(\delta\varphi^{K+1})|. \quad (11)$$
So, Eq. 10 describes the non-iterative algorithm for the object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
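For concreteness, a minimal NumPy sketch of the reconstruction in Eq. 10 follows. It assumes the coefficient arrays $q_n$ and $B_0$ have already been precomputed from the a priori known OTF (their derivation per Eqs. 4 and 8 is not shown), and the function name reconstruct_spectrum is illustrative rather than taken from the patent.

```python
import numpy as np

def reconstruct_spectrum(images, q, B0, eps):
    """Non-iterative object reconstruction per Eq. 10 (sketch).

    images : M intermediate phase-diverse images (2-D arrays, equal shape)
    q      : M precomputed coefficient arrays q_n(wx, wy) complying with Eq. 4
    B0     : precomputed array B_0(wx, wy)
    eps    : Wiener-like regularization constant (Eq. 11)
    """
    spectra = [np.fft.fft2(im) for im in images]          # spatial spectra, Eq. 2
    psi = sum(qn * In for qn, In in zip(q, spectra))      # generating function, Eq. 7
    I0 = np.conj(B0) / (np.abs(B0)**2 + eps) * psi        # Wiener-like inversion, Eq. 10
    return np.real(np.fft.ifft2(I0))                      # object intensity is real
```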
The de-focus estimate $\varphi_0$ can be found in many ways, for example from a pair of phase-diverse images. If $I_1(\omega_x,\omega_y)$ is the spatial spectrum of the first image, characterized by the unknown de-focus $\varphi$, and $I_2(\omega_x,\omega_y)$ is the spatial spectrum of the second image with de-focus $\varphi + \Delta\varphi$, where $\Delta\varphi$ is the difference in de-focus predetermined by the system configuration, then the estimate of de-focus is given by the appropriate expression:

$$\varphi_0 = \frac{\Delta\varphi}{A} + \sqrt{\left(\frac{\Delta\varphi}{A}\right)^2 - \frac{\gamma_0}{\gamma_1} + \frac{\Delta\varphi^2}{A}}, \quad (12)$$

where the OTF expansion $H(\omega_x,\omega_y,\varphi) = \gamma_0 + \gamma_1\varphi^2 + \ldots$ in the vicinity of $\varphi = 0$, valid at $|\omega_x^2+\omega_y^2| \ll 1$, is used. The coefficient $A$ denotes the ratio

$$A = \left\langle \frac{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)}{I_1(\omega_x,\omega_y)} \right\rangle, \quad (13)$$

and the averaging is carried out over low spatial frequencies $|\omega_x^2+\omega_y^2| \ll 1$.
In addition, the estimate $\varphi_0$ of the unknown de-focus $\varphi$ can be found from three consecutive phase-diverse images: $I_1(\omega_x,\omega_y)$ with de-focus $\varphi - \Delta\varphi_1$, $I_2(\omega_x,\omega_y)$ with de-focus $\varphi$, and $I_3(\omega_x,\omega_y)$ with de-focus $\varphi + \Delta\varphi_2$ ($\Delta\varphi_1$ and $\Delta\varphi_2$ are specified by the system arrangement):

$$\varphi_0 = \frac{\chi\,\Delta\varphi_1^2 + \Delta\varphi_2^2}{2(\chi\,\Delta\varphi_1 - \Delta\varphi_2)}. \quad (14)$$

The coefficient $\chi$ is the ratio of image spectra

$$\chi = \left\langle \frac{I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)}{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)} \right\rangle \quad (15)$$

averaged over low spatial frequencies $|\omega_x^2+\omega_y^2| \ll 1$. Note that in practice the best estimates of de-focus according to Eq. 14 were achieved when the numerator and the denominator in Eq. 15 were averaged independently, i.e.

$$\chi = \frac{\langle I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)\rangle}{\langle I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)\rangle}. \quad (16)$$

Note that an estimate of de-focus ($\varphi_0$ in Eq. 1) is necessary to start these computations.
The estimate is automatically provided by the formulas specifying the reconstruction algorithm above. Such an estimate can also be provided by other analytical methods, for example by determining the first zero-crossing in the spatial spectrum of the de-focused image as described by I. Raveh et al. (Optical Engineering 38(10), pp. 1620-1626, 1999).
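The three-image estimate of Eqs. 14 and 16 admits an equally short sketch; again the cutoff w_max, the real-part averaging and the name estimate_defocus_triplet are assumptions of this illustration, not part of the specification.

```python
import numpy as np

def estimate_defocus_triplet(img1, img2, img3, dphi1, dphi2, w_max=0.1):
    """De-focus estimate phi_0 from three phase-diverse images (Eqs. 14, 16, sketch)."""
    I1, I2, I3 = (np.fft.fft2(im) for im in (img1, img2, img3))
    wx = np.fft.fftfreq(img1.shape[1])[None, :]
    wy = np.fft.fftfreq(img1.shape[0])[:, None]
    low = wx**2 + wy**2 < w_max**2
    # Eq. 16: numerator and denominator averaged independently
    chi = np.mean(np.real(I3[low] - I2[low])) / np.mean(np.real(I2[low] - I1[low]))
    # Eq. 14
    return (chi * dphi1**2 + dphi2**2) / (2.0 * (chi * dphi1 - dphi2))
```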
In practice, calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus that determines the degree of de-focus with at least two photo-sensors having only one photo-sensitive spot, for example photo-diodes or photo-resistors. A construction for such an apparatus likely includes a photo-sensor but also an amplitude mask, focusing optics and processing means adapted to calculate the degree of de-focus of at least one intermediate image. The advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplification of Eq. 16 to a derivative of Parseval’s theorem, for example:

$$\chi = \frac{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} U(x,y)\,\{I_3(x,y)-I_2(x,y)\}\,dx\,dy}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} U(x,y)\,\{I_2(x,y)-I_1(x,y)\}\,dx\,dy}, \quad (17)$$

where $U(x,y)$ defines the amplitude mask in one or multiple image planes.
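Because Eq. 17 works directly on masked intensities, the computation reduces to sums of sensor readings; a sketch (with hypothetical names, and NumPy used only for the summation) is:

```python
import numpy as np

def chi_from_masked_intensities(i1, i2, i3, mask):
    """Fourier-free evaluation of chi per Eq. 17: intensity distributions
    weighted by the amplitude mask U(x, y) and summed; with single-spot
    sensors each sum reduces to a scalar photo-diode signal."""
    return np.sum(mask * (i3 - i2)) / np.sum(mask * (i2 - i1))
```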
Also, photo-diodes and photo-resistors are significantly less expensive compared to photo-sensor arrays and are more easily assembled.
Note that a Fourier transformation can be achieved by processing methods as described above, but can also be achieved by optical means, for example by an additional optical element between the beam splitter and the imaging photo-sensor. Using such an optical Fourier transformation will significantly reduce digital processing time, which might be advantageous for specific applications.
Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter, camera component or distance meter. Such an apparatus differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods.
The distance of the object to the camera can be estimated once the degree of de-focus is known, via a simple optical calculation, so the methods can be applied to a distance metering device. Also, the speed and direction of an object in the X, Y and Z directions (also: 3D space) can be estimated with additional computation means and information on at least two subsequent final images and the time between capture of the intermediate images for these final images. Such an inexpensive component for solid state image reconstruction will increase consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing functions) and technical applications.
As an alternative, the estimate can be obtained by an additional device, for example an optical or ultrasound distance measuring system. However, most simply, in the embodiments described in this document, the estimate is provided by the algorithm itself without the aid of any additional measuring device.
Note that an estimate of the precision of the degree of de-focus can also be obtained by Cramer-Rao analysis as described in D.J. Lee et al., J. Opt. Soc. Am. A 16(5), pp. 1005-1015, 1999, which document is included in this document by reference.
Apart from the method described above, the invention also provides an apparatus for providing at least two phase-diverse intermediate images of said object, wherein each of the intermediate images has a different degree of de-focus compared to an ideal focal plane (i.e. an image plane of the same system with no de-focus error), but a precisely known degree of de-focus compared to any other intermediate image. The apparatus includes processing means for reconstructing a focused image of the object by an algorithm expressed by Eq. 6.
Note that a man skilled in the art will conclude that: (a) said Fourier-based processing with spatial spectra of images can also be carried out by processing the corresponding amplitudes of wavelets to the same effect; (b) the described method of image restoration can be adapted to optical wave-front aberrations other than de-focus, in which case each phase-diverse image is characterized by an unknown absolute magnitude of an aberration but a known a priori difference in the aberration magnitude relative to any other phase-diverse image; and (c) the processing functions mentioned above can be applied to any set of images or signals which are blurred, but of which the transfer (blurring) function is known. For example, the processing function can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to said out-of-focus blur.
Secondly, an additional generating function is provided here to yield the degree of de-focus of at least one of said intermediate images compared to the in-focus image plane, and the degree of de-focus can be calculated by additional processing in an apparatus. An improved estimate of the unknown de-focus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm according to

$$\delta\varphi = \frac{\Psi' - B_0'}{B_1'}, \quad (18)$$

and thus an improved estimate becomes $\varphi = \varphi_0 + \delta\varphi$. The generating function $\Psi'$ in this case obeys

$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = \sum_{p\ge 0} B_p'(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p \quad (19)$$

and

$$\frac{\partial^i \Psi'}{\partial(\delta\varphi)^i} = 0, \quad i = 2,\ldots,K. \quad (20)$$
In compliance with Eq. 20, $B_i'(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]) = 0$ for $i = 2,\ldots,K$ and Eq. 19 reduces to

$$\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0' + B_1'\,\delta\varphi + O(\delta\varphi^{K+1}). \quad (21)$$
The latter formula directly yields Eq. 18. Note that the coefficients $B_p'$ are, in general, functionals of the object spectrum $I_0(\omega_x,\omega_y)$, which, in turn, can be found from Eq. 6.
It can be necessary to correct the spatial spectrum of at least one of said intermediate images for a lateral shift of said image compared to any other intermediate image, because the image reconstruction described in this document is sensitive to lateral shifts. A method for such a correction, which can be included in the processing means of an apparatus carrying out such image reconstruction, is given below. The general algorithm according to Eq. 6 requires a set of de-focused images as input data. However, due to, for example, mis-alignments of the optical system, some intermediate images can be shifted in the plane perpendicular to the optical axis, resulting in incorrect restoration of the final image.
Using the shift-invariance property of the Fourier power spectra and the Hermitian redundancy in the image spectrum (image intensity is a real value), in combination with the Hermitian symmetry of the OTF, i.e. $H(\omega_x,\omega_y,\varphi) = H^*(-\omega_x,-\omega_y,\varphi)$, the spectrum of the shifted intermediate image can be recalculated to exclude the unknown shift. An example of the method for excluding the shift dependence is described below.
Assuming that the n-th intermediate image is shifted by $\Delta x$, $\Delta y$, in compliance with Eq. 3 its spectrum becomes

$$\tilde{I}_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_n(x-\Delta x,\,y-\Delta y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_n(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \quad (22)$$

$I_n(\omega_x,\omega_y)$ being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and the de-focused OTF $H(\omega_x,\omega_y,\varphi) \in \mathbb{R}$ (real value). For two intermediate images, one of which is supposed to be unshifted, we have in agreement with Eq. 22

$$\tilde{I}_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_n)\,I_0(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \qquad I_1(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_1)\,I_0(\omega_x,\omega_y). \quad (23)$$

From Eq. 23, $[\tilde{I}_n(\omega_x,\omega_y)/I_1(\omega_x,\omega_y)]^2 \sim \exp[-2i(\omega_x\Delta x+\omega_y\Delta y)] \equiv \exp(-2i\vartheta)$, where $i = \sqrt{-1}$, and the shift-dependent factor can obviously be excluded from $\tilde{I}_n(\omega_x,\omega_y)$. Thus, the shift-corrected spectrum takes the form $I_n(\omega_x,\omega_y) = \tilde{I}_n(\omega_x,\omega_y)\,\exp(i\vartheta)$, and it can be used further in the calculations according to Eq. 6. Note that the formulas above give one example of a method for correcting the lateral image shift; there are also other methods to obtain shift-corrected spectra, for example correlation techniques or analysis of moments of the intensity distribution.
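A compact sketch of this shift correction, under the stated assumption of a real-valued de-focused OTF, could look as follows; the small constant in the denominator and the function name shift_corrected_spectrum are illustrative, and phase wrapping of the extracted angle is ignored here.

```python
import numpy as np

def shift_corrected_spectrum(In_tilde, I1):
    """Remove the unknown lateral-shift phase from the n-th image spectrum
    (Eqs. 22-23, sketch); I1 is the spectrum of the image taken as the
    unshifted reference, and the de-focused OTF is assumed real-valued."""
    ratio = (In_tilde / (I1 + 1e-12))**2      # ~ exp(-2i*theta) per Eq. 23
    theta = -0.5 * np.angle(ratio)            # theta = wx*dx + wy*dy
    return In_tilde * np.exp(1j * theta)      # shift-corrected spectrum
```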
The quality of reconstruction of an object $I_0(\omega_x,\omega_y)$ according to the non-iterative algorithm given by Eq. 6 can thus be significantly improved by replacing the initial de-focus estimate $\varphi_0$ with the improved estimate $\varphi = \varphi_0 + \delta\varphi$, where $\delta\varphi$ is provided by Eq. 21. The degree of de-focus of the intermediate image compared to the in-focus image plane can be included in the non-iterative algorithm, and the processing means of an apparatus for such image reconstruction adapted accordingly.
At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing higher quality of restoration and weaker sensitivity to the initial de-focus estimate $\varphi_0$, since the generating function $\Psi$ gives the $(M-1)$-th order approximation to $B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])$ defined by Eq. 1 with respect to the unknown value $\delta\varphi$. The resolution and overall quality of the final image will increase with the number $M$ of intermediate images, at the expense of a larger number of photo-sensors or an increasingly complex optical/mechanical arrangement, and increased computation time. Reconstruction via three intermediate images is used as an example in this document.
The degrees of de-focus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no de-focus error) differ. In Eq. 1 the de-focus of the n-th intermediate image, $\varphi_n = \varphi - \Delta\varphi_n$ ($n = 1,\ldots,M$, with $M$ the total number of intermediate images), is unknown prior to provision of the intermediate images. However, as mentioned earlier, the difference in degree of de-focus $\Delta\varphi_n$ of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This imposes no problems in practice, because the relative difference in de-focus is specified in the design of the camera and its optics. Note that these relative differences vary with camera design, the type of photo-sensor(s) used and the intended application of the image reconstructor. Moreover, the differences in de-focus $\Delta\varphi_n$ can be found and accounted for in further computations by performing calibration measurements with well-defined objects.
The degree of de-focus of the image can be estimated by non-iterative calculations using the fixed and straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations are of low computational cost and provide stable and precise results. Furthermore, they can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention. Thus, the reconstruction of a final sharply focused image is independent of the degree of de-focus of any of the intermediate images relative to the object.
Image reconstruction: an example with three intermediate images
At least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point for such a reconstruction. As an illustration of the reconstruction algorithm set forth in the present document, we now consider an example with three intermediate images.
Assume that the spatial spectra of three consecutive phase-diverse images are $I_1(\omega_x,\omega_y)$, $I_2(\omega_x,\omega_y)$ and $I_3(\omega_x,\omega_y)$, and that their de-focuses are $\varphi-\Delta\varphi$, $\varphi$ and $\varphi+\Delta\varphi$, respectively. In agreement with Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6) the reduced magnitude of de-focus is specified as

$$\varphi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{z_i} - \frac{1}{z_a}\right), \quad (24)$$

where $D$ is the exit pupil size, $\lambda$ is the wavelength, $z_i$ is the position of the ideal image plane along the optical axis and $z_a$ is the position of the shifted image plane. The de-focus estimate (for the second image) can be found from Eq. 14, which with $\Delta\varphi_1 = \Delta\varphi_2 = \Delta\varphi$ reduces to

$$\varphi_0 = \frac{(\chi+1)\,\Delta\varphi}{2(\chi-1)}, \quad (25)$$

where, in agreement with Eq. 16,

$$\chi = \frac{\displaystyle\iint [I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)]\,d\omega_x\,d\omega_y}{\displaystyle\iint [I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)]\,d\omega_x\,d\omega_y}, \quad (26)$$

and the integration is performed over low spatial frequencies $|\omega_x^2+\omega_y^2| \ll 1$. With $\varphi_0$ in hand, and following Eq. 7, the generating function satisfying Eq. 4 becomes

$$\Psi = I_0(\omega_x,\omega_y)\times\{h_0 + \nu(h_1 + h_3\Delta\varphi^2) + 2\mu(h_2 + h_4\Delta\varphi^2) + O(\delta\varphi^3)\}, \quad (27)$$

and

$$B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi) = h_0 + \nu(h_1 + h_3\Delta\varphi^2) + 2\mu(h_2 + h_4\Delta\varphi^2). \quad (28)$$

The coefficients $\nu$ and $\mu$ are

$$\nu = -\frac{h_2 h_3 - 2h_1 h_4}{4h_2 h_4 - 3h_3^2 + 8h_4^2\Delta\varphi^2}, \quad (29)$$

$$\mu = \frac{1}{6}\,\frac{3h_1 h_3 - 2h_2^2 - 4h_2 h_4\Delta\varphi^2}{4h_2 h_4 - 3h_3^2 + 8h_4^2\Delta\varphi^2}, \quad (30)$$

and $h_i$ ($i = 0,\ldots,4$) are the Taylor series coefficients of the de-focused OTF $H(\omega_x,\omega_y,\varphi = \varphi_0+\delta\varphi)$ in the neighbourhood of $\varphi_0$, i.e.
$$H(\omega_x,\omega_y,\varphi_0+\delta\varphi) = h_0 + h_1\,\delta\varphi + h_2\,\delta\varphi^2 + h_3\,\delta\varphi^3 + h_4\,\delta\varphi^4 + O(\delta\varphi^5). \quad (31)$$
Finally, the spectrum of the reconstructed image, in concordance with Eq. 10, can be rewritten as

$$\tilde{I}_0(\omega_x,\omega_y) = \frac{B_0^*(\omega_x,\omega_y,\varphi_0,\Delta\varphi)}{|B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi)|^2 + \varepsilon}\times\left\{ I_2(\omega_x,\omega_y) + \nu\,\frac{I_3(\omega_x,\omega_y)-I_1(\omega_x,\omega_y)}{2\Delta\varphi} + \mu\,\frac{I_3(\omega_x,\omega_y)-2I_2(\omega_x,\omega_y)+I_1(\omega_x,\omega_y)}{\Delta\varphi^2} \right\}. \quad (32)$$
The optimum difference in de-focus Δφ between the intermediate images is related to 15 the specific dynamic range of the image photo-sensors, i. e. their pixel depth. Depending on de-focus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in intensity of images. Right-hand terms in Eqs. 32 and 36 are, in fact, the finite-difference approximations of the corresponding derivatives of the de-focus-dependent image 21 spectrum /(ω^,ω^,φ) = H((öt,CO^,φ)/0(ίΰΛ,(üy) with respect to de-focus φ. By reducing the difference in de-focus between the intermediate images or, other words, by reducing the distance between the intermediate image planes the precision of approximation can be increased. High pixel-depth or, alternatively, high dynamic range 5 allows for sensing small intensity variations and, thus, small difference in de-focus between the intermediate images can be implemented which results in increased quality of the final image.
Various embodiments of a device can be designed, which include, but are not restricted to, the embodiments described below.
Apart from the method and apparatus which are adapted to provide an image wherein a single object is depicted in-focus, a preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each of the depicted objects having a different degree of focus in each of the intermediate images, and before the execution of said method one of those objects is selected.
Clearly, the image reconstructor with its providing means of intermediate images must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.
Firstly, a preferred embodiment of the providing means includes one image photo-sensor which can move mechanically; for example, the device can be designed with optics forming an image on one sensor, where the image photo-sensor or, alternatively, the whole camera assembly moves a predetermined and precise distance along the optical axis between the subsequent intermediate exposures. The simplicity of such a device is the need for only one photo-sensor; the complexity is the mechanics needed for precise movement. Such precise movement is most effectively achieved for only two images, because only two alternative stopping positions of the device are needed. Alternatively, another embodiment with mechanically moving parts is a system with optics and one sensor, but with a spinning disc with, stepwise, sectors of different optical thickness. An image is taken each time a sector of different and known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately and, thus, a set of intermediate images can be provided for subsequent reconstruction by the image reconstruction means.
Secondly, a solid state device (with no mechanical parts/movement) can be employed. In a preferred embodiment of the providing means the optics can be designed such that at least two independent intermediate images are provided to one fixed image photo-sensor. These images can be, for example, two large distinct sub-areas, each covering approximately half of the photo-sensor, and the required diversity de-focus can be provided by, for example, a planar mask.
Also, at least two independent image photo-sensors can be used (for example, three in the example set forth throughout this document), each producing separate intermediate images, likely, but not strictly necessarily, simultaneously. The device can be designed including optics to form an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam, with a light path which is precisely known and which represents a known degree of de-focus compared to at least one other intermediate image.
Such a design (for example, with mirror optics analogous to the optics of a Fabry-Perot interferometer) has, for example, beam splitters to which a large number of sensors or independent sectors on one sensor, for example three, can be added. The simplicity of such a device lies in the absence of mechanical movement and its proven construction for other applications, for example the said interferometer.
Thirdly, a scanning device can provide the intermediate images. Preferably, a line scanning arrangement is applied. Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor. The image can be sensed by a linear sensor scanning in the image plane. Such sensors, even at high pixel depth, are inexpensive, and mechanical means to move such sensors are well known from a myriad of applications. Clearly, disadvantages of this embodiment are the complex mechanics and the increased time to capture intermediate images, because scanning takes time. Alternatively, a scanner configuration employing several line photo-sensors positioned in intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.
Fourthly, the intermediate images can be produced by different light frequency ranges.
Pixels of the sensor can be fitted alternately with a red, blue or green filter in a pattern, for example in the well-known Bayer pattern. Such image photo-sensors are commonplace in technical and consumer cameras. Firstly, the colour split provides a delay and a subsequent difference in de-focus of the pixel groups. A disadvantage of this approach is that only grey-scale images will result as a final image. Alternatively, for colour images the colour split is applied to the final image, and the intermediate images for the different colours are reconstructed separately prior to stacking of such images. Arrangements for coloured images are well known, for example Bayer pattern filters for the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, which disc is synchronized with the image capture process. Alternatively, red (R), blue (B) and green (G) spectral bands (“RGB”), or any other combination of spectral bands, can also be separated by prismatic methods, as is common in professional imaging systems.
Fifthly, a spatial light modulator, for example a liquid crystal device or an adaptive mirror, can be included in the light path of at least one sensor, to modulate the light between the taking of the intermediate images. Note that the adaptive mirror can be of a most simple design, because only de-focus alteration is required, which greatly reduces the number of actuators in such a mirror. Such a modulator can be of a planar design, i.e. a “piston” phase filter, just to lengthen the path of the light, or such a modulator can have any other phase modulating shape, for example a cubic filter.
Using cubic filters allows for combinations of methods described in this document with wave-front coding/decoding technologies, to which references can be found in this document.
Lastly, an image reconstructor adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images can be constructed for EDF and wave-front applications. Such a reconstructor has at least one image photo-sensor (for an image/measuring light intensity) or multiple image photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independent of the other sub-sensors, by projecting intermediate images on the sensor by, for example, a segmented input lens or a segmented input lens array.
It should be noted that increasing the number of intermediate images, with consequently decreasing sensor area per intermediate image, increases the precision of the estimate of de-focus but decreases the image quality/resolution per intermediate image. So, for example, for an application requiring high image quality the number of sub-sensors should be reduced, whereas for applications requiring precise estimation of distance and speed the number of sub-sensors should be increased. Methods for calculating the optimum for such segmented lenses and lens arrays are known and summarized in, for example, Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, and in technologies and methods related to Shack-Hartmann lens arrays. A man skilled in the art will recognize that undesired effects such as parallax between the intermediate images on the sub-sensors can be corrected for by calibration of the device during manufacturing, or, alternatively, digitally during image reconstruction, or by increasing the number of sub-sensors and their distribution on the photo-sensor.
Alternatively, small sub-areas of at least two intermediate images can be distributed over the photo-sensors in a pattern. For example, the sensor can be fitted with a device or optical layer including optical steps, which delays the incoming wave-front differently for sub-areas in a pattern of, for example, lines or dots. Theoretically, the sub-areas can have the size of one photo-sensor pixel. The sub-areas must, of course, be digitally read out separately to produce at least two intermediate images with different but known degrees of de-focus (phase shift). Clearly, the final image quality is dependent on the number of pixels representing an intermediate image.
From at least two adjacent final sub-images a composite final image can be made, for example for EDF applications.
An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimation of the local de-focus or, alternatively, estimate tilts per sub-image based on comparison of the spatial spectra of neighbouring images. The apparatus should therefore include processing means to reconstruct a wave-front by combining de-focus curvatures of at least two intermediate sub-images.
For wave-front sensing applications the method which determines de-focus for a total final image, or a total object, can be extended to a system which estimates the degree of de-focus in a multiple of sub-intermediate-images (henceforth: sub-images) based on at least two intermediate full images. For small areas the local curvature can be approximated by de-focus curvature (degree of de-focus), and at small sub-images any aberration of any order higher than or equal to 2 can be approximated by local curvature, i.e. degree of de-focus. Consequently, the wave-front can be reconstructed based on the local curvatures determined for the small sub-images, and the image reconstruction device effectively becomes a wave-front sensor. This approach is, albeit using local curvatures and not tilts, in essence analogous to the workings of a Shack-Hartmann sensor, which uses local tilt within each local sub-aperture to estimate the shape of a wave-front. In the method described in this document local curvatures are used for the same purpose. The well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts. The sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and application. For example, scanning the intermediate image by a linear photo-sensor can produce sub-images of lines. For wave-front sensors the applications are numerous, and they will increase with less expensive wave-front sensors.
However, the intermediate images can also be used to estimate the angulation of light rays compared to the optical axis (from lateral displacements of sub-images) by comparison of the spatial spectra of the neighbouring intermediate images, and then to reconstruct the shape of the wave-front by applying methods developed for the analysis of so-called hartmanngrams. The apparatus should therefore include means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.
Moreover, a new image of the object can be calculated as it is projected on the plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing. Assume, for example, that in an optical system using two intermediate images the spatial spectrum of the first image is $I_1(\omega_x,\omega_y)$ and the spectrum of the second image, taken in the plane displaced by $\Delta z$ along the Z-axis, is $I_2(\omega_x,\omega_y)$. A lateral shift of the second image by $\Delta x$ and $\Delta y$, in conformity with Eq. 22, results in the following change in the spatial spectrum of the second image:

$$\tilde{I}_2(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_2(x-\Delta x,\,y-\Delta y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_2(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \quad (39)$$

$I_2(\omega_x,\omega_y)$ being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and the de-focused OTF $H(\omega_x,\omega_y,\varphi) \in \mathbb{R}$ (real value). For two intermediate images, one of which is supposed to be unshifted, we have by analogy with Eq. 23

$$\tilde{I}_2(\omega_x,\omega_y) = H(\omega_x,\omega_y,z+\Delta z)\,I_0(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \qquad I_1(\omega_x,\omega_y) = H(\omega_x,\omega_y,z)\,I_0(\omega_x,\omega_y), \quad (40)$$

where $H(\omega_x,\omega_y,z)$ is the system OTF with the de-focus expressed in terms of the displacement $z$ with respect to the exit pupil plane. The intermediate images specified by $I_1(\omega_x,\omega_y)$ and $\tilde{I}_2(\omega_x,\omega_y)$ are supposed to be displaced longitudinally by a small distance $|\Delta z| \ll z$ to prevent a significant lateral shift of $\tilde{I}_2(\omega_x,\omega_y)$.
From Eq. 40, $[\tilde{I}_2(\omega_x,\omega_y)/I_1(\omega_x,\omega_y)]^2 \sim \exp[-2i(\omega_x\Delta x+\omega_y\Delta y)] \equiv \exp(-2i\vartheta)$. The lateral shifts $\Delta x$ and $\Delta y$ can obviously be calculated from $\vartheta$.
Note that other mathematical methods applicable to Fourier transforms of the images and/or their intensity distributions can be implemented to obtain information on the lateral displacements $\Delta x$ and $\Delta y$, for example the correlation method or analysis of moments of the intensity distributions. From the formulas above, the ray-vector characterizing the whole image specified by the spatial spectra $I_{1,2}(\omega_x,\omega_y)$ becomes $V = \{\Delta x, \Delta y, \Delta z\}$, and a new image (rather, a point of the image) at any displaced plane with coordinate $z$ perpendicular to the optical axis Z can be conveniently calculated by ray-tracing (for example, D. Malacara and M. Malacara, Handbook of Optical Design, Marcel Dekker, Inc., New York, 2004). Note that the ray intensity $I_V$ is given by the integral intensity of the whole image/sub-image:

$$I_V = \iint_{(x,y)\in\text{sub-image}} I_1(x,y)\,dx\,dy. \quad (41)$$
The integration in Eq. 41 is performed over the image/sub-image area. By splitting the images into a large number of non-overlapping or even overlapping sub-areas, depending on the application requirements, the procedure described above can be applied to each sub-area separately, resulting in a final image as it is projected on the image plane at any given distance from the exit pupil, having a number of "pixels" equal to the number of sub-areas. This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, which provides an explanation of the methods), but the essential difference is that the method described in the present document does not require an additional array of microlenses. The information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate de-focused images. It should be noted that the estimated computational cost of the described method is significantly lower than that of WO2006/039486; in other words, the described method can provide real-time capability.
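A coarse numerical sketch of this re-projection, under the stated geometric-optics assumptions, might look as follows (it reuses lateral_shift from the sketch above; the tile size, the linear scaling of the landing position with z_new/dz and the nearest-pixel accumulation are illustrative choices, not prescriptions of this document):

```python
import numpy as np

def refocus_by_rays(img1, img2, dz, z_new, tile=16):
    """Synthesize a coarse image in a new plane by per-tile ray-tracing.

    Each tile contributes one ray: its intensity is the integral intensity
    over the tile (Eq. 41) and its direction follows from the tile's lateral
    shift (dx, dy) between two image planes a distance dz apart.
    """
    ny, nx = img1.shape
    out = np.zeros_like(img1, dtype=float)
    for r in range(ny // tile):
        for c in range(nx // tile):
            s = (slice(r * tile, (r + 1) * tile),
                 slice(c * tile, (c + 1) * tile))
            dx, dy = lateral_shift(img1[s], img2[s])
            ray_intensity = img1[s].sum()            # Eq. 41
            # Ray-traced landing position of the tile centre in plane z_new:
            x = int(c * tile + tile / 2 + dx * z_new / dz)
            y = int(r * tile + tile / 2 + dy * z_new / dz)
            if 0 <= x < nx and 0 <= y < ny:
                out[y, x] += ray_intensity           # one "pixel" per sub-area
    return out
```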
Images with EDF can be obtained by correction of a single wave-front in a single final image. The non-iterative computation methods described in this document will allow for rapid computation on, for example, dedicated electronic circuits; extended computation time on powerful computers has been a drawback of various EDF imaging techniques to date. EDF images can also be obtained by dividing a total image into sub-images, in a much smaller number than for the wave-front application described above, which likely requires thousands of sub-images. The degree of de-focus is determined per sub-image (which can be a small number of sub-images, say only a dozen or so per total image, or very large numbers with each sub-image represented by only a few pixels; the desired number of sub-images depends on the required accuracy, the specifications of the device and its application), the sub-images are corrected accordingly, and a final image is reconstructed by combination of the corrected sub-images. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
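A minimal sketch of this per-sub-image correction follows; estimate_defocus (standing in for the de-focus estimator of this document) and otf(n, phi) (returning the known de-focused OTF on an n-by-n frequency grid) are hypothetical placeholders, and the regularized inverse filter is one common correction choice rather than the only one:

```python
import numpy as np

def edf_image(img1, img2, otf, estimate_defocus, tile=64, eps=1e-2):
    """Extended depth of field: estimate de-focus per sub-image, correct each
    sub-image with the known OTF at that de-focus, then recombine."""
    ny, nx = img1.shape
    out = np.zeros_like(img1, dtype=float)
    for r in range(ny // tile):
        for c in range(nx // tile):
            s = (slice(r * tile, (r + 1) * tile),
                 slice(c * tile, (c + 1) * tile))
            phi = estimate_defocus(img1[s], img2[s])
            H = otf(tile, phi)                      # de-focused OTF, complex
            S = np.fft.fft2(img1[s])
            # Wiener-like regularized inverse to avoid noise amplification:
            out[s] = np.fft.ifft2(S * np.conj(H) / (np.abs(H) ** 2 + eps)).real
    return out
```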
EDF images can also be obtained by stacking at least two final images, each reconstructed to correct for de-focus for at least one focal plane of the same objects in three-dimensional space. Such digital stacking procedures are well known.
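For completeness, such stacking might look as follows (a common local-sharpness variant using the Laplacian energy; SciPy assumed, shown purely for illustration):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def stack_edf(images):
    """Focus stacking: per pixel, keep the value from the reconstructed
    image with the highest local sharpness (Laplacian energy)."""
    sharp = [uniform_filter(laplace(im.astype(float)) ** 2, size=9)
             for im in images]
    idx = np.argmax(np.stack(sharp), axis=0)
    stacked = np.stack([im.astype(float) for im in images])
    return np.take_along_axis(stacked, idx[None], axis=0)[0]
```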
The list of embodiments above gives examples of possible embodiments; other designs to the same effect can be implemented, albeit likely at increasing complexity. The choice of embodiment clearly depends on the specifics of the application.
It should be noted that the preferred methods above imply a non-iterative method for image reconstruction. Non-iteration is simplest and saves computing time; in our prototypes we reach reconstruction times of ~50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the estimate of de-focus in selected cases and thereby improve image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with re-iteration of the calculations can be preferred by the user to three intermediate images combined with non-iterative calculations. The embodiments and methods of reconstruction depend on the intended application.
Applications of devices employing image reconstruction as described in this document extend to nearly any optical camera system and are too numerous to list in full. Some, but not all, applications are listed below.
Scanning is an important application. Image scanning is a well-known technology and can hereby be extended to camera applications. Note that images with an EDF can be reconstructed by dividing the intermediate images into multiple sub-sectors. For each sub-area the degree of de-focus can be determined and, consequently, the optical sharpness of the sub-sector reconstructed. The final image will thus be composed of multiple optically focused sub-images and have an EDF, even at full-aperture camera settings. Linear scanning can be employed to define such linear sub-areas.
Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including de-focus. This invention provides a single sharp image of the object from single exposures, as well as additional information on speed, distance and angle of travel from multiple exposures. Applications include military tracking and targeting systems, but also medical ones, for example endoscopy with added distance information.
The methods described in this document are sensitive to wavelength. This phenomenon can be employed to split images at varying image depths when light sources of different wavelengths are employed. For example, focusing at different layer depths in multilayer CD/DVD discs can be achieved for different depths simultaneously with lasers of different wavelengths; a multilayer DVD pick-up optical system which reads different layers simultaneously can thus be designed. Other applications involve consumer and technical cameras insensitive to de-focus error, iris-scanning cameras insensitive to the distance of the eye to the optics, and a multitude of homeland-security camera applications. Also, automotive cameras can be designed which are not only insensitive to de-focus but which also, for example, calculate the distance and speed of chosen objects; further applications include parking aids and wave-front sensors in numerous military and medical applications. The availability of inexpensive wave-front sensors will only increase the number of applications.
As pointed out, the reconstruction method described above is highly dependent on the wavelength of the light forming the image. The methods can therefore be adapted to determine the wavelength of light when the de-focus is known precisely. Consequently, the image reconstructor can, alternatively, be designed as a spectrometer.
Figure 1 shows a sequence of de-focused intermediate images on the image side of the optical system, from which intermediate images the final image can be reconstructed. An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description of the invention in this document) with three intermediate images, 2, 3, 4, along the optical axis, 5. These images have precisely known distances, 6, 7, 8, to the exit pupil, 1, and, alternatively, precisely known distances to each other, 9, 10. Note that a precisely known distance of a photo-sensor/image plane to the principal plane in such a system translates, via standard optical formulas, into a precisely known difference of de-focus between the images.
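As an illustration only, under one standard paraxial convention (not necessarily the exact relations used elsewhere in this document): for an exit pupil of radius \(a\) and an in-focus image plane at distance \(z\) from the pupil, a sensor displacement \(\Delta z\) introduces a de-focus wave-front error of approximately

\[
W_{20} \approx \frac{a^2}{2}\left(\frac{1}{z} - \frac{1}{z+\Delta z}\right) \approx \frac{a^2\,\Delta z}{2z^2},
\qquad
\varphi = \frac{2\pi}{\lambda}\,W_{20} \approx \frac{\pi a^2\,\Delta z}{\lambda z^2},
\]

so a precisely known \(\Delta z\) fixes the relative de-focus \(\varphi\) once \(a\), \(z\) and \(\lambda\) are known.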
Figure 2 shows a reconstructed image, 11, of a page from a textbook, reconstructed into one final image from three intermediate images: a first image, 12, de-focused at the dimensionless value φ = 40, a second image, 13, de-focused at φ = 45, and another, 14, de-focused at φ = 50. The reconstruction was carried out on intermediate images with digitally simulated de-focus and a dynamic range of 14-16 bit/pixel. Note that all de-focused images are distinctly unreadable, to a degree that not even the mathematical integral sign can be recognized in any of the intermediate images.
Figure 3 shows a reconstructed image, 15, of a scene with a building, reconstructed into one final image from three intermediate images: a first image, 16, de-focused at the dimensionless value φ = 50, a second image, 17, de-focused at φ = 55, and another, 18, de-focused at φ = 60. The reconstruction was carried out on intermediate images with digitally simulated de-focus and a dynamic range of 14 bit/pixel.
Figure 4 shows a reconstructed image, 19, of the letters "PSF" on a page, reconstructed into one final image from three intermediate images: a first image, 20, de-focused at the dimensionless value φ = 95, a second image, 21, de-focused at φ = 100, and another, 22, de-focused at φ = 105. The reconstruction was carried out on intermediate images with digitally simulated de-focus. The final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step de-focus correction, with a final de-focus deviation from the exact value of δφ ≈ -0.8.
Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and is then divided by a beam splitter, 26, into two light signals. The light signals are finally detected by two photo-sensors, 27 and 28, positioned in image planes shifted, one with respect to the other, by a specified distance along the optical axis. Photo-sensors 27 and 28 simultaneously provide two intermediate, for example phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and is then divided by a first beam splitter, 26, into two light signals. The reflected part of the light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28. The light signals from beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in image planes shifted with respect to one another and relative to the image plane of sensor 27. Photo-sensors 27, 29 and 30 simultaneously provide three intermediate, for example phase-diverse, images for the reconstruction algorithm set forth in this document.
Figure 7 illustrates the method, in this example for two intermediate images, of calculating an object image in an arbitrary image plane, i.e. at an arbitrary de-focus, based on the local ray-vector and intensity determined for a small sub-area of the whole (de-focused) image. Two consecutive phase-diverse images, 2 and 3, with a predetermined de-focus, or alternatively displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images. Comparison of the spatial spectra calculated for a selected image area, 31, on the phase-diverse images allows evaluation of the ray-vector direction, 32, which characterizes light propagation, in the geometric-optics limit, along the optical axis 5. Using the integral intensity over the area 31 as the ray intensity, in combination with the ray-vector, a corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing. In the calculations, the distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.

Claims (20)

1. Method for providing at least one final sharp image of at least one object in a plane, the method comprising the steps of:
- providing, with an optical system having an optical transfer function, at least two phase-shifted intermediate images of the object, wherein each intermediate image has a different and previously unknown degree of de-focus relative to the sharp image plane of the at least one object, but wherein the degree of de-focus of each intermediate image relative to every other intermediate image is known in advance, and wherein the optical transfer function of the optical system is known in advance;
- providing a generating function which combines spatial spectra of at least two intermediate images and which, independently of the combination of spatial spectra, combines the optical transfer functions of the corresponding spatial spectra;
- adjusting the combinations of spatial spectra and optical transfer functions such that the generating function becomes independent of the degree of de-focus of at least one intermediate image relative to the sharp image plane; and
- reconstructing the at least one final sharp image by means of a non-iterative algorithm, which algorithm is based on the generating function.

2. Method according to claim 1, characterized in that it is preceded by additional processing to adjust the spatial spectrum of at least one of the intermediate images for a lateral shift of the image relative to every other intermediate image.

3. Method according to claims 1-2, characterized in that it comprises additional processing, based on an additional generating function, for providing the degree of de-focus of at least one of the intermediate images relative to the sharp image plane.

4. Method according to claim 3, characterized in that the degree of de-focus of the intermediate image relative to the sharp image plane is included in the non-iterative algorithm.

5. Device for providing at least one final sharp image of at least one object in a plane, the device comprising:
- imaging means adapted to provide at least two phase-shifted intermediate images of the object, wherein each intermediate image has a different and previously unknown degree of de-focus relative to the sharp image plane of the object, but wherein the degree of de-focus of each intermediate image relative to every other intermediate image is known in advance;
- at least one optical system adapted to image the object onto the imaging means, wherein the optical transfer function of the at least one optical system is known in advance;
- processing means adapted to provide a generating function which combines spatial spectra of at least two intermediate images and which, independently of the combination of spatial spectra, combines the optical transfer functions of the corresponding spatial spectra; adapted to combine the spatial spectra and optical transfer functions such that the generating function becomes independent of the degree of de-focus of at least one intermediate image relative to the sharp image plane; and adapted to apply a non-iterative algorithm with the generating function to obtain a reconstruction of the final image.

6. Device according to claim 5, characterized in that the processing means are adapted to convert the spatial spectrum of at least one intermediate image to a lateral shift relative to every other intermediate image.

7. Device according to claims 5-6, characterized in that the processing means are adapted to determine an additional generating function for providing the degree of de-focus of at least one intermediate image relative to the sharp image plane.

8. Device according to claim 7, characterized in that the processing means are adapted to adapt the non-iterative algorithm to the degree of de-focus of the intermediate image relative to the sharp image plane.

9. Device according to claim 5, characterized in that the imaging means comprise at least one image-forming photo-sensor.

10. Device according to claim 9, characterized in that the device is adapted to perform at least two exposures with the same image-forming photo-sensor, which is adapted to provide phase-shifted intermediate images.

11. Device according to claim 9, characterized in that the device is adapted to perform at least two consecutive exposures with the same image-forming photo-sensor, which is adapted to assume at least two predetermined, different positions along the optical axis for consecutive exposures.

12. Device according to claim 9, characterized in that the device is adapted to perform at least two intermediate exposures on at least two independent areas of at least two image-forming photo-sensors.

13. Device according to any of claims 5-12, characterized in that the processing means are adapted to process sub-images of corresponding sub-areas of at least two intermediate images into at least two final sharp sub-images.

14. Device according to claim 13, characterized in that the device comprises processing means for composing one final image by combining at least two final sharp sub-images.

15. Device according to claim 13, characterized in that it comprises processing means for reconstructing a wave-front by combining de-focus curvatures of at least two intermediate sub-images.

16. Device according to claim 6, characterized in that the device comprises processing means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.

17. Device according to claims 6 and 13, characterized in that the device comprises processing means for reconstructing at least one final sharp image by means of ray-tracing.

18. Device according to claim 5, characterized in that the device comprises at least two sensors, each comprising a photo-sensor with a single light-sensitive point.

19. Device according to claim 18, characterized in that the device comprises at least two amplitude filters, each combined with a focusing optical element.

20. Device according to claim 19, characterized in that the processing means are adapted for range finding.
NL2001777A 2008-02-27 2008-07-07 Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions NL2001777C2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/NL2009/050084 WO2009108050A1 (en) 2008-02-27 2009-02-25 Image reconstructor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08152017 2008-02-27
EP08152017 2008-02-27

Publications (1)

Publication Number Publication Date
NL2001777C2 true NL2001777C2 (en) 2009-08-31

Family

ID=40550235

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2001777A NL2001777C2 (en) 2008-02-27 2008-07-07 Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions

Country Status (1)

Country Link
NL (1) NL2001777C2 (en)


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DOLNE J J ET AL: "Practical issues in wave-front sensing by use of phase diversity", APPLIED OPTICS OPT. SOC. AMERICA USA, vol. 42, no. 26, 10 September 2003 (2003-09-10), pages 5284 - 5289, XP002525073, ISSN: 0003-6935 *
GONSALVES R A: "Phase retrieval and diversity in adaptive optics", OPTICAL ENGINEERING, SOC. OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, BELLINGHAM, vol. 21, no. 5, 1 September 1982 (1982-09-01), pages 829 - 832, XP008105528, ISSN: 0091-3286 *
GONSALVES R A: "Small-phase solution to the phase-retrieval problem", OPTICS LETTERS OPT. SOC. AMERICA USA, vol. 26, no. 10, 15 May 2001 (2001-05-15), pages 684 - 685, XP002525072, ISSN: 0146-9592 *
HAUSLER ET AL: "A method to increase the depth of focus by two step image processing", OPTICS COMMUNICATIONS, NORTH-HOLLAND PUBLISHING CO. AMSTERDAM, NL, vol. 6, no. 1, 1 September 1972 (1972-09-01), pages 38 - 42, XP024491435, ISSN: 0030-4018, [retrieved on 19720901] *
KENDRICK R L ET AL: "PHASE-DIVERSITY WAVE-FRONT SENSOR FOR IMAGING SYSTEMS", APPLIED OPTICS, OSA, OPTICAL SOCIETY OF AMERICA, WASHINGTON, DC, vol. 33, no. 27, 20 September 1994 (1994-09-20), pages 6533 - 6546, XP000469298, ISSN: 0003-6935 *
LANDESMAN B T ET AL: "Non-iterative methodology for obtaining a wavefront directly from phase diversity measurements", IR SPACE TELESCOPES AND INSTRUMENTS 24-28 AUG. 2002 WAIKOLOA, HI, USA, vol. 4850, 2003, Proceedings of the SPIE - The International Society for Optical Engineering SPIE-Int. Soc. Opt. Eng USA, pages 461 - 468, XP002525071, ISSN: 0277-786X *
LEVOY M ET AL: "LIGHT FIELD RENDERING", COMPUTER GRAPHICS PROCEEDINGS 1996 (SIGGRAPH). NEW ORLEANS, AUG. 4 - 9, 1996; [COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH)], NEW YORK, NY : ACM, US, 4 August 1996 (1996-08-04), pages 31 - 42, XP000682719 *
YASUHIRO OHNEDA ET AL: "Multiresolution Approach to Image Reconstruction with Phase-Diversity Technique", OPTICAL REVIEW, SPRINGER, BERLIN, DE, vol. 8, no. 1, 1 January 2001 (2001-01-01), pages 32 - 36, XP019353857, ISSN: 1349-9432 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102162739A (en) * 2010-12-30 2011-08-24 中国科学院长春光学精密机械与物理研究所 Method and device for testing in-orbit dynamic transfer function of space camera
CN102162739B (en) * 2010-12-30 2012-11-07 中国科学院长春光学精密机械与物理研究所 Method and device for testing in-orbit dynamic transfer function of space camera

Similar Documents

Publication Publication Date Title
WO2009108050A1 (en) Image reconstructor
US7705970B2 (en) Method and system for optical imaging and ranging
CA2613443C (en) Image correction across multiple spectral regimes
US9530213B2 (en) Single-sensor system for extracting depth information from image blur
JP5328165B2 (en) Apparatus and method for acquiring a 4D light field of a scene
US9541635B2 (en) Laser phase diversity for beam control in phased laser arrays
JP6091176B2 (en) Image processing method, image processing program, image processing apparatus, and imaging apparatus
JP6570991B2 (en) Diversification of lenslet, beam walk (BEAMWALK), and tilt for forming non-coplanar (ANISOLANATIC) images in large aperture telescopes
WO2011158508A1 (en) Image processing device and image processing method
US20180122092A1 (en) Apparatus and Method for Capturing Images using Lighting from Different Lighting Angles
JP2012514749A (en) Optical distance meter and imaging device with chiral optical system
US20120044320A1 (en) High resolution 3-D holographic camera
JP2017208641A (en) Imaging device using compression sensing, imaging method, and imaging program
US5350911A (en) Wavefront error estimation derived from observation of arbitrary unknown extended scenes
CN115546285B (en) Large-depth-of-field stripe projection three-dimensional measurement method based on point spread function calculation
TWI687661B (en) Method and device for determining the complex amplitude of the electromagnetic field associated to a scene
Amin et al. Active depth from defocus system using coherent illumination and a no moving parts camera
US10855896B1 (en) Depth determination using time-of-flight and camera assembly with augmented pixels
NL2001777C2 (en) Sharp image reconstructing method for use in e.g. digital imaging, involves reconstructing final in-focus image by non-iterative algorithm based on combinations of spatial spectra and optical transfer functions
WO2021099761A1 (en) Imaging apparatus
JP5857887B2 (en) Wavefront measuring device
JP3584285B2 (en) Distortion image correction method and apparatus
Neuner et al. Digital adaptive optical imaging for oceanic turbulence mitigation
WO2020094484A1 (en) Wavefront curvature sensor involving temporal sampling of the image intensity distribution
US11689821B2 (en) Incoherent Fourier ptychographic super-resolution imaging system with priors

Legal Events

Date Code Title Description
PD2B A search report has been drawn up
V1 Lapsed because of non-payment of the annual fee

Effective date: 20130201