NL2002406C2 - Optical range finder and imaging apparatus.

Optical range finder and imaging apparatus.

Info

Publication number
NL2002406C2
NL2002406C2 (application NL2002406A)
Authority
NL
Netherlands
Prior art keywords
image
optical
spectrum
degree
mask
Prior art date
Application number
NL2002406A
Other languages
Dutch (nl)
Inventor
Michiel Christiaan Rombach
Aleksey Nikolaevich Simonov
Original Assignee
Michiel Christiaan Rombach
Priority date
Filing date
Publication date
Application filed by Michiel Christiaan Rombach filed Critical Michiel Christiaan Rombach
Priority to NL2002406A priority Critical patent/NL2002406C2/en
Priority to US13/143,655 priority patent/US8941818B2/en
Priority to EP10700187.7A priority patent/EP2386053B1/en
Priority to JP2011545314A priority patent/JP2012514749A/en
Priority to PCT/NL2010/050007 priority patent/WO2010080030A2/en
Priority to CN2010800102452A priority patent/CN102356298A/en
Application granted granted Critical
Publication of NL2002406C2 publication Critical patent/NL2002406C2/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G01C3/08 Use of electric radiation detectors
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42 Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • G02B27/46 Systems using spatial filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)

Description

Optical range finder and imaging apparatus

Introduction
Range finding, the estimation of the distance from an observer or observing apparatus to an object of interest or an extended scene, is important for sharp focusing (for example, in general imaging and photography), for accurately aiming a weapon and, more recently, for guiding smart ammunition (military applications), for determining the distance and speed of moving objects (for example, with automotive cameras), and for many other technical, medical and scientific applications in which range finding, or range finding in combination with imaging, is important.
Terms and definitions
The term “in-focus image plane” denotes a plane optically conjugate to an object plane and thus having no defocus error. The term “in-focus” refers to in-focus/optical sharpness/optimal focus, and the term “defocus” to defocus/optical un-sharpness/blurring. An image is in focus when the image plane is optically conjugate to the corresponding object plane. The terms “object” and “image” conform to the notation of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6). The object is positioned in the “object plane” and the corresponding image is positioned in the “image plane”. In the mathematical description, the terms “object” and “image” refer to two-variable functions representing two-dimensional distributions of light in the object and image planes, respectively. The term “EDF” is an abbreviation for Extended Depth of Field. The “characteristic pattern” (for example, a “pattern of lines”) referred to in this document results from the inherent spectral response of the optical mask. This pattern can be represented by the modulus of the optical transfer function, OTF, calculated with the mask amplitude and phase functions. In practice, the characteristic pattern is useful for measuring and evaluating the displacement of the spatial spectra. The term “pattern of lines” in the context of this document refers to a, usually periodic, spectral response of the optical mask. A pattern of lines is an example of the characteristic pattern resulting from an optical mask with a chiral prismatic element, for example a half-aperture prismatic optical mask. In many practical cases, the spectral response is represented by the incoherent OTF calculated with the mask amplitude and phase functions.
The “spectral response” is generally obtained by Fourier transformation from the intensity impulse response (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6), but other transformations (for example, wavelet decomposition and other spectral decomposition methods) can also be used.
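As a minimal sketch of this transformation (Python with NumPy assumed; the sampled PSF array, its centering and its normalization are assumptions, not specified by this document):

```python
import numpy as np

def spectral_response(psf):
    """Incoherent spectral response (OTF) as the Fourier transform of a
    sampled intensity impulse response (PSF), normalized to unity at DC."""
    # ifftshift assumes the PSF is stored with its peak at the array centre.
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    return otf / np.abs(otf).max()  # for a non-negative PSF the maximum is at DC
```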
The term “spectral decomposition” means decomposition of the image into spatial frequency spectra by, for example, Fourier or wavelet transformation. Alternatively, spectral decomposition of the image can be performed into a complete (discrete or continuous) orthogonal basis in the Hilbert space defined by the eigenfunctions of an appropriate operator.
The term “chiral optical element” denotes an optical element with, at least, one optical surface resulting in chiral phase modulation of the light beam after passing the mask. The chiral phase modulation, in turn, is associated with a chiral phase function which can be depicted as a three-dimensional chiral surface. By the definition of chirality, the mirror image of a chiral surface cannot be mapped onto the original surface by rotations and translations (for example, M. Petitjean, J. Math. Phys. 43, 4147-4157, 2002, and references therein). Chirality signs/directions (clockwise or counter-clockwise, or, alternatively, right-handed or left-handed, or, alternatively, but in less use, plus or minus) should be the same for each particular optical mask, i.e. there should preferably be no mixing of chirality signs/directions within one mask. The degree of chirality can, in simple cases (e.g. a vortex), be quantitatively measured in terms of topological charge; in other cases (e.g. complex three-dimensional surfaces with discontinuities), the degree of chirality can be calculated in terms of the continuous chirality measure (for example, Salomon et al., J. Math. Chem. 25, 295-308, 1999).
The current state of the art
Optical range finding methods, implemented in various optical-mechanical systems, have been in use since the nineteenth century, and generally apply some variation on trigonometry, as in stadiametric range finders and parallax/coincidence range finders. Generally, light from an object enters the optical system through two windows spaced wide apart. The range finder operates as an angle-measuring device by measuring the triangle comprising the range finder base length and the line from each window to the target point. Such passive range finders are cumbersome and relatively inaccurate, but they have an important advantage of passivity over more modern, active, range finders (which, for example, transmit sonar, laser or radar signals). Passive range finding does not send out any signal and is therefore difficult to detect by, for example, the object whose range is being determined.
Most modern cameras employ range finding to determine the correct focus distance for sharp imaging. Range finding can be active, e.g. transmitting sound or light signals and measuring their round-trip delay to calculate the distance. Passive range finding is also employed: firstly, phase detection methods and, secondly, contrast measurement methods.
Firstly, phase detection methods (for example, WO2005098501 and US2008205872, by secondary image registration through the main lens) include splitting the image by a small beam splitter, for example as part of the mirror in single-lens reflex cameras, to direct light to a dedicated autofocus sensor which is independent of the main image-taking sensor. With the addition of an optical setup which directs light from opposite sides of the objective lens, this creates a simple range finder with a base equivalent to the diameter of the objective lens, by analyzing two images for differences in light intensity patterns. The range finder and methods described in this document are passive by definition and require only one image (or, alternatively, the corresponding spatial spectrum) from only one photosensor to evaluate the distance to an object. Secondly, contrast measurement methods (for example, US2006109369 and US2004017502) involve maximizing the contrast of the image, or part of the image, on the photosensor by changing the focusing condition. This contrast measurement method differs, in all its aspects, from the range finder apparatus and corresponding methods described in this document.
US2008/0137059 and the document (A. Greengard et al., Opt. Lett. 31, 181-183, 2006) propose an apparatus and method for optical range finding and imaging, including multiple optical masks and multiple images, which evaluate a point spread function (PSF) that rotates depending on the position of the object. The invention described in this document is different in its design, embodiments and methods from US2008/0137059. Several, but not necessarily all, differences are listed here. Firstly, the invention described in this document does not evaluate the PSF (including rotation thereof). Secondly, it needs only one image of the object to estimate defocus and range, and thus only one optical mask. Thirdly, the optical mask is such that it provides displacement of the spatial spectrum and, additionally, can provide a characteristic pattern in the spatial spectrum; the optical mask includes a chiral optical element in a preferred embodiment described in this document. Fourthly, for estimating the degree of defocus (and for imaging), it uses the known a priori information on the inherent spectral response of, in the examples set forth in this document, said chiral optical element. The inherent spectral response can be represented, for example, by the modulus of the incoherent OTF of the chiral prism, which is calculated, or, alternatively, measured, for example during manufacturing of the apparatus, only once for all future range finding and imaging. Fifthly, in this document the spatial spectrum of the image (with unknown a priori placement of the pattern of lines caused by the mask, the placement of said pattern depending on defocus) is compared with the placement of the, largely similar, pattern of lines (or, alternatively, a pattern of shapes, which preferably are lines, for example the modulus of the incoherent optical transfer function of the mask) depending solely on the inherent spectral response of the optical mask and having a known a priori dependence on defocus. With said spatial spectra of the image and inherent spectral response of the mask, a mutual displacement of said patterns of lines, such as shift, rotation and scaling, can be measured, the degree of defocus can be estimated from the displacement, and subsequently the range can be calculated from the degree of defocus. Sixthly, the present document describes, by means of an example, an embodiment with a half-aperture prismatic optical mask and, for this example, a detailed method for defocus estimation is presented in terms of mathematical formulas. Seventhly, and lastly, this document also describes the imaging function of the range finding apparatus and provides, by means of an example of the least-mean-square-error inversion filter, the method for obtaining the in-focus image of an object from the, likely blurred, image detected by the photosensor.
General description
This document describes a solid apparatus (an apparatus with no moving parts) and a method for, basically passive, optical range finding, which can be adapted to an imaging apparatus or to a combination of a range finder and an imaging apparatus which carries out distance measurements and provides in-focus imaging of an object by optical/digital processing. Processing steps can be accomplished in real-time (at the frame rate of a photosensor), so that the movement, speed and direction of an object can be determined. Also, defocus maps and depth maps of an object and/or a scene can be provided in real-time. Such an apparatus is in principle solid, i.e. the apparatus has no moving parts, which has advantages for the manufacturing, cost, and sturdiness of the apparatus.
An optical rangefinder is proposed which comprises standard imaging optics in combination with an optical mask, a photosensor and processing means. In the examples given in this document, the optical mask includes at least one chiral optical element, which element has a prismatic surface covering only a part of the mask aperture, but the optical surfaces of such an optical mask are not restricted to prismatic or chiral surfaces. The image of the object captured by the photosensor is digitally transformed into a spatial spectrum. The displacement of this spatial spectrum versus a reference spectrum (which corresponds to the inherent spectral response of the optical mask) is related to the degree of defocus of the object. The degree of defocus of the image and the distance of the object are calculated. Additionally, the apparatus can be expanded with additional processing means to provide an in-focus image of the object, which can be further processed for extended depth of field images, depth-maps and defocus-maps. Detailed descriptions of the apparatus as well as methodologies are provided.
The present invention relates to an apparatus and, more generally, a method for optical range finding. From the single invention a number of applications can be derived, of which a number are listed below, but the applications are not restricted to this list. Firstly, this document describes an apparatus and method for estimating the degree of defocus of the image of an object in the image plane relative to the in-focus plane (the in-focus plane position depends on the distance to the object) without prior knowledge of the distance to the object. Secondly, this document describes means to determine the distance from an object to the range finder. Thirdly, it describes means to reconstruct an in-focus image of an object by digital processing. Fourthly, this document describes means for estimating the degrees of defocus of multiple sub-images from the corresponding sub-areas of the image of an object or an object scene. Fifthly, this document describes means to reconstruct multiple in-focus sub-images corresponding to sub-areas of the image of an object or an object scene by digital processing. Sixthly, this document describes means to combine multiple in-focus sub-images from the corresponding multiple sub-areas of the image of an object or an object scene into a final in-focus image by digital processing. Seventhly, this document describes means to construct a defocus map, i.e. a two-dimensional distribution of degrees of defocus. Eighthly, the invention can be adapted to calculate the speed and distance of an object by analyzing subsequent images of the object, including speed in all directions X, Y and Z based on the degrees of defocus of multiple sub-images analyzed in consecutive time periods. Ninthly, the invention(s) can be adapted for wave-front characterization by analyzing local degrees of defocus corresponding to sub-areas of an image of an object or an object scene, from which an estimated wave-front can be reconstructed. Tenthly, the invention can be adapted to reconstruct images with extended depth of field (EDF) by combining multiple in-focus images from the corresponding multiple sub-areas of the image of an object or an object scene. Eleventhly, it can be adapted to many non-optical applications, for example tomography, for digital reconstruction of a final in-focus image of an object of interest from multiple blurred sub-images resulting from a non-local spatial response of the acquisition system (i.e. degradation of sub-images can be attributed to a convolution with the system response function which, in turn, changes from one sub-image to another, but remains almost constant within a sub-image), of which the response function is known a priori, while the absolute degree of blurring of any intermediate image is not known a priori. Twelfthly, a man skilled in the arts will conclude that the prismatic optics of the invention can be sensitive to the wavelength of light, and thus can be adapted, with additional optical and processing means, to provide information on the colour spectrum of an object, i.e. the invention can also be adapted for spectroscopy. For example, in military applications and various technical applications, a range finder combined with some form of spectrometer or a colour filter can be advantageous.
With the methods described in this document, the unknown a priori degree of defocus of the image of an object relative to the in-focus plane (where a focused image of the object is formed) can be determined from the amount of displacement of the spatial spectrum of the image. This displacement is represented by a combination, or any particular type, of displacement, including lateral shift, rotation and scaling of the image spectrum, which combination and its dependence on defocus are known a priori and are completely stipulated by the optical mask design. The distance from the object of interest to the range finder is then calculated from the amount of displacement.
An additional property of the optical mask which might be important for practical reasons is to create characteristic mask-related features in the spatial spectrum of the object. These spectral features, or characteristic patterns in the spectrum, do not depend on the object spectrum and allow determination of the amount of displacement (caused by defocus) versus the reference spectrum. So, the optical mask preferably modulates the incoming light beam such that the spatial spectrum of the image has at least one detectable feature, for example a pattern of lines, from whose displacement the degree of defocus of the image can be estimated. In many practical cases, but not always, the reference spectrum coincides with the spectral response of the mask, which spectral response, in turn, is represented by the incoherent OTF of the mask with no defocus.
Range finder and imaging apparatus
Hence the present invention provides an optical range finder comprising imaging optics adapted to project an image of at least one object on the image surface of a photosensor adapted to transform the image into a corresponding electronic image, at least one optical mask located in the optical path of the imaging optics to modulate the light beam, and processing means for processing the electronic image produced by the photosensor, wherein the optical mask is adapted to modulate the incoming light beam such that defocus of the image of the at least one object in the image plane relative to the in-focus plane results in displacement of the spatial spectrum of the image relative to the reference spectrum corresponding to the known inherent spectral response of the optical mask, and wherein the processing means comprise primary processing means adapted to provide the spatial spectrum of the, at least one, image by spectral decomposition, secondary processing means adapted to provide the degree of defocus of the image of the object in the image plane relative to the in-focus plane from said degree of displacement of the spatial spectrum relative to the reference spectrum, and tertiary processing means adapted to provide the distance from the, at least one, object to the range finder from the degree of defocus.
The optical mask is generally positioned behind the imaging optics, and wholly or partially covers the light beam. The optical mask is adapted to modulate the incoming light beam such that defocus of the image of the, at least one, object in the image plane relative to the in-focus plane results in displacement of the spatial spectrum of the image relative to the reference spectrum corresponding to the known inherent spectral response of the optical mask. The spectral response of the optical mask can, for example, be represented by the incoherent OTF of the mask.
The processing means include primary processing means adapted to provide the spatial spectrum of the, at least one, image by spectral decomposition, secondary processing means adapted to provide the degree of defocus of the image of the object in the image plane relative to the in-focus plane from said degree of displacement of the spatial spectrum relative to the reference spectrum, and tertiary processing means adapted to provide the distance from the, at least one, object to the range finder from the degree of defocus.
For example, the spatial spectrum of the image can be obtained by discrete Fourier transformation of the electronic image. The comparison of the image spectrum with the reference spectrum (the modulus of the incoherent OTF of the mask) can be made by, for example, calculating the cross-correlation of these spectra. Maximization of the cross-correlation results in an estimate of the relative displacement, which displacement can be converted into defocus, and, by using a simplified model of the imaging optics (for example, Nayar et al., Proc. of Fifth Intl. Conf. on Computer Vision, 995-1001, Cambridge, MA, USA, 1995), the distance of an object from the range finder can be evaluated.
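A minimal sketch of this comparison (Python with NumPy and SciPy assumed; the array names and the lateral-shift-only displacement model are illustrative assumptions, since the displacement may in general also include rotation and scaling):

```python
import numpy as np
from scipy.signal import fftconvolve

def spectrum_displacement(image, reference):
    """Locate the peak of the cross-correlation between the modulus of the
    image spectrum and the reference spectrum (modulus of the mask OTF)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    # Cross-correlation computed as convolution with the flipped reference.
    corr = fftconvolve(spectrum, reference[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2  # zero-lag bin (up to a half-bin convention)
    return np.array(peak) - center      # displacement in frequency bins
```

Rotation and scaling of the spectrum can be reduced to shifts of this kind by resampling the spectrum on a log-polar grid before correlating, a standard image-registration device; the document itself does not prescribe a particular estimator.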
The optical mask can be adapted to provide a degree of displacement of the spatial spectrum correlated with the degree of defocus of the image. For example, the degree of rotation and scaling of a pattern of lines in the image spectrum is correlated with the degree of defocus of the image. In the examples set forth in this document, the optical mask, in its basic embodiment, is an optical mask including at least one optical element with a chiral function which is designed such that the spatial spectrum of the image is rotated and scaled, where the degrees of rotation and scaling are correlated with the degree of defocus of the image. Such chiral optical masks and chiral functions will be discussed in detail below.
The optical mask can be further adapted to provide at least one characteristic pattern in the spatial spectrum. The characteristic patterns, or characteristic mask-related features in the image spectrum, do not depend on the object spectrum and allow determination of the amount of displacement (correlated with defocus) versus the reference spectrum. For the optical mask with a chiral optical element the characteristic pattern is a pattern of lines. The spatial spacing (scaling) and angular orientation (rotation) of the pattern of lines depend on the degree of defocus. Clearly, a displacement of, for example, a distinct set of lines, a pattern of lines, is much simpler to detect and measure than the displacement of, for example, a severely blurred image or grey cloud.
Note that the effects of the optical mask, and, of course, the subsequent processing are not necessarily restricted to said chirality, rotation, and pattern of parallel lines. The optical mask can also be designed, depending on the degree of defocus, to displace/shift the image spectrum laterally, to widen gaps between the spectrum features, to increase the thickness of the features, to change the scale of the features, or any other combination of such variations.
The optical mask, in its basic embodiment as set forth in this document, is a traditional transparent refractive optical element with multiple sub-areas. However, said optical masks can derive said functionality from transmission (e.g. traditional transparent prismatic optics, likely manufactured from glass or transparent plastic polymer), but also from reflection (e.g. the optical masks being mirrors of specified prismatic shape), from periodic phase and amplitude structures (e.g. the masks being phase and amplitude gratings effecting a specified prismatic shape), from holograms, including computer-generated holograms, for example a detour computer-generated optical hologram for specific applications to provide a chiral effect with only an amplitude mask, and from diffractive optical elements and other optical embodiments, including GRIN-type optics, which result in the various functionalities described herein.
The optical mask can, for a number of specific applications, be represented by an amplitude function, designed to modulate the distribution of light, and a phase function, which modulates the phase of the light beams. The optical mask must be designed such that the amplitude function in combination with the phase function provides a non-uniform distribution of the power spectral density, preferably with a characteristic pattern. Moreover, the configuration and spatial structure of this non-uniform distribution have to depend, in a known manner, on the degree of defocus.
An example of the optical mask: a chiral optical element
The phase function generated by the optical mask can be adapted to provide a chiral modulation of phase. To achieve such chiral modulation, at least part of the phase function of the, at least one, optical mask must be adapted such that the phase function is represented by, at least, one chiral surface. Chiral surfaces are generally of complex design. However, for the inventions described in this document it was found that an adequate optical mask with chiral functionality can be composed of, in a simplified form, a prismatic surface which covers only part of the aperture of the optical mask. In the case of multiple such surfaces the direction of chirality, i.e. rotation clockwise/counter-clockwise, should be the same. The, at least one, chiral prismatic surface can either constitute the complete optical mask or, alternatively, constitute only part of the optical mask, with the remaining part of the optical mask consisting of, at least, one prismatic surface with, at least, one alternative degree of prismatic effect; alternatively, the remaining part of the optical mask can be a flat window, i.e. with parallel surfaces and no prismatic effect, or the remaining part can comprise any other optical surface.
An example of the optical mask with a chiral prismatic surface is a square-aperture mask with a prism which occupies one half of the aperture (the wedge of the prism is along the side of the mask) and a flat surface occupying the other half of the mask. Another example is a mask with a rectangular shape and an optical surface represented by a helical surface.
Note that the angular steepness (in this context the partial derivative with respect to the polar angle) of the chiral prismatic surface can vary depending on the application and specifications of said surface, e.g. said steepness can be a linear or non-linear function. Similarly, the radial steepness (the partial derivative with respect to the radius) can be designed depending mostly on additional requirements; for example, the mask might include an additional spherical surface to change the focal distance of the imaging system. In general, one optical mask can comprise any number of such surfaces, which surfaces can represent any combination of chiral prismatic surfaces, at least one of which has non-zero angular steepness.
Note also that a man skilled in the arts will conclude that chiral prismatic surfaces can be provided by various means, for example an aperture covered, only in part, by a prism, or an aperture covered, only in part, by a combination of parabolic or spherical lens surfaces which, in combination, might result in a prismatic effect (for example, two parabolic surfaces shifted laterally produce a variable wedge, as shown below). Also, two prisms positioned back-to-back and rotated by a certain degree can provide a variable prism depending on the degree of rotation, which variable prism can also cover part of the aperture and create a chiral prismatic surface. Clearly, chiral surfaces can be obtained by a variety of components and constructions, and the choice of component or construction depends on the design of the complete range finding apparatus and on its specifications. From the practical point of view, employing standard optical surfaces to compose a chiral prismatic surface is of interest since it greatly simplifies the fabrication of such optical surfaces.
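The variable-wedge remark admits a one-line check (a sketch under the assumption of thin parabolic surfaces of equal curvature $c$, laterally shifted by $\pm a$): the net thickness of the pair contains the term

$$c(x+a)^2 - c(x-a)^2 = 4ac\,x,$$

i.e. a contribution linear in $x$, which is exactly a prism whose wedge grows in proportion to the lateral shift $a$.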
The, at least one, optical mask can be adapted to provide functionalities according to any combination of phase functions and amplitude functions which can be concluded from the descriptions above. Clearly, the final choice of design is dependent upon the specifications of the apparatus including, for example, the required accuracy and dynamic range.
Note that colour filters can be included in the mask design. Application of such filters allows subsequent imaging and evaluation of images based on their wavelength or, alternatively, wavelength ranges. Colour images can be detected by, for example, a photosensor with a Bayer colour filter pattern.
Image processing means and processing method
The optical range finder also includes processing means or, alternatively, calculation means, to evaluate the distance of an object from the optical range finder by processing a selected sub-area of, at least, one image on the, at least one, photosensor. Note that the electronic signals can be transformed, after processing by any of the processing means, into an image on a display. Such a displayed image can be evaluated by, for example, optical-mechanical evaluation, for example a system of optical elements and mechanical sliding or rotating scales and visual inspection by an observer. Alternatively, and in the modern digital era more likely, it can be further processed electronically for final digital evaluation by processing or calculation means comprising, at least, one electronic processor and adapted software to drive said processor.
The processing means include, at least, one primary processing means to provide, at least, one spatial spectrum of the, at least one, image. The optical mask is adapted to modulate the incoming light beam such that the degree of defocus of the image of the, at least one, object in the image plane relative to the in-focus plane results in a displacement of the spatial spectrum of the image relative to the reference spectrum corresponding to the inherent spectral response of the optical mask, which is known a priori.
The primary processing means are adapted to provide the spatial spectrum of the, at least one, image by spectral decomposition. The primary processing means can comprise, at least, one digital processor or, alternatively, at least, one optical element to provide the spatial spectrum of the, at least one, image. The spatial spectrum of the image can be obtained by digital processing, for example by discrete Fourier transformation, of the electronic image provided by the photosensor, or, alternatively, the spatial spectrum can be obtained directly from the image by an optical processor, for example using the incoherent optical processor described in US4556950, which is included in this document by reference.
The secondary processing means are adapted to provide the degree of defocus of the image of the object in the image plane relative to the in-focus plane from said displacement relative to the reference spectrum, using the inherent spectral optical response of the optical mask, which is known a priori. So, the secondary processing means are adapted to provide an estimate of the degree of defocus of the, at least one, image of the, at least one, object in the, at least one, image plane relative to the, at least one, in-focus plane from the displacement of the, at least one, spatial spectrum. The secondary processing means either include an optical-mechanical assembly to estimate the degree of defocus by visual evaluation and optical-mechanical means by an observer, or include an electronic processor and corresponding software to estimate the degree of defocus.
The tertiary processing means provide an estimate of the distance of the, at least one, object from the range finder. The distance is provided from the degree of defocus of the image of the object relative to the in-focus plane. Tertiary processing means can either include optical-mechanical means and an observer or, more likely, an electronic processor performing signal processing.
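As an illustration of this tertiary step (a sketch under a thin-lens assumption; the document itself only points to a simplified model such as Nayar et al., and the constant depends on the chosen defocus convention), for a lens of focal length $f$ and aperture radius $R$, a photosensor at a fixed distance $s$ behind the lens, and light of wavelength $\lambda$, the defocus magnitude is commonly written as

$$\varphi \approx \frac{\pi R^2}{\lambda}\left(\frac{1}{f} - \frac{1}{s} - \frac{1}{z}\right),$$

so that, once $\varphi$ has been estimated, the object distance follows by inversion:

$$z \approx \left(\frac{1}{f} - \frac{1}{s} - \frac{\lambda\varphi}{\pi R^2}\right)^{-1}.$$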
In many cases the reference spectrum, represented by the inherent spectral response of the optical mask, can be calculated in advance (analytically or numerically) if the amplitude function and the phase function are analytically defined functions. A preferable choice for the reference spectrum is the modulus of the OTF calculated with the amplitude and phase functions of the optical mask. In situations where the inherent spectral response cannot be evaluated analytically or numerically, the reference spectrum can be generated during, for example, factory calibration of the apparatus, using an object placed, for example, in the plane optically conjugate to the photosensor plane and resulting in zero defocus.
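A minimal numerical sketch of such a pre-computed reference spectrum (Python with NumPy assumed; the grid size, padding and sampling are illustrative choices, not prescribed by this document):

```python
import numpy as np

def reference_spectrum(p, theta, pad=2):
    """Modulus of the incoherent OTF computed from sampled mask amplitude
    and phase functions p(x, y) and theta(x, y) at zero defocus."""
    pupil = p * np.exp(1j * theta)                  # complex mask transmission, Eq. (1)
    n = pad * max(pupil.shape)                      # zero-padding samples the OTF finely
    asf = np.fft.fft2(pupil, s=(n, n))              # coherent amplitude spread function
    psf = np.abs(asf) ** 2                          # intensity impulse response
    otf = np.fft.fftshift(np.fft.fft2(psf))         # spectral response of the mask
    return np.abs(otf) / np.abs(otf).max()          # normalized reference spectrum
```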
Additional imaging apparatus
An optical range finder and imaging apparatus can be constructed as a combination of the optical range finder and an imaging apparatus. The optical range finder as described hitherto can be adapted to provide an image of the object. Moreover, the optical range finder can be coupled to the focusing means of the imaging apparatus. If so required, the range finding and imaging functions can be combined in an integrated optical range finder and imaging apparatus. Such an imaging apparatus can be adapted such that, at least, one in-focus image of the, at least one, object is reconstructed from the image of the object as provided by the photosensor. It is, however, preferable to reconstruct such an in-focus image from the spatial spectrum of the image resulting from the transformation by the primary processing means. So, the imaging apparatus is adapted to reconstruct, at least, one in-focus image of the object from the spatial spectrum of the image detected by the photosensor.
Also, an imaging apparatus as described above can be adapted by including additional processing means: firstly, to provide the degrees of defocus of, at least, two sub-images from the corresponding sub-areas of, at least, one image of the object; secondly, additional processing means can also be adapted to reconstruct, at least, two in-focus sub-images from the corresponding sub-areas of, at least, one image of the object.
Thirdly, additional processing means can also be adapted to combine said in-focus sub-images, electronically, into a final in-focus image. Fourthly, the imaging apparatus can include additional processing means to provide, by construction, a representation of the image as a defocus-map or, alternatively, as a depth-map.
Note that from the degree of defocus per sub-image (each sub-image depicts a “sub-object”) the distance from the sub-object to the imaging apparatus can be calculated with adapted tertiary processing means as described above. Such distance measurements, repeated over time, can provide estimates of the speed and direction of an object. Also, by combining the distance values of a large number of sub-images, a depth-map can be constructed (a sketch of this tiling is given below). Defocus-maps and depth-maps are important tools for general image analysis, but can be of special importance for military and home-security applications because of the additional options for optical, and generally passive, detection of the movement and speed of objects.
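A minimal sketch of such a per-sub-image defocus map (Python with NumPy assumed; the tile size and the `estimate_defocus` routine, which stands in for the secondary processing step applied to one sub-image, are assumptions):

```python
import numpy as np

def defocus_map(image, tile, estimate_defocus):
    """Two-dimensional distribution of degrees of defocus: the image is cut
    into square tiles and defocus is estimated independently per tile."""
    rows, cols = image.shape[0] // tile, image.shape[1] // tile
    dmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sub = image[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            dmap[i, j] = estimate_defocus(sub)  # secondary processing per tile
    return dmap
```

Applying the tertiary step per tile converts such a defocus-map into a depth-map; differencing depth-maps from consecutive frames yields the Z-component of object speed.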
Note that the position or course of an object can be visually or digitally followed while, in parallel, its speed and direction are provided. Clearly, such a combined apparatus is likely suited for many defence and home-security applications, in addition to consumer (e.g. cameras), automotive (e.g. imaging in combination with distance and speed measurements), and various technical and medical applications.
Methods for range finding and imaging
The method for optical range finding includes projection of an image of at least one object on an image plane, modulation of the incoming light beam by an optical mask, transformation of, at least, one image into a corresponding electronic image, and processing steps with the following characteristics. Firstly, at least one image of at least one object in, at least, one image plane is projected by largely standard imaging optics. Secondly, the, at least one, optical mask modulates the light beam (after the beam passes the imaging optics and before the image is projected on the photosensor) such that defocus of the image of the, at least one, object in the image plane relative to the in-focus plane results in a displacement of the, at least one, spatial spectrum of the image relative to the reference spectrum, which reference spectrum corresponds to the inherent spectral response of the optical mask, which spectral response is known a priori. Thirdly, the modulated light beam is transformed into a corresponding electronic image by a photosensor.
Fourthly, a number of processing steps follow. The primary processing step provides, at least, one spatial spectrum of the image by spectral decomposition of the image provided by the photosensor. The primary processing step decomposes the, at least one, image into the spatial spectrum (by, for example, discrete Fourier transformation, or any other spectral decomposition method), following basic imaging, modulation of the light beam and transformation of the image into electronic signals by photosensing. The secondary processing step provides the degree of defocus of the at least one object in the image plane relative to the in-focus plane by evaluation of the displacement of the spatial spectrum relative to the reference spectrum which corresponds to the inherent optical response of the, at least one, optical mask, which optical response is known a priori. The secondary processing step analyzes the spatial spectrum either by an observer who estimates defocus values by analyzing an image of the spatial spectrum by optical-mechanical methods, or by electronic processing (by, for example, evaluating the cross-correlation of spectra). Whether optical-mechanical or electronic and digital, both methods are adapted to provide the degree of defocus of the, at least one, image of the, at least one, object in the, at least one, image plane relative to the, at least one, in-focus plane from the displacement of the, at least one, spectrum. The tertiary processing step provides the distance from the, at least one, object to the range finder by using the calculated degree of defocus.
The method for modulation of the light beam is preferably such that the degree of displacement of the spatial spectrum is correlated with the degree of defocus of the image. For example, with the chiral optical masks discussed above, a certain degree of scaling and rotation of a pattern of lines, as the characteristic pattern of the image spectrum, is correlated with a certain degree of defocus of the image. A combination of scaling and rotation is just one example of displacement. The spatial spectrum can be laterally shifted, rotated and scaled, including any combination of said displacements, and other displacements not listed in this document.
The light beam can also be modulated to provide a characteristic pattern in the image spectrum. For example, with the chiral optical masks discussed above, such modulation can provide a pattern of lines. Such a pattern (see also, for example, the patterns of lines in the figures included in this document) greatly simplifies the subsequent analysis of the spatial spectrum and its comparison with the reference spectrum.
The methods above can be extended to include additional methods and processing steps adapted to provide image reconstruction. Such methods, for example, provide reconstruction of, at least, one in-focus image of the, at least one, object. Similarly to the foregoing, the methods can also be extended to provide reconstruction of multiple sub-images from the corresponding multiple sub-areas of the, at least one, image of the object.
A man skilled in the art of digital image reconstruction will conclude that (a) the initial blurred image, (b) its spatial spectrum and (c) the information on the degree of defocus provide, in most situations, ample information to reconstruct a sharp image. Clearly, the quality of such a reconstructed image depends on the requirements, the specifications of hardware and software, as well as various other factors.
An embodiment of the optical range finder as described above includes projection of, at least, one image on, at least, one photosensor. Such an image can be blurred and defocused completely or partially, but, depending on the degree of said blurring and defocus, at least one recognizable image is provided. While such an image generally does not represent a high-quality image, it might be useful to provide, for example, a targeting reference for the range finder. However, the blurring and defocus can be, at least partially, corrected digitally by reconstruction algorithms, for example the least-mean-square-error inverse filter (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996). More complex image reconstruction algorithms requiring a lower signal-to-noise premium can also be applied.
A man skilled in the arts will conclude that such algorithms and methods can be adapted to reconstruct a sharp image from the blurred and defocused image resulting from the optical mask employed in the optical range finder described above, which blurred and defocused image is also the “intermediate image” in terms of wave-front encoding/decoding technologies. All these encoding and decoding steps can be carried out in real-time, which allows constant monitoring of the position, speed and direction of even rapidly moving objects.
Additionally, a method can be adapted to obtain an in-focus sub-image from a corresponding (blurred/defocused) sub-image of the, at least one, object. Moreover, a large number of such in-focus sub-images from corresponding (blurred/defocused) sub-images can be combined into a final in-focus image of the object. Three-dimensional sharp images or, alternatively, EDF images can be constructed.
Analytical framework
The method for optical range finding requires, at least, one optical mask positioned inside or outside an optical system, preferably in the plane of the exit pupil of the optical system, to modulate the phase and the amplitude of the incoming light. In the simplest embodiment of the optical range finder, the light after passing the mask is registered by a photosensor in the image plane, which plane is a fixed plane specified by the system design and, generally, does not coincide with the in-focus plane of the optical system for the distance range of interest. Alternatively, the light can be registered by a photosensor positioned in a plane representing the spatial spectrum of the image, for example in the output plane of the optical processor described in US4556950.
In many cases the mask can be designed as a part of the optical system or the photosensor, or as a part of any element of the optical system, for example a lens with a modified surface. The optical mask comprises an amplitude function, to modulate the distribution of light, and a phase function, to modulate phase. In the Cartesian coordinate system with the Z axis directed along the optical axis of the system, and the X and Y axes lying in the plane perpendicular to the optical axis, the complex mask transmission function is given by

$$P(x,y) = p(x,y)\exp[i\vartheta(x,y)], \qquad (1)$$

where $p(x,y)$ is the amplitude transmission function and $\vartheta(x,y)$ is the phase function. Thus, the plane XY is the plane of the mask. Reduced (dimensionless) coordinates are used in the present formulas (H.H. Hopkins, Proc. Roy. Soc. of London, A231, 91-103, 1955). Assuming that the optical system is affected by defocus with magnitude $\varphi$, and that the mask is positioned in the exit pupil of the optical system, the incoherent optical transfer function (OTF) as a function of the reduced spatial frequencies $\omega_x$ and $\omega_y$, satisfying $|\omega_x|, |\omega_y| \le 2$, becomes (H.H. Hopkins, ibid.)

$$H(\omega_x,\omega_y,\varphi) = \frac{1}{\Omega}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} P\!\left(x+\frac{\omega_x}{2},\, y+\frac{\omega_y}{2}\right) P^{*}\!\left(x-\frac{\omega_x}{2},\, y-\frac{\omega_y}{2}\right) \exp[i2\varphi(\omega_x x+\omega_y y)]\, dx\, dy, \qquad (2)$$

where $\Omega$ is the total area of the pupil in reduced coordinates:

$$\Omega = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} |P(x,y)|^2\, dx\, dy. \qquad (3)$$
Specifying the spatial spectrum of the object as the Fourier transform of the object intensity distribution,

$$I_0(\omega_x,\omega_y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} I_0(x',y')\exp[-i(\omega_x x'+\omega_y y')]\, dx'\, dy', \qquad (4)$$

where $x'$ and $y'$ are the transverse coordinates in the object plane and $I_0(x',y')$ is the intensity distribution that characterizes the object, the spectrum of the image (in the image plane) takes the form (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996)

$$I_i(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi)\, I_0(\omega_x,\omega_y). \qquad (5)$$

Thus, the spatial spectrum of the image is the product of the object spectrum and the OTF of an incoherent system with defocus originating from the unknown a priori distance from the optical system to the object of interest.
The present invention discloses a method to modify the mask transmission function $P(x,y)$ in such a way that the spatial spectrum of the image $I_i(\omega_x,\omega_y)$, given by Eq. 5, depicts defocus changes as detectable displacements of the spectrum. Moreover, these displacements contain characteristic patterns or spectral features that depend, in a known manner, on defocus and permit quantitative characterization of the displacements in order to determine $\varphi$. Among the possible displacements of the image spatial spectrum are lateral shift, rotation and scaling. More complex displacements that include combinations of lateral shift, rotation and scaling are also admissible as long as they allow an unambiguous and quantitative characterization of defocus irrespective of the spatial spectrum of the object $I_0(\omega_x,\omega_y)$.
In the present invention, to obtain detectable displacements resulting from changes in the magnitude $\varphi$ of defocus and to produce characteristic features in the spatial spectrum of the image, the phase function $\vartheta(x,y)$ of the optical mask is represented, at least in part, by a chiral prismatic surface. The amplitude function $p(x,y)$, in this case, has to be adapted to produce characteristic features in the spatial spectrum of the image which allow an unambiguous and quantitative characterization of displacements in the spatial spectrum of the image. For example, by expressing $\vartheta = \vartheta(x,y)$ in polar coordinates according to

$$x = r\cos\alpha,\quad y = r\sin\alpha,\quad r = \sqrt{x^2+y^2},\quad \alpha = \arctan(y/x), \qquad (6)$$

it may be rewritten as $\vartheta = \vartheta'(r,\alpha)$. Assuming that $\partial\vartheta'/\partial r = 0$, so that $\vartheta = \vartheta'(\alpha)$, then for an amplitude function with, for example, circular symmetry,

$$p(x,y) = \begin{cases} 1, & x^2+y^2 \le 1, \\ 0, & x^2+y^2 > 1, \end{cases} \qquad (7)$$

the resulting image spectrum contains only symmetric characteristic features which are not related to the spatial spectrum of the object $I_0(\omega_x,\omega_y)$. One may conclude that this combination of $\vartheta(x,y)$ and $p(x,y)$ is not suitable for the determination of $\varphi$.
For most combinations of $\vartheta(x,y)$ and $p(x,y)$, the analytical expression for the system OTF, specified by Eq. 2, cannot be found explicitly. However, numerical simulations can be carried out to predict the spectrum displacement caused by defocus. Alternatively, a fully assembled optical system with a properly designed mask can be calibrated with a set of objects positioned at different distances from the range finder. With a discrete set of experimentally registered degrees of displacement corresponding to a discrete set of distances, an intermediate distance can be evaluated by, for example, interpolating the calibration data.
Quantitative characterization of the degree of spectrum displacement versus defocus requires comparison of the image spatial spectrum with a reference spectrum which represents the case of zero defocus or, alternatively, any known a priori defocus. One of the simplest choices is to use the incoherent OTF of the optical system evaluated at $\varphi = 0$. In this case, the degree of displacement can be found by comparing $H(\omega_x,\omega_y,0)$ with $I_i(\omega_x,\omega_y)$. The magnitude of defocus, in turn, is evaluated by comparing $H(\omega_x,\omega_y,\varphi)$ with $I_i(\omega_x,\omega_y)$, where $\varphi$ is adjusted to obtain the closest match between $H(\omega_x,\omega_y,\varphi)$ and $I_i(\omega_x,\omega_y)$. In many cases, but not always, the best match can be found by, for example, maximizing their cross-correlation.
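A minimal sketch of this adjustment of $\varphi$ (Python with NumPy assumed; `defocused_otf(phi)` stands for a routine returning the defocused spectral response sampled on the image-spectrum grid, for example a defocused variant of the reference-spectrum computation above, and the search grid is an assumption):

```python
import numpy as np

def estimate_defocus(image_spectrum, defocused_otf, phis):
    """Adjust phi to maximize the normalized correlation between the
    observed image spectrum and the predicted spectral response |H(phi)|."""
    s = image_spectrum - image_spectrum.mean()
    best_phi, best_score = None, -np.inf
    for phi in phis:
        h = np.abs(defocused_otf(phi))
        h -= h.mean()
        score = (s * h).sum() / (np.linalg.norm(s) * np.linalg.norm(h) + 1e-12)
        if score > best_score:
            best_phi, best_score = phi, score
    return best_phi
```

For example, `estimate_defocus(spectrum, defocused_otf, np.linspace(-10, 10, 201))`; interpolation around the best grid point refines the estimate, mirroring the calibration-data interpolation mentioned above.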
Once the magnitude of defocus $\varphi$ is found, the in-focus image of the object, or equivalently the spectrum of the object, can be calculated using the spatial spectrum $I_i(\omega_x,\omega_y)$ of the image and the system optical transfer function $H(\omega_x,\omega_y,\varphi)$. For example, using Eq. 5, the simplest inversion method, requiring, however, the largest signal-to-noise premium, results in the following object spectrum (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996):

$$I_0(\omega_x,\omega_y) = \frac{I_i(\omega_x,\omega_y)\, H^{*}(\omega_x,\omega_y,\varphi)}{|H(\omega_x,\omega_y,\varphi)|^2 + \varepsilon}, \qquad (8)$$

where the constant $\varepsilon^{-1}$, by analogy with the least-mean-square-error filter (Wiener filter), denotes the signal-to-noise ratio. The in-focus image of the object is then calculated with the inverse Fourier transformation.
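A minimal sketch of Eq. 8 (Python with NumPy assumed; the OTF is assumed to be sampled on the unshifted FFT frequency grid of the image, and the signal-to-noise value is illustrative):

```python
import numpy as np

def restore_in_focus(image, otf, snr=100.0):
    """Least-mean-square-error (Wiener-type) inversion of Eq. (8):
    I0 = Ii * conj(H) / (|H|^2 + eps), with eps = 1/SNR."""
    ii = np.fft.fft2(image)                         # spatial spectrum of the image
    i0 = ii * np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(i0))                # in-focus image estimate
```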
An example: apparatus with a half-aperture prismatic optical mask

Among the simplest implementations of optical masks which, firstly, create features in the spatial spectrum of the image and, secondly, produce an easily detectable displacement of these features is a rectangular aperture with a half-aperture prismatic element. The amplitude function of the mask is given by

$$p(x,y) = \begin{cases} 1, & |x| \le 1 \text{ and } |y| \le 1, \\ 0, & \text{otherwise}, \end{cases} \qquad (9)$$

and the phase function is specified as

$$\vartheta(x,y) = \begin{cases} Ay, & x > 0, \\ 0, & x \le 0. \end{cases} \qquad (10)$$
It is obvious that $\vartheta(x,y)$ specified by Eq. 10 is chiral. Indeed, mirroring the phase function with respect to the Y axis yields $\vartheta_m(x,y) = \vartheta(-x,y)$, which cannot be superimposed on $\vartheta(x,y)$ by any rotation or shift.
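For illustration, a minimal numerical sketch of this mask and its defocused spectral response (Python with NumPy assumed; the grid size, the prism constant A, the defocus value, and the representation of defocus as the quadratic pupil phase $\varphi(x^2+y^2)$, consistent with Eq. 2, are assumptions):

```python
import numpy as np

def half_aperture_prism_otf(n=256, A=20.0, phi=3.0):
    """Defocused OTF modulus for the mask of Eqs. (9)-(10): a flat half
    (x <= 0) and a prismatic half (theta = A*y for x > 0) on a square aperture."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    theta = np.where(x > 0, A * y, 0.0)                  # Eq. (10): chiral phase
    pupil = np.exp(1j * (theta + phi * (x**2 + y**2)))   # defocus as quadratic phase
    asf = np.fft.fft2(pupil, s=(2 * n, 2 * n))           # amplitude spread function
    psf = np.abs(asf) ** 2                               # intensity impulse response
    otf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    return otf / otf.max()       # the pattern of lines rotates and scales with phi
```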
Assuming, for simplicity of calculations, that $0 \le \omega_x \le 1$ and $0 \le \omega_y \le 2$, the integration according to Eq. 2 with the mask specified by Eqs. 9-10 results in an OTF which can be represented as a combination of three contributions,

$$H(\omega_x,\omega_y,\varphi) = H_{\mathrm{I}}(\omega_x,\omega_y,\varphi) + H_{\mathrm{II}}(\omega_x,\omega_y,\varphi) + H_{\mathrm{III}}(\omega_x,\omega_y,\varphi), \qquad (11)$$

coming from: (I) the intersection of the two flat half-apertures,

$$H_{\mathrm{I}}(\omega_x,\omega_y,\varphi) = \frac{\exp(-i\varphi\omega_x)}{4\varphi^2\omega_x\omega_y}\,\sin[\varphi\omega_x(1-\omega_x)]\,\sin[\varphi\omega_y(2-\omega_y)], \qquad (12)$$

(II) the intersection of the flat and prismatic half-apertures,

$$H_{\mathrm{II}}(\omega_x,\omega_y,\varphi) = \frac{\exp(iA\omega_y/2)}{2\varphi\omega_x(A+2\varphi\omega_y)}\,\sin(\varphi\omega_x)\,\sin[(1-\omega_y/2)(A+2\varphi\omega_y)], \qquad (13)$$

and (III) the intersection of the two prismatic half-apertures,

$$H_{\mathrm{III}}(\omega_x,\omega_y,\varphi) = \frac{\exp(i[A\omega_y+\varphi\omega_x])}{4\varphi^2\omega_x\omega_y}\,\sin[\varphi\omega_x(1-\omega_x)]\,\sin[\varphi\omega_y(2-\omega_y)]. \qquad (14)$$

After substituting Eqs. 12-14 into Eq. 11, the defocused OTF of the optical system with a half-aperture prismatic mask takes the form

$$H(\omega_x,\omega_y,\varphi) = \left[a + b\cos(\varphi\omega_x + A\omega_y/2)\right]\exp(iA\omega_y/2), \qquad (15)$$

where the real coefficients $a$ and $b$ are

$$a = \frac{\sin(\varphi\omega_x)\,\sin[(1-\omega_y/2)(A+2\varphi\omega_y)]}{2\varphi\omega_x(A+2\varphi\omega_y)}, \qquad (16)$$

$$b = \frac{\sin[\varphi\omega_x(1-\omega_x)]\,\sin[\varphi\omega_y(2-\omega_y)]}{2\varphi^2\omega_x\omega_y}. \qquad (17)$$
As follows from Eq. 15, the OTF contains a periodic structure, the said pattern of lines, which structure does not depend on the object structure at all but is sensitive to defocus. This periodic structure can be treated as a pattern of lines created largely by interference of the terms $H_{\mathrm{I}}(\omega_x,\omega_y,\varphi)$ and $H_{\mathrm{III}}(\omega_x,\omega_y,\varphi)$ coming from the intersections of the two flat half-apertures and the two prismatic half-apertures, respectively.
The phase $\Phi$ in Eq. 15 is a linear function of the spatial frequencies $\omega_x$ and $\omega_y$:

$$\Phi = \varphi\omega_x + A\omega_y/2. \qquad (18)$$

Introducing the notations

$$\omega_x = \omega\cos\alpha, \quad \omega_y = \omega\sin\alpha, \quad \omega = \sqrt{\omega_x^2+\omega_y^2}, \quad \alpha = \arctan(\omega_y/\omega_x), \qquad (19)$$
Eq. 18 can be rewritten as

$$\Phi = \omega\sqrt{\varphi^2 + A^2/4}\,\cos(\alpha-\beta), \qquad (20)$$

where $\beta = \arctan[A/(2\varphi)]$ is the angle perpendicular to the line pattern.
From Eq. 20 it follows that the line pattern is rotated by the angle $\alpha = -\pi/2 + \beta$ about the origin ($\alpha < 0$ when $A > 0$ and $\varphi > 0$). At $\alpha = \beta$ the spatial period $T$ of the line pattern reaches its minimum,

$$T = 2\pi\Big/\sqrt{\varphi^2 + A^2/4}. \qquad (21)$$
So, the pattern orientation, specified by the angle

$$\alpha = \pi/2 - \arctan[A/(2\varphi)], \qquad (22)$$

and its spatial period, given by Eq. 21, vary in a known manner with the magnitude $\varphi$ of defocus. These dependencies are known a priori and can be employed to determine $\varphi$.
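A minimal worked example of Eqs. 21-22 (Python with NumPy assumed; the values of A and φ are illustrative):

```python
import numpy as np

A, phi = 20.0, 3.0                                 # prism constant and defocus magnitude

beta = np.arctan(A / (2 * phi))                    # angle normal to the line pattern
alpha = np.pi / 2 - beta                           # Eq. (22): pattern orientation
T = 2 * np.pi / np.sqrt(phi**2 + A**2 / 4)         # Eq. (21): minimal spatial period

# Inverting Eq. (22): recover the defocus magnitude from a measured orientation.
phi_est = A / (2 * np.tan(np.pi / 2 - alpha))
print(alpha, T, phi_est)                           # phi_est reproduces phi
```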
Additional notes
Passive and active optical rangefinding: Note that the rangefinding apparatus described in this document is, in essence, a passive optical rangefinder, but it can be adapted to become an active optical range finder by the addition of illuminating components, for example visible (search) lights, IR-lights, various types of laser-lights, and so forth. In combination with the additional imaging apparatus described in this document it can also function as a targeting instrument in both passive and active modes.
Use of polychromatic light: Note that a range finder using polychromatic light, separated into quasi-monochromatic spectral components, can be constructed by dividing, for example, the surface of the optical mask into three sections, with one section equipped with a red filter, one section with a blue filter and one section with a green filter, in combination with, at least, one photosensor with a corresponding, for example, Bayer colour filter pattern, and providing three images in rapid succession while, for example, rotating the optical mask by 120 degrees between taking the images. Such a procedure has the disadvantage of turning a basically solid apparatus into an apparatus with mechanical movement, but it has the advantage of improving the accuracy of the range finding (because the effects of wavelength can be corrected for) as well as providing for full-colour imaging at image reconstruction, which image reconstruction is described in this document. Colour imaging can also be achieved by splitting polychromatic light into narrow spectral bands. White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras, for example an RGB Bayer colour filter pattern, providing the computation means with adaptations for, at least, three approximately monochromatic spectra and combining the images.
Object-related artifacts in the image spectrum: Note that the spatial spectrum of the image can have various shapes, depending on the spatial structure of the object of interest, the imaging optics, the type of optical mask and the other components. The spatial spectrum takes, for example, the shape of multiple sets of datapoints, generally an asymmetric cloud of datapoints. The spectrum of the image might have strong artifacts, i.e. pronounced features caused by the object spatial structure (for example, its spatial periodicity), which features might result in an improper determination of the degree of defocus: the main peak is obscured by additional peaks, in terms of the cross-correlation function calculated with the image and reference spectra. To get rid of the object-related features in the spatial spectra, an additional optical mask with a random amplitude function and/or a random phase function can be combined with the main optical mask which converts defocus into displacement of the spatial spectra. The function of the additional mask is to homogenize the spatial spectrum of the object by suppressing object-related features.
Optical-mechanical processing and digital processing: Note that all the processing steps described in this document can be carried out by optical-mechanical methods and optical-mechanical means, for example a system of optical elements, mechanical sliding or rotating scales, and visual inspection by an observer. (Or, in part, even by a traditional slide rule, for example for the tertiary processing calculation: the calculation to derive distance from the degree of defocus is relatively straightforward and simple in essence, as it follows most standard basic optical laws.) Alternatively, said processing can be performed by digital electronic means which, for example, evaluate the correlation between the reference spectrum and the spectrum of the image, which correlation is a function of displacement parameters, for example rotation, shift and scaling. Maximization of the correlation function yields quantitative estimates of the displacements. Alternatively, traditional statistical methods such as least-squares methods or regression analysis can be employed to find the displacements. Clearly, said calculations, processing and estimations are, in modern times, best provided by digital electronic means and calculation means.
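As one possible digital implementation (an assumption for illustration, not the patent's own algorithm), the following sketch estimates the rotation of the image spectrum by brute-force maximization of its normalized correlation against rotated copies of the reference spectrum; estimate_rotation is a hypothetical helper name, and SciPy's ndimage.rotate is used for the resampling.

```python
import numpy as np
from scipy import ndimage

def estimate_rotation(image_spec: np.ndarray, ref_spec: np.ndarray,
                      angles=np.linspace(-30, 30, 121)) -> float:
    """Return the rotation angle (degrees) maximizing the normalized correlation."""
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        rotated = ndimage.rotate(ref_spec, a, reshape=False, order=1)
        # normalized inner product between rotated reference and image spectrum
        score = (np.vdot(rotated.ravel(), image_spec.ravel()).real
                 / (np.linalg.norm(rotated) * np.linalg.norm(image_spec)))
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle

# self-test: a reference spectrum rotated by 7.5 degrees should be recovered
ref = np.random.rand(128, 128)
img_spec = ndimage.rotate(ref, 7.5, reshape=False, order=1)
print(estimate_rotation(img_spec, ref))   # ~7.5
```

The same brute-force search extends to shift and scale by adding those parameters to the loop; gradient-based or FFT-based (e.g. log-polar) methods would be faster but are omitted here for clarity.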
Speed of processing and processing means: Note that all processing steps can use individual processing means (e.g. different electronic processors), or the processing steps can be combined in a single processing means, i.e. one processor with software which performs the separate processing steps in sequence. The non-iterative nature of all calculations put forward in this document allows for a high processing speed, which can keep pace with the acquisition rate of the photosensor and thus allows for reliable and precise range finding, speed and distance measurements of objects, as well as imaging of objects moving at even high velocities, all updated in real time.
Defocus maps and depth maps: Note that with the information on the defocus values of a large number of sub-images a defocus map can be provided and, calculated from the defocus values of the defocus map, a depth map of the object, or alternatively of the object scene, or scene, can be evaluated. Note that in-focus imaging can also be achieved by traditional methods (e.g. apodizing an aperture in analogue and digital photography). However, defocus maps and depth maps are unique to advanced digital imaging as described in this document and cannot be constructed via traditional imaging methods. The use of defocus maps and depth maps, in particular when real-time updating is included, is significant in defense, home-security, medical, automotive, consumer-goods and a host of other technical applications.
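A minimal sketch of a defocus map, under the assumption that a per-sub-image defocus estimator (such as the correlation-based one sketched above) is available; defocus_map and estimator are hypothetical names, and the mapping from defocus to depth is left to the apparatus' calibration.

```python
import numpy as np

def defocus_map(image: np.ndarray, estimator, tile: int = 64) -> np.ndarray:
    """Tile the image and estimate the degree of defocus per sub-image."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    dmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sub = image[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            dmap[i, j] = estimator(sub)   # per-sub-image defocus estimate
    return dmap

# depth_map = calibration(defocus_map(img, estimator))  # defocus -> distance
```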
Application to IR, X-rays and other waves: Note that this invention can, in principle, be adapted for application to all processes involving waves, but it is most directly applicable to incoherent monochromatic wave processes. The invention can be applied to infrared (IR) spectra. X-rays produced by an incandescent-cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.
For ultrasound and coherent radio-frequency signals the formulas can be adapted to the coherent amplitude transfer function of the corresponding system. A person skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document.
Precision of measurements of the absolute defocus: Note that the precision of the measurement of the absolute defocus is fundamentally limited by the entrance aperture of the primary optics and by the distance from the primary optics to the object of interest (see, for example, C.J.R. Sheppard, J. Microsc. 149, 73-75, 1988). A higher precision requires larger apertures of the optical systems. However, an effectively large aperture can be obtained by optically combining the light signals from several relatively small reflective or refractive elements located widely apart in the direction perpendicular to the optical axis of the imaging system. When the elements are only small parts of a large refractive surface and these elements are oppositely distributed on the surface periphery, the theoretical depth of focus, i.e. the axial resolution, corresponds to the resolution of the whole refractive surface. A system with two distributed entrance apertures can be made flat.
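As a back-of-envelope illustration of this aperture dependence (the numbers below are assumptions, not measurements), the classical depth of focus scales roughly as λ/NA², so the achievable axial (range) resolution improves quadratically with the effective numerical aperture:

```python
lam = 550e-9   # wavelength, m (assumed mid-visible)
f = 0.05       # focal length, m (assumed)

for D in (0.005, 0.0125, 0.025):   # assumed entrance-aperture diameters, m
    NA = D / (2 * f)               # paraxial numerical aperture (small-angle)
    dof = lam / NA**2              # approximate classical depth of focus
    print(f"D = {D * 1e3:5.1f} mm -> NA ~ {NA:.3f}, "
          f"depth of focus ~ {dof * 1e6:.1f} um")
```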
Description of figures
Figure 1 shows the basic embodiment of the optical range finder and imaging apparatus with an object, 1, imaging optics, 2, which project the image of the object in the image plane, 3, which coincides, in this particular embodiment, with the plane of the photosensor, 4, an optical mask, 5, with an amplitude function, to modulate the distribution of light, and a phase function, to modulate phase, positioned in the exit pupil, 6, and a photosensor, 7, for transforming the image into electronic signals to be processed by an electronic processor, 8. The figure also shows the light beam coming from the object, 9, the incoming light beam, 10, and the light beam after it has passed the optical mask, 11.
Figure 2 shows a half-aperture chiral prismatic optical mask, comprised of two elements, in the preferred embodiment with, in this example, a square aperture, 12, which is comprised of one planar element, 13, and one prismatic element, 14, each of which covers part of the aperture.
Figure 3 shows a full-aperture chiral prismatic optical mask, comprised of two elements, an alternative to the half-aperture chiral prismatic optical mask, with, in this example, a square aperture, 15, and two prismatic elements, 16, 17, each of which covers part of the aperture.
Figure 4 shows an example of the inherent spectral response of the optical mask when the amplitude function and the phase function are analytically defined functions. In this particular example, the inherent spectral response corresponds to the modulus of the incoherent optical transfer function calculated for the optical mask with the amplitude function

A(x, y) = 1 for |x| < 1 and |y| < 1, and A(x, y) = 0 otherwise,

and the phase function

θ(x, y) = 50y for x > 0, and θ(x, y) = 0 for x < 0.
Figure 5 shows the generation of the reference spectrum when the inherent spectral response of the optical mask cannot be calculated analytically or numerically. A light beam, 9, coming from an object, 18 (which can be any standard object during, for example, factory calibration of the apparatus), is projected by the imaging optics, 2, and modulated by an optical mask, 5, onto a photosensor, 7, which transforms, 19, the image into a corresponding electronic image, 20, which image is transformed, by the primary processing step, 21, into a spatial spectrum, 22. In this particular example, the light beam, 9, coming from the object, 18, has a random intensity distribution.
Figure 6 shows the method for range finding and imaging. A light beam, 9, coming from an object, 23, is projected by the imaging optics, 2, and modulated by an optical mask, 5, onto a photosensor, 7, which transforms, 19, the image into a corresponding electronic image, 24, which image is transformed, by the primary processing step, 21, into a spatial spectrum, 25, from which the degree of defocus, 26, of the image (calculated from the displacement of the spatial spectrum relative to the reference spectrum, see for example 22) is provided by the secondary processing step, 27, followed by the calculation of distance, 28, by the tertiary processing step, 29. Also, the spatial spectrum, 25, in combination with the defocus, 26, with additional processing means, 30, and processing step, 31, can provide a reconstructed image, 32, in addition to range finding.
Figure 7 shows the results from a prototype optical range finder with the optical mask designed according to Figure 2. In this example, the image on the photosensor, 33, depicts the object, 34, in focus (i.e. the photosensor plane coincides with the in-focus image plane of the object). Spectral decomposition, 35, provides the spatial spectrum, 36, of the image of the object, which contains the characteristic pattern of lines, 37, aligned, in this example, with the vertical axis, 38.
Figure 8 shows the results from a prototype optical range finder with the optical mask designed according to Figure 2. In this example, the image on the photosensor, 33, depicts the object, 34, out of focus (i.e. the photosensor plane does not coincide with the in-focus image plane of the object). Spectral decomposition, 35, provides the spatial spectrum, 36, of the image of the object, which contains the characteristic pattern of lines, 37, angularly rotated (due to defocus), in this example relative to the vertical axis, 38, by the degree of rotation, 39. The distance of the object from the range finder can be calculated from the degree of rotation using, for example, formulas derived in this document.
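Pulling the figures together, a compact sketch of the primary and secondary processing steps under stated assumptions: the spatial spectrum is obtained with a 2-D FFT, the pattern rotation is measured with the estimate_rotation helper sketched earlier, and Eq. 22 is inverted with an assumed mask parameter Δ; the tertiary defocus-to-distance step is left to the instrument's calibration and is only indicated.

```python
import numpy as np

def defocus_from_image(image: np.ndarray, ref_spec: np.ndarray,
                       Delta: float = 50.0) -> float:
    """Primary + secondary steps: image -> spatial spectrum -> pattern angle -> phi."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))      # primary step
    alpha = np.radians(estimate_rotation(spectrum, ref_spec))   # measured rotation
    return Delta / (2 * np.tan(np.pi / 2 - alpha))              # invert Eq. 22

# tertiary step: map phi to object distance through the calibrated geometry
# of the primary optics, e.g. a lookup table recorded at factory calibration
# distance = calibration_lut(defocus_from_image(img, ref_spec))
```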

Claims (14)

1. An optical rangefinder, comprising:
- imaging optical means adapted to project an image of at least one object onto the image plane of a photosensor, which photosensor is adapted to convert the image into a corresponding electronic image;
- at least one optical mask placed in the optical path of the imaging optical means for modulating the light beam;
- processing means for processing the electronic image produced by the photosensor,
characterized in that the optical mask is adapted to modulate the incoming light beam such that the defocus of the image of the at least one object in the image plane relative to the in-focus plane results in a displacement of the spatial spectrum of the image relative to the reference spectrum corresponding to the known inherent spectral response of the optical mask, and in that the processing means comprise:
- primary processing means for providing the spatial spectrum of the at least one image by spectral decomposition;
- secondary processing means adapted to provide the degree of defocus of the image of the object in the image plane relative to the in-focus plane from the degree of displacement of the spatial spectrum relative to the reference spectrum;
- tertiary processing means adapted to provide the distance between the at least one object and the rangefinder from the degree of defocus.

2. An optical rangefinder according to claim 1, characterized in that the optical mask is adapted to provide a degree of displacement of the spatial spectrum that is correlated with the degree of defocus of the image.

3. An optical rangefinder according to claim 1 or 2, characterized in that the optical mask comprises at least one chiral optical element.

4. An optical rangefinder according to claim 3, characterized in that the displacement of the spatial spectrum is a combination of scaling and rotation of the spatial spectrum.

5. An optical rangefinder according to claim 1, characterized in that the optical mask is adapted to provide at least one characteristic pattern in the spatial spectrum.

6. An optical rangefinder according to claim 5, characterized in that the characteristic pattern in the spatial spectrum is formed by a line pattern.

7. An optical rangefinder according to any one of the preceding claims, characterized in that the optical rangefinder is combined with an imaging device.

8. A combination of an optical rangefinder with an imaging device according to claim 7, characterized in that the optical rangefinder is coupled to the focusing means of the imaging device.

9. A combination of an optical rangefinder with an imaging device according to claim 7, characterized in that the imaging device is adapted to reconstruct at least one sharp image of the object from the spatial spectrum of the image.

10. A combination of an optical rangefinder with an imaging device according to claim 7, characterized in that it is provided with additional processing means for providing the degree of defocus of at least two sub-images from respective regions of the object.

11. A method for optical range finding comprising projecting an image of at least one object onto an image plane, modulating the incident light beam by an optical mask, transforming the image into a corresponding electronic image, and processing steps, characterized in that:
- the modulation adapts the incident light beam such that the defocus of the image of the at least one object in the image plane relative to the in-focus plane results in a displacement of the spatial spectrum of the image relative to the reference spectrum corresponding to the known inherent spectral response of the optical mask;
- a primary processing step provides the spatial spectrum of the image by spectral decomposition;
- a secondary processing step provides the degree of defocus of the image of the at least one object relative to the in-focus plane from the displacement of the spatial spectrum relative to the reference spectrum;
- a tertiary processing step provides the distance between the at least one object and the rangefinder from the degree of defocus.

12. A method for optical range finding according to claim 11, characterized in that the modulation of the light beam is such that the degree of displacement of the spatial spectrum is correlated with the degree of defocus of the image.

13. A method for optical range finding according to claim 11, characterized in that the modulation of the light beam provides a characteristic pattern in the spatial spectrum.

14. A method for optical range finding according to claim 11, 12 or 13, characterized in that the method comprises additional processing steps for performing image reconstruction.
NL2002406A 2009-01-09 2009-01-09 Optical range finder and imaging apparatus. NL2002406C2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
NL2002406A NL2002406C2 (en) 2009-01-09 2009-01-09 Optical range finder and imaging apparatus.
US13/143,655 US8941818B2 (en) 2009-01-09 2010-01-08 Optical rangefinder and imaging apparatus with chiral optical arrangement
EP10700187.7A EP2386053B1 (en) 2009-01-09 2010-01-08 Optical rangefinder and imaging apparatus with chiral optical arrangement
JP2011545314A JP2012514749A (en) 2009-01-09 2010-01-08 Optical distance meter and imaging device with chiral optical system
PCT/NL2010/050007 WO2010080030A2 (en) 2009-01-09 2010-01-08 Optical rangefinder an imaging apparatus with chiral optical arrangement
CN2010800102452A CN102356298A (en) 2009-01-09 2010-01-08 Optical rangefinder and imaging apparatus with chiral optical arrangement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2002406A NL2002406C2 (en) 2009-01-09 2009-01-09 Optical range finder and imaging apparatus.
NL2002406 2009-01-09

Publications (1)

Publication Number Publication Date
NL2002406C2 true NL2002406C2 (en) 2010-07-13

Family

ID=41228452

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2002406A NL2002406C2 (en) 2009-01-09 2009-01-09 Optical range finder and imaging apparatus.

Country Status (1)

Country Link
NL (1) NL2002406C2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5337181A (en) * 1992-08-27 1994-08-09 Kelly Shawn L Optical spatial filter
US7218448B1 (en) * 1997-03-17 2007-05-15 The Regents Of The University Of Colorado Extended depth of field optical systems
US20080137059A1 (en) * 2006-06-05 2008-06-12 University Of Colorado Method And System For Optical Imaging And Ranging

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5337181A (en) * 1992-08-27 1994-08-09 Kelly Shawn L Optical spatial filter
US7218448B1 (en) * 1997-03-17 2007-05-15 The Regents Of The University Of Colorado Extended depth of field optical systems
US20080137059A1 (en) * 2006-06-05 2008-06-12 University Of Colorado Method And System For Optical Imaging And Ranging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JON C. LEACHTENAUER AND RONALD G. DRIGGERS: "Surveillance and Reconnaissance imaging systems: modeling and performance prediction.", 2001, ARTECH HOUSE INC., NORWOOD, MA, USA, XP002553938 *

Similar Documents

Publication Publication Date Title
EP2386053B1 (en) Optical rangefinder and imaging apparatus with chiral optical arrangement
US6229913B1 (en) Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus
US4965840A (en) Method and apparatus for determining the distances between surface-patches of a three-dimensional spatial scene and a camera system
US5193120A (en) Machine vision three dimensional profiling system
US7612870B2 (en) Single-lens aperture-coded camera for three dimensional imaging in small volumes
US10317205B2 (en) Depth measurement using a phase grating
US8934097B2 (en) Laser beam centering and pointing system
CA2397095C (en) Apparatus and methods for surface contour measurement
WO2009108050A9 (en) Image reconstructor
EP0408224B1 (en) Computational methods and electronic camera apparatus for determining distance of objects, rapid autofocusing and obtaining improved focus images
US10458785B2 (en) Sample shape measuring method and sample shape measuring apparatus
Subbarao Direct recovery of depth map I: differential methods
CN105758381B (en) A kind of camera module method for detecting its tilt based on spectrum analysis
CN103635784A (en) Photoacoustic vibration meter
US20200410706A1 (en) Device and process for the contemporary capture of standard and plenoptic images
JPH09500730A (en) Device for three-dimensional measurement of inaccessible hollow space
Inui et al. Correction method of phase deference in accordance with the angle field for Wide-Viewing-Angle Fourier-Spectroscopic-Imaging
NL2002406C2 (en) Optical range finder and imaging apparatus.
EP0343158B1 (en) Range finding by diffraction
CA3051969A1 (en) Method and optical system for acquiring the tomographical distribution of wave fronts of electromagnetic fields
Mudge et al. Near-infrared simultaneous Stokes imaging polarimeter: integration, field acquisitions, and instrument error estimation
FI97085C (en) Method and imaging device for determining distance and use thereof
EP4004632A1 (en) Telescopes
Rodríguez-Ramos et al. New developments at CAFADIS plenoptic camera
JP2883193B2 (en) Rangefinder system

Legal Events

Date Code Title Description
V1 Lapsed because of non-payment of the annual fee

Effective date: 20130801