GB2589121A - Imaging apparatus - Google Patents

Imaging apparatus

Info

Publication number
GB2589121A
GB2589121A
Authority
GB
United Kingdom
Prior art keywords
emr
imaging
array
detector
coherent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB1916967.1A
Other versions
GB201916967D0 (en)
Inventor
William John Kent Lionel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BAE Systems PLC
Original Assignee
BAE Systems PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BAE Systems PLC filed Critical BAE Systems PLC
Priority to GB1916967.1A
Publication of GB201916967D0
Priority to PCT/GB2020/052831
Publication of GB2589121A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/957Light-field or plenoptic cameras or camera modules
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • G02B27/4272Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect having plural diffractive elements positioned sequentially along the optical path
    • G02B27/4277Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect having plural diffractive elements positioned sequentially along the optical path being separated by an air space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/88Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/58Optics for apodization or superresolution; Optical synthetic aperture systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/0006Arrays
    • G02B3/0037Arrays characterized by the distribution or form of lenses
    • G02B3/0056Arrays characterized by the distribution or form of lenses arranged along two different directions in a plane, e.g. honeycomb arrangement of lenses
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B2215/00Special procedures for taking photographs; Apparatus therefor
    • G03B2215/05Combinations of cameras with electronic flash units
    • G03B2215/0564Combinations of cameras with electronic flash units characterised by the type of light source
    • G03B2215/0567Solid-state light source, e.g. LED, laser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Abstract

An imaging apparatus comprises a two-dimensional imaging array which comprises N x M imaging devices, where N and M are positive integers. The respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation detector. Each lenslet array comprises P x Q lenslets, wherein P and Q are positive integers. The lenslet array is arranged between the aperture lens and the detector. A set of S coherent EMR sources, wherein S is a positive integer, are arrangeable to irradiate an object with coherent EMR at respective angles of incidence. The variables P and Q may each be greater than or equal to 8, the EMR may have a wavelength of less than 1 micrometre, and each aperture lens may have a focal length z1 of greater than or equal to 28 mm.

Description

Intellectual Property Office Application No. GB1916967.1 RTM Date: 4 May 2020 The following term is a registered trade mark and should be read as such wherever it occurs in this document: Raytrix Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
IMAGING APPARATUS
Field
The present invention relates to imaging apparatuses. Particularly, the present invention relates to an imaging apparatus for imaging distant objects.
Background to the invention
Electromagnetic radiation (EMR) imaging, for example infrared, visible or ultraviolet imaging, from large stand-off distances typically results in low spatial resolution. When imaging a distant object, diffraction blur is a primary cause of resolution loss. Diffraction blur is typically caused by a limited angular extent of an aperture lens. Hence, it is desirable to use an aperture lens having a relatively large diameter for imaging such distant objects. For example, when imaging a distant object at a distance z of 1 km, using an aperture lens having a relatively smaller diameter d of 12.5 mm, the diffraction limited resolution at that range is given by λz/d ≈ 50 mm diameter at a wavelength λ of 625 nm. This is the diffraction blur size (also known as spatial resolution) at the object, such that smaller features (i.e. less than 50 mm diameter) are unresolved, i.e. blurred out. This means that facial and/or number plate recognition is not possible with this imaging system example. If an aperture lens, having the same focal length but a relatively larger diameter d of 125 mm, is used, the theoretical resolution spot size on the 1 km distant object is reduced to ≈ 5 mm diameter at 625 nm. In this way, for example, facial and/or number plate recognition is possible. However, telescopic (also known as telephoto) lenses having comparable f-numbers (f/#s) are typically an order of magnitude more expensive, larger and/or heavier than equivalent portrait lenses, thereby precluding their use in certain applications. Furthermore, atmospheric turbulence may limit an effective useful resolution diameter of the aperture lens to a diameter equal to the Fried parameter (also known as the Fried coherence length), such that a practical (i.e. in use or achievable) diffraction spot size is increased compared with the theoretical diffraction spot size of the lens itself. Thus, the achievable spatial resolution may still be insufficient to resolve object features of interest such that facial and/or number plate recognition is not possible, even when using aperture lenses having sufficiently large diameters to not be limited by the optical aperture diffraction considerations.
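As an illustrative check of the figures above, the blur relation λz/d can be evaluated directly. This is a minimal sketch; the function name and the printed cases are chosen here for illustration only:

```python
import math

def diffraction_blur(wavelength_m: float, range_m: float, aperture_m: float) -> float:
    """Approximate diffraction-limited spot size at the object: lambda * z / d."""
    return wavelength_m * range_m / aperture_m

wavelength = 625e-9   # 625 nm, as in the example above
z = 1_000.0           # 1 km stand-off distance

# 12.5 mm aperture: blur ~50 mm, too coarse for faces or number plates
print(diffraction_blur(wavelength, z, 12.5e-3))  # -> 0.05 m

# 125 mm aperture: blur ~5 mm, sufficient for facial/number-plate features
print(diffraction_blur(wavelength, z, 125e-3))   # -> 0.005 m
```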
Hence, there is a need to improve imaging, particularly EMR imaging of distant objects.
Summary of the Invention
It is one aim of the present invention, amongst others, to provide an imaging apparatus, a method of imaging an object using such an apparatus, a method of providing an image from EMR data detected by such an apparatus and a computer arranged to implement such a method of providing an image which at least partially obviates or mitigates at least some of the disadvantages of the prior art, whether identified herein or elsewhere. For instance, it is an aim of embodiments of the invention to provide an imaging apparatus that has an improved achievable spatial resolution, compared with conventional imaging apparatus, for imaging distant objects, such as at a distance z of 1 km or more. For instance, it is an aim of embodiments of the invention to provide a method of imaging an object using such an apparatus that enables resolution of object features of interest of distant objects, such as at a distance z of 1 km or more. For example, it is an aim of embodiments of the invention to provide a method of providing an image from EMR data detected by such an apparatus, having an improved achievable spatial resolution. For example, it is an aim of embodiments of the invention to provide a computer arranged to implement such a method of providing an image from EMR data detected by such an apparatus.
A first aspect provides an imaging apparatus comprising: a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector; and a set of S coherent EMR sources, wherein S is a positive integer, wherein respective sources are arrangeable to irradiate an object with coherent EMR at respective angles of incidence.
A second aspect provides a method of imaging an object comprising: irradiating the object with coherent EMR at respective angles of incidence using a set of S coherent EMR sources, wherein S is a positive integer; and imaging the object by detecting at least a part of the coherent EMR reflected therefrom using a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector.
A third aspect provides a computer comprising at least a processor and a memory, the computer arranged to provide an image of an object from EMR detected by the apparatus according to the first aspect, wherein the computer is arranged to: receive data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correct the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determine a transform of the corrected data; combine the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determine the image of the object from the combined transform.
A fourth aspect provides a method of providing an image from EMR detected by the apparatus according to the first aspect, the method implemented on a computer comprising at least a memory and a processor, the method comprising: receiving data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correcting the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determining a transform of the corrected data; combining the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determining the image of the object from the combined transform.
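As a concrete illustration of the third and fourth aspects, the sketch below implements the receive/correct/transform/combine/invert sequence. It assumes the transform is a 2-D FFT and that both the aberration correction derived from the lenslet-array images and the Fourier-plane placement of each sub-aperture are supplied by the caller; all names are illustrative, and this is a minimal sketch of the described pipeline rather than the patented method itself:

```python
import numpy as np

def build_image(detector_frames, correct_aberrations, placements, out_shape):
    """Sketch of the fourth-aspect pipeline: correct each detector's data,
    Fourier-transform it, tile the transforms into one combined spectrum,
    then invert to recover the image.

    detector_frames     : list of 2-D arrays, one per imaging device
    correct_aberrations : callable applying the lenslet-derived wave-front
                          correction (assumed to be given)
    placements          : list of (row, col) offsets of each sub-aperture
                          spectrum within the combined Fourier plane
    """
    combined = np.zeros(out_shape, dtype=complex)
    weight = np.zeros(out_shape)
    for frame, (r, c) in zip(detector_frames, placements):
        corrected = correct_aberrations(frame)               # per-detector correction
        spectrum = np.fft.fftshift(np.fft.fft2(corrected))   # transform of corrected data
        h, w = spectrum.shape
        combined[r:r + h, c:c + w] += spectrum               # stitch into synthetic aperture
        weight[r:r + h, c:c + w] += 1.0
    combined /= np.maximum(weight, 1.0)                      # average overlapping samples
    return np.abs(np.fft.ifft2(np.fft.ifftshift(combined)))  # image from combined transform
```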
A fifth aspect provides use of a coherent EMR source for irradiating an object for plenoptic imaging.
Detailed Description of the Invention
Imaging apparatus A first aspect provides an imaging apparatus comprising: a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector; and a set of S coherent EMR sources, wherein S is a positive integer, wherein respective sources are arrangeable to irradiate an object with coherent EMR at respective angles of incidence.
That is, the imaging apparatus comprises at least one imaging device and at least one coherent EMR source. In this way, spatial resolution of imaging of the object may be increased, and wave-front distortions, for example due to atmospheric turbulence, may be corrected, compared with a conventional imaging apparatus.
Particularly, by imaging the object via the lenslet arrays of the respective imaging devices, wave-front distortions, for example due to atmospheric turbulence, at the aperture lenses of the respective imaging devices may be corrected for the respective imaging devices. Such imaging via a single lenslet array only, for a single imaging device and without irradiation of the object by a coherent EMR source, may be known as plenoptic imaging. In this way, wave-front distortion corrected Fourier transforms of the object at the respective aperture lenses of the respective imaging devices, may be provided. That is, an array of N x M wave-front distortion corrected Fourier transforms of the object may be provided.
Furthermore, by irradiating the object with the coherent EMR at the respective angles of incidence, thereby imaging the object via the N x M aperture lenses of the respective imaging devices, an effective aperture (also known as an artificial and/or synthetic aperture) size thus provided by the N x M imaging devices is larger than an aperture size of a single imaging device. Such imaging, for N x M imaging devices with irradiation of the object by a coherent EMR source but without imaging via respective lenslet arrays, may be known as synthetic aperture visible imaging (SAVI). In this way, a combined Fourier transform of the object across a plane, for example, of the N x M imaging devices may be provided, having improved spatial resolution and with correction of wave-front distortions, for example due to atmospheric turbulence, compared with a single imaging device.
Especially, the combined Fourier transform of the object across the plane, for example, of the N x M imaging devices is provided from the array of N x M wave-front distortion corrected Fourier transforms.
That is, the imaging apparatus provides imaging of the object at higher spatial resolution compared to that of the individual imaging devices that make up the imaging apparatus and with correction of wave-front distortions across the combined input optical apertures of the N x M imaging devices, for example due to atmospheric turbulence, allowing imaging of the object at increased distances, for example in a range from 500 m to 50 km, preferably in a range from 1 km to 25 km, more preferably in a range from 2 km to 10 km.
In other words, plenoptic imaging enables atmospheric turbulence aberrations to be corrected while still maintaining good optical resolution performance, while the N x M imaging devices provide an enlarged synthetic aperture, when used with coherent illumination. Hence, the imaging apparatus provides significant imaging advantages for long range imaging in the presence of air turbulence, compared with conventional imaging apparatus.
The imaging apparatus is thus suitable for applications such as security or military applications requiring optically imaging objects at ranges of up to 10 km when atmospheric turbulence severely limits the spatial resolution of traditional telephoto lens imaging systems without the use of adaptive optical aberration correction systems.
The most challenging source of resolution impairment for long range imaging, aside from obscurants such as fog or rain, is due to local refractive index variations along the optical path between imager and object caused by random air turbulence. Air turbulence can reduce the effective optical aperture of a passive optical imager, for resolution purposes, to the Fried coherence length r0 of typically ~2 cm to ~5 cm. Time evolution of atmospheric air turbulence is typically 10s of Hz to 100s of Hz. To image a face at 1 km to a resolution of at least the 32 x 32 'pixels' needed to positively identify a face would require an effective diffraction limited optical aperture of 205 mm. Correction of air turbulence optical aberrations will generally be needed for high resolution long range imaging. This requires pre-correction using adaptive optics at typically 10 times the Greenwood frequency for air turbulence (i.e. at an update rate of 100 Hz to 1000s of Hz) and/or post-image-capture computational correction. Good image resolution in the processed image generally requires signal to noise levels of 30 dB or better, or long capture times for the alternative approach of lucky imaging.
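The 205 mm figure can be reproduced approximately from the same blur relation used in the Background. The face width of ~100 mm assumed below is an illustrative guess, not a value stated in this document:

```python
def required_aperture(wavelength_m: float, range_m: float, feature_m: float) -> float:
    """Aperture diameter d at which the diffraction blur lambda*z/d equals
    the feature size to be resolved (same relation as in the Background)."""
    return wavelength_m * range_m / feature_m

face_width = 0.1                # assumed ~100 mm across a face (illustrative)
resolution = face_width / 32    # 32 'pixels' across the face -> ~3.1 mm per pixel

print(required_aperture(633e-9, 1_000, resolution))  # ~0.20 m at 1 km (cf. 205 mm above)
print(required_aperture(633e-9, 5_000, resolution))  # ~1.0 m at 5 km (cf. the section below)
```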
The imaging apparatus comprises the two-dimensional imaging array comprising the N x M imaging devices, wherein N and M are positive integers. It should be understood that the imaging devices are arranged (i.e. positioned) in a two-dimensional array, for example a planar array. In this way, arrangement of the N x M imaging devices is simplified. It should be understood that by being arranged in a planar array, respective aperture lenses, two-dimensional lenslet arrays and/or EMR detectors of the N x M imaging devices are positioned on respective mutually-parallel planes. In one alternative example, the N x M imaging devices are arranged in a three-dimensional array, for example a non-planar array such as a concave array, for example a part of a spherical surface, such as a spherical sector, a spherical cap, a spherical dome, or a spherical wedge, wherein the imaging devices are oriented radially inwards. In this way, a distance from the object to each imaging device may be more constant. It should be understood that by being arranged in a non-planar array, such as a concave array, for example a part of a spherical surface, respective aperture lenses, two-dimensional lenslet arrays and/or EMR detectors of the N x M imaging devices are positioned on respective mutually-parallel surfaces.
In one example, N and/or M is in a range from 2 to 100, preferably in a range from 3 to 50, more preferably in a range from 4 to 25, most preferably in a range from 5 to 10, for example 5, 6, 7, 8, 9 or 10. In one example, N is equal to M. In one example, the array comprises and/or is a regular array, wherein the N x M imaging devices are regularly (i.e. periodically) arranged. In one example, the array comprises and/or is an irregular array, wherein the N x M imaging devices are irregularly (i.e. non-periodically) arranged. In one example, the array comprises and/or is a quadrilateral array, such as a rectangular or square array, wherein the imaging devices are equally spaced, for example. In one example, the array comprises and/or is a circular array, wherein the imaging devices are arranged in concentric, equally-spaced rings, for example. In one preferred example, the array comprises and/or is a regular rectangular or square array, wherein the N x M imaging devices are regularly (i.e. periodically) arranged and are equally spaced.
The respective imaging devices comprise the aperture lens, the two-dimensional lenslet array and the electromagnetic radiation, EMR, detector.
The lenslet array comprises the at least P x Q lenslets, wherein P and Q are positive integers. Generally, lenslets (also known as microlenses) are small lenses, typically having a diameter less than 1 mm and often as small as 10 µm. The small sizes of the lenses mean that a simple design can give good optical quality, but sometimes unwanted effects arise due to optical diffraction at the small features. A typical microlens may be a single optical element having a plane surface and an opposed spherical convex surface to refract the light. Usually, a substrate that supports a microlens is thicker than the microlens and this has to be taken into account in design. More advanced microlenses may have aspherical surfaces and/or may use several layers of optical material to achieve their required design performance. Another type of microlens, known as a gradient-index (GRIN) lens, has two flat and parallel surfaces and the focusing action is obtained by a variation of refractive index across this lens. Another type of microlens achieves focusing by both a variation in refractive index and by surface shape. Another type of microlens, known as a micro-Fresnel lens, focuses light by refraction using a set of concentric curved surfaces. Such micro-Fresnel lenses may be very thin and lightweight.
Another type of microlens, known as a binary-optic microlens, focuses light by diffraction using grooves having stepped edges or multilevels that approximate an ideal shape. These binary-optic microlenses may be fabricated using conventional semiconductor processes such as photolithography and/or reactive-ion etching (RIE). Microlens arrays include multiple lenses formed in one-dimensional or two-dimensional arrays, usually on supporting substrates. If the individual microlenses have circular apertures and are not allowed to overlap, the individual microlenses may be arranged in hexagonal arrays (also known as triangular packing), for example, to increase and/or maximise coverage by the microlenses of the substrates. Triangular packing of circles having the same radius has a packing density of 0.9069. Other packings based on uniform tiling are known, including square packing, hexagonal packing, elongated triangular packing, trihexagonal packing, snub square packing, truncated square packing, truncated hexagonal packing, rhombitrihexagonal packing, snub hexagonal packing, snub hexagonal (mirrored) packing and truncated trihexagonal packing. However, gaps between the individual microlenses remain, which may be reduced by making the microlenses with non-circular apertures and/or by having microlenses of different sizes, such that smaller microlenses are arranged in the gaps between larger microlenses. Typically for optical sensor arrays, the microlenses focus light onto active regions of the EMR detector, such as photo-diode surfaces, rather than onto non-active regions of the EMR detector, such as non-photosensitive areas. A fill-factor may be defined, being the ratio of the active refracting area (i.e. that area of the lenslet array which directs light to the photo-sensor) to the total contiguous area occupied by the lenslet array. Microlenses may be characterised by measured parameters such as focal length and quality of transmitted wavefront. Generally, since it is not practical to locate the principal planes of such small microlenses, measurements are often made with respect to the microlens or substrate surface. Where a microlens is used to couple light into an optical fibre, the focused wavefront may exhibit spherical aberration and light from different regions of the microlens aperture may be focused to different points on the optical axis. It is useful to know the distance at which the maximum amount of light is concentrated in the fibre aperture and these factors have led to new definitions for focal length. To enable measurements on microlenses to be compared, international standards have been developed.
In one example, the lenslets have a diameter in a range from 1 µm to 5 mm, preferably in a range from 3 µm to 3 mm, more preferably in a range from 10 µm to 1 mm, most preferably in a range from 100 µm to 500 µm. In one example, the lenslets comprise and/or are microlenses having respective plane surfaces and opposed spherical convex surfaces, microlenses having aspherical surfaces, microlenses having a plurality of layers of optical material, GRIN lenses, micro-Fresnel lenses, binary-optic microlenses and/or mixtures thereof. In one example, the lenslets have and/or provide circular apertures. In one example, the lenslets of the lenslet array are arranged in a hexagonal or a circular array. In one example, the lenslets of the lenslet array are packed with triangular packing. In one example, the lenslet array is arranged as described with respect to the imaging array, mutatis mutandis. In one example, the lenslets of the lenslet array are similar or identical. In one example, the lenslet array has a fill-ratio in a range from 60% to 100%, preferably in a range from 80% to 98%, more preferably in a range from 90% to 95%. In one example, P and Q are respectively in a range from 1 to 10,000,000, preferably in a range from 100 to 1,000,000, more preferably in a range from 1,000 to 100,000.
In one example, the respective imaging devices comprise a mask, in place of and/or in addition to the lenslet array, as described below in more detail. In one example, the mask comprises P x Q perforations (also known as apertures) therethrough. The perforations may be as described with respect to the lenslets, mutatis mutandis.
The lenslet array is arranged between the aperture lens and the detector. In one example, the lenslet array is arranged at, in front of or behind the focal plane of the aperture lens. In one example, the lenslet array is arrangeable at, in front of or behind the focal plane of the aperture lens (i.e. a position of the lenslet array may be changed, for example by moving the lenslet array and/or the aperture lens).
In one example, the respective two-dimensional lenslet arrays are arranged spaced apart from the respective aperture lenses by respective focal lengths thereof. In one example, the respective detectors are arranged spaced apart from the respective lenslet arrays by respective focal lengths thereof.
In one example, the N x M imaging devices comprise a plenoptic camera (also known as a light field camera). In one example, each respective imaging device comprises and/or is a plenoptic camera, for example a similar or identical plenoptic camera.
In one example, there is provided an imaging apparatus comprising: a two-dimensional imaging array comprising N x M plenoptic cameras, wherein N and M are positive integers; and a set of S coherent EMR sources, wherein S is a positive integer, wherein respective sources are arrangeable to irradiate an object with coherent EMR at respective angles of incidence.
That is, the imaging apparatus comprises at least one plenoptic camera and at least one coherent EMR source.
Generally, plenoptic cameras capture information about the light field emanating from a scene (i.e. the intensity of light in a scene) and also the direction that the light rays are travelling in space.
A standard plenoptic camera is a mathematical model used by researchers to compare different types of plenoptic cameras. By definition, the standard plenoptic camera has microlenses (i.e. a lenslet array) placed one focal length away from the image plane of a light sensor. Research has shown that its maximum baseline is confined to the main lens entrance pupil size which proves to be small compared to stereoscopic setups. This implies that the standard plenoptic camera may be intended for close range applications as it exhibits increased depth resolution at very close distances that can be metrically predicted based on the camera's parameters.
In one example, the N x M imaging devices comprise a standard plenoptic camera. In one example, each respective imaging device comprises and/or is a standard plenoptic camera, for example a similar or identical standard plenoptic camera.
A focused plenoptic camera is a type of plenoptic camera in which the lenslet array is positionable in front of or behind the focal plane of the aperture lens. This modification samples the light field in a way that trades angular resolution for higher spatial resolution. With this design, images can be post focused with a much higher spatial resolution than with images from the standard plenoptic camera. However, the lower angular resolution can introduce some unwanted aliasing artefacts.
In one example, the N x M imaging devices comprise a focused plenoptic camera. In one example, each respective imaging device comprises and/or is a focused plenoptic camera, for example a similar or identical focused plenoptic camera.
A coded aperture camera is a type of plenoptic camera that includes a (low-cost) printed film mask instead of a lenslet array. This design overcomes several limitations of lenslet arrays in terms of chromatic aberrations and loss of boundary pixels, and allows higher-spatial resolution photos to be captured. However, this mask-based design reduces the amount of light that reaches the light detector compared with plenoptic cameras based on lenslet arrays.
In one example, the N x M imaging devices comprise a coded aperture camera. That is, such an imaging device does not include a lenslet array. In one example, each respective imaging device comprises and/or is a coded aperture camera, for example a similar or identical coded aperture camera.
Plenoptic cameras are good for imaging fast moving objects where auto focus may not work well, and for imaging objects where auto focus is not affordable or usable and may be used to produce accurate 3D models of objects.
Plenoptic cameras are available from Raytrix GmbH (Germany), for example.
The imaging apparatus comprises the set of S coherent EMR sources, wherein S is the positive integer. It should be understood that EMR provided by the EMR sources is temporally and/or spatially coherent. In one example, the set of S coherent EMR sources comprises monochromatic light sources, for example lasers or light emitting diodes. In one example, the set of S coherent EMR sources comprises a laser providing EMR having a predetermined wavelength and/or wavelength range. In one example, the EMR light sources are similar or identical, for example providing EMR having the same predetermined wavelength and/or the same wavelength range. In one example, the EMR light sources are different, for example providing EMR having different predetermined wavelengths and/or different wavelength ranges.
In one example, the coherent EMR has a wavelength of at most 1 µm (i.e. the predetermined wavelength is 1,000 nm or less). In one example, the predetermined wavelength is in a range from 10 nm to 1,000 nm (i.e. ultraviolet, visible or infrared), in a range from 10 nm to 400 nm (i.e. ultraviolet), in a range from 700 nm to 1,000 nm (i.e. infrared), preferably in a range from 380 nm to 740 nm (i.e. visible). In one example, S is in a range from 1 to 100, preferably in a range from 2 to 50, more preferably in a range from 2 to 20, most preferably in a range from 2 to 10, for example 2, 3, 4, 5, 6, 7, 8, 9 or 10. In one example, the set of S coherent EMR sources comprises a gas laser, for example a He-Ne laser, an Ar laser, a Kr laser, a Xe laser, a N2 laser, a CO2 laser, a CO laser and/or an excimer laser; a chemical laser, for example a HF laser, a DF laser, a chemical oxygen-iodine laser (COIL) and/or an all gas-phase iodine laser (AGIL); a dye laser; a metal-vapour laser, for example a He-Cd laser, a He-Hg laser, a He-Se laser, a He-Ag laser, a Sr laser, a Ne-Cu laser, a Cu laser, an Au laser and/or a Mn/MnCl2 laser; a solid state laser, for example a ruby laser, a Nd:YAG laser, a NdCrYAG laser, a Nd:YLF laser, a Nd:YVO4 laser, a Nd:YCOB laser, a Nd:glass laser, a Ti:sapphire laser, a Tm:YAG laser, a Yb:YAG laser, a Yb glass or ceramic laser, a Yb doped glass laser, a Ho:YAG laser, a Cr:ZnSe laser and/or a Ce:LiSAF laser; a semiconductor laser, for example a GaN laser, an InGaN laser, an AlGaInP or AlGaAs laser, an InGaAsP laser, a lead salt laser, a vertical cavity surface emitting laser (VCSEL), a quantum cascade laser and/or a hybrid Si laser.
The respective sources are arrangeable to irradiate the object with the coherent EMR at the respective angles of incidence. That is, the respective EMR sources provide EMR having a sufficient brightness, intensity and/or luminosity at the object to irradiate the object and such that at least some of the EMR is reflected back to the imaging device, for example by the respective EMR sources having sufficient output power. Generally, brightness of the EMR provided by the respective EMR sources is dependent, at least in part, on a power of the source, a distance from the source to the object and the intervening medium, such as air. In one example, a beam distance of the respective EMR sources is in a range from 500 m to 50 km, preferably in a range from 1 km to 25 km, more preferably in a range from 2 km to 10 km. It should be understood that the respective EMR sources are mutually spaced apart. Additionally, angular orientations of the respective EMR sources may be different. In this way, the respective EMR sources may irradiate the object at different angles of incidence.
In one example, N ≥ 1, M ≥ 1 and/or S ≥ 1. If N = 1 and M = 1 (i.e. the imaging apparatus comprises one imaging device), for example, the imaging apparatus may be moved, for example translated parallel to the two-dimensional array of imaging devices, between acquiring successive images, for example a first image and a second image. Hence, successive (preferably overlapping) images from adjacent positions may be acquired, using coherent EMR. Enhanced resolution may be obtained by combining the first and second images, as described below in more detail. This is generally suitable for imaging stationary objects.
In one example, N ≥ 1, M ≥ 2 and S ≥ 1. If N = 1 and M = 2 (i.e. the imaging apparatus comprises two imaging devices), two images from adjacent positions may be acquired, for example simultaneously, using coherent EMR. Enhanced resolution may be obtained by combining the two images, as described below in more detail. This is generally suitable for imaging stationary objects, moving objects and/or where the imaging apparatus is moving relative to the objects, such as if the imaging apparatus is mounted in a moving vehicle.
In one example, N ≥ 2, M ≥ 2 and S ≥ 1. If N = 2 and M = 2 (i.e. the imaging apparatus comprises four imaging devices), four images from adjacent positions may be acquired, for example simultaneously, using coherent EMR. Enhanced resolution may be obtained by combining the four images, as described below in more detail. This is generally suitable for imaging stationary objects, moving objects and/or where the imaging apparatus is moving relative to the objects, such as if the imaging apparatus is mounted in a moving vehicle.
In one example, S = 1 and the EMR source is arrangeable in a first configuration to irradiate the object with coherent EMR at a first angle of incidence and in a second configuration to irradiate the object with coherent EMR at a second angle of incidence. For example, a first image may be acquired corresponding to the first configuration and a position and/or orientation of the EMR source may be changed to the second configuration, to thereby irradiate the object at a different angle of incidence, and a second image acquired corresponding to the second configuration. Enhanced resolution may be obtained by combining the first and second images, as described below in more detail. This is generally suitable for imaging stationary objects.
In one example, S ≥ 2 and the apparatus is arranged to successively irradiate the object using the respective EMR sources. For example, a first image may be acquired corresponding to irradiating the object with a first EMR source and thereafter, a second image acquired corresponding to irradiating the object with a second EMR source. Enhanced resolution may be obtained by combining the first and second images, as described below in more detail. This is generally suitable for imaging stationary objects.
In one example, P ≥ 8 and Q ≥ 8, preferably wherein P ≥ 32 and Q ≥ 32. That is, each lenslet array may comprise at least 64 lenslets, preferably at least 1,024 lenslets.
A corresponding plurality of images from the imaging devices may be acquired, for example simultaneously and/or successively. Enhanced resolution may be obtained by combining the plurality of images, as described below in more detail.
In one example, the N x M imaging devices are configured to each acquire R images of the object, wherein R is a positive integer, for example successively or simultaneously. Enhanced resolution may be obtained by combining the N x M x R images, as described below in more detail. In one example, R ≥ 1. In one example, if N x M = 1, R ≥ 2.
In one example, the aperture lens has a focal length of at least 28 mm, for example in a range from 28 mm to 5,200 mm, preferably in a range from 50 mm to 1,700 mm, for example 1,200 mm or 1,600 mm, more preferably in a range from 100 mm to 800 mm, for example 200 mm, 300 mm, 500 mm or 600 mm. In this way, images of long-range objects may be acquired.
In one example, the respective detectors (also known as image sensors) comprise a complementary metal oxide semiconductor (CMOS) detector (also known as a CMOS sensor) or a charge coupled device (CCD) detector (also known as a CCD sensor).
Generally, CMOS sensors are cheaper and have lower power consumption than CCD sensors.
CCD sensors are preferred for high end broadcast quality video cameras while CMOS sensors are preferred in still photography and consumer goods where overall cost is more important. Typically, CMOS sensors have smaller effective areas than CCD sensors, since they include one amplifier per photodiode, thereby capturing fewer photons. This may be overcome using microlenses in front of each photodiode, which focus light into the photodiode that would have otherwise hit the amplifier and not be detected. It should be understood that these microlenses are distinct from the lenslet array. A hybrid CCD/CMOS architecture (also known as sCMOS) combines CMOS readout integrated circuits (ROICs) that are bump bonded to a CCD imaging substrate. Another image sensor uses the very fine dimensions available in modern CMOS technology to implement a CCD like structure entirely in CMOS technology. Preferably, the respective detectors comprise and/or are CCD sensors.
A second aspect provides a method of imaging an object comprising: irradiating the object with coherent EMR at respective angles of incidence using a set of S coherent EMR sources, wherein S is a positive integer; and imaging the object by detecting at least a part of the coherent EMR reflected therefrom using a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector.
In this way, the object may be imaged at higher spatial resolution and with correction of wave-front distortions, for example due to atmospheric turbulence, as described above with respect to the first aspect.
The object, the coherent EMR at respective angles of incidence, the set of S coherent EMR sources, the two-dimensional imaging array, the N x M imaging devices, the aperture lens, the two-dimensional lenslet array and the EMR detector may be as described with respect to the first aspect.
In one example, the method comprises: irradiating the object with the coherent EMR at the respective angles of incidence using a set of S coherent EMR sources, wherein S ≥ 2; imaging the object by detecting at least the part of the coherent EMR reflected therefrom using the two-dimensional imaging array; obtaining phase relationships between the reflected coherent EMR; passively irradiating the object with incoherent EMR; and imaging the object by detecting, through a filter corresponding to a wavelength of the coherent EMR, at least the part of the incoherent EMR reflected therefrom using the two-dimensional imaging array.
In this way, Fourier transform fringes at the wavelength are more prominent.
In one example, the method comprises: wherein irradiating the object with the coherent EMR comprises using a narrow band filter; wherein irradiating the object with the coherent EMR comprises modulating the coherent EMR; and wherein imaging the object comprises imaging the object by detecting at least the part of the modulated coherent EMR reflected therefrom using the two-dimensional imaging array.
In this way, the modulated coherent EMR may discriminate from a passive (i.e. incoherent) EMR background. Particularly, the narrow band filter provides a single wavelength snap shot of the Fourier transform at the aperture lens.
In one example, there is provided a method of imaging an object comprising: irradiating the object with coherent EMR at respective angles of incidence using a set of S coherent EMR sources, wherein S is a positive integer; and imaging the object by acquiring N x M plenoptic images of the object using at least a part of the coherent EMR reflected therefrom.
A third aspect provides a computer comprising at least a processor and a memory, the computer arranged to provide an image of an object from EMR detected by the apparatus according to the first aspect, wherein the computer is arranged to: receive data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correct the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determine a transform of the corrected data; combine the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determine the image of the object from the combined transform.
The object, the coherent EMR at respective angles of incidence, the set of S coherent EMR sources, the two-dimensional imaging array, the N x M imaging devices, the aperture lens, the two-dimensional lenslet array and the EMR detector may be as described with respect to the first aspect.
A fourth aspect provides a method of providing an image from EMR detected by the apparatus according to the first aspect, the method implemented on a computer comprising at least a memory and a processor, the method comprising: receiving data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correcting the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determining a transform of the corrected data; combining the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determining the image of the object from the combined transform.
The object, the coherent EMR at respective angles of incidence, the set of S coherent EMR sources, the two-dimensional imaging array, the N x M imaging devices, the aperture lens, the two-dimensional lenslet array and the EMR detector may be as described with respect to the first aspect.
A fifth aspect provides use of a coherent EMR source for irradiating an object for plenoptic imaging.
Generally, plenoptic cameras are a recent development that use arrays of independent lenses instead of a single large lens to collect light from the object. A micro-lens array placed at the focus of a large objective lens creates a matching array of light patterns on a focal plane array detector positioned at the focus of the micro-lenses. Computational post processing of these light patterns recreates an 'image' of the object of interest. It is possible to recover high resolution images even in the presence of atmospheric turbulence with plenoptic camera technology, as described below.
Image impairment
Image impairment may arise from:
i. optical diffraction due to the finite size of the primary imaging lens; and/or
ii. pixel size of the camera backplane (i.e. detectors); while a traditional limitation for electronic image sensors, modern pixel sensors approach the diffraction limit, with 1.2 µm pixels now commercially widespread; and/or
iii. air turbulence causing optical distortions of the captured image due to refractive index perturbations in the air path (often the primary limitation on optical resolution of powerful long range optics); and/or
iv. intrinsic lens aberrations within the optical imager; and/or
v. optical signal noise: various sources which reduce image contrast, for example photon shot noise and electrical signal noise; and/or
vi. relative motion: particularly angular alignment of the imager and object during the shutter exposure; and/or
vii. low light illumination, such as dawn, dusk and/or due to clouds; and/or
viii. weather effects, such as mist, fog, rain or snow.
| Resolution limit | Primary cause | Intrinsic/Extrinsic |
|---|---|---|
| Diffraction | Finite aperture acceptance angle | Intrinsic |
| Sampling | Finite pixel size | Intrinsic |
| Aberrations | Finite lens track length / design compatibility | Intrinsic |
| Noise | Sensor readout, shot noise | Intrinsic/Extrinsic |
| Turbulence | Air temperature, wind shear | Extrinsic |
| Motion | Camera and/or scene motion | Extrinsic |

Table 1: Intrinsic and extrinsic factors limiting resolution in long range imaging.
Impact of optical diffraction on spatial resolution
Optical diffraction determines the fundamental point spread function of any optical system. Figure 1 shows resolution of adjacent point sources. Two nearby point sources on the object, when imaged onto the focal plane array, become blurred spots as shown. The degree of blurring is an inverse linear function of the diameter of the imaging optics collection aperture; it limits the achievable spatial resolution of an image unless super-resolution computation techniques can be employed.
Impact of atmospheric air turbulence on spatial resolution
Atmospheric air turbulence gives rise to random refractive index perturbations in the air along the optical path between the object being imaged and the 'camera'. Air turbulence is created by solar heating of the atmosphere either directly, or indirectly via thermal plumes from the earth's surface, or caused by other heat sources, or by wind flow past physical obstacles. Kinetic energy due to convection or wind is transferred via turbulence to successively smaller and smaller eddies until the viscosity of air dominates the transfer process and the kinetic energy is only then converted into heat. Turbulent mixing of air masses of different temperature gives rise to refractive index variations in the air transmission path. So do turbulent pressure variations. This effect is described by the Kolmogorov model of air turbulence.
Figure 2A schematically depicts optical wavefronts in air in absence of turbulence; and Figure 2B schematically shows distortion of optical wavefronts in air due to turbulence thereof.
Refractive index n of air as a function of temperature T in kelvin, pressure p in mbar, water vapour partial pressure v in mbar and wavelength λ in µm is given by the combined Cauchy & Lorentz formula:

n − 1 = (77.6 × 10⁻⁶ / T) (1 + 7.52 × 10⁻³ λ⁻²) (p + 4810 v / T)

Random refractive index perturbations in the turbulent air create spatial distortions from the ideal spherical wave front shape that would be emitted by every point in the scene. These random refractive index perturbations reduce the effective optical aperture of a large diameter camera lens, and so limit the spatial resolution in any passive optical system. Atmospheric air turbulence typically reduces the effective optical resolution aperture of a traditional optical imaging system to the Fried coherence length r0 of turbulent air of typically ~2 cm to ~5 cm. If the aperture is less than the Fried coherence length r0, wave fronts are effectively planar. The time evolution of atmospheric turbulence due to Kolmogorov type turbulence is given by the 'Greenwood' frequency, which typically ranges from 10s to 100s of Hz depending on atmospheric conditions.
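For reference, the formula above can be evaluated numerically as below. The example conditions are illustrative, and the unit conventions (T in kelvin, pressures in mbar, λ in µm) are those assumed in the reconstruction of the formula:

```python
def air_refractive_index(T_kelvin: float, p_mbar: float, v_mbar: float,
                         wavelength_um: float) -> float:
    """Combined Cauchy & Lorentz formula for the refractive index of moist air:
    n - 1 = (77.6e-6 / T) * (1 + 7.52e-3 / lambda^2) * (p + 4810 * v / T)."""
    dispersion = 1.0 + 7.52e-3 / wavelength_um**2
    return 1.0 + (77.6e-6 / T_kelvin) * dispersion * (p_mbar + 4810.0 * v_mbar / T_kelvin)

# Example: 15 degC (288.15 K), sea-level pressure, 10 mbar water vapour, 633 nm
print(air_refractive_index(288.15, 1013.25, 10.0, 0.633))  # ~1.00032
```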
Long Range Imaging Resolution Requirement
Table 2 details minimum resolution requirements and required input diameter for a conventional optical imager to positively identify certain objects at a wavelength λ of 633 nm.
| Subject | Resolution | Input lens diameter @ 100 m | @ 1 km | @ 5 km | @ 10 km | Comments |
|---|---|---|---|---|---|---|
| An unknown face | 1 mm | 77.1 mm | 770 mm | 3850 mm | 7710 mm | ~120 x 120 pixels |
| A known face | 5 mm | 15.4 mm | 154 mm | 770 mm | 1540 mm | ~24 x 24 pixels |
| Vehicle | 5 cm | 1.5 mm | 15 mm | 77 mm | 154 mm | |
| Building | 1 m | 0.08 mm | 0.8 mm | 3.8 mm | 7.7 mm | |

Table 2: Minimum resolution requirements and required input diameter for a conventional optical imager to positively identify certain objects at a wavelength λ of 633 nm.
Long Range High Resolution Imaging
Estimates vary for the minimum pixel resolution needed to recognise a human face. Some researchers suggest that most human observers could recognise a face from a 32 pixel x 32 pixel image of the face with 4 bit intensity resolution. This implies a collection optical aperture of ~1 m diameter is required to unconditionally recognise a face at 5 km under perfect atmospheric seeing conditions that are completely free of air turbulence, at 633 nm. Such an optical system would be large, heavy and very expensive for a system with low intrinsic optical aberrations if based on a single collection mirror/lens. However, air turbulence is in reality a fact of observational life. Therefore, long range (multi-km) imaging of faces is not possible with a traditional passive optical system based on a single large primary mirror or objective lens. Hence, another approach is required, as provided by the imaging apparatus according to the first aspect.
Air turbulence suppression using plenoptic imaging
A plenoptic imager uses a main objective lens or mirror which focusses the object plane onto a close packed 2D lenslet array. A 2D detector array is located at the focal plane, behind the lenslet array. This detector samples the 'light field', consisting of the direction of arrival of rays as well as their origin on the object plane. The lenslets in the above configuration image the aperture of the objective lens onto the detector array and so provide information on the local wave-front distortion. Wave-front gradients can be computed by cross correlation of every synthetic aperture image with respect to one of them. Computational processing of the focal plane data, combined with deconvolution computational techniques, creates a high resolution digital image comparable to a conventional camera. Recent developments allow atmospheric turbulence to be corrected such that good image resolutions are achieved with signal to noise levels of 30 dB or better.
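The cross-correlation step mentioned above can be sketched as follows, assuming each lenslet sub-image is available as a 2-D array. The FFT-based phase-correlation approach and the function name are illustrative assumptions, not the method prescribed by this document:

```python
import numpy as np

def subimage_shift(ref: np.ndarray, img: np.ndarray) -> np.ndarray:
    """Estimate the (dy, dx) shift of a lenslet sub-image relative to a
    reference sub-image by FFT-based cross correlation. The shift is
    proportional to the local wave-front gradient over the corresponding
    patch of the objective lens aperture."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape),
                    dtype=float)
    for axis, n in enumerate(xcorr.shape):
        if peak[axis] > n / 2:   # wrap large positive lags to negative shifts
            peak[axis] -= n
    return peak  # shift in pixels; one wave-front gradient sample per lenslet
```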
Synthetic Apertures for long range sub-diffraction-limited Visible Imaging (SAVI)
SAVI is a development of a recently established technique used in microscopy to achieve super-resolution, called Fourier ptychography. Fourier ptychography microscopy, from which the SAVI technique is developed, uses a 2D array of LED light sources to sequentially illuminate the object whilst capturing a corresponding succession of images using each LED illumination source. This allows an iterative recovery of the phase of the light fields that are imaged using Fourier techniques and facilitates the creation of a large synthetic aperture. This enables super-resolution images to be computationally created that would be beyond the spatial resolution of the basic optical system. The same super-resolution images can be created by translating the sample between image captures and using a single light source; an imaging process called ptychography. Synthetic Aperture Radar captures a sequence of radar returns with ~ps timing accuracy, which allows the phase of the ensemble of return signals to be accurately combined. A directly equivalent approach using light would require a sequence of optical return signals to be captured with ~fs timing accuracy, which is not currently realistic. SAVI instead analyses the captured images in Fourier space to recover the phase data of the individual images.
The coherent light incident on the plane of the system's main objective lens is a Fourier transform of the object being imaged. Numerical calculation of the Fourier transform of each image recorded, followed by stitching together of the mosaic of partially overlapping Fourier transform samples of the object, allows higher frequency components of the object's overall Fourier transform to be captured. An inverse Fourier transform of the composite Fourier transform formed by the stitching process allows a super-resolution image to be formed, which corresponds to a larger synthetic aperture than the individual camera. The image array forming the synthetic aperture can be captured either by translating the camera, or alternatively by sequential illumination using a wide angular fan of coherent sources.
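The role of the angular fan of coherent sources can be made concrete: tilting the illumination by an angle θ shifts the object's Fourier transform across the fixed pupil by sin(θ)/λ, so each source samples a different, partially overlapping Fourier tile. A minimal sketch under that standard relation (the function name is illustrative):

```python
import numpy as np

def pupil_offset(theta_x_rad: float, theta_y_rad: float, wavelength_m: float):
    """Spatial-frequency offset (cycles/m) of the Fourier patch sampled by the
    fixed pupil when the object is illuminated at angle (theta_x, theta_y).
    Successive illumination angles therefore capture adjacent Fourier tiles,
    which are stitched into the composite transform described above."""
    return (np.sin(theta_x_rad) / wavelength_m,
            np.sin(theta_y_rad) / wavelength_m)

# e.g. a 1 mrad tilt at 633 nm shifts the sampled band by ~1580 cycles/m
print(pupil_offset(1e-3, 0.0, 633e-9))
```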
Figure 4A schematically depicts a conventional imaging apparatus and Figure 4B schematically depicts a conventional SAVI apparatus. Particularly, as shown in Figure 4A, a conventional imaging apparatus using passive illumination with a fixed aperture size of 12.5 mm induces a diffraction spot size of 50 mm on an object 1 km away, destroying relevant image features. Using the conventional SAVI apparatus, as shown in Figure 4B, an array of such conventional imaging apparatus using coherent illumination creates a synthetic aperture an order of magnitude larger, resulting in a diffraction spot size of 5 mm on the object 1 km away.
SAVI generally requires a 65% overlap between the sampling optical apertures for adjacent images in the 2D array of images captured, to enable phase retrieval of the optical beams to be successful. SAVI generally requires monochromatic illumination sources to provide coherent illumination. Without coherent illumination, it is not possible to create an enlarged synthetic aperture that overcomes traditional optical diffraction limits. Instead of moving the camera between successive image captures, alternatively the illumination angle of the coherent source can be changed. The 65% overlap requirement cannot be achieved with a conventional plenoptic lens array. Hence, the plenoptic lens array must be either translated mechanically or the object illuminated by an angular array of illumination sources. Conventional SAVI involves a stationary object such that a series of images may be captured with a conventional (non-plenoptic) camera that is tracked along an x-y overlapping track to form a synthetic aperture.
However, dynamic (i.e. moving) objects would require camera arrays to capture data in one shot (i.e. simultaneously). Similarly, acquiring an image of a stationary object from a camera mounted in a moving vehicle would also require camera arrays to capture data in one shot. That is, any relative motion between the imaging apparatus (i.e. camera array) and the object requires the image data to be acquired in one shot (i.e. simultaneously).
Hence, the solution for relative motion is to illuminate the object with multiple coherent sources, thereby overcoming the >65% adjacent aperture sampling overlap requirement of SAVI. Backscattered light is captured by an array of imaging devices, providing a true multiple-input, multiple-output (MIMO) configuration.
MIMO optical imaging

Synthetic Apertures for long range sub-diffraction-limited Visible Imaging (SAVI) is closer to a true MIMO optical imaging approach. A synthetic aperture is created by either: 1. illuminating with a single coherent laser source and translating a camera, while capturing a sequence of overlapping images, across a synthetic aperture equal in size to the conventional aperture needed for the required spatial resolution; or 2. illuminating the scene with a sequence of coherent lasers positioned in an array across the required synthetic aperture and capturing with a 2D array of smaller cameras.
The synthetic aperture yields an image far superior to that from more expensive camera optics. Atmospheric turbulence correction using SAVI has not yet been investigated. The optical hardware requirements of a plenoptic imaging system are less demanding than those of an optical system using adaptive optics correction techniques. One weakness of current plenoptic imaging cameras is the use of a large objective lens or mirror at the front end of the imaging cameras, but this may be readily addressed. The plenoptic micro-lens array allows the wave-front distortion to be measured over the aperture of the objective lens/mirror. From this data, it is possible to computationally correct out wave-front distortion due to turbulence effects. In other words, the imaging apparatus combines SAVI technology with a plenoptic micro-lens array. The use of SAVI techniques allows smaller diameter (and therefore cheaper) optics to be employed, while plenoptic techniques may be used to correct for wavefront distortions, for example atmospheric turbulence effects, of each plenoptic camera in the SAVI camera array. If semi-covert SAVI imaging is required, the optical wavelength for the coherent illumination source could be located in the near IR, for example, so that silicon-based focal plane arrays can be used, and to reduce the adverse diffraction limit implications of operation at longer optical wavelengths. In addition, MIMO sonar work shows the benefit of orthogonal encoding of transmitted acoustic signals. Hence, the coherent EMR optical signals may be orthogonally encoded for analogous benefit.
MIMO sonar imaging

MIMO sonar is of interest because it is predicted, according to simulations, to provide major improvements in spatial image resolution capability, and may provide valuable insights. MIMO sonar offers a degree of localisation not possible with a single-input, single-output (SISO) system. To achieve super-resolution using MIMO sonar, the following conditions should be respected (condition v. is illustrated in the sketch after this list):
i. Independent views: the acoustic transceivers must be sufficiently spaced to ensure the independence of each view;
ii. De-correlation: the total number of views has to be large enough to ensure the scatterers' de-correlation;
iii. Broadband: in order to achieve the range resolution needed, the MIMO system has to use broadband pulses for range compression;
iv. Simulated transmit signals used in this work were not coded and consisted of simple pulses; and
v. Signal orthogonality: there is zero cross-correlation between different signal paths detected by the receivers.
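As a minimal numerical illustration of condition v. only (the sonar work itself used uncoded pulses, per condition iv.), two orthogonal codes have zero cross-correlation, so contributions from simultaneous transmitters can be separated at the receiver by correlation. The Walsh-Hadamard codes below are an illustrative assumption:

import numpy as np

code_a = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)  # two rows of an
code_b = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)  # 8x8 Hadamard matrix

received = 0.7 * code_a + 0.3 * code_b   # two transmit paths seen at once
print(np.dot(code_a, code_b))            # 0.0: the codes are orthogonal
print(np.dot(received, code_a) / np.dot(code_a, code_a))  # 0.7: path A recovered
print(np.dot(received, code_b) / np.dot(code_b, code_b))  # 0.3: path B recovered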
These conditions may be applied analogously to optical imaging, so as to achieve super-resolution using MIMO optical imaging.

Plenoptic imaging

Wave optic expressions for plenoptic camera image formation may be derived, as described below for a simple example, following Goodman, J.W. (2004) Introduction to Fourier Optics, 3rd edition, Roberts and Company. For simplicity, consider a plenoptic camera arranged according to Figure 3.
\psi(x,y) is the scalar field at the object plane. Propagation from the object plane to the lenslet array follows that of a standard imaging setup, in which the lenslet array is positioned at the normal image plane. Hence, the impulse response of this part of the imaging system is given by the Fraunhofer diffraction pattern of the main lens pupil P_1(x', y'):

h_1(a, b; x, y) = K \iint P_1(x', y') \exp\left[-j \frac{2\pi}{\lambda z_1} \big((a - Mx)x' + (b - My)y'\big)\right] \mathrm{d}x' \, \mathrm{d}y'

where K is a constant and M is the magnification of the plenoptic camera, given by M = -z_1/z_0.
Hence, due to linearity of wave propagation, the scalar field in front of the lenslet array is given by:

\psi_l(a, b) = \iint h_1(a, b; x, y) \, \psi(x, y) \, \mathrm{d}x \, \mathrm{d}y

If there are N x N identical microlenses, each having a diameter d_2 and a focal length f_2 = z_2, propagation after the lenslet array is given by:

\psi_l'(a, b) = \psi_l(a, b) \sum_{m=1}^{N} \sum_{n=1}^{N} P_2(a - m d_2, \, b - n d_2) \, e^{-j \frac{\pi}{\lambda f_2}\left[(a - m d_2)^2 + (b - n d_2)^2\right]}

where P_2(a, b) is the pupil function of the microlens.
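The lenslet-array factor above is straightforward to tabulate numerically. The following is a minimal sketch of the summed pupil-and-phase mask; the function name, grid size, lenslet diameter and sample pitch are illustrative assumptions:

import numpy as np

def lenslet_phase_mask(n, n_lenslets, d2_px, wavelength, f2, pitch):
    """Complex transmittance of an n_lenslets x n_lenslets microlens array
    sampled on an n x n grid; d2_px is the lenslet diameter in pixels and
    pitch is the sample spacing in metres."""
    yy, xx = np.mgrid[:n, :n]
    mask = np.zeros((n, n), dtype=complex)
    for m in range(n_lenslets):
        for k in range(n_lenslets):
            cy, cx = (m + 0.5) * d2_px, (k + 0.5) * d2_px  # lenslet centre
            r2 = ((xx - cx)**2 + (yy - cy)**2) * pitch**2  # radius^2 in m^2
            inside = r2 <= (0.5 * d2_px * pitch)**2        # pupil P2 support
            # thin-lens phase exp(-j*pi*r^2/(lambda*f2)) within each pupil
            mask[inside] = np.exp(-1j * np.pi * r2[inside] / (wavelength * f2))
    return mask

mask = lenslet_phase_mask(n=384, n_lenslets=6, d2_px=64,
                          wavelength=1e-6, f2=1e-3, pitch=5e-6)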
Hence, propagation from the lenslet array to the detector array is given by Fresnel propagation, with the impulse response given by:

h_2(\sigma, \tau) = \frac{e^{j k z_2}}{j \lambda z_2} \, e^{j \frac{\pi}{\lambda z_2}(\sigma^2 + \tau^2)}

Due to superposition, the scalar field at the detector array is given by:

\psi_d(\sigma, \tau) = \iint h_2(\sigma - a, \tau - b) \, \psi_l'(a, b) \, \mathrm{d}a \, \mathrm{d}b

The image may be reconstructed, for example, by backpropagation of the field from the detector array to an arbitrary object plane, such as described for coherent and incoherent approaches in Junker, A., Stenau, T. and Brenner, K. H. (2014) Scalar wave-optical reconstruction of plenoptic camera images, Appl. Opt. 53, 5784-5790.
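The Fresnel step, and the backpropagation used for reconstruction, can both be evaluated with FFTs using the standard transfer-function form of the Fresnel kernel. The sketch below is illustrative (the function name and sample values are assumptions); a negative distance performs the backpropagation:

import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Fresnel propagation of a sampled complex field over distance z (m),
    via the FFT transfer-function method; z < 0 backpropagates."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                    # spatial frequencies
    fx2 = fx[:, None]**2 + fx[None, :]**2
    transfer = np.exp(1j * 2 * np.pi * z / wavelength) \
             * np.exp(-1j * np.pi * wavelength * z * fx2)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# illustrative round trip: propagate to the detector, then backpropagate
rng = np.random.default_rng(0)
field = np.exp(1j * rng.uniform(0, 2 * np.pi, (256, 256)))
at_detector = fresnel_propagate(field, 1e-6, 5e-6, z=+1e-3)
recovered = fresnel_propagate(at_detector, 1e-6, 5e-6, z=-1e-3)
print(np.allclose(recovered, field))                   # True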
SAVI
The general goal of SAVI is to capture high resolution images of objects that are a considerable distance away from the camera, e.g. to resolve a face 1000 meters away. Passive camera systems use available (incoherent) illumination sources such as the sun and suffer from significant diffraction blur. Existing super-resolution methods may be able to reduce the amount of blur by a factor of two, which may be insufficient to resolve enough detail to recover the object of interest. In order to resolve even smaller features, assume a coherent or partially coherent illumination source; that is, active illumination is provided.
Like other multi-image super-resolution methods, the imaging apparatus captures a series of low resolution images which are used to recover a high resolution image. Multiple images using a coherent illumination source are acquired, where each image is from a different (known) position in the XY plane. For simplicity, assume the camera positions coincide with a regular grid, though this is not necessary. It should be noted that the same result is obtained by leaving the camera stationary and moving the illumination source.
A. Image Formation Model

For simplicity, assume a single fixed illumination source (i.e. coherent EMR source), which can either be co-located with or external to the imaging apparatus. The source should be quasi-monochromatic, with centre wavelength \lambda. For colour imaging, multiple quasi-monochromatic sources (i.e., laser diodes) are effective. The source emits a field that is spatially coherent across the plane P(x, y) which contains the object of interest, and the object is assumed to occupy some or all of the imaging system field of view.
The illumination field, u(x, y), will interact with the object, and a portion of this field will reflect off the object towards the imaging apparatus. For this initial analysis, assume the object is thin and may be described by the 2D complex reflectivity function o(x, y). Extension to surface reflectance from 3D objects follows from this analysis. Under the thin object approximation, the field emerging from the object is given by the product \psi(x, y) = u(x, y) o(x, y). This then propagates a large distance z to the far field, where the imaging system occupies the plane S(x', y').
Under the Fraunhofer approximation, the field at S(x', y') is connected to the field at the object plane by a Fourier transform:

\hat{\psi}(x', y') = \frac{e^{jkz} \, e^{j \frac{k}{2z}(x'^2 + y'^2)}}{j \lambda z} \, \mathcal{F}_{1/\lambda z}[\psi(x, y)]

where k = 2\pi/\lambda is the wavenumber and \mathcal{F}_{1/\lambda z} denotes a two dimensional Fourier transform scaled by 1/\lambda z. For the remainder of this description, multiplicative phase factors and coordinate scaling are dropped from this simple model. The following analysis also applies under the Fresnel approximation (i.e. in the near field of the object). The far field pattern, which is effectively the Fourier transform of the object field, is intercepted by the aperture of the camera. The limited camera aperture may be described using the function A(x' - c_{x'}, y' - c_{y'}), which is centred at coordinate (c_{x'}, c_{y'}) in the plane S(x', y'), passes light with unity transmittance within a finite diameter d, and completely blocks light outside (i.e., it is a "circ" function).
The optical field immediately after the aperture is given by the product:

\hat{\psi}(x', y') \, A(x' - c_{x'}, \, y' - c_{y'})

This bandlimited field then propagates to the image sensor plane. Again neglecting constant multiplicative and scaling factors, this final propagation may be represented using a Fourier transform. Since the camera sensor only detects optical intensity, the image measured by the camera is:

I(x, y; c_{x'}, c_{y'}) = \left| \mathcal{F}\left[\hat{\psi}(x', y') \, A(x' - c_{x'}, \, y' - c_{y'})\right] \right|^2

In a single image, the low pass nature of the aperture results in a reduced resolution image. For an aperture of diameter d and focal length f, diffraction limits the smallest resolvable feature within one image to be approximately 1.22 \lambda f / d.
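This forward model may be simulated directly: Fourier transform the object field, mask it with a shifted circular aperture A, transform again, and take the squared magnitude. The sketch below is illustrative only; the grid size, aperture position and radius are assumptions, and constant factors are dropped as in the text:

import numpy as np

def coherent_capture(obj_field, centre, radius):
    """Intensity image measured through a circular aperture of the given
    radius (pixels), centred at `centre` in the Fourier (aperture) plane."""
    n = obj_field.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj_field))   # field at plane S
    yy, xx = np.mgrid[:n, :n]
    aperture = (xx - centre[0])**2 + (yy - centre[1])**2 <= radius**2
    # an inverse FFT is used so the image lands in object orientation
    field_at_sensor = np.fft.ifft2(np.fft.ifftshift(spectrum * aperture))
    return np.abs(field_at_sensor)**2                    # intensity only

obj = np.random.default_rng(1).random((256, 256))        # stand-in object
low_res = coherent_capture(obj, centre=(128, 128), radius=16)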
B. FP to Improve Resolution

Ptychography presents one strategy to overcome the diffraction limit by capturing multiple images and synthetically increasing the effective aperture size. The series of captured images is used to recover the high resolution complex field in the aperture plane and subsequently a high resolution image.
To achieve this, the imaging apparatus is re-centered at multiple locations, (c_{x',i}, c_{y',i}), and one image is captured at the i-th location, for i = 1, ..., N. This transforms the equation above into a four-dimensional discrete data matrix. The N images can be captured in a number of ways: one can physically translate the camera to N positions, construct a camera array with N cameras to simultaneously capture images, fix the camera position and use a translating light source, or use arbitrary combinations of any of these techniques.
If aperture centres are selected such that they are separated by the diameter d across a rectilinear grid, then values are approximately measured from the object spectrum across an aperture that is \sqrt{N} times larger than what is obtained by a single image. Thus, it appears that such a strategy, capturing N images of a coherently illuminated object in the far field, may be combined to improve the diffraction-limited image resolution to 1.22 \lambda f / (\sqrt{N} d).
However, since the detector cannot measure phase, this sampling strategy is not effective as-is. Instead, it is necessary to ensure the aperture centres overlap by a certain amount (i.e. adjacent image captures are separated by a distance \delta < d along both x' and y'). This yields a certain degree of redundancy within the captured data, which a ptychographic post-processing algorithm may utilize to simultaneously determine the phase of the field at plane S(x', y').
Typically, \delta \approx 0.25 d. Below is described a suitable post-processing strategy that converts the data matrix I(x, y, c_{x'}, c_{y'}) into a high-resolution complex object reconstruction.

C. Algorithm for image recovery

Using the imaging apparatus (i.e. coherent camera array) measurements for I(x, y, c_{x'}, c_{y'}), the goal is to recover the complex-valued, high-resolution field \hat{\psi}(x', y'). The image measured with the imaging apparatus at location (c_{x',i}, c_{y',i}) is denoted by:

I_i = |\psi_i|^2 = \left| \mathcal{F}[R_i \hat{\psi}] \right|^2

where \psi_i = \mathcal{F}[R_i \hat{\psi}] denotes the complex-valued, bandlimited field whose intensity is measured at the image sensor, \hat{\psi} denotes the complex-valued field at the Fourier plane (i.e. the aperture plane), and R_i denotes an aperture operator that sets all the entries of \hat{\psi}(x', y') outside the set D_i = \{(x, y) : |x - c_{x',i}|^2 + |y - c_{y',i}|^2 \le (d/2)^2\} to zero. To recover the high-resolution field from a sequence of N low resolution intensity measurements, \{I_i\}_{i=1}^{N}, an alternating minimization-based phase retrieval problem is used, as shown in Figure 5. The recovery algorithm is based on the error-reduction phase retrieval algorithm.
Hence, the following problem is to be solved:

\hat{\psi}^* = \arg\min_{\hat{\psi}} \sum_i \left\| \psi_i - \mathcal{F}[R_i \hat{\psi}] \right\|^2 \quad \text{s.t.} \quad |\psi_i|^2 = I_i

This may be achieved by alternately constraining the support of \hat{\psi} and the squared magnitudes of \psi_i. The initial estimate \hat{\psi}^0 is set to be a scaled Fourier transform of the mean of the low resolution images.
Figure 5 schematically depicts a method of image recovery for a method of providing an image according to an exemplary embodiment.
At S501, respective images are measured with the imaging apparatus at locations denoted by:

I_i = |\psi_i|^2 = \left| \mathcal{F}[R_i \hat{\psi}] \right|^2

For every iteration (k), the following three steps are performed. At S502, compute complex-valued images at the sensor plane using the existing estimate of the field at the Fourier plane, \hat{\psi}^k:

\psi_i^k = \mathcal{F}[R_i \hat{\psi}^k] \quad \text{for all } i

At S503, replace the magnitudes of \psi_i^k with the measured magnitudes of the corresponding observed images I_i:

\psi_i^k \leftarrow \sqrt{I_i} \, \frac{\psi_i^k}{|\psi_i^k|} \quad \text{for all } i

At S504, convert to the Fourier domain and update the estimate of \hat{\psi} by solving the following regularized, least-squares problem:

\hat{\psi}^{k+1} = \arg\min_{\hat{\psi}} \sum_i \left\| \psi_i^k - \mathcal{F}[R_i \hat{\psi}] \right\|_2^2 + \tau \|\hat{\psi}\|_2^2

where \tau > 0 is an appropriately chosen regularization parameter. Tikhonov regularization is used for numerical stability during reconstruction.
From S504, convert back to the spatial domain and return to S502. Constraints on the image domain magnitude and the Fourier domain support are thus enforced alternately until convergence or a maximum iteration limit is met.
This problem has a closed form solution, which can be efficiently computed using fast Fourier transforms.
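A minimal sketch of this alternating scheme follows, with unitary FFTs standing in for \mathcal{F} and the Tikhonov update written in its closed form. For simplicity the aperture plane and the images share one grid here, and the function name, regularization weight and iteration count are illustrative assumptions:

import numpy as np

def recover_field(images, masks, tau=1e-3, iters=100):
    """Alternating-minimization phase retrieval (steps S502-S504).
    images: measured intensities I_i; masks: boolean aperture supports D_i."""
    f2 = lambda a: np.fft.fft2(a, norm="ortho")    # unitary transforms
    if2 = lambda a: np.fft.ifft2(a, norm="ortho")
    # initial estimate: scaled Fourier transform of the mean low-res image
    psi_hat = f2(np.mean(images, axis=0)).astype(complex)
    for _ in range(iters):
        numer = np.zeros_like(psi_hat)
        denom = np.full(psi_hat.shape, tau)        # Tikhonov term
        for I, mask in zip(images, masks):
            psi = if2(np.where(mask, psi_hat, 0))  # S502: sensor-plane field
            psi = np.sqrt(I) * np.exp(1j * np.angle(psi))  # S503: magnitudes
            numer += np.where(mask, f2(psi), 0)    # S504: adjoint of F R_i
            denom += mask                          # overlap count per pixel
        psi_hat = numer / denom                    # closed-form Tikhonov solve
    return psi_hat

Here each boolean mask plays the role of R_i, and the final division is the closed-form solution of the regularized least-squares problem at S504.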
SAVI combined with plenoptic imaging

To obtain an image having higher spatial resolution and correction of wave-front distortions, for example due to atmospheric turbulence, from the imaging apparatus: 1. for each plenoptic image acquired by the imaging apparatus, reconstruct the respective images, for example by backpropagation of the field from the detector array to an arbitrary object plane, to thereby correct for wave-front distortions, for example due to atmospheric turbulence; and 2. using the reconstructed images, recover the complex-valued, high-resolution field \hat{\psi}(x', y').
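Putting the two stages together, an end-to-end pipeline might be sketched as below, reusing the hypothetical fresnel_propagate and recover_field helpers from the earlier sketches; in practice the complex field for each camera would first be estimated from its lenslet sub-images, which is not shown here, and all names and parameters are assumptions for illustration:

import numpy as np

def savi_plenoptic_pipeline(camera_fields, masks, wavelength, pitch, z_back):
    # step 1: backpropagate each camera's field to correct wave-front distortion
    corrected = [np.abs(fresnel_propagate(f, wavelength, pitch, -z_back))**2
                 for f in camera_fields]
    # step 2: phase retrieval across the camera array (synthetic aperture)
    return recover_field(corrected, masks)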
According to the present invention there is provided an imaging apparatus, a method of imaging an object, a computer, a method of providing an image, and a use of a coherent EMR source, as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims, and the description that follows.
Definitions

Throughout this specification, the term "comprising" or "comprises" means including the component(s) specified but not to the exclusion of the presence of other components. The term "consisting essentially of" or "consists essentially of" means including the components specified but excluding other components except for materials present as impurities, unavoidable materials present as a result of processes used to provide the components, and components added for a purpose other than achieving the technical effect of the invention, such as colourants, and the like.
The term "consisting of' or "consists of' means including the components specified but excluding other components.
Whenever appropriate, depending upon the context, the use of the term "comprises" or "comprising" may also be taken to include the meaning "consists essentially of" or "consisting essentially of", and may also be taken to include the meaning "consists of" or "consisting of".
The optional features set out herein may be used either individually or in combination with each other where appropriate and particularly in the combinations as set out in the accompanying claims. The optional features for each aspect or exemplary embodiment of the invention, as set out herein are also applicable to all other aspects or exemplary embodiments of the invention, where appropriate. In other words, the skilled person reading this specification should consider the optional features for each aspect or exemplary embodiment of the invention as interchangeable and combinable between different aspects and exemplary embodiments.
Brief description of the drawings
For a better understanding of the invention, and to show how exemplary embodiments of the same may be brought into effect, reference will be made, by way of example only, to the accompanying diagrammatic Figures, in which: Figure 1 schematically depicts optical resolution of adjacent point sources; Figure 2A schematically depicts optical wavefronts in air in the absence of turbulence; Figure 2B schematically shows distortion of optical wavefronts in air due to turbulence thereof; Figure 3 schematically depicts a conventional plenoptic camera; Figure 4A schematically depicts a conventional imaging apparatus and Figure 4B schematically depicts a conventional SAVI apparatus; Figure 5 schematically depicts a method of image recovery for a method of providing an image according to an exemplary embodiment; Figure 6 schematically depicts an imaging apparatus according to an exemplary embodiment; Figure 7 schematically depicts a method of imaging an object according to an exemplary embodiment; Figure 8 schematically depicts a method of providing an image according to an exemplary embodiment; Figure 9 shows an image I acquired by a conventional imaging apparatus; Figure 10 shows unprocessed images I1 acquired by an imaging apparatus according to an exemplary embodiment; Figure 11 shows reconstructed images I2, provided using the method according to Figure 8; and Figure 12 shows a recovered image I3, provided using the method according to Figure 8.
Detailed Description of the Drawings
Figure 5 schematically depicts the method of image recovery for a method of providing an image according to an exemplary embodiment, as described above.
Figure 6 schematically depicts an imaging apparatus 1 according to an exemplary embodiment. The imaging apparatus 1 comprises a two-dimensional imaging array 10 comprising N x M imaging devices 100A to 100P, wherein N and M are positive integers. In this example, N = M = 4. The respective imaging devices 100A to 100P comprise an aperture lens 110, a two-dimensional lenslet array 20 and an electromagnetic radiation, EMR, detector 130. The lenslet array 20 comprises at least P x Q lenslets 120, wherein P and Q are positive integers. In this example, P = Q = 6 and the lenslet array 20 includes 52 lenslets 120, in which 1 x 4 lenslets are arranged on each side of a 6 x 6 lenslet matrix. The lenslet array 20 is arranged between the aperture lens 110 and the detector 130. The imaging apparatus 1 comprises a set of S coherent EMR sources 200A to 200C, wherein S is a positive integer. In this example, S = 3. The respective sources 200A to 200C are arranged to irradiate an object with coherent EMR at respective angles of incidence. Particularly, the sources 200A to 200C are arranged top left, centre and bottom right, respectively. In this way, the object is illuminated by an angular array of illumination sources.
In this example, S = 3 and the apparatus 1 is arranged to successively irradiate the object using the respective EMR sources 200A to 200C.
Figure 7 schematically depicts a method of imaging an object according to an exemplary embodiment.
At S701, the object is irradiated with coherent EMR at respective angles of incidence using a set of S coherent EMR sources, wherein S is a positive integer.
At S702, the object is imaged by detecting at least a part of the coherent EMR reflected therefrom using a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector.
The method may include any of the steps described herein.
Figure 8 schematically depicts a method of providing an image according to an exemplary embodiment. Particularly, the method is of providing the image from EMR detected by an imaging apparatus according to an exemplary embodiment. The method is implemented on a computer comprising at least a memory and a processor.
At S801, data corresponding to the detected EMR are received from the respective detectors of the N x M imaging devices.
At S802, for each respective detector, the data thereof are corrected for aberrations using, at least in part, images arising from the respective lenslet array, for example by backpropagation of the field from the detector array to an arbitrary object plane, to thereby correct for wave-front distortions, for example due to atmospheric turbulence, as described above.
At S803, for each respective detector, a transform of the corrected data is determined.
At S804, the transforms for each respective detector are combined, thereby providing a combined transform for the N x M imaging devices.
At S805, the image of the object is determined from the combined transform.
The method may include any of the steps described herein, particularly as described with respect to Figure 5.
Figure 9 shows an image I acquired by a conventional imaging apparatus. Particularly, the image I is aberrated due to wave-front distortions, for example due to atmospheric turbulence, and due to diffraction blur.
Figure 10 shows unprocessed images I1 acquired by an imaging apparatus according to an exemplary embodiment.
Particularly, Figure 10 shows N x M plenoptic images, acquired by the imaging apparatus 1, as described above, where N = M = 4. Briefly, the N x M plenoptic images each comprise 52 images, corresponding to the 52 lenslets of the lenslet array 20. The N x M plenoptic images are acquired simultaneously.
Figure 11 shows reconstructed images I2, provided using the method according to Figure 8.
Particularly, Figure 11 shows the reconstructed images I2 after completing S802. Briefly, each of the N x M reconstructed images I2 is reconstructed from the respective plenoptic image of Figure 10, for example by backpropagation of the field from the detector array to an arbitrary object plane, to thereby correct for wave-front distortions, for example due to atmospheric turbulence, as described above.
Figure 12 shows a recovered image I3, provided using the method according to Figure 8.
Particularly, Figure 12 shows the recovered image I3 after completing S805. Briefly, the complex-valued, high-resolution field \hat{\psi}(x', y') is recovered from the N x M reconstructed images I2 of Figure 11.
Although a preferred embodiment has been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims and as described above.
Attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All of the features disclosed in this specification (including any accompanying claims and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims (15)

  1. An imaging apparatus comprising: a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector; and a set of S coherent EMR sources, wherein S is a positive integer, wherein respective sources are arrangeable to irradiate an object with coherent EMR at respective angles of incidence.
  2. The apparatus according to claim 1, wherein N ≥ 1, M ≥ 1 and S ≥ 1.
  3. The apparatus according to any previous claim, wherein N ≥ 1, M ≥ 2 and S ≥ 1.
  4. The apparatus according to any previous claim, wherein N ≥ 2, M ≥ 2 and S ≥ 1.
  5. The apparatus according to any previous claim, wherein S = 1 and wherein the EMR source is arrangeable in a first configuration to irradiate the object with coherent EMR at a first angle of incidence and in a second configuration to irradiate the object with coherent EMR at a second angle of incidence.
  6. The apparatus according to any previous claim, wherein S ≥ 2 and wherein the apparatus is arranged to successively irradiate the object using the respective EMR sources.
  7. The apparatus according to any previous claim, wherein P ≥ 8 and Q ≥ 8, preferably wherein P ≥ 32 and Q ≥ 32.
  8. The apparatus according to any previous claim, wherein the coherent EMR has a wavelength of 1 µm.
  9. The apparatus according to any previous claim, wherein the aperture lens has a focal length of ≥ 28 mm.
  10. The apparatus according to any previous claim, wherein the respective two-dimensional lenslet arrays are arranged spaced apart from the respective aperture lenses by respective focal lengths thereof.
  11. The apparatus according to any previous claim, wherein the respective detectors are arranged spaced apart from the respective lenslet arrays by respective focal lengths thereof.
  12. A method of imaging an object comprising: irradiating the object with coherent EMR at respective angles of incidence using a set of S coherent EMR sources, wherein S is a positive integer; and imaging the object by detecting at least a part of the coherent EMR reflected therefrom using a two-dimensional imaging array comprising N x M imaging devices, wherein N and M are positive integers and wherein respective imaging devices comprise an aperture lens, a two-dimensional lenslet array and an electromagnetic radiation, EMR, detector, wherein the lenslet array comprises at least P x Q lenslets, wherein P and Q are positive integers, and wherein the lenslet array is arranged between the aperture lens and the detector.
  13. A computer comprising at least a processor and a memory, the computer arranged to provide an image of an object from EMR detected by the apparatus according to any of claims 1 to 12, wherein the computer is arranged to: receive data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correct the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determine a transform of the corrected data; combine the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determine the image of the object from the combined transform.
  14. A method of providing an image from EMR detected by the apparatus according to any of claims 1 to 12, the method implemented on a computer comprising at least a memory and a processor, the method comprising: receiving data corresponding to the detected EMR from the respective detectors of the N x M imaging devices; for each respective detector, correcting the data thereof for aberrations using, at least in part, images arising from the respective lenslet array; for each respective detector, determining a transform of the corrected data; combining the transforms for each respective detector, thereby providing a combined transform for the N x M imaging devices; and determining the image of the object from the combined transform.
  15. Use of a coherent EMR source for irradiating an object for plenoptic imaging.