WO2021127726A1 - Apparatus, system and method of digital imaging - Google Patents

Apparatus, system and method of digital imaging

Info

Publication number
WO2021127726A1
Authority
WO
WIPO (PCT)
Prior art keywords
mask
computer
pinholes
light
images
Prior art date
Application number
PCT/AU2020/051410
Other languages
French (fr)
Inventor
Vijayakumar ANAND
Soon Hock NG
Jovan MAKSIMOVIC
Saulius Juodkazis
Original Assignee
Swinburne University Of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019904895A external-priority patent/AU2019904895A0/en
Application filed by Swinburne University Of Technology filed Critical Swinburne University Of Technology
Publication of WO2021127726A1 publication Critical patent/WO2021127726A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02Details
    • G01J3/0205Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows
    • G01J3/0229Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows using masks, aperture plates, spatial light modulators or spatial filters, e.g. reflective filters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02Details
    • G01J3/0205Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows
    • G01J3/0237Adjustable, e.g. focussing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/2823Imaging spectrometer
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/30Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/32Investigating bands of a spectrum in sequence by a single detector
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/557Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/617Upgrading or updating of programs or applications for camera control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/75Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Definitions

  • a RAP mask can be used to image objects with a high SNR in comparison with existing imaging techniques.
  • the intensity pattern, which is the sum of the images generated by modulated light from the multiple pinholes of the RAP mask, may be decoded by application of a cross correlation with a corresponding point spread function, as described in further detail below.
  • the object may be a sample for microscopic imaging and the housing includes an opening for receiving a carrier carrying the sample.
  • the opening may be positioned to allow placement of the sample adjacent the mask defining an array of pinholes.
  • the opening may be configured such that the carrier and sample are positioned at a predetermined distance from the RAP mask.
  • Embodiments of the invention may therefore advantageously provide a lensless, interference-less, motionless, non-scanning, space, spectrum and time resolved five-dimensional incoherent imaging technique using a mask defining an array of pinholes.
  • light diffracted from an approximated point object may be modulated by a random pinhole array in the RAP mask.
  • Unique spatio-spectral intensity signatures corresponding to different wavelengths and distances with respect to the illumination source may be recorded and catalogued in a library. The library may be used to decode the intensity pattern recorded for an object into multiple spatio-spectral images of the object.
  • Figure 1 B is a schematic diagram of an imaging system for digital imaging of an object according to another embodiment of the invention.
  • the inset in Figure 7 shows the plot of the autocorrelation and the plot of cross-correlation on the same axis.
  • Figure 8A is a schematic diagram illustrating an iterative optimisation procedure for improving the Signal to Noise Ratio (SNR) in the digital construction of the one or more images for the object.
  • SNR Signal to Noise Ratio
  • NBS National Bureau of Standards
  • Figures 14A and 14B respectively illustrate an experimental imaging system setup according to a first and a second scenario to evaluate the ability of the imaging system to function under extreme conditions.
  • the object 102 is a sample for microscopic imaging and the housing 108 includes an opening 126 for receiving a carrier 128 for carrying the sample 102.
  • the opening 126 is positioned to allow placement of the sample 102 at a predetermined distance from the RAP mask 112 and at a position that is axially aligned with a centre of the RAP mask 112.
  • the opening 126 may be configured such that when the carrier 128 is received within the housing 108, a minimum amount of light passes through the opening 126.
  • the opening 126 may include a seal for blocking light.
  • PSF Point Spread Function
  • the digital image construction module 204 constructs one or more digital images of the object 102 in accordance with the following process steps.
  • the digital image construction module 204 obtains the numerical representation of the object intensity pattern I₀, for example, in the form of a double-precision matrix.
  • a cross correlation function (6) as described in further detail below is applied based on the object intensity pattern matrix I₀ and the PSF matrix I_PSF(λ_n, u_m) to obtain a numerical value for the constructed image I_R.
  • the cross correlation function (6) includes tuned values for a and b to optimise background noise. This is described in further detail below with reference to Figure 11.
  • Figure 6C illustrates that the spectral resolution also improves with decreasing radius R (and diameter) of the pinholes.
  • the simulation in relation to spectral resolution was conducted using wavelengths in the visible spectrum. However, it is expected that the simulation could have been conducted using different wavelengths in order to arrive at the same conclusions.
  • a mask defining a single pinhole (not shown) having a diameter of 100 μm was used to approximate a point object and determine the axial resolution response of the experimental imaging system 600 as shown in Figure 9A.
  • the single pinhole mask is positioned between each respective LED channel 602, 604 in place of the USAF and NBS objects 606, 608. Initially, the channel illuminated by the green LED 604 was blocked and, in the red LED channel 602, the location of the pinhole mask was shifted and the corresponding point spread function intensity patterns were recorded.
  • embodiments of the invention utilise the depth-wavelength relationship described and validated herein to see colour and resolve depth by sampling either the spectrum or the depth.
  • embodiments of the invention can be used for seeing through scattering layers at depth.
  • the depth-wavelength relationship proves that, when reconstructing images of a complex object, images at different depths can be reconstructed by varying only the wavelength (e.g. in tissue imaging applications), or images at different wavelengths can be reconstructed by varying only the depth (e.g. in microscopic imaging applications).
  • the depth-wavelength reciprocity can facilitate creation and compilation of the library of spatio-spectral intensity information.
  • the library creation may only require either sampling of depth or a broadband source with a monochromator, but not necessarily both. This may advantageously save resources and thus further reduce costs.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Described herein is an apparatus, system and method of digital imaging. One embodiment provides an apparatus (104) for coupling to a computational device (106) to facilitate digital imaging of an object (102). The apparatus (104) comprises a mask (112) defining an array of pinholes for modulating light (114) diffracted from the object (102). An intensity pattern (120) generated by modulated light (122) from the mask (112) is detectable using an image sensor (124) for digital construction of one or more images of the object (102) by the computational device (106).

Description

Apparatus, System and Method of Digital Imaging Technical Field
[0001] The invention is directed to an apparatus, system and method of digital imaging. In particular, embodiments of the invention may be directed to digital imaging of microscopic objects, although the scope of the invention is not necessarily limited thereto.
Background of Invention
[0002] Pinhole imaging is an imaging technique developed several centuries ago, with the first published report in 1545 showing a drawing in Gemma Frisius’ De Radio Astronomica et Geometrica. Today, pinhole imaging continues to be an attractive area of research as the technique is easy to implement in almost any band of the electromagnetic spectrum and exhibits distortion-free, astigmatism-free and infinite depth-of-field imaging capabilities. In particular, pinhole imaging systems have been useful in astronomical and biomedical applications, holography with visible light and x-rays, and phase imaging using neutron beams. One of the main drawbacks of pinhole imaging is that most of the incoming light is blocked, resulting in a low intensity image with a poor signal-to-noise ratio (SNR).
[0003] Alternative imaging techniques such as those using a Fresnel zone plate, photon sieves, coded apertures and the like, may be used to improve SNR. Coded aperture imaging techniques for multidimensional imaging often require an active device such as a spatial light modulator (SLM) or advanced lithography procedures to manufacture a phase mask to reconstruct an image of an object with a high signal to noise ratio. This substantially increases the building cost of the imaging system.
[0004] Moreover, implementing coded apertures using an SLM or multilevel phase elements is not always possible for non-visible bands of the electromagnetic spectrum. This is because active devices such as SLMs are currently not available for non-visible bands of the electromagnetic spectrum, and fabrication of phase masks for non-visible bands is a challenging task.
[0005] In addition, the field of view of imaging is limited in coded aperture imaging techniques, and the fabrication of coded aperture phase masks is often challenging due to varying feature sizes across the mask.

[0006] Embodiments of the invention may provide an apparatus, system and method of digital imaging which overcomes or ameliorates one or more of the disadvantages or problems described above, or which at least provides the consumer with a useful choice.
Summary of Invention
[0007] According to one aspect of the invention, there is provided an apparatus for coupling to a computational device to facilitate digital imaging of an object, the apparatus comprising a mask defining an array of pinholes for modulating light diffracted from the object, wherein an intensity pattern generated by modulated light from the mask is detectable using an image sensor for digital construction of one or more images of the object by the computational device.
[0008] It has been discovered that, using an amplitude modulating mask defining an array of pinholes, such as a random or quasi-random array of pinholes (referred to herein as a RAP mask), the digital imaging technique can be transferred to all regions of the electromagnetic spectrum. Moreover, it is relatively cost-effective and easy to manufacture a RAP mask.
[0009] A RAP mask can be used to image objects with a high SNR in comparison with existing imaging techniques. In embodiments of the present invention, the intensity pattern, which is the sum of the images generated by modulated light from the multiple pinholes of the RAP mask, may be decoded by application of a cross correlation with a corresponding point spread function, as described in further detail below.
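This decoding step can be illustrated with a minimal NumPy sketch. This is not the patent's actual reconstruction code: the sparse binary array standing in for a recorded PSF, the grid size and the emitter positions are all hypothetical, and for incoherent light the recorded pattern is modelled simply as the object convolved with the PSF.

```python
import numpy as np

def xcorr2(a, b):
    """2-D circular cross-correlation computed via FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

# Toy simulation: for spatially incoherent light, the recorded intensity
# pattern is (approximately) the object convolved with the mask's PSF.
rng = np.random.default_rng(1)
psf = (rng.random((128, 128)) < 0.01).astype(float)  # stand-in sparse PSF
obj = np.zeros((128, 128))
obj[40, 60] = 1.0
obj[80, 30] = 0.7                                    # two point emitters
pattern = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

recon = xcorr2(pattern, psf)   # decode: cross-correlate with the PSF
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

The brightest peaks of `recon` coincide with the emitter positions because the PSF's autocorrelation is sharply peaked at zero shift, which is the property a RAP-style mask is chosen to have.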
[0010] The RAP mask may include any suitable configuration. For example, the RAP mask may include any suitable number of pinholes in any random or quasi-random arrangement. The RAP mask can have any suitable shape. For example, the RAP mask may be a generally planar sheet of material having any suitable shape, such as a square, circular, rectangular, or any other regular or irregular shape. In some embodiments, the RAP mask may have a non-planar, curved or three-dimensional shape. The pinholes in the RAP mask may be generally circular in cross-section. Alternatively, the pinholes may be rectangular, square, triangular or have any other regular or irregular shape in cross-section.

[0011] In some embodiments, the pinholes may be arranged in a regular pattern on the mask. In these cases, the mask may define a plurality of pinholes. However, the pinholes may not be arranged randomly across a planar surface of the mask.
[0012] The intensity pattern may be captured by the image sensor in one or more frames. Moreover, the computational device may be configured to construct one or more images in three or more dimensions based on each frame of the intensity pattern, the three or more dimensions including spatial dimensions of the object.
[0013] The three or more dimensions may further include spectral information associated with wavelengths of light diffracted from the object.
[0014] Each frame of the intensity pattern may be captured instantaneously, and the computational device may be configured to construct the one or more images in real-time or near real-time.
[0015] The computational device may be configured to generate video content based on the images constructed in real-time or near real-time.
[0016] Accordingly, in some embodiments, the apparatus may be capable of facilitating digital imaging of the object in five dimensions, including three spatial dimensions of the object, spectral (wavelength) information and time.
[0017] The apparatus may further include an illumination source for illuminating the object. Alternatively, the illumination source may be provided externally to the apparatus, and the apparatus may include an aperture for receiving diffracted light from the object therethrough.
[0018] The illumination source may provide incoherent light for illuminating the object.
[0019] The apparatus may further include a receiving portion for receiving the object for placement at an axial location adjacent the mask defining an array of pinholes. The receiving portion may be configured such that once the object is placed at the axial location within the apparatus, the object is positioned at a predetermined distance from the RAP mask.
[0020] The apparatus may further include a housing for enclosing the mask defining an array of pinholes, the housing having an engagement portion for engagement with the computational device to form a seal such that light passing through the seal is minimised. In one embodiment, the housing may include an aperture at one end thereof for receiving diffracted light from the object therethrough, and the engagement portion at an opposite end thereof for engagement with the computational device.
[0021] The engagement portion may include any suitable engagement means for engaging with the computational device. For example, the engagement portion may include a threaded portion for threaded engagement with a corresponding threaded portion of the computational device. In another example, the engagement portion may include one or more clips for clipping to the computational device. In a further example, the engagement portion may include a cover for covering at least a portion of the computational device. Typically, the engagement portion enables connection between the apparatus and the computational device such that the RAP mask is suitably aligned with an image sensor provided by the computational device.
[0022] In one embodiment, the image sensor is provided by a camera of the computational device. The computational device may be a mobile device, a purpose-built imaging processing system and/or a computing device. Alternatively, the image sensor may be provided by a separate imaging device such as a camera, and the camera may be coupled to the computational device. In some embodiments, the image sensor may be provided in the apparatus for coupling to the computational device. For example, the apparatus may include one or more communication interfaces (e.g. wireless or cable connections) for communicating image data from the image sensor to a computational device.
[0023] The object may be a sample for microscopic imaging and the housing includes an opening for receiving a carrier carrying the sample. The opening may be positioned to allow placement of the sample adjacent the mask defining an array of pinholes. In particular, the opening may be configured such that the carrier and sample are positioned at a predetermined distance from the RAP mask.
[0024] As mentioned, the computational device may include a camera. The camera may provide the image sensor for detecting the intensity pattern when the apparatus is coupled to the computational device.
[0025] According to another aspect of the invention, there is provided a computer-implemented method for digital imaging of an object, the method including detecting, via an image sensor, an intensity pattern representative of light diffracted from the object, the light diffracted from the object being modulated by a mask defining an array of pinholes prior to detection by the image sensor, and digitally constructing one or more images of the object based on the intensity pattern.
[0026] The mask defining an array of pinholes may define a random array of pinholes.
[0027] The computer-implemented method may be implemented via software for installation and execution on a computer processor. The computer processor may include a software application installed thereon for executing one or more of the steps of the computer-implemented method. In alternative embodiments, the software application may be a cloud-based application accessible via a network such as the internet. In some embodiments, the software application may be accessible remotely via a local network.
[0028] The method may include capturing the intensity pattern, via the image sensor, in one or more frames, and constructing one or more images in three or more dimensions based on each frame of the intensity pattern, the three or more dimensions including spatial dimensions of the object.
[0029] The three or more dimensions may further include information associated with wavelengths of light diffracted from the object.
[0030] The computer-implemented method may include capturing each frame of the intensity pattern instantaneously, and constructing the one or more images in real-time or near real-time.
[0031] The computer-implemented method may further include generating video content based on the images constructed in real-time or near real-time.
[0032] Advantageously, according to embodiments of the invention, the method may be capable of digitally constructing the one or more images in five dimensions, including three spatial dimensions of the object, spectral (wavelength) information and time.

[0033] Typically, in microscope or telescope applications, the recorded image is only a 2D image (e.g. including x and y dimensions) of a 3D object. One therefore has to focus and refocus to get images at different depths. According to embodiments of the present invention, the recorded image can be processed into a 3D image (x, y and z dimensions). In other words, the object can be seen at different depths from a single image shot or recording. In addition, embodiments of the invention provide images of the object illuminated using light of different wavelengths. For example, a portion of the object that emits a green colour can be viewed as a different image to a portion of the object that emits a red colour, and so forth. This may be achieved using a monochrome camera. This spectral/wavelength dimension provides a fourth dimension.
[0034] Generally, 3D imaging requires more than two camera shots (e.g. for an inline setup). However, according to embodiments of the invention, only a single camera shot is required for the digital construction of images for the object. Consequently, it is possible to achieve high speed imaging using embodiments of the present invention, for example in real time or near real-time. Accordingly, time provides the fifth dimension.
[0035] Digitally constructing one or more images of the object based on the intensity pattern may include constructing the one or more images based on a library of spatio-spectral intensity information for an approximated point object.
[0036] The method may further include compiling a library of spatio-spectral intensity information for an approximated point object by carrying out the following steps: providing a pinhole mask defining a single pinhole to approximate a point object; positioning the pinhole mask at one or more different axial locations with respect to an illumination source; using the illumination source, sequentially illuminating the pinhole mask at one or more different wavelengths of light; recording a plurality of intensity patterns generated by modulated light from the pinhole mask for each wavelength of light and each axial location; and compiling a library of point spread functions based on the recorded intensity patterns, each point spread function within the library approximating a point spread function of a point object for a respective axial location with respect to the illumination source and wavelength of light.
[0037] The library of spatio-spectral intensity information including the relevant intensity patterns can be compiled in a number of suitable ways. For example, the pinhole mask may be positioned at a discrete number of axial locations with respect to the illumination source, and the pinhole mask may be illuminated at a plurality of different wavelengths of light for each axial location to record a plurality of intensity patterns to compile the library.
[0038] In another example, the pinhole mask may be illuminated at a plurality of different wavelengths of light, and for each wavelength of light, the pinhole mask may be positioned at a discrete number of axial locations with respect to the illumination source to record a plurality of intensity patterns to compile the library.
[0039] In another example, the pinhole mask may be positioned at a single axial location with respect to the illumination source, and the pinhole mask may be illuminated at a plurality of different wavelengths of light for the single axial location to record a plurality of intensity patterns to compile the library. Remaining intensity patterns corresponding to different axial locations can be estimated based on a predetermined relationship between the axial location of the pinhole mask and wavelengths of light, as discussed in further detail below with reference to Figures 7 and 13.
[0040] In a further example, the pinhole mask may be illuminated at a single wavelength of light, and the pinhole mask may be positioned at a discrete number of axial locations with respect to the illumination source to record a plurality of intensity patterns to compile the library. Remaining intensity patterns corresponding to different wavelengths of light can be estimated based on a predetermined relationship between the axial location of the pinhole mask and wavelengths of light, as discussed in further detail below with reference to Figures 7 and 13.
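One way to see why such estimation is plausible: under the paraxial Fresnel approximation, the propagated intensity pattern depends on the wavelength λ and the distance z only through the product λz, so patterns recorded at one (λ, z) pair also describe other pairs with the same product. The sketch below demonstrates this numerically; the grid, pinhole layout and (λ, z) pairs are hypothetical choices for illustration, and real systems will deviate where the paraxial approximation breaks down.

```python
import numpy as np

def fresnel_intensity(aperture, dx, wavelength, z):
    """Paraxial Fresnel propagation via the transfer-function method."""
    n = aperture.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # The quadratic phase depends only on the product wavelength * z.
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    field = np.fft.ifft2(np.fft.fft2(aperture) * H)
    return np.abs(field)**2

rng = np.random.default_rng(0)
n, dx = 256, 10e-6                      # hypothetical grid and pixel pitch
mask = np.zeros((n, n))
ys = rng.integers(20, n - 20, 40)
xs = rng.integers(20, n - 20, 40)
mask[ys, xs] = 1.0                      # crude stand-in pinhole array

i1 = fresnel_intensity(mask, dx, 650e-9, 0.02)   # red LED, 2 cm
i2 = fresnel_intensity(mask, dx, 520e-9, 0.025)  # green LED, 2.5 cm: same lambda*z
# i1 and i2 are numerically identical because 650e-9*0.02 == 520e-9*0.025
```

Changing either λ or z alone produces a different pattern; only the product matters in this regime, which is the reciprocity exploited when estimating the remaining library entries.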
[0041] In one embodiment, the computer-implemented method may further include recording a plurality of approximated point object intensity patterns by carrying out the following steps:
1. providing a pinhole mask defining a single pinhole to approximate a point object,
2. positioning the pinhole mask at an initial axial location with respect to an illumination source,
3. using the illumination source, sequentially illuminating the pinhole mask at a plurality of different wavelengths of light,
4. recording an intensity pattern generated by modulated light from the pinhole mask for each wavelength of light,
5. compiling a point spread function sub-library based on the recorded intensity patterns in step 4, each point spread function within the sub-library approximating a point spread function of a point object for a respective axial location with respect to the illumination source and wavelength of light,
6. moving the pinhole mask to a further axial location with respect to the illumination source and repeating steps 3 to 5,
7. repeating steps 1 to 6 for a predetermined number of axial locations with respect to the illumination source, and
8. compiling the library of spatio-spectral information based on the point spread function sub-libraries.
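The steps above can be sketched as the following nested loop. Here `record_psf` is a hypothetical stand-in for illuminating the single-pinhole mask and capturing a frame from the image sensor, and the wavelength and depth grids are illustrative only:

```python
import numpy as np

# Hypothetical sampling grids; a real system would use the wavelengths
# and axial locations appropriate to the application.
wavelengths = [450e-9, 550e-9, 650e-9]   # metres
axial_locations = [0.02, 0.03, 0.04]     # metres

def record_psf(wavelength, z, rng):
    # Stand-in for steps 3-4: illuminate the single-pinhole mask at this
    # wavelength/depth and capture the intensity pattern behind the RAP mask.
    return rng.random((64, 64))

rng = np.random.default_rng(2)
library = {}
for z in axial_locations:                        # steps 6-7: step through depths
    sub_library = {}
    for lam in wavelengths:                      # step 3: sequential wavelengths
        sub_library[lam] = record_psf(lam, z, rng)   # steps 4-5: record PSFs
    library[z] = sub_library                     # step 8: merge sub-libraries
```

Each `library[z][lam]` entry then serves as the PSF used to decode object patterns recorded at that depth and wavelength.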
[0042] Digitally constructing one or more images of the object may include determining a correlation between the intensity pattern representative of light diffracted from the object, and a point spread function selected from the library of spatio-spectral information.
[0043] The correlation may be implemented using a cross-correlation or autocorrelation function. In some embodiments, the correlation may be implemented using a linear or non-linear correlation. In some embodiments, constructing one or more images of the object may include using any one or more of a Lucy-Richardson algorithm, a Wiener filter, a matched filter and a phase-only filter.
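A hedged sketch of one such tunable correlation filter follows: the exponents `a` and `b` play the role of the tuned values mentioned earlier, though the specific values and the synthetic stand-in PSF and object are assumptions for illustration.

```python
import numpy as np

def tunable_xcorr(pattern, psf, a=1.0, b=0.0):
    """Cross-correlation with tunable spectral magnitudes |P|^a and |H|^b.
    (a, b) = (1, 1) is a matched filter, (1, 0) a phase-only filter and
    (0, 0) pure phase correlation; useful values are found by tuning."""
    P, H = np.fft.fft2(pattern), np.fft.fft2(psf)
    F = (np.abs(P)**a * np.exp(1j * np.angle(P))) * \
        (np.abs(H)**b * np.exp(-1j * np.angle(H)))
    return np.abs(np.fft.ifft2(F))

# Toy check: a single point emitter imaged through a sparse stand-in PSF.
rng = np.random.default_rng(4)
psf = (rng.random((64, 64)) < 0.02).astype(float)
obj = np.zeros((64, 64))
obj[20, 45] = 1.0
pattern = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

recon = tunable_xcorr(pattern, psf, a=0.0, b=0.0)  # pure phase correlation
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

With pure phase correlation the reconstruction of a single point emitter collapses to a sharp peak at the emitter's location, which illustrates why tuning the exponents can suppress the background noise of a plain matched filter.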
[0044] In some embodiments, the pinholes in the mask defining an array of pinholes each have a diameter in the range of about 1 μm to 200 μm. In one embodiment, the pinholes in the RAP mask are roughly 70 μm to 90 μm in diameter.
[0045] Typically, the diameter of the pinholes in the RAP mask is greater than the wavelength of light. The diameter may have a maximum size limit of roughly a few hundred micrometres.

[0046] The locations of pinholes in the mask defining an array of pinholes may be determined using an iterative optimisation method to increase signal to noise ratio.
[0047] The iterative optimisation method may include a first optimisation step to determine a first simulated mask defining a random array of pinholes having the highest signal to noise ratio from a plurality of simulated masks each defining a different random array of pinholes, and a second optimisation step to determine a second simulated mask defining a quasi-random array of pinholes by changing the position of each of the pinholes in the first simulated mask from the first optimisation step so as to yield the highest signal to noise ratio. The position of each of the pinholes in the first simulated mask may be moved within a predetermined range.
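A toy version of this two-step procedure might look as follows. The figure of merit used here (autocorrelation peak over the worst sidelobe, a common proxy for reconstruction SNR with correlation decoding) and all sizes, counts and jitter ranges are assumptions for illustration, not the patent's actual optimisation:

```python
import numpy as np

def mask_from_points(points, n=64):
    """Binary mask with a pinhole at each (row, col) point."""
    m = np.zeros((n, n))
    m[points[:, 0] % n, points[:, 1] % n] = 1.0
    return m

def snr_score(mask):
    # Illustrative figure of merit: autocorrelation peak over worst sidelobe.
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(mask))**2))
    sidelobes = np.delete(ac.ravel(), 0)
    return ac[0, 0] / (np.abs(sidelobes).max() + 1e-12)

rng = np.random.default_rng(3)

# Step 1: pick the best of many fully random pinhole layouts.
candidates = [rng.integers(0, 64, (30, 2)) for _ in range(50)]
best = max(candidates, key=lambda p: snr_score(mask_from_points(p)))
best_score = snr_score(mask_from_points(best))

# Step 2: jitter each pinhole within a small range and keep improvements,
# yielding a quasi-random refinement of the step-1 mask.
for _ in range(200):
    trial = best + rng.integers(-2, 3, best.shape)
    score = snr_score(mask_from_points(trial))
    if score > best_score:
        best, best_score = trial, score
```

By construction the step-2 mask never scores worse than the best step-1 mask, matching the two-stage coarse-then-fine structure described above.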
[0048] According to another aspect of the invention, there is provided a computer device having a computer-readable storage medium storing instructions that, when executed by the computer device, cause the computer device to perform a computer-implemented method as described herein.

[0049] According to a further aspect of the invention, there is provided a system for digital imaging of an object including an apparatus as described herein, and a computer device having a computer-readable storage medium storing instructions that, when executed by the computer device, cause the computer device to perform a computer-implemented method as described herein, wherein the apparatus is coupled to a camera of the computer device.
[0050] Embodiments of the invention may therefore advantageously provide a lensless, interference-less, motionless, non-scanning, space, spectrum and time resolved five-dimensional incoherent imaging technique using a mask defining an array of pinholes. In particular, light diffracted from an approximated point object may be modulated by a random pinhole array in the RAP mask. Unique spatio-spectral intensity signatures corresponding to different wavelengths and distances with respect to the illumination source may be recorded and catalogued in a library. The library may be used to decode the intensity pattern recorded for an object into multiple spatio-spectral images of the object.

[0051] As discussed in further detail below, a depth-wavelength relationship (governing a relationship between wavelengths of light and distances of the approximated point object with respect to the illumination source) may be exploited to match one spatio-spectral signature to another to see colour from depth and depth from colour. Embodiments of the invention may be used to see an event synchronously from different spatio-spectral perspectives. Advantageously, embodiments of the invention may be useful for various applications including telescopy, microscopy, imaging through scattering layers, imaging under extreme conditions, photospectrometry, hyperspectral imaging, creation of 4D videos and video encryption. In addition, embodiments of the invention may be extended to non-visible bands of the electromagnetic spectrum, as it is relatively easy to manufacture a pinhole array for non-visible bands of the electromagnetic spectrum.
[0052] In order that the invention may be more readily understood and put into practice, one or more preferred embodiments thereof will now be described, by way of example only, with reference to the accompanying drawings.
Brief Description of Drawings
[0053] Figure 1 A is a schematic diagram of an imaging system for digital imaging of an object according to one embodiment of the invention.
[0054] Figure 1 B is a schematic diagram of an imaging system for digital imaging of an object according to another embodiment of the invention.
[0055] Figure 2 is a further schematic diagram of an imaging system for digital imaging of an object according to one embodiment of the invention.
[0056] Figure 3 is a process flow diagram illustrating a method of compiling a library of spatio-spectral information for a computer implemented method according to embodiments of the invention.
[0057] Figure 4 is a schematic diagram of a system for digital imaging of an object according to one embodiment of the invention further illustrating a process for capturing and processing image data from an image sensor using a computational device.
[0058] Figure 5 is a schematic diagram including a process flow chart illustrating a computer implemented method for digital imaging of an object according to an embodiment of the invention.

[0059] Figure 6A is a plot of an autocorrelation function for different radii of the pinholes in a RAP mask.
[0060] Figure 6B illustrates images of the intensity pattern (IPSF) for R = 10μm, 20μm, 30μm, 40μm and 50μm, where R is the radius of the pinholes in a RAP mask.
[0061] Figure 6C is a plot of normalised intensity IR(x = y = 0) for different wavelengths and for different radii of the pinholes of the RAP mask from R = 10μm to 50μm in steps of 10μm, where IR represents a cross-correlation with IPSF(λ) when the wavelength is varied from λ = 551 nm to 650 nm in steps of 1 nm.
[0062] Figure 7 illustrates a plot of the normalised cross-correlation IR(x = 0, y = 0) between IPSF (u = 10 cm, v = 10 cm, λ1 = 617 nm) and IPSF (u, v, λ2 = 530 nm) for different values of distances u and v, and a separate plot of the normalised autocorrelation of IPSF (u = 10 cm, v = 10 cm, λ1 = 617 nm). The inset in Figure 7 shows the plot of the autocorrelation and the plot of cross-correlation on the same axis.
[0063] Figure 8A is a schematic diagram illustrating an iterative optimisation procedure for improving the Signal to Noise Ratio (SNR) in the digital construction of the one or more images for the object.
[0064] Figure 8B is a plot of the resulting SNR as a function of number of iterations whilst executing the iterative optimisation procedure as shown in Figure 8A.
[0065] Figure 8C illustrates images of simulated quasi-RAP (QRAP) masks for a) the lowest SNR of a first optimisation process, b) the highest SNR of the first optimisation process and c) the highest SNR of a second optimisation process; and d), e) and f) the corresponding image reconstruction results of a test object using the respective simulated masks.
[0066] Figure 9A is a schematic diagram of a two-channel experimental set up, in which a two-channel optical set up illuminated by red and green light emitting diodes with central wavelength and FWHM of 617 nm, 18 nm and 530 nm, 33 nm respectively.
[0067] Figure 9B illustrates the point spread intensity pattern for u = 10 cm, v = 10 cm for wavelengths λ1 = 617 nm and λ2 = 530 nm.
[0068] Figure 9C is a plot of the normalized cross-correlation value at (x = y = 0) for z = -35 mm to 35 mm for λ1 = 617 nm. The Full Width at Half Maximum (FWHM) is approximately 3 mm.

[0069] Figure 9D is a plot of the normalized cross-correlation value at (x = y = 0) for z = -35 mm to 35 mm for λ2 = 530 nm. The FWHM is approximately 2.5 mm.
[0070] Figure 10A illustrates an object intensity pattern recorded for u = 10 cm and v = 10 cm for a sample from the 1951 US Air Force resolution test chart (referred to herein as an USAF object) illuminated by light having a wavelength of λ1 = 617 nm.
[0071] Figure 10B illustrates an object intensity pattern recorded for u = 10 cm and v = 10 cm for a sample from a National Bureau of Standards (NBS) test chart (referred to herein as an NBS object) illuminated by light having a wavelength of λ2 = 530 nm.
[0072] Figure 10C illustrates an object intensity pattern recorded for u = 10 cm and v = 10 cm for both USAF and NBS objects illuminated by light having wavelengths λ1 = 617 nm and λ2 = 530 nm.
[0073] Figure 11 illustrates image reconstruction results using a non-linear filter for different values of α and β in Formula (6) between -1 and 1 in steps of 0.2. The optimal non-linear filter for entropy value 1 was found to occur at (α = 0 and β = 0.6).
[0074] Figure 12 illustrates image reconstruction results for a two-channel experimental imaging system according to an embodiment of the invention in which one of two objects is moved whilst another one of the two objects is kept stationary to demonstrate 5D imaging capabilities of the experimental imaging system as shown in Figure 9A.
[0075] Figure 13 illustrates intensity patterns and corresponding reconstructed images to validate the depth-wavelength relationship with respect to the two-channel experimental imaging system as shown in Figure 9A.
[0076] Figures 14A and 14B respectively illustrate an experimental imaging system setup according to a first and a second scenario to evaluate the ability of the imaging system to function under extreme conditions.
[0077] Figure 15A is a schematic diagram illustrating an optical configuration of an imaging system using a lens.
[0078] Figure 15B is a schematic diagram illustrating an optical configuration of an imaging system using a RAP mask.
[0079] Figure 15C is a schematic diagram illustrating an optical configuration of the imaging system using a RAP mask as shown in Figure 15B.

Detailed Description
[0080] As shown in Figure 1A, a system 100 for digital imaging of an object 102 includes an apparatus 104 for coupling to a computational device 106. The apparatus 104 includes a housing 108 (represented by broken lines) having engagement means (represented by solid line 110) for engagement with a computational device 106. The housing 108 encloses a mask 112 defining an array of pinholes such as a random or quasi-random array of pinholes (also referred to herein as a RAP mask 112) for modulating light 114 diffracted from the object 102.
[0081] The engagement portion 110 may form a seal with the computational device 106 such that any light passing through the seal from within the system 100 to an external environment and vice versa is minimised. In this case, the housing 108 when coupled to the computational device 106 via the engagement portion 110 may provide a complete seal such that external light cannot pass through walls of the housing 108 and enter the apparatus 104, and similarly internal light from the illumination source 116 cannot pass through the walls of the housing 108 to the external environment. The embodiment of the invention as shown in Figure 1A may be used in mobile device microscopy applications, in which the apparatus 104 is configured for mounting to a mobile device 106 such that the RAP mask 112 is axially aligned with a built-in camera of the mobile device 106. The built-in camera of the mobile device 106 may provide a suitable image sensor 124.
[0082] The engagement portion 110 may include any suitable engagement means for engaging with the computational device 106. For example, the engagement portion 110 may include a threaded portion for threaded engagement with a corresponding threaded portion of the computational device 106. In another example, the engagement portion may include one or more clips for clipping to the computational device 106. In a further example, the engagement portion may include a cover for covering at least a portion of the computational device 106. Typically, the engagement portion enables connection between the apparatus 104 and the computational device such that the RAP mask 112 is suitably aligned with an image sensor 124 provided by the computational device 106.
[0083] In some embodiments, the apparatus 104 may include an illumination source 116 for illuminating the object 102. Typically, the illumination source 116 provides incoherent light for illuminating the object 102. The apparatus 104 may further include a power source (e.g. a battery, solar power module and the like) for powering the illumination source 116. In some embodiments, the apparatus 104 may be configured for connection to an external power source (e.g. mains power supply, power supply from the computational device and the like).
[0084] In some embodiments, the apparatus 104 includes a receiving portion for receiving the object 102 for placement at an axial location adjacent the mask 112 defining an array of pinholes. Generally, the receiving portion is configured such that once the object 102 is placed at the axial location within the apparatus 104, the object 102 is positioned at a predetermined distance from the RAP mask 112.
[0085] Typically, the object 102 is a sample for microscopic imaging and the housing 108 includes an opening 126 for receiving a carrier 128 for carrying the sample 102. The opening 126 is positioned to allow placement of the sample 102 at a predetermined distance from the RAP mask 112 and at a position that is axially aligned with a centre of the RAP mask 112. In particular, the opening 126 may be configured such that when the carrier 128 is received within the housing 108, a minimum amount of light passes through the opening 126. For example, the opening 126 may include a seal for blocking light.
[0086] As shown in Figure 1 B, in some embodiments, the apparatus 104’ only includes a housing 108’ enclosing a RAP mask 112. In these embodiments, the object 102 and illumination source 116 can be provided externally to the apparatus 104’. In addition, the housing 108’ may include an aperture 118 at one end thereof for receiving diffracted light 114 from the object 102 therethrough, and the engagement portion 110 at an opposite end of the housing 108’ for engagement with the computational device 106. In Figure 1 B, like numerals refer to like features described above with reference to Figure 1 A.
[0087] Now referring back to Figure 1A, an intensity pattern 120 generated by modulated light 122 from the RAP mask 112 is detectable using an image sensor 124 for digital construction of one or more images of the object by the computational device 106. For ease of reference, the digital construction of one or more images of the object is interchangeably referred to herein as image reconstruction or digital image reconstruction.

[0088] As described in further detail below with reference to Figures 2 and 5, the intensity pattern 120 is typically captured by the image sensor 124 in a plurality of frames.
[0089] In some embodiments, the image sensor 124 is provided by a camera of the computational device 106. The computational device 106 may be a mobile device, a purpose-built imaging processing system and/or a computing device. Alternatively, the image sensor 124 may be provided by a separate imaging device such as a camera 160 (e.g. see Figure 4), and the camera 160 may be a peripheral device coupled to the computational device 106. In some alternative embodiments, the image sensor may be provided in the apparatus 104 for coupling to the computational device 106.
[0090] In particular, the computational device 106 has a computer-readable storage medium storing instructions that, when executed, cause the computational device 106 to perform a computer-implemented method 200 to digitally construct one or more images of the object 102 based on the intensity pattern 120. The computer-implemented method 200 is described in further detail with reference to Figures 2 to 5.
[0091] The computer device 106 and computer implemented method 200 may be considered as a computer system. Generally, the computer system includes an image data acquisition module 202 and a digital image construction module 204. The data acquisition module 202 is configured for acquiring and processing image data from the image sensor 124 into an appropriate format for image construction via a digital image construction module 204. This is described in further detail with reference to Figure 4. The digital construction module 204 further processes image data received from the data acquisition module 202 and digitally constructs one or more images of the object 102 by application of a correlation function using data from a spatio-spectral database library 206. This is described in further detail with reference to Figure 5. The compilation of the spatio-spectral library is described in detail with reference to Figures 2 and 3.
[0092] Typically, the compilation of the spatio-spectral library is conducted during configuration of the computer system. The spatio-spectral library may be specific to a particular imaging system configuration for the selected RAP mask 112, image sensor 124 and their relative axial positions with respect to one another.

[0093] Figure 2 illustrates an imaging system setup 140 for compiling the spatio-spectral library 206. The setup 140 includes an incoherent illumination source 116 capable of providing different wavelengths of light, a pinhole mask 142 defining a single pinhole to approximate a point object, a RAP mask 112 and an image sensor 124. The illumination source 116, pinhole mask 142, RAP mask 112 and image sensor 124 are axially aligned. As explained in further detail with reference to Figure 3 below, the pinhole mask 142 is movable in the axial direction relative to the RAP mask 112 (thus altering the relative axial distance between the pinhole mask 142 and the illumination source 116) to record varying intensity patterns 120 detected using the image sensor 124 for a given wavelength of light. The process can be repeated for each different wavelength of light within a desirable spectral range (e.g. visible light spectrum, infrared spectrum and/or ultraviolet spectrum) to compile the library 206.
[0094] A method 300 for compiling the library 206 using the setup 140 will now be described with reference to Figure 3.
[0095] At step 302, a first wavelength of light λ1 is selected, and a plurality of axial depth values u1, u2, u3... um are determined. Typically, the intervals of the axial depth values u are determined by the axial resolution of the imaging system. Each depth value u defines an axial distance between the pinhole mask 142 and the RAP mask 112 (see Figure 2). As mentioned, the pinhole mask 142 defines a single pinhole to approximate a point object. Typically, the smaller the pinhole, the more accurate the approximation. The diameter of the pinhole is generally greater than the wavelength of light. The image sensor 124 is used to record a first intensity pattern of the approximated point object for λ1 and u1, and this is saved in the library database 206.
[0096] At step 304, the illumination source 116 is used to provide light at the first wavelength λ1. Whilst keeping the wavelength of light λ1 constant, the pinhole mask 142 is moved sequentially to each of the plurality of positions defined by depth values u1, u2, u3... um, and the image sensor 124 is used to record a plurality of intensity patterns for the approximated point object corresponding to wavelength λ1 and each of the depth values u1, u2, u3... um to thereby compile a first Point Spread Function (PSF) sub-library PSF(λ1, um), M = 1, ... m.
[0097] At step 306, a further wavelength of light λn+1 is selected (where N = 1, 2 ... n), whilst keeping the plurality of axial depth values u1, u2, u3... um the same as those in step 302.

[0098] At step 308, the illumination source 116 is used to provide light at the further wavelength λn+1. Whilst keeping the wavelength of light λn+1 constant, the pinhole mask 142 is moved sequentially to each of the plurality of positions defined by depth values u1, u2, u3... um, and the image sensor 124 is used to record a plurality of intensity patterns for the approximated point object corresponding to wavelength λn+1 and each of the depth values u1, u2, u3... um to thereby compile a further Point Spread Function (PSF) sub-library PSF(λn+1, um), N = 1, ... n; M = 1, ... m.
[0099] At query step 310, the method 300 determines whether all wavelengths within a desired electromagnetic spectrum (or spectra) have been used to record the sub-libraries PSF(λn+1, um), N = 1, ... n; M = 1, ... m. If not, the method 300 returns to step 306. If so, the method 300 proceeds to step 312.
[0100] At step 312, a sub-library of Point Spread Functions (PSF) for each wavelength of light in a desired spectrum λ1, λ2, λ3... λn (N = 1, ... n) has been recorded, where the intensity patterns of the approximated point object at each wavelength of light are recorded using the image sensor 124 for a plurality of depth values u1, u2, u3... um (M = 1, ... m).
[0101] At step 314, the full collection of PSF sub-libraries forms the complete PSF library PSF(λn, um), N = 1, ... n; M = 1, ... m (also referred to herein as the spatio-spectral library 206). Each point spread function PSF(λn, um) in the library 206 is converted to a numerical value IPSF(λn, um), N = 1, ... n; M = 1, ... m (e.g. an array of a suitable size and precision).
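The compilation of steps 302 to 314 may be sketched as a nested loop producing a map keyed by (wavelength, depth). The record_frame callable below is a hypothetical stand-in for the physical step of illuminating the pinhole mask 142 and grabbing a frame from the image sensor 124; the function and parameter names are assumptions.

```python
import numpy as np

def compile_psf_library(record_frame, wavelengths, depths):
    """Build the spatio-spectral library PSF(lambda_n, u_m).

    `record_frame(wavelength, depth)` stands in for the hardware
    recording step and must return a 2-D intensity array.
    """
    library = {}
    for lam in wavelengths:          # steps 302 / 306: select a wavelength
        for u in depths:             # steps 304 / 308: scan the depth values
            frame = record_frame(lam, u)
            # step 314: store each PSF as a double-precision array
            library[(lam, u)] = np.asarray(frame, dtype=np.float64)
    return library
```

In use, a simulated recorder can be substituted for the camera, e.g. `compile_psf_library(lambda lam, u: np.ones((8, 8)), [530e-9, 617e-9], [0.08, 0.10, 0.12])` yields a six-entry library.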
[0102] Now referring to Figure 4, when operating the system 100 (as shown in Figures 1A and 1 B) to construct one or more digital images of an object 102, the object 102 is placed at a predetermined distance ‘u’ with respect to the RAP mask 112 within the apparatus 104 (see Figure 1A). Light 162 diffracted from the object 102 is modulated by the RAP mask 112. A camera 160 having an image sensor can be used to detect an intensity pattern created by the modulated light 164. One or more frames of the intensity pattern can be captured using the camera 160.
[0103] Typically, the camera 160 is coupled to a computational device having software installed thereon to process image data obtained from the camera 160 to construct one or more digital images of the object 102 via a data acquisition module 202 and a digital image construction module 204 (Figure 5).

[0104] The data acquisition module 202 acquires image data from the camera 160 and processes the image data into a suitable format for input into the digital image construction module 204 in accordance with a method illustrated in the flow diagram of Figure 4.
[0105] At step 208, image data in the form of one or more frames of the intensity pattern for object 102 (also referred to herein as the object intensity pattern) is captured using camera 160 and obtained by the data acquisition module 202.
[0106] At step 210, raw image data from step 208 is converted to an appropriate image format (e.g. JPEG, BMP).
[0107] At step 212, the image is loaded into application software for further preprocessing.
[0108] At step 214, the application software extracts a single RGB channel from the converted image data from step 210. Any suitable RGB channel may be extracted without affecting the implementation and outcome of the data acquisition module 202.
[0109] At step 216, the extracted single RGB channel of the converted image data is further processed to obtain a numerical representation of the object intensity pattern lo in the form of a matrix. Typically, the matrix includes values having appropriate size and precision (e.g. an array of 64-bit (8-byte) double-precision floating-point values).
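Steps 208 to 216 may be sketched as follows, assuming the frames have already been decoded from JPEG/BMP into (H, W, 3) arrays; averaging the captured frames and the function name are assumptions introduced for illustration.

```python
import numpy as np

def to_intensity_matrix(rgb_frames, channel=0):
    """Combine one or more captured RGB frames (step 208), keep a
    single RGB channel (step 214) and return a double-precision
    matrix Io (step 216). Any channel index may be chosen."""
    stack = np.stack(rgb_frames).astype(np.float64)
    mean_frame = stack.mean(axis=0)      # average the captured frames
    return mean_frame[:, :, channel]     # one RGB channel, float64
```

The result is the matrix handed to the digital image construction module 204.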
[0110] Now referring to Figure 5, the digital image construction module 204 constructs one or more digital images of the object 102 in accordance with the following process steps.
[0111] At step 210, the digital image construction module 204 obtains the numerical representation of the object intensity pattern Io, for example, in the form of a double precision matrix.
[0112] At step 212, the module 204 calculates the Fourier transform of the object intensity pattern matrix Io and the PSF matrix IPSF(λn, um) for a selected wavelength value and depth value.
[0113] At step 214, a low pass filter is applied to remove noise.
[0114] At step 216, a cross-correlation function (6) as described in further detail below is applied based on the object intensity pattern matrix Io and the PSF matrix IPSF(λn, um) to obtain a numerical value for the constructed image IR. The cross-correlation function (6) includes tuned values for α and β to optimise background noise. This is described in further detail below with reference to Figure 11.
[0115] At step 218, a filter such as a median filter is applied to determine best values (e.g. median values) for IR for use in image construction.
[0116] At step 220, a graphical representation of the constructed image based on numerical values IR is generated and displayed via a display of the computer device 106.
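Steps 210 to 220 may be sketched with numpy only. The circular low-pass cutoff, the default (alpha, beta) values and the 3×3 median built from rolled copies are assumed implementation details, not prescribed by the description.

```python
import numpy as np

def median3(a):
    """3x3 median filter built from rolled copies (step 218)."""
    shifts = [np.roll(np.roll(a, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

def reconstruct(i_obj, i_psf, alpha=0.0, beta=0.6, cutoff=0.5):
    """Steps 210-220: Fourier transforms, low-pass noise filter,
    tuned (alpha, beta) cross-correlation, then a median filter."""
    f_obj = np.fft.fftshift(np.fft.fft2(i_obj))   # step 212
    f_psf = np.fft.fftshift(np.fft.fft2(i_psf))
    h, w = i_obj.shape                            # step 214: low-pass
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    lp = np.hypot(yy / (h / 2.0), xx / (w / 2.0)) <= cutoff
    # step 216: nonlinear correlation of Io with IPSF
    spec = (np.abs(f_obj) ** beta * np.exp(1j * np.angle(f_obj)) *
            np.abs(f_psf) ** alpha * np.exp(-1j * np.angle(f_psf)))
    i_r = np.abs(np.fft.ifft2(np.fft.ifftshift(spec * lp)))
    return median3(np.fft.fftshift(i_r))          # step 218
```

The returned array corresponds to the IR values displayed at step 220.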
[0117] Embodiments of the invention therefore provide a space, spectrum and time resolved five-dimensional (5D) imaging method using a RAP mask 112. When imaging an object 102 using a RAP mask 112, a random or quasi-random array of images is generated in an image plane. The recorded intensity pattern 120 is the sum of the randomly located replicas of the image of the object 102. This is true whether the object 102 is a point object or a more complicated 2D object. For a spatially incoherent illumination, an object 102 can be considered as a collection of uncorrelated point objects.
[0118] If IPSF is the Point Spread Function (PSF), then the object intensity pattern Io of an object O (102) can be expressed as a convolution of the object with the IPSF, i.e., Io = O ⊗ IPSF, where ⊗ is the 2D convolutional operator. Therefore, the image O' of object O is reconstructed by O' = Io * IPSF, where * is a 2D correlation operator. For a 3D object illuminated by many wavelengths, Io can be considered as a sum of shifted and scaled multiple IPSF corresponding to different depths z and wavelengths λ, expressed as

Io = Σi=1…p Σj=1…q O(λi, zj) ⊗ IPSF(λi, zj),

[0119] where p and q are the number of wavelengths of light and depths (depth also previously referred to as ‘u’, denoting the distance between the approximated point object 142 and the RAP mask 112; in this case z is Δu), and i = 1, 2, 3 ... p and j = 1, 2, 3 ... q.
[0120] If a library of IPSF(λi, zj), where i ∈ [1, p] and j ∈ [1, q], was pre-recorded for different values of z and λ, then Io can be reconstructed into wavelength and depth specific images of the object. When imaging objects using a pinhole, any change in the axial location of the object or wavelength creates an image of the object with a different degree of defocus. Contrarily, when an object is imaged using a RAP, unique random spatio-spectral intensity signatures are obtained for different values of depth and wavelength. As illustrated in Figure 2, light from an incoherent source 116 critically illuminates an approximated point object 142; the diffracted light is modulated by a RAP mask 112 and the intensity pattern 120 is recorded by a monochrome camera (not shown). A library 206 of spatio-spectral intensity signatures is recorded, catalogued and stored as a library function IPSF(z, λ). Typically, the above process is conducted only once. In the embodiments described herein, an object 102 is placed within the axial boundaries of the library 206 and illuminated by wavelengths of light within the spectral limits of the library 206, and the object intensity pattern Io (120) is recorded. Preferably, the object intensity pattern Io is recorded under identical conditions and distances as IPSF(z, λ).
Theoretical Analysis
[0121] The theoretical analysis is from the object plane (o plane) to the sensor plane (s plane). Light from an incoherent point located at (r0, u) with an amplitude of √Io reaches the plane of the RAP mask 112 with a complex amplitude given by

ψ1 = C1 √Io L(r0/u) Q(1/u),

where C1 is a complex constant, and L(s/u) = exp[i2π(sx x + sy y)/(λu)] and Q(1/u) = exp[iπ(x² + y²)/(λu)] are the linear and quadratic phase factors. The RAP mask 112 located at the m plane, consisting of a random array of N pinholes, can be expressed as

M(x, y) = Σk=1…N δ(x − xk, y − yk) ⊗ circ(r/R),

where δ is a Delta function, R is the radius of the pinhole, ⊗ is a two dimensional convolutional operator, (xk, yk) = C2·U, U is a uniform random variable, and so xk and yk are randomly distributed on [0, C2]. The above convolution operation creates circles in the locations of the Delta functions. The complex amplitude after the mask is given as ψ2 = ψ1 M(x, y). The intensity pattern recorded by the image sensor located at a distance of v for a single point is given as

IPSF(rs; r0, u) = |ψ2 ⊗ Q(1/v)|². (1)
[0122] The above equation can be expressed as a shifted version of the intensity pattern of an on-axis point,

IPSF(rs; r0, u) = IPSF(rs − (v/u)r0; 0, u).
[0123] A two-dimensional object located in the object plane o, when illuminated by a spatially incoherent light, can be considered as a collection of M point objects given by

O = Σk=1…M ak δ(r − rk).
[0124] Due to the lack of spatial coherence, the object intensity pattern obtained in the s plane with the same mask is the summation of the shifted and scaled point spread intensity functions, given as

Io = Σk=1…M ak IPSF(rs − (v/u)rk; 0, u).
[0125] The image of the object is reconstructed by a cross-correlation between Io and IPSF,

O' = Io * IPSF = Σk=1…M ak Λ(rs/MT − rk),

[0126] where the transverse magnification MT = (v/u) and Λ is a delta-like function with a maximum at the origin and negligible values in places other than the origin.
[0127] The above theoretical analysis demonstrates that it is possible to reconstruct the object information by a cross-correlation between the object intensity pattern and the point spread function. For an object consisting of multiple planes and illuminated by different wavelengths, the object intensity pattern is the summation of the object intensity patterns at different planes and different wavelengths,

Io = Σi=1…p Σj=1…q Io(λi, zj),

which can be reconstructed at different wavelengths and depths using

O'(λi, zj) = Io * IPSF(λi, zj),

where p and q are the number of wavelengths and depths, i = 1, 2, 3 ... p and j = 1, 2, 3 ... q.
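The relationships Io = O ⊗ IPSF and O' = Io * IPSF can be checked numerically. The sketch below, with an assumed 128 × 128 grid and a random-dot stand-in for the RAP intensity pattern, recovers the two point locations of a toy object:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128

# A sparse random-dot stand-in for IPSF and a simple two-point object.
i_psf = (rng.random((N, N)) > 0.995).astype(float)
obj = np.zeros((N, N))
obj[40, 40] = 1.0
obj[80, 90] = 1.0

# Io = O (convolved with) IPSF : incoherent image formation.
i_obj = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(i_psf)))

# O' = Io * IPSF : reconstruction as a cross-correlation.
rec = np.real(np.fft.ifft2(np.fft.fft2(i_obj) *
                           np.conj(np.fft.fft2(i_psf))))
# rec now shows sharp delta-like peaks at (40, 40) and (80, 90),
# riding on a low correlation-noise background.
```

The peaks are delta-like because the correlation of IPSF with itself concentrates its energy at zero lag, as described in the analysis above.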
Spatial and spectral resolutions of imaging using a RAP mask
[0128] Experiments have been conducted to determine the relationship between size of the pinholes in the RAP mask 112 and the spatial and spectral resolutions of imaging. The lateral and axial resolutions of imaging using a RAP mask 112 may be determined in a similar manner as those of regular imaging with 1.22λu/D for lateral resolution and 8λ(u/D)2 for axial resolution respectively, where D is the diameter of the RAP mask 112.
[0129] The above limits assume that the point object is a true point object. However, in experiments, the pinholes have diameters of a few tens of micrometres, which imposes limits on the resolutions, resulting in lower values than the theoretical ones.
[0130] During experimentation, the spatial and spectral resolutions for a particular RAP mask 112 were simulated for u = 10 cm, v = 10 cm, D = 4 mm and n = 500, where u denotes the axial distance between the theoretical point object and the RAP mask 112, v denotes the axial distance between the RAP mask 112 and the image sensor 124, D is the diameter of the RAP mask 112, and n is the number of pinholes in the RAP mask 112. In the simulations, pinhole diameters were varied from 10 μm to 100 μm in steps of 10 μm and wavelengths of 550 nm to 650 nm in steps of 1 nm were used.
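Substituting the simulated geometry into the limits of paragraph [0128] gives indicative numbers; the 600 nm mid-band wavelength below is an assumption, as no single wavelength is specified for this calculation.

```python
# Resolution limits from [0128] for u = 10 cm, D = 4 mm and an
# assumed mid-band wavelength of 600 nm.
lam, u, D = 600e-9, 0.10, 4e-3

lateral = 1.22 * lam * u / D        # 1.22*lambda*u/D
axial = 8 * lam * (u / D) ** 2      # 8*lambda*(u/D)**2

print(f"lateral ~ {lateral * 1e6:.1f} um")  # ~18.3 um
print(f"axial   ~ {axial * 1e3:.1f} mm")    # ~3.0 mm
```

The ~3 mm axial figure is of the same order as the ~3 mm FWHM measured for λ1 = 617 nm in Figure 9C.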
[0131] The autocorrelation is also a measure of the resolving power of the system. As previously described, the object intensity pattern Io = O ⊗ IPSF, where O is the object and ⊗ is the 2D convolutional operator. The image of the object is given by O' = Io * IPSF, where * is the 2D correlation operator. From the above two equations, O' = O ⊗ IPSF * IPSF, which reduces to O' = O ⊗ A, where A is a delta-like function obtained by the autocorrelation of IPSF. In other words, the spatial frequency at which the object gets sampled in an imaging system depends upon the autocorrelation function.
[0132] Figure 6A is a plot of the autocorrelation function for different radii of the pinholes in the RAP mask 112. In particular, plots 402 to 418 respectively correspond to the normalised intensity recorded for a RAP mask 112 having pinhole radii varied from 5μm to 45μm in steps of 5μm.
[0133] From the plot, it is seen that the resolving power of the system improves with a decrease in the size of the pinholes. This effect can be attributed to the variation in the size of the smallest speckle in the intensity signature when the size of the pinhole was varied. The recorded intensity patterns 420, 422, 424, 426, 428 (IPSF) for R = 10μm, 20μm, 30μm, 40μm and 50μm respectively are shown in Figure 6B. As illustrated in Figure 6B, the increase in the speckle size with increase in the radius R of the pinholes on the RAP mask 112 can be seen, which explains the variation in the autocorrelation functions as shown in Figure 6A.

[0134] The spectral resolution of the system can be understood by calculation of IPSF(λ = 600nm) followed by a cross-correlation with IPSF(λ) when the wavelength is varied from λ = 551nm to 650nm in steps of 1nm. Plots of IR(x = y = 0) for different wavelengths and for different radii of the pinholes from R = 10μm to 50μm in steps of 10μm are shown in Figure 6C: plot 432 corresponds to R = 10μm, plot 434 to R = 20μm, plot 436 to R = 30μm, plot 438 to R = 40μm, and plot 440 to R = 50μm.
[0135] Figure 6C illustrates that the spectral resolution also improves with decreasing radius R (and diameter) of the pinholes. The simulation in relation to spectral resolution was conducted using wavelengths in the visible spectrum. However, it is expected that the simulation could have been conducted using different wavelengths in order to arrive at the same conclusions.
Depth-Wavelength relationship
[0136] It has been found that the imaging system shown in Figure 2 can be simulated using mathematical Fresnel propagators which can be decomposed into linear and quadratic factors given by
Q(1/λu) = exp[iπ(x² + y²)/(λu)] and L(ō/λv) = exp[i2π(ox·x + oy·y)/(λv)] (1)
respectively, neglecting the complex constants. It can be seen that the wavelength and depth parameters are related through the scaling factor λu, which is responsible for varying IPSF(z, λ) when the depth or the wavelength is varied. Consequently, the image sensor sees the same change irrespective of whether λ is increased or u is decreased by the same factor, and vice versa.
[0137] Therefore, to record the IPSF for different wavelengths, it is sufficient to record IPSF for different depths using the same wavelength and vice versa. Advantageously, it has been found that this property opens numerous possibilities. In particular, by training the imaging system as shown in Figures 1 and 2 to discriminate information in four dimensions, it can see information in five dimensions.
[0138] To further demonstrate, Equation (1) as set out above can be considered. In Equation (1), the distance and wavelength factors which affect IPSF are λu and λv in the quadratic and linear phase factors. Therefore, it has been discovered that any change in the wavelength can be compensated by an equal and opposite change in the distances.
[0139] For instance, if the wavelength is increased by a factor of k and the distances u and v are decreased by the same factor (i.e. multiplied by 1/k), the function IPSF remains constant. Consequently, IPSF for a wavelength λ1 can be synthesized from a different wavelength λ2 by varying u and v by a factor of λ1/λ2.
[0140] On the other hand, by varying λ and v, it is possible to obtain IPSF for a different u. This is a remarkable relationship, as it makes it possible to discriminate wavelength without having sampled all the wavelengths, and to discriminate depth without having sampled all the depths using the point object.
[0141] A computer simulation was carried out to validate the above idea using the following parameters: u = 10cm, v = 10cm and λ1 = 617nm. The wavelength was varied to λ2 = 530nm. The values of u and v were iterated between 5cm and 15cm in steps of 20μm to find a case u = u1 and v = v1 where the cross-correlation between IPSF (u = 10cm, v = 10cm, λ1 = 617nm) and IPSF (u, v, λ2 = 530nm) equals the autocorrelation of IPSF (u = 10cm, v = 10cm, λ1 = 617nm). The simulation result of IR(x = 0, y = 0) for different values of u and v is plotted in Figure 7. As illustrated in Figure 7, the cross-correlation value matched the autocorrelation for the case u = u1, v = v1 and λ2 = 530nm, with u1 = v1 = 11.642cm. The factor of increase of u1 and v1 is the same as the factor of decrease from λ1 to λ2. Embodiments of the present invention therefore enable a reduction in the spectral and spatial sampling requirements compared to previous systems.
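The λu scaling at the heart of this relationship can be verified directly: the quadratic Fresnel phase factor depends on λ and u only through their product, so increasing λ by a factor k and decreasing u by the same factor leaves it unchanged. A minimal numerical check (the grid size and pixel pitch are illustrative):

```python
import numpy as np

def quadratic_phase(lam, u, n=128, pitch=10e-6):
    """Quadratic Fresnel phase factor exp[i*pi*(x^2 + y^2)/(lam*u)]."""
    c = (np.arange(n) - n // 2) * pitch
    x, y = np.meshgrid(c, c)
    return np.exp(1j * np.pi * (x**2 + y**2) / (lam * u))

# Decrease the wavelength from red to green and increase the distance
# by the same factor, as in the simulation above:
lam1, u1 = 617e-9, 0.10          # red, 10 cm
k = 617.0 / 530.0
lam2, u2 = lam1 / k, u1 * k      # green at ~11.64 cm, same product lam*u

q1 = quadratic_phase(lam1, u1)
q2 = quadratic_phase(lam2, u2)
print("max difference:", np.abs(q1 - q2).max())  # numerically ~0
```

Since the two phase screens are identical, the intensity patterns they produce on the sensor are identical, which is why u1 = v1 = 11.642 cm (= 10 cm × 617/530) recovers the red-wavelength PSF using green light.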
Optimisation procedure for RAP mask
[0142] In some instances, the digital construction of the image of the object 102 by cross-correlation can result in high background noise. In order to reduce the background noise, an iterative optimisation procedure may be applied. In particular, the iterative optimisation procedure may include a two-step optimisation procedure 500 as shown in Figure 8A to determine an arrangement of the pinholes in a RAP mask that reduces background noise.
[0143] In the embodiment shown in Figure 8A, an array of pinholes 502 is synthesized using two uncorrelated random variables xi, yi ∈ {C2·(U[0, 1])}, where U is a uniform random variable distributed on [0, 1], while xi, yi are uniform random variables distributed on [0, C2]. To reduce the background noise, quasi-random variables ui, vi were synthesized such that the RAP mask consisting of the pinholes at the new locations has an improved Signal to Noise Ratio (SNR). The SNR is defined as Signal/(Average background noise), which reduces to 1/(Average background noise) upon normalizing the reconstructed intensity pattern.
[0144] A schematic diagram summarising the iterative optimisation procedure is illustrated in Figure 8A. The random variables xi, yi are iterated N times and the random variables ui, vi corresponding to the maximum SNR are selected and given as input to the second stage optimization procedure.
[0145] In the next stage, the location of the pinholes is shifted along the X and Y directions within the limits -L < Δui < L and -L < Δvi < L, both in steps of Δ, and the SNR is calculated at every step; the quasi-random variables ui, vi corresponding to the maximum SNR are determined. The second optimization therefore runs over 2LM iterations for every location along the X or Y direction. Therefore, the total number of iterations in the second optimization is 4NLM.
[0146] During every iteration, the light from a point object is propagated by 10cm, modulated by the RAP mask, and the modulated light is propagated by a further 10cm. The diameter of the pinholes was selected to be 80μm and the wavelength was selected to be 617nm. The intensity distribution at 10cm from the quasi-RAP (QRAP) mask is autocorrelated using a phase-only filter as IPSF ⊗ I′PSF, where I′PSF = |F⁻¹{exp[−i arg(ĨPSF)]}| and ĨPSF denotes the Fourier transform of IPSF, and the SNR was calculated as 1/(Average background noise) after normalizing the maximum intensity value of the autocorrelation.
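The two-step search described above can be sketched as follows. This is a simplified illustration only: it scores candidate masks by the phase-only autocorrelation of the mask pattern itself rather than of a propagated intensity distribution, and the grid size, pinhole count, shift limit and iteration counts are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
N_GRID, N_PIN = 64, 20

def mask_from_points(pts):
    """Binary pinhole mask from an array of (x, y) pinhole positions."""
    m = np.zeros((N_GRID, N_GRID))
    m[pts[:, 1] % N_GRID, pts[:, 0] % N_GRID] = 1.0
    return m

def snr(mask):
    """1 / (average background) of the normalised phase-only autocorrelation."""
    f = np.fft.fft2(mask)
    corr = np.abs(np.fft.ifft2(f * np.exp(-1j * np.angle(f))))
    corr /= corr.max()
    bg = corr.copy()
    bg[0, 0] = 0.0                      # exclude the correlation peak
    return 1.0 / bg.mean()

# Stage 1: pick the best of several random pinhole arrangements.
best_pts, best_snr = None, -np.inf
for _ in range(50):
    pts = rng.integers(0, N_GRID, size=(N_PIN, 2))
    s = snr(mask_from_points(pts))
    if s > best_snr:
        best_pts, best_snr = pts, s
stage1_snr = best_snr

# Stage 2: shift each pinhole over [-L, L] along X and Y, keeping improvements.
L = 2
for i in range(N_PIN):
    for dx in range(-L, L + 1):
        for dy in range(-L, L + 1):
            trial = best_pts.copy()
            trial[i] += (dx, dy)
            s = snr(mask_from_points(trial))
            if s > best_snr:
                best_pts, best_snr = trial, s

print(f"SNR after stage 1: {stage1_snr:.2f}, after stage 2: {best_snr:.2f}")
```

Because stage 2 only accepts shifts that improve the score, the final SNR can never be worse than the stage-1 result, mirroring the monotonic improvement visible in regions 542 and 544 of Figure 8B.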
[0147] A plot 540 of the resulting SNR with respect to the number of iterations simulated using the iterative optimisation method described above is shown in Figure 8B. Region 542 plots the SNR for the iterations simulated in accordance with the first optimisation step and the region 544 plots the SNR for the iterations simulated using the second optimization step.
[0148] In the first optimization process, a random array generator was iterated 1000 times and the RAP mask profiles with the lowest and highest SNR were identified. The RAP mask profile with the highest SNR was selected for the second optimization process, where the location of every pinhole was shifted by 5 pixels and the profile with the highest SNR was identified. In the experiment, an overall SNR enhancement of 64% was achieved in comparison to the lowest SNR of the first optimisation step.

[0149] Figure 8C illustrates images of the simulated RAP masks 546, 548, 550 and corresponding digitally constructed images 552, 554, 556 using autocorrelation with a phase-only filter for a point object. Constructed digital image 552 corresponds to simulated RAP mask 546 having the lowest SNR. Constructed digital image 554 corresponds to simulated RAP mask 548 having the highest SNR after carrying out the first optimisation process. Constructed digital image 556 corresponds to simulated RAP mask 550 having the highest SNR after carrying out the second optimisation process.
[0150] As shown in Figure 8C, digital image reconstruction 556 has improved SNR when compared to digital image reconstruction 554. Similarly, digital image reconstruction 554 has improved SNR when compared to digital image reconstruction 552. As mentioned, the experiment was able to yield an improvement of 64% in the SNR. In the experiment, the final RAP mask design was transferred to a chromium-coated mask plate using an Intelligent Micropatterning SF100 XPRESS microfabrication system. The size of the QRAP mask was 8mm and the diameter of the pinholes was 80μm after fabrication.
[0151] In addition to the above optimisation procedure, post-processing techniques such as non-linear correlation and median filtering may be implemented to improve the SNR. In the non-linear correlation, the object reconstruction can be expressed as
IR = |F⁻¹{|ĨO|^β exp[i arg(ĨO)] |ĨPSF|^α exp[−i arg(ĨPSF)]}| (6)

where ĨO and ĨPSF are the Fourier transforms of the object intensity pattern and the point spread intensity pattern respectively, and the values of α and β are tuned within a selected range of -1 to +1 until a case with minimum entropy is obtained. The entropy is expressed as

S = −Σm Σn φ(m, n) log[φ(m, n)],

where φ(m, n) = |C(m, n)| / Σm Σn |C(m, n)| is the normalised correlation distribution, C is the correlation matrix, and (m, n) are the indexes of the correlation matrix.
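A sketch of the non-linear correlation and entropy calculation is given below, together with a grid search over α and β. The synthetic object pattern is an illustrative stand-in; the exponent placement (α on the point spread spectrum, β on the object spectrum) follows the matched-filter, phase-only and inverse-filter cases described in the experiments.

```python
import numpy as np

def nonlinear_reconstruction(i_o, i_psf, alpha, beta):
    """Non-linear filter: |F^-1{ |O|^beta e^{i arg O} * |P|^alpha e^{-i arg P} }|."""
    o_f, p_f = np.fft.fft2(i_o), np.fft.fft2(i_psf)
    spec = (np.abs(o_f)**beta * np.exp(1j * np.angle(o_f))
            * np.abs(p_f)**alpha * np.exp(-1j * np.angle(p_f)))
    return np.abs(np.fft.ifft2(spec))

def entropy(corr):
    """Entropy of the normalised correlation distribution phi(m, n)."""
    phi = np.abs(corr) / np.abs(corr).sum()
    phi = phi[phi > 0]
    return float(-(phi * np.log(phi)).sum())

# Illustrative data: the "object" pattern is the PSF plus a shifted copy,
# mimicking the shift-and-add structure of the recorded intensity patterns.
rng = np.random.default_rng(2)
i_psf = rng.random((64, 64))
i_o = i_psf + np.roll(i_psf, (5, 9), axis=(0, 1))

# Grid search over alpha, beta in [-1, 1] in steps of 0.2 (as in the text).
best = min((entropy(nonlinear_reconstruction(i_o, i_psf, a, b)), a, b)
           for a in np.arange(-1, 1.01, 0.2)
           for b in np.arange(-1, 1.01, 0.2))
print("minimum entropy %.3f at alpha = %.1f, beta = %.1f" % best)
```

Setting (α, β) to (1, 1), (0, 1) and (-1, 1) recovers the matched, phase-only and inverse filters respectively as special cases of the same expression.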
Two-Channel Experimental Imaging System Configuration
[0152] A number of experiments were conducted to illustrate the workings of embodiments of the invention and the achievable results described herein.
[0153] A schematic diagram of the optical configuration of the experimental imaging system 600 is illustrated in Figure 9A. In particular, the experimental imaging system 600 includes a two-channel set up. Two LEDs 602, 604 from Thorlabs (M617L3, λc = 617nm, FWHM = 18nm) and (M530L3, λc = 530nm, FWHM = 33nm) are used to critically illuminate two sample objects 606, 608 respectively. A United States Air Force (USAF) resolution target (Group - 2, Element - 6, 7.13 lp/mm) is selected as the first sample object 606, and a National Bureau of Standards (NBS) resolution target (8 lp/mm) is selected as the second sample object 608.
[0154] A RAP mask 614 having pinholes with a diameter of 100μm was used for sampling the imaging system 600. The light diffracted from the two objects 606, 608 were combined using a beam splitter (BS) 610 and recorded by an image sensor 612 (e.g. Thorlabs DCU223M, 1024 x 768 pixels, pixel size = 4.65 μm).
[0155] The distance u1 between the USAF object 606 and the RAP mask 614 was 10cm and the distance v between the RAP mask 614 and the image sensor 612 was 10cm. The equivalent distance u2 between the NBS object 608 and the RAP mask 614 was also 10cm. Even though the optical configuration and diameter of the RAP mask 614 allow a lateral resolution of 1.22λu/D = 9.4μm and 8.1μm for the red (λ1 = 617nm) and green (λ2 = 530nm) wavelengths respectively, the size of the pinholes in the RAP mask 614 in the current experimental setup imposes a resolution limit of 100μm. The chosen USAF and NBS objects 606, 608 have spacings of 140.25μm and 125μm respectively and fall within the resolution limit of the imaging system 600.
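The resolution figures quoted above can be checked with a quick calculation (the 8mm aperture D is taken from the fabricated QRAP mask described earlier, and u = 10cm from this setup):

```python
# Diffraction-limited lateral resolution 1.22*lambda*u/D for the setup above.
def lateral_resolution(lam, u=0.10, d=8e-3):
    return 1.22 * lam * u / d

for name, lam in (("red", 617e-9), ("green", 530e-9)):
    print(f"{name}: {lateral_resolution(lam) * 1e6:.1f} um")
# The 100 um pinholes dominate, so the practical limit here is 100 um.
```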
[0156] The size of the image sensor is limited to 4.7 mm along a longer side thereof and therefore the system 600 has a limited field of view. To perform a reasonable study with the limited field of view, the area of the USAF object 606 was limited to element 6 and its corresponding horizontal and vertical lines (whilst blocking all remaining areas of the USAF resolution chart). The area of the NBS object 608 was limited to element 8.0 and all remaining areas of the NBS resolution chart were blocked.
[0157] Figure 9B illustrates the recorded point spread intensity pattern 616 for the red wavelength (u1 = 10cm, λ1 = 617nm), and the recorded point spread intensity pattern 618 for the green wavelength (u2 = 10cm, λ2 = 530nm).
Experiments - Axial resolution of imaging
[0158] A mask defining a single pinhole (not shown) having a diameter of 100μm was used to approximate a point object and determine the axial resolution response of the experimental imaging system 600 as shown in Figure 9A. The single pinhole mask was positioned in each respective LED channel 602, 604 in place of the USAF and NBS objects 606, 608.

[0159] Initially, the channel illuminated by the green LED 604 was blocked and, in the red LED channel 602, the location of the pinhole mask was shifted and the corresponding point spread function intensity patterns were recorded. The axial resolution of the system 600 with respect to the red LED channel was studied by cross-correlating IPSF (u = 10cm, λ1 = 617nm) with IPSF (Δu = -3.5cm to +3.5cm, λ1 = 617nm) and the plot of the normalized intensity IR(x = 0, y = 0) for different values of Δu is shown in Figure 9C. Figure 9C also illustrates the reconstructed images resulting from the cross-correlation for some of the values to facilitate visualisation. It can be seen that the FWHM of the plot in Figure 9C is about 3mm.
[0160] The experiment was then repeated by blocking the red LED channel 602 and unblocking the green LED channel 604. The axial resolution of the imaging system 600 as shown in Figure 9A was then studied for the green LED 604. The plot of the normalized IR(x = 0, y = 0) for different values of Δu is shown in Figure 9D.
Experiments - Determination of the optimal image reconstruction parameters
[0161] The following tests were conducted to determine optimal parameters for α and β used to construct one or more digital images in accordance with Formula (6) previously described, using the experimental imaging system 600 as shown in Figure 9A.
[0162] Initially, the object intensity patterns 620, 622, 624 (as shown in Figures 10A-10C) were recorded by blocking one of the two channels 602, 604 and then unblocking both channels 602, 604 simultaneously. Figure 10A illustrates the intensity pattern 620 for the USAF object 606 illuminated by the red LED 602 (u = 10cm, v = 10cm, λ1 = 617nm). Figure 10B illustrates the intensity pattern 622 for the NBS object 608 illuminated by the green LED 604 (u = 10cm, v = 10cm, λ2 = 530nm). Figure 10C illustrates the intensity pattern 624 for both the USAF and NBS objects 606, 608 as illuminated by both the red and green LEDs 602, 604.
[0163] As illustrated in Figures 10A to 10C, each respective intensity pattern 620, 622, 624 is formed by the addition of the random array of the respective images. As previously discussed, the conditions for obtaining a high-resolution image using a single pinhole have been studied. The array of gratings of the USAF object 606 and the array of numbers of the NBS object 608 are visible in the respective intensity patterns shown in Figures 10A to 10C.

[0164] To determine the optimal values of α and β in Formula (6) for which the entropy is 1, such that the reconstructed images have the minimum correlation noise, the values of α and β were varied between -1 and 1 in steps of 0.2 and the entropy was calculated for every pair of values.
[0165] Figure 11 illustrates the image reconstructions for the recorded intensity pattern 620 for the USAF object 606 illuminated by the red LED 602 (u = 10cm, v = 10cm, λ1 = 617nm) based on the recorded point spread intensity pattern 616 for the red wavelength IPSF (u1 = 10cm, λ1 = 617nm).
[0166] The reconstructed images for all values of α and β are shown in Figure 11. In particular, Figure 11 illustrates the reconstructed image 626 for the matched filter (α = 1 and β = 1), the reconstructed image 628 for the phase-only filter (α = 0 and β = 1), and the reconstructed image 630 for the inverse filter (α = -1 and β = 1). The reconstructed image 632 using the optimal filter (α = 0 and β = 0.6) with entropy value 1 is also shown in Figure 11. The optimal filter (α = 0 and β = 0.6) can be selected for further image reconstructions in accordance with embodiments of the invention. In some embodiments, the output from the optimal filter can be fed as input to a median filter to further improve the SNR.
Experiments - 5D imaging
[0167] To demonstrate the 5D imaging capability of the experimental imaging system 600 according to embodiments of the present invention, the NBS object 608 was moved from z = Δu = -10mm to 10mm while the USAF object 606 was maintained in the same position (see illustration 634 of Figure 9A).
[0168] A video Io(t) of the event was recorded using a camera associated with the image sensor 612. The video was decomposed into frames and each frame was cross-correlated with the corresponding IPSF(z, λ) in real time using the optimal non-linear filter (i.e. α = 0 and β = 0.6 as discussed in the previous section).
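A minimal sketch of this frame-by-frame reconstruction loop is shown below. The library keys, array sizes and random stand-in data are illustrative; a phase-only filter is used here for brevity, and the optimal non-linear filter described above can be substituted.

```python
import numpy as np

def phase_only_reconstruct(frame, i_psf):
    """Cross-correlate a recorded frame with a PSF using a phase-only filter."""
    f, p = np.fft.fft2(frame), np.fft.fft2(i_psf)
    return np.abs(np.fft.ifft2(f * np.exp(-1j * np.angle(p))))

# Hypothetical spatio-spectral library keyed by (wavelength_nm, depth_cm),
# matching the three signatures used in Figure 12.
rng = np.random.default_rng(3)
library = {(617, 10.0): rng.random((64, 64)),
           (530, 11.0): rng.random((64, 64)),
           (530, 9.0): rng.random((64, 64))}

video = [rng.random((64, 64)) for _ in range(5)]  # stand-in decoded frames

# Reconstruct every frame against every signature in the library; the
# channel giving the sharpest result identifies the depth and colour.
reconstructions = {key: [phase_only_reconstruct(frame, psf) for frame in video]
                   for key, psf in library.items()}
print(len(reconstructions), "channels x", len(video), "frames")
```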
[0169] The image reconstruction results of five frames corresponding to z = Δu = -10mm, z = Δu = -5mm, z = Δu = 0mm, z = Δu = 5mm and z = Δu = 10mm using spatio-spectral signatures, namely IPSF(λ1 = 617nm, u = 10cm), IPSF(λ2 = 530nm, u = 11cm) and IPSF(λ2 = 530nm, u = 9cm), are shown in Figure 12. (Also see illustration 634 of Figure 9A for visualisation of the NBS object 608 located at each depth 'z' corresponding to each of the five frames.)

[0170] In particular, Figure 12 illustrates intensity patterns recorded when the NBS object 608 was moved away from the USAF object 606 by Δu = -10mm (640a), Δu = -5mm (640b), Δu = 0 (640c), Δu = 5mm (640d) and Δu = 10mm (640e) respectively.
[0171] Figure 12 further illustrates image reconstruction results 642a to 642e when each of the respective recorded intensity patterns 640a to 640e is cross-correlated with point spread function 648 IPSF(λ = 617nm, u = 10cm).

[0172] Images 644a to 644e illustrate reconstruction results when each of the respective recorded intensity patterns 640a to 640e is cross-correlated with point spread function 650 IPSF(λ = 530nm, u = 11cm).

[0173] Images 646a to 646e illustrate reconstruction results when each of the respective recorded intensity patterns 640a to 640e is cross-correlated with point spread function 652 IPSF(λ = 530nm, u = 9cm).
[0174] The tests identified the reconstructed images highlighted in boxes 654a to 654g to be image reconstructions of the USAF object 606 in best focus. Insets 656a to 656o in each of the reconstructed image frames 642 to 646 show the cross-correlation of IPSF(λ = 617nm, u = 10cm) with the respective reconstructing point spread functions. Insets 658a to 658o show the cross-correlation of IPSF(λ = 530nm, Δu) with the respective reconstructing point spread functions.
Experiments - Depth-wavelength reciprocity
[0175] The depth-wavelength relationship previously described with reference to Figure 7 is validated with respect to the two channel experimental imaging system 600 shown in Figure 9A.
[0176] In the experiment, Io and IPSF were recorded for the initial u and v values using the red LED (λ = 617nm). Then, in the other channel, the u and v values were adjusted to approximately (617/530)u and (617/530)v and the IPSF was recorded using the green LED (λ = 530nm). As expected, the IPSF of the green LED recovered the object information recorded using the red LED. The images 680 and 682 of IPSF (λ = 617nm, u = v = 10cm) and IPSF (λ = 530nm, u = v ≈ 12cm) respectively are shown in Figure 13. Figure 13 further illustrates the image 684 of the object intensity pattern Io(λ = 617nm, u = v = 10cm). The results of cross-correlation between IPSF (λ = 617nm, u = v = 10cm) and IPSF (λ = 530nm, u = v ≈ 12cm) and the reconstruction of the object using IPSF (λ = 617nm, u = v = 10cm) and IPSF (λ = 530nm, u = v ≈ 12cm) are respectively shown in images 686, 688, 690 of Figure 13. Comparing images 680 and 682 of Figure 13, both patterns can be considered identical except for a shift.
[0177] The intensity signatures in different areas denoted by a square, circle and triangle match one another. Some of the distinct areas are also visible in the object intensity pattern. In the experiment, the cross-correlation result yielded a sharp peak indicating that the patterns in 680 and 682 are generally identical. Moreover, the reconstruction result using the green LED's point spread function shown in 690 of Figure 13 generally matches the reconstruction result obtained using the same (red) wavelength.
Further Experiments - Imaging under different conditions
[0178] Further experiments were conducted to evaluate the robustness of the imaging system according to embodiments of the present invention.
[0179] In particular, imaging procedures using a RAP mask 112 were repeated under simulated extreme conditions and the performances were studied. In a first scenario as illustrated in Figure 14A, a part of a curved refractive element 700 (e.g. a broken beaker) was introduced between the object 102 and the RAP mask 112. Image 702 illustrates the respective point spread intensity pattern, and image 704 illustrates the respective object intensity pattern recorded by the image sensor 124. The final reconstructed digital image is shown in 706.
[0180] In a second scenario as illustrated in Figure 14B, a RAP mask 112 was created by making multiple holes in black insulation tape. The insulation tape was then attached to a curved surface 700. Image 708 illustrates the respective point spread intensity pattern, and image 710 illustrates the respective object intensity pattern recorded by the image sensor 124. The final reconstructed digital image is shown in 712. Whilst the patterns 708, 710 were distorted due to the curved surface 700 to which the RAP mask was attached, the digital reconstruction 712 of the image of the object 102 could be satisfactorily recovered.
[0181] The above two scenarios illustrate that the imaging system according to embodiments of the invention is capable of satisfactorily constructing digital images under extreme environmental conditions, for example due to humidity, heat, dust, interference and the like. Advantageously, the scenarios demonstrate the level of flexibility available with embodiments of the invention.

Extended Field of View
[0182] The field of view (FOV) of an imaging system is generally limited by the magnification of the system and the size of the image sensor. In the imaging systems 720, 740 as shown in Figures 15A to 15C, the FOV of each imaging system 720, 740 is given as su/v, where s is the size of the image sensor, u is the distance between the object 102 and the lens 722 or RAP mask 112, and v is the distance between the lens 722 or RAP mask 112 and the image sensor 124 (not shown).
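As a quick illustration of this relationship using the experimental values given earlier (s = 4.7mm along the sensor's longer side, u = v = 10cm):

```python
# FOV = s*u/v: sensor size s, object-to-mask distance u, mask-to-sensor distance v.
s = 4.7e-3   # sensor long side, 4.7 mm (from the experimental setup)
u = 0.10     # 10 cm
v = 0.10     # 10 cm
fov = s * u / v
print(f"direct-imaging FOV: {fov * 1e3:.1f} mm")
# A RAP mask roughly doubles this, since partial spatio-spectral
# signatures from points outside the direct FOV still reach the sensor.
print(f"approximate RAP-mask FOV: {2 * fov * 1e3:.1f} mm")
```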
[0183] In a direct imaging system as illustrated in Figure 15A, when the object 102 is imaged, points of the object 102 which lie beyond the FOV are not recorded. For example, point 724 of object 102 lies beyond the FOV and therefore corresponding point 726 in the corresponding image 728 cannot be recorded.
[0184] In the imaging system 740 shown in Figures 15B and 15C, a RAP mask 112 is used in place of a lens 722. The intensity pattern 120 generated from light modulated using the RAP mask 112 includes a superposition of object images 730. The object images 730 are randomly positioned on an image plane of the image sensor 124 (not shown), as illustrated in Figure 15B.
[0185] In an alternative perspective of the same imaging system 740 as shown in Figure 15C, any point on the object 102 will cause a shifted spatio-spectral signature to be generated on the image plane of the image sensor 124. Unlike the direct imaging system shown in Figure 15A, every point on the object 102 is converted by system 740 into spatio-spectral signatures which cover a much larger area than the image sensor 124. Therefore, even when a point 732 on the object 102 lies beyond the FOV, there will always be one or more corresponding partial spatio-spectral signatures incident on the image sensor 124, even though the centre of the spatio-spectral signature lies beyond the area of the image sensor 124 as shown in Figure 15C.
[0186] In some embodiments, the library 206 of spatio-spectral signatures can be acquired synthetically over a larger area at least once, to allow any object 102 imaged using a RAP mask 112 to be imaged with a relatively high FOV compared to a direct imaging system 720.
[0187] Imaging systems according to embodiments of the invention may therefore provide an extended field of view when compared to direct imaging systems such as that shown in Figure 15A. In particular, an imaging system 740 using a RAP mask 112 provides twice the field of view of a direct imaging system 720 using a lens 722.
[0188] Embodiments of the present invention may therefore provide a 5D (including 3D space, time and spectrum) imaging system, method and apparatus using a RAP mask 112. In some embodiments, a video of an event can be decomposed into spectrum-specific and depth-specific components to provide additional information and perception not previously achieved using regular videography techniques.
[0189] As described and demonstrated herein, embodiments of the invention may be capable of achieving satisfactory results under different non-perfect imaging conditions. Indeed, it has been demonstrated that embodiments of the invention can be used for imaging objects under extreme imaging conditions.
[0190] Advantageously, embodiments of the invention only require a RAP mask defining a random or quasi-random array of pinholes, an image sensor and a computational device (which may provide the image sensor) to construct the one or more digital images based on an object, such as a microscopic object. The locations and/or sizes of the pinholes in the mask can be selected using computational optical techniques described herein to yield a high signal to noise ratio. Further computational processing techniques described herein can be used to further improve the signal to noise ratio. The RAP mask is generally much easier to manufacture, for example, when compared to SLMs or phase masks. Accordingly, embodiments of the invention may be easily and inexpensively adapted for various imaging applications.
[0191] In the experimentation and testing described above, some embodiments of the invention have been implemented using incoherent light, which has a larger spectral width than coherent light, and hence can be directly applied to applications such as fluorescence microscopy.
[0192] Moreover, embodiments of the invention utilise the depth-wavelength relationship described and validated herein to see colour and resolve depth by sampling either the spectrum or the depth. As such, embodiments of the invention can be used for seeing through scattering layers at depth. The depth-wavelength relationship means that, when reconstructing images of a complex object, images at different depths can be reconstructed by only varying the wavelength (e.g. in tissue imaging applications), or images at different wavelengths can be reconstructed by only varying the depth (e.g. in microscopic imaging applications). Moreover, the depth-wavelength reciprocity can facilitate the creation and compilation of the library of spatio-spectral intensity information. In particular, the library creation may only require either sampling of depth or a broad band source with a monochromator, but not necessarily both. This may advantageously save resources and thus further reduce costs.
[0193] In addition, embodiments of the invention require only a single camera shot of the object 102 to reconstruct a series of digital images in 5D. Accordingly, it can be used to record events occurring at a faster rate than other in-line digital holography techniques, which typically require at least 2-3 camera shots per frame.
[0194] Furthermore, embodiments of the invention provide a multidimensional imaging system which can be easily extended to other non-visible bands of the electromagnetic spectrum with incoherent illumination. The imaging system according to embodiments of the invention can also be used for a spectrophotometer, for example for molecular fingerprinting.
[0195] Moreover, the imaging system according to embodiments of the invention can be implemented with encryption capabilities. For example, information in the spatio-spectral library 206 may be encrypted, and the recorded event can be decrypted.
Interpretation
[0196] This specification, including the claims, is intended to be interpreted as follows:
[0197] Embodiments or examples described in the specification are intended to be illustrative of the invention, without limiting the scope thereof. The invention is capable of being practised with various modifications and additions as will readily occur to those skilled in the art. Accordingly, it is to be understood that the scope of the invention is not to be limited to the exact construction and operation described or illustrated, but only by the following claims.
[0198] The mere disclosure of a method step or product element in the specification should not be construed as being essential to the invention claimed herein, except where it is either expressly stated to be so or expressly recited in a claim.

[0199] The terms in the claims have the broadest scope of meaning they would have been given by a person of ordinary skill in the art as of the relevant date.
[0200] The terms "a" and "an" mean "one or more", unless expressly specified otherwise.
[0201] Neither the title nor the abstract of the present application is to be taken as limiting in any way as the scope of the claimed invention.
[0202] Where the preamble of a claim recites a purpose, benefit or possible use of the claimed invention, it does not limit the claimed invention to having only that purpose, benefit or possible use.
[0203] In the specification, including the claims, the term “comprise”, and variants of that term such as “comprises” or “comprising”, are used to mean "including but not limited to", unless expressly specified otherwise, or unless in the context or usage an exclusive interpretation of the term is required.
[0204] The disclosure of any document referred to herein is incorporated by reference into this patent application as part of the present disclosure, but only for purposes of written description and enablement and should in no way be used to limit, define, or otherwise construe any term of the present application where the present application, without such incorporation by reference, would not have failed to provide an ascertainable meaning. Any incorporation by reference does not, in and of itself, constitute any endorsement or ratification of any statement, opinion or argument contained in any incorporated document.
[0205] Reference to any background art or prior art in this specification is not an admission such background art or prior art constitutes common general knowledge in the relevant field or is otherwise admissible prior art in relation to the validity of the claims.

Claims

The claims defining the invention are as follows:
1. An apparatus for coupling to a computational device to facilitate digital imaging of an object, the apparatus comprising a mask defining an array of pinholes for modulating light diffracted from the object, wherein an intensity pattern generated by modulated light from the mask is detectable using an image sensor for digital construction of one or more images of the object by the computational device.
2. The apparatus of claim 1, wherein the mask defining an array of pinholes defines a random or quasi-random array of pinholes.
3. The apparatus of claim 1 or 2, wherein the intensity pattern can be captured by the image sensor in one or more frames, and the computational device is configured to construct one or more images in three or more dimensions based on each frame of the intensity pattern, the three or more dimensions including spatial dimensions of the object.
4. The apparatus of claim 3, wherein the three or more dimensions further include information associated with wavelengths of light diffracted from the object.
5. The apparatus of claim 3 or 4, wherein each frame of the intensity pattern is captured instantaneously, and the computational device is configured to construct the one or more images in real-time or near real-time.
6. The apparatus of claim 5, wherein the computational device is configured to generate video content based on the images constructed in real-time or near real-time.
7. The apparatus according to any one of the preceding claims, wherein the apparatus further includes an illumination source for illuminating the object.
8. The apparatus according to claim 7, wherein the illumination source provides incoherent light for illuminating the object.
9. The apparatus according to any one of the preceding claims, wherein the apparatus further includes a receiving portion for receiving the object for placement at an axial location adjacent the mask defining an array of pinholes.
10. The apparatus according to any one of the preceding claims, wherein the apparatus further includes a housing for enclosing the mask defining an array of pinholes, the housing having an engagement portion for engagement with the computational device to form a seal such that light passing through the seal is minimised.
11. The apparatus of claim 10, wherein the object is a sample for microscopic imaging and the housing includes an opening for receiving a carrier carrying the sample, the opening being positioned to allow placement of the sample adjacent the mask defining an array of pinholes.
12. The apparatus according to any one of the preceding claims, wherein the computational device includes a camera, the camera providing the image sensor for detecting the intensity pattern when the apparatus is coupled to the computational device.
13. A computer-implemented method for digital imaging of an object, the method including detecting, via an image sensor, an intensity pattern representative of light diffracted from the object, the light diffracted from the object being modulated by a mask defining an array of pinholes prior to detection by the image sensor, and digitally constructing one or more images of the object based on the intensity pattern.
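The imaging pipeline of claim 13 (light diffracted from the object, modulated by a pinhole-array mask, then recorded as an intensity pattern) can be illustrated numerically. The sketch below is illustrative only and not part of the claims: it approximates the recorded pattern, under an incoherent-imaging assumption, as the 2-D convolution of the object's intensity with the mask's point spread function, and all names, sizes and parameters are hypothetical.

```python
import numpy as np

def simulate_intensity_pattern(obj, psf):
    """Incoherent-imaging sketch: approximate the sensor intensity as the
    circular 2-D convolution of the object intensity with the pinhole-array
    point spread function (PSF), computed via FFTs."""
    O = np.fft.fft2(obj)
    H = np.fft.fft2(np.fft.ifftshift(psf))  # centre the PSF before transforming
    return np.real(np.fft.ifft2(O * H))

# Toy example: a single bright point imaged through a random pinhole mask.
rng = np.random.default_rng(0)
n = 64
psf = (rng.random((n, n)) < 0.05).astype(float)  # random 0/1 pinhole pattern
psf /= psf.sum()                                 # normalise transmitted energy
obj = np.zeros((n, n))
obj[32, 32] = 1.0                                # point object at the centre
pattern = simulate_intensity_pattern(obj, psf)
```

Because the PSF is normalised and convolution conserves energy, the simulated pattern integrates to the object's total intensity.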
14. The computer-implemented method of claim 13, wherein the mask defining an array of pinholes defines a random or quasi-random array of pinholes.
15. The computer-implemented method of claim 13 or 14, the method including capturing the intensity pattern, via the image sensor, in one or more frames, and constructing one or more images in three or more dimensions based on each frame of the intensity pattern, the three or more dimensions including spatial dimensions of the object.
16. The computer-implemented method of any one of claims 13 to 15, wherein the three or more dimensions further include information associated with wavelengths of light diffracted from the object.
17. The computer-implemented method of any one of claims 13 to 16, the method including capturing each frame of the intensity pattern instantaneously, and constructing the one or more images in real-time or near real-time.
18. The computer-implemented method of claim 17, the method further including generating video content based on the images constructed in real-time or near real-time.
19. The computer-implemented method of any one of claims 13 to 18, wherein digitally constructing one or more images of the object based on the intensity pattern includes constructing the one or more images based on a library of spatio-spectral intensity information for an approximated point object.
20. The computer-implemented method of claim 19, wherein the method further includes compiling a library of spatio-spectral intensity information for an approximated point object by carrying out the following steps:
providing a pinhole mask defining a single pinhole to approximate a point object,
positioning the pinhole mask at one or more different axial locations with respect to an illumination source,
using the illumination source, sequentially illuminating the pinhole mask at one or more different wavelengths of light,
recording a plurality of intensity patterns generated by modulated light from the pinhole mask for each wavelength of light and each axial location, and
compiling a library of point spread functions based on the recorded intensity patterns, each point spread function within the library approximating a point spread function of a point object for a respective axial location with respect to the illumination source and wavelength of light.
21. The computer-implemented method of claim 19, the method further including recording a plurality of approximated point object intensity patterns by carrying out the following steps:
1. providing a pinhole mask defining a single pinhole to approximate a point object,
2. positioning the pinhole mask at an initial axial location with respect to an illumination source,
3. using the illumination source, sequentially illuminating the pinhole mask at a plurality of different wavelengths of light,
4. recording an intensity pattern generated by modulated light from the pinhole mask for each wavelength of light,
5. compiling a point spread function sub-library based on the recorded intensity patterns in step 4, each point spread function within the sub-library approximating a point spread function of a point object for a respective axial location with respect to the illumination source and wavelength of light,
6. moving the pinhole mask to a further axial location with respect to the illumination source and repeating steps 3 to 5,
7. repeating steps 1 to 6 for a predetermined number of axial locations with respect to the illumination source, and compiling the library of spatio-spectral information based on the point spread function sub-libraries.
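The library-compilation procedure of claims 20 and 21 (per axial location, sequentially record one pattern per wavelength into a sub-library, then merge the sub-libraries) can be sketched as a nested loop. This is an illustrative sketch only, not part of the claims: `record_psf` is a hypothetical stand-in for a camera exposure (here synthesised random data), and the axial locations and wavelengths are arbitrary placeholder values.

```python
import numpy as np

def record_psf(z, wavelength, rng, n=64):
    """Stand-in for a camera exposure. In the actual procedure this would be
    the intensity pattern recorded when the single-pinhole mask (the
    approximated point object) is illuminated at the given wavelength and
    axial location; here it is synthesised random data for illustration."""
    return rng.random((n, n))

def compile_psf_library(axial_locations, wavelengths, seed=0):
    """Sketch of the claimed loop: for each axial location, sequentially
    record one point spread function per wavelength (a sub-library), then
    merge the sub-libraries into one spatio-spectral library keyed by
    (axial location, wavelength)."""
    rng = np.random.default_rng(seed)
    library = {}
    for z in axial_locations:                 # steps 2, 6, 7: step through axial locations
        sub_library = {}
        for wl in wavelengths:                # steps 3-4: sequential illumination and recording
            sub_library[(z, wl)] = record_psf(z, wl, rng)
        library.update(sub_library)           # step 5 and final step: merge sub-libraries
    return library

lib = compile_psf_library(axial_locations=[0.0, 1.0, 2.0],
                          wavelengths=[450e-9, 550e-9, 650e-9])
```

Each entry of `lib` is then one point spread function for a specific depth and colour, which the reconstruction step can correlate against.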
22. The computer-implemented method of any one of claims 13 to 19, wherein digitally constructing one or more images of the object includes determining a correlation between the intensity pattern representative of light diffracted from the object, and a point spread function selected from the library of spatio-spectral information.
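The correlation step of claim 22 can be sketched as an FFT-based cross-correlation between the recorded pattern and a stored point spread function: when the stored PSF matches the depth and wavelength of a point in the object, the correlation produces a sharp peak at that point's location. Illustrative only, not part of the claims; a plain matched filter is used here, whereas practical systems may prefer phase-only or nonlinear filters to suppress background.

```python
import numpy as np

def correlate(pattern, psf):
    """Reconstruct one image plane by circular cross-correlation of the
    recorded intensity pattern with a stored PSF, computed via FFTs
    (a matched-filter sketch)."""
    P = np.fft.fft2(pattern)
    H = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(P * np.conj(H)))

rng = np.random.default_rng(1)
n = 64
psf = rng.random((n, n))
# A point object at the matching depth/wavelength produces a pattern that is
# a shifted copy of the stored PSF, so the correlation peaks at that shift.
pattern = np.roll(psf, (5, 7), axis=(0, 1))
recon = correlate(pattern, psf)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

Here the correlation peak recovers the (5, 7) shift applied to the pattern, i.e. the location of the simulated point object.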
23. The computer-implemented method of any one of claims 13 to 22, wherein the pinholes in the mask defining an array of pinholes each have a diameter in a range of about 0.4 μm to 200 μm.
24. The computer-implemented method of any one of claims 13 to 22, wherein locations of pinholes in the mask defining an array of pinholes are determined using an iterative optimisation method to increase signal to noise ratio.
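The iterative optimisation of claim 24 can be sketched as a simple accept/reject search that relocates one pinhole at a time and keeps the move only if a signal-to-noise proxy improves. Illustrative only and not part of the claims: the sidelobe metric (worst sidelobe of the mask's autocorrelation, since a flatter autocorrelation yields a cleaner correlation reconstruction) and all parameters are assumptions, not taken from the application.

```python
import numpy as np

def sidelobe_metric(mask):
    """Proxy figure of merit: the largest sidelobe of the mask's circular
    autocorrelation. Lower sidelobes mean less background in a
    correlation-based reconstruction, i.e. higher SNR."""
    M = np.fft.fft2(mask)
    ac = np.real(np.fft.ifft2(M * np.conj(M)))
    ac[0, 0] = 0.0        # ignore the central (zero-lag) peak
    return ac.max()

def optimise_pinholes(n=32, n_pinholes=40, iters=300, seed=0):
    """Sketch of an iterative optimisation: move one pinhole at random per
    iteration and keep the move only if the worst sidelobe shrinks."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n * n)
    mask[rng.choice(n * n, size=n_pinholes, replace=False)] = 1.0
    mask = mask.reshape(n, n)
    best = sidelobe_metric(mask)
    for _ in range(iters):
        src = rng.choice(np.flatnonzero(mask))        # pinhole to move
        dst = rng.choice(np.flatnonzero(mask == 0))   # empty candidate site
        mask.flat[src], mask.flat[dst] = 0.0, 1.0
        m = sidelobe_metric(mask)
        if m < best:
            best = m                                  # accept the move
        else:
            mask.flat[src], mask.flat[dst] = 1.0, 0.0  # revert
    return mask, best

mask, best = optimise_pinholes()
```

More elaborate schemes (simulated annealing, genetic search) follow the same accept/reject skeleton with a different acceptance rule.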
25. A computer device having a computer-readable storage medium storing instructions that, when executed by the computer device, cause the computer device to perform a computer-implemented method according to any one of claims 13 to 24.
26. A system for digital imaging of an object including an apparatus according to any one of claims 1 to 12, and a computer device having a computer-readable storage medium storing instructions that, when executed by the computer device, cause the computer device to perform a computer-implemented method according to any one of claims 13 to 24, and wherein the apparatus is coupled to a camera of the computer device.
PCT/AU2020/051410 2019-12-23 2020-12-21 Apparatus, system and method of digital imaging WO2021127726A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019904895 2019-12-23
AU2019904895A AU2019904895A0 (en) 2019-12-23 Apparatus, system and method of digital imaging

Publications (1)

Publication Number Publication Date
WO2021127726A1 (en) 2021-07-01

Family

ID=76572836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2020/051410 WO2021127726A1 (en) 2019-12-23 2020-12-21 Apparatus, system and method of digital imaging

Country Status (1)

Country Link
WO (1) WO2021127726A1 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110079725A1 (en) * 2009-10-02 2011-04-07 Ut-Battelle, Llc Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation
US20190116364A1 (en) * 2011-03-18 2019-04-18 Texas Instruments Incorporated Methods and systems for masking multimedia data

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
A. VIJAYAKUMAR, JOSEPH ROSEN: "Interferenceless coded aperture correlation holography - a new technique for recording incoherent digital holograms without two-wave interference", OPTICS EXPRESS, vol. 25, no. 12, 12 June 2017 (2017-06-12), pages 13883-13896, XP055834531, DOI: https://doi.org/10.1364/OE.25.013883 *
CIEŚLAK MICHAŁ J.; GAMAGE KELUM A.A.; GLOVER ROBERT: "Coded-aperture imaging systems: Past, present and future development - A review", RADIATION MEASUREMENTS, vol. 92, 1 September 2016 (2016-09-01), pages 59-71, XP029703786, ISSN: 1350-4487, DOI: https://doi.org/10.1016/j.radmeas.2016.08.002 *
FENIMORE E. E., CANNON T. M.: "Coded aperture imaging with uniformly redundant arrays", APPLIED OPTICS, vol. 17, no. 3, 1 February 1978 (1978-02-01), pages 337-347, XP002558348, ISSN: 0003-6935, DOI: https://doi.org/10.1364/AO.17.000337 *
JOSEPH ROSEN, VIJAYAKUMAR ANAND, MANI RATNAM RAI, SASWATA MUKHERJEE, ANGIKA BULBUL: "Review of 3D Imaging by Coded Aperture Correlation Holography (COACH)", APPLIED SCIENCES, vol. 9, no. 3, 12 February 2019 (2019-02-12), pages 605, XP009529637, ISSN: 2076-3417, DOI: 10.3390/app9030605 *
K. MCMILLAN, P. MARLEAU, E. BRUBAKER: "Random mask optimization for fast neutron coded aperture imaging", SAND2015-4109J, 1 January 2015 (2015-01-01), United States, pages 1-18, XP055834540 *
XIAOPENG PENG, GARRETH J. RUANE, MARCO B. QUADRELLI, GROVER A. SWARTZLANDER: "Randomized apertures: high resolution imaging in far field", OPTICS EXPRESS, vol. 25, no. 15, 24 July 2017 (2017-07-24), pages 18296-18313, XP055834533, DOI: https://doi.org/10.1364/OE.25.018296 *
XIAOWEI LI, CHENGQING LI, SEOK-TAE KIM, IN-KWON LEE: "An optical image encryption scheme based on depth-conversion integral imaging and chaotic maps", ARXIV.ORG, 17 January 2015 (2015-01-17), pages 1-18, XP080676610 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086550A (en) * 2022-05-30 2022-09-20 元潼(北京)技术有限公司 Meta-imaging method and system
CN115086550B (en) * 2022-05-30 2023-04-28 元潼(北京)技术有限公司 Meta imaging system
CN117092876A (en) * 2023-10-16 2023-11-21 浙江大学 Extreme ultraviolet lithography mask plate defect detection system and method based on photon sieve
CN117092876B (en) * 2023-10-16 2024-03-22 浙江大学 Extreme ultraviolet lithography mask plate defect detection system and method based on photon sieve

Similar Documents

Publication Publication Date Title
Anand et al. Single shot multispectral multidimensional imaging using chaotic waves
US9880054B2 (en) Simplified compressive sensing spectral imager
Liang Punching holes in light: recent progress in single-shot coded-aperture optical imaging
US7800683B2 (en) Optical method and system for enhancing image resolution
JP7246093B2 (en) Wavefront sensor and method of use
Bulbul et al. Partial aperture imaging by systems with annular phase coded masks
TWI655522B (en) Method and device for illuminating digital full image by structured light
DE112009001652T5 (en) Multichannel recording
WO2017040826A1 (en) Apparatus, systems, and methods for talbot spectrometers
WO2021127726A1 (en) Apparatus, system and method of digital imaging
JP2012526269A (en) Method for identifying scenes from multiwavelength polarization images
US20100103309A1 (en) Method and system for compressed imaging
KR101479249B1 (en) Coherent Structured Illumination Imaging Method And Coherent Structured Illumination Microscope System
CN116972969A (en) Spectrum detection method and device based on two-dimensional dispersion spectrum reconstruction
McMackin et al. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture
Shumigai et al. Parallel multispectral ghost imaging data acquisition with supercontinuum
JP2022546720A (en) Optical device and method
Shabani et al. Simultaneous optical sectioning and super resolution in 2D-SIM using tunable structured illumination
Zhao et al. Color single-pixel imaging based on multiple measurement vectors model
Hussain et al. Optical super resolution using tilted illumination coupled with object rotation
Hartke et al. Snapshot dual-band visible hyperspectral imaging spectrometer
Tyo et al. Bandwidth and information in the design and analysis of polarimeters
CN112345424B (en) Method and device for detecting gas diffusion and concentration distribution by wavelength tuning single pixel
Wang et al. Angle-sensitive pixels: a new paradigm for low-power, low-cost 2D and 3D sensing
Castaneda et al. Reduction in data acquisition for resolution improvement in structured illumination digital holographic microscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20908048
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20908048
    Country of ref document: EP
    Kind code of ref document: A1