FI20186038A1 - Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging - Google Patents

Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging

Info

Publication number
FI20186038A1
Authority
FI
Finland
Prior art keywords
diffraction
hyperspectral
image
digital camera
photograph
Prior art date
Application number
FI20186038A
Other languages
Finnish (fi)
Swedish (sv)
Inventor
Arto Klami
Mikko Toivonen
Chang Rajani
Original Assignee
Helsingin Yliopisto
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Helsingin Yliopisto filed Critical Helsingin Yliopisto
Priority to FI20186038A priority Critical patent/FI20186038A1/en
Priority to PCT/FI2019/050859 priority patent/WO2020115359A1/en
Publication of FI20186038A1 publication Critical patent/FI20186038A1/en

Classifications

    • G PHYSICS
        • G01 MEASURING; TESTING
            • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
                • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
                    • G01J3/02 Details
                        • G01J3/0202 Mechanical elements; Supports for optical elements
                        • G01J3/0272 Handheld
                        • G01J3/0291 Housings; Spectrometer accessories; Spatial arrangement of elements, e.g. folded path arrangements
                    • G01J3/12 Generating the spectrum; Monochromators
                        • G01J3/18 Generating the spectrum; Monochromators using diffraction elements, e.g. grating
                    • G01J3/28 Investigating the spectrum
                        • G01J3/2803 Investigating the spectrum using photoelectric array detector
                        • G01J3/2823 Imaging spectrometer
        • G02 OPTICS
            • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
                • G02B5/00 Optical elements other than lenses
                    • G02B5/18 Diffraction gratings
                        • G02B5/1842 Gratings for image generation
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N20/00 Machine learning

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Studio Devices (AREA)

Abstract

It is disclosed using a photographic digital camera (110) together with a diffraction grating (130) for dispersing incident light towards an objective (112) of the digital camera (110) to provide a diffraction image for a diffraction photograph (300) for use in multi-/hyperspectral imaging.

Description

APPARATUS FOR ENABLING A PHOTOGRAPHIC DIGITAL CAMERA TO BE USED FOR MULTI- AND/OR HYPERSPECTRAL IMAGING
FIELD
The present disclosure relates to providing image information that can be used for multi- and/or hyperspectral imaging.
BACKGROUND
Multispectral, as well as hyperspectral, imaging allows capturing the spectrum of electromagnetic radiation at a large number of consecutive wavelength bands for each pixel in an image, resulting in a 3D tensor which contains narrow-band slices of the spectrum. This allows capturing image information corresponding to multiple wavelength slices.
In traditional approaches, multi-/hyperspectral imaging has been performed using special devices called multi- and hyperspectral cameras. These devices generally operate by scanning the scene either spectrally (spectral scanning) or spatially (spatial scanning), where the image is scanned pixel by pixel or line by line. A significant drawback of these cameras is that capturing a single image in good lighting conditions might take tens of seconds or even minutes using a scanning method, since the camera needs to capture each spectral or spatial dimension separately. Further, the spatial resolution at which these cameras operate is typically very low, ranging from roughly 0.25 megapixels in portable models to typically 1-2 megapixels in more refined stationary models. Multi- and hyperspectral cameras are also highly expensive. While non-scanning multi-/hyperspectral cameras do also exist, such as an implementation where the optical sensing area of the multi-/hyperspectral camera is divided into multiple frequency-sensitive regions, these alternatives remain costly. In addition, they typically have low resolution and are designed for a single specific application.
OBJECTIVE
An objective is to eliminate or alleviate at least some of the disadvantages mentioned above. In particular, it is an objective to provide a low-cost alternative for multi- and/or hyperspectral imaging, which can be used to capture multi- and/or hyperspectral images substantially in real time. Moreover, it is an additional objective to provide a versatile alternative for multi- and/or hyperspectral imaging, which can be used for a plurality of applications, in contrast to the previously used application-specific multi- and hyperspectral cameras. Finally, it is an objective to provide an alternative that can be used not only for general multispectral imaging but for hyperspectral imaging in particular.
SUMMARY
In accordance with the present disclosure, it has been discovered that an ordinary digital camera (also "digital camera" or "photographic digital camera") can be used for multispectral or even for hyperspectral imaging with the method and apparatus as disclosed herein. As the solutions disclosed herein may be used both for general multispectral imaging and for more demanding hyperspectral imaging, the solutions below are disclosed in terms of multi-/hyperspectral imaging, corresponding to both multispectral and/or hyperspectral imaging.
The ordinary digital camera is a photographic digital camera, which may be used to capture still images as individual photographs and/or video by taking a rapid sequence of photographs. This allows capturing image information for multi-/hyperspectral imaging simultaneously in two spatial dimensions, i.e. producing a snapshot, which may be taken substantially in real time. Moreover, this allows capturing image information for multi-/hyperspectral imaging, where the image information can be video information. The image information may thereby have a frame rate corresponding to the frame rate of the photographic digital camera.
The ordinary digital camera may be, for example, a compact camera or a system camera such as a digital single-lens reflex camera (DSLR) or digital multi-lens reflex camera. The ordinary camera may also be an industrial camera or a surveillance camera such as a closed-circuit television camera. The ordinary camera may be an integrated camera. It may be a camera phone or a camera integrated into a computer or a tablet. It may also be a separate web camera. What is important is that the camera is not a multispectral or hyperspectral camera as such, so that it is not intrinsically capable of producing multispectral or hyperspectral images.
Typically, an ordinary digital camera is adapted to produce RGB images but it may, additionally or alternatively, be adapted to produce CMY and/or monochrome images. Correspondingly, the ordinary digital camera is adapted to use the RGB, CMY and/or monochromatic color model. The ordinary digital camera is adapted to capture image information simultaneously in two spatial dimensions. The ordinary digital camera has a sensor for capturing light for photographing, which sensor may be a CCD (charge-coupled device) image sensor or a CMOS (complementary metal oxide semiconductor) image sensor. The sensor has a sensor area which is adapted for capturing an image for a photograph.
In the following, apparatuses and methods are further disclosed that allow an ordinary digital camera to be used for multispectral or even hyperspectral imaging. The common concept allowing a photographic digital camera to be used for multi-/hyperspectral imaging is the production of a diffraction photograph by dispersing incident light with a diffraction grating before it arrives at the photographic digital camera, or at the objective of the photographic digital camera in particular. A diffraction photograph is therefore a photograph taken by the photographic digital camera, where incident light from a scene is dispersed before it arrives at the photographic digital camera for multi-/hyperspectral imaging, so that a plurality of different frequencies of the incident light corresponding to a single spatial point of the incident light are dispersed to different spatial points on the objective of the photographic digital camera and, correspondingly, to different pixels of the photographic digital camera. This effectively corresponds to mapping the spectral information of the incident light into spatial dislocations. The plurality of different frequencies may comprise more than a hundred or even more than several hundred contiguous spectral bands, each dispersed to a different spatial point or pixel in the diffraction photograph. The incident light may comprise visible light but it may also, additionally or alternatively, include parts outside the visible range. For many types of digital cameras, the inventive concept of the present disclosure may be utilized for incident light having a wavelength of, for example, 400-700 nm or 400-1000 nm. Consequently, the apparatus as disclosed herein may be adapted for multi-/hyperspectral imaging in these wavelength regimes.
In a first aspect, an apparatus for enabling a photographic digital camera to be used for multi-/hyperspectral imaging is disclosed. The apparatus may be configured as an accessory such as an attachment to the photographic digital camera. The apparatus comprises a frame configured for coupling with the photographic digital camera. The apparatus further comprises a diffraction grating coupled to the frame and configured for dispersing incident light, when the frame is coupled to the photographic digital camera, towards an objective of the photographic digital camera for providing a diffraction image for a diffraction photograph for use in multi-/hyperspectral imaging, e.g. by a computational algorithm, examples of which are described below. A key concept is the creation of a diffraction photograph, which allows spreading frequency information across multiple spatial points in one and/or two dimensions. Due to the diffraction grating, the apparatus is adapted to spatially spread the spectrum of the incident light in the diffraction photograph. This, in turn, allows the photographic digital camera to capture a plurality of different frequencies of the incident light corresponding to a single spatial point of the incident light at different spatial points on the objective of the photographic digital camera and, correspondingly, at different pixels of the photographic digital camera. The diffraction grating is adapted to be positioned in front of the objective of the photographic digital camera and is therefore separate from the optical system of the camera. This allows the apparatus to be removably attached to the camera. For example, it can be adapted to be retrofitted to the camera to enable the photographic digital camera to be used to capture a diffraction image for multi-/hyperspectral imaging.
It is noted that the apparatus can be used in passive operation for multi-/hyperspectral imaging. Consequently, the apparatus can be a passive apparatus or it may have both passive and active operating modes. Moreover, the apparatus can be used for snapshot multi-/hyperspectral imaging and/or multi-/hyperspectral video imaging, since it can be adapted to provide the diffraction image for a diffraction photograph to the photographic digital camera essentially instantaneously, so that the intrinsic capability of the digital camera for snapshot imaging and/or video imaging is maintained also when the digital camera is used together with the apparatus to produce one or more diffraction photographs for multi-/hyperspectral imaging.
In general, it is noted that the specific way a diffraction pattern is formed by a diffraction grating depends on various factors such as the features of the diffraction grating, the optical geometry of the apparatus and the features of the objective of the digital camera. However, specific optical designs can be obtained following general optical design principles known to a person skilled in the art. In accordance with the present disclosure, a number of specific implementations have been identified which may markedly improve the applicability of the apparatus for enabling a photographic digital camera to be used for multi-/hyperspectral imaging.
In an embodiment, the diffraction grating has a grating constant smaller than 2000 nanometers, corresponding to a grating having more than 500 lines per millimeter. While the optimal grating constant may depend on the photographic digital camera and its objective, it has been found that having a grating constant smaller than 2000 nanometers, e.g. 1000-1500 nanometers, may in several embodiments provide a significantly improved balance, so that the incident light is dispersed enough but not too much. A good balance also allows the sensor area of the digital camera to be optimally used.
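As an illustrative sketch (not part of the disclosure itself), the relation between line density, grating constant and the first-order diffraction angle follows from the standard grating equation sin(theta) = m*lambda/d; the 500 lines/mm and 700 nm values below are the figures cited above used as an example:

```python
import math

def grating_constant_nm(lines_per_mm: float) -> float:
    # Grating constant d is the line spacing: d = 1 mm / (lines per mm), in nm.
    return 1e6 / lines_per_mm

def first_order_angle_deg(wavelength_nm: float, d_nm: float) -> float:
    # Standard grating equation at normal incidence: sin(theta) = m * lambda / d, m = 1.
    return math.degrees(math.asin(wavelength_nm / d_nm))

d = grating_constant_nm(500)           # 500 lines/mm -> d = 2000 nm, the bound cited above
theta = first_order_angle_deg(700, d)  # roughly 20.5 degrees for 700 nm light
```

A smaller grating constant (more lines per millimeter) gives a larger first-order angle, i.e. stronger dispersion, which is the balance the embodiment above is tuning.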
In an embodiment, the diffraction grating is adapted to produce a diffraction pattern where a diffraction angle between first-order diffraction maxima (hereafter also "the first-order angle"), for at least part of a wavelength range of 400-1000 nm corresponding to a wavelength of the incident light, is equal to or smaller than a field-of-view angle of the digital camera. As the field-of-view angle of the digital camera corresponds to the field of view of the objective, or the lens, of the digital camera in the dimension of the diffraction pattern, and as the diffraction pattern has two spatially opposite first-order maxima in the dimension of the diffraction pattern, this allows both maxima for the part of the wavelength range to be fully captured in the diffraction photograph. When the diffraction grating is two-dimensional, the diffraction grating may be adapted to produce a diffraction pattern where a diffraction angle between first-order diffraction maxima in both dimensions, for at least part of a wavelength range of 400-1000 nm corresponding to a wavelength of the incident light, is equal to or smaller than the corresponding field-of-view angles of the digital camera. The first-order angle may also be substantially the same as the field-of-view angle, in which case the first-order maxima extend substantially to the opposite edges of the diffraction photograph, so that the sensor area of the digital camera may be fully covered in the dimension or dimensions of the diffraction pattern. In some embodiments, the first-order angle for a wavelength of the incident light of substantially 1000 nanometers is equal to or smaller than a field-of-view angle of the digital camera. In other embodiments, the first-order angle for a wavelength of the incident light of substantially 700 nanometers is equal to or smaller than a field-of-view angle of the digital camera, for example when the digital camera, such as a camera phone, is adapted to filter out wavelengths above 700 nanometers. In yet other embodiments, the wavelength may be even smaller, for example when the apparatus is adapted for a specific application.
In an embodiment, the apparatus comprises an optical element, such as a lens system comprising one or more lenses, adapted to scale the diffraction image. The optical element may be adapted to be positioned between the diffraction grating and the objective of the digital camera. The optical element may be adapted to scale the diffraction image up or down. This allows the size of the diffraction image to be adjusted so that the use of the sensor area of the digital camera may be improved. For example, the optical element may be adapted to scale the diffraction image so that the first-order maxima of the diffraction pattern extend substantially to the opposite edges of the diffraction photograph in one or more dimensions.
In an embodiment, the diffraction grating is configured to disperse incident light in two dimensions. This allows the frequency information of the incident light to be spread spatially even further than with a one-dimensional diffraction grating.
N In an embodiment, the frame is configured for = 30 the diffraction grating to be positioned within 1 cen- 2 timeter to the surface of the objective of the digital Ek camera, when the apparatus is in use.
In further em- > bodiments, the frame may even be configured for the 2 diffraction grating to be positioned within 3-5 milli- 3 35 meters to the surface of the objective.
S In an embodiment, the apparatus comprises a border configured to limit the amount of incidentlight on the diffraction grating to reduce overlap in the diffraction image.
This allows separating a part of the diffraction pattern from the non-diffracted in- cident light in the diffraction image so that the dif- fraction image has one or more spatial regions com- prising only the part of the diffraction pattern and not any non-diffracted incident light.
This may areat- ly simplify the task of determining multi- /hyperspectral characteristics from the diffraction image.
In some embodiments, the border may define a hole, which can be smaller than the diffraction grat- ing.
The border may be part of the frame, for example a hole in the frame.
In an embodiment, the frame is made of plastic or cardboard. This allows a simple implementation, which may even be formed as a flat pack, for example for transportation, and assembled for use when necessary.
In an embodiment, the apparatus comprises a frequency filter positioned to filter the incident light before it arrives at the objective of the digital camera. One or more frequency filters can be positioned before and/or after the diffraction grating. The filter allows optimizing the apparatus for a specific application. The filter may be interchangeable and it may be removably attached to the apparatus. The filter may be, for example, a band-pass filter or a band-stop filter.
N In a second aspect, a photographic digital = 30 camera is used together with a diffraction grating for 3 dispersing incident light towards an objective of the =E digital camera to provide a diffraction image for a * diffraction photograph for use in multi-/hyperspectral 2 imaging.
This may be performed using an apparatus in 3 35 accordance with the first aspect or any combination of S its embodiments.
A photograph may be taken with the digital camera to produce a diffraction photograph.
Similarly, a sequence of photographs, such as a video, may be taken to produce a sequence of diffraction pho- tographs, such as a diffraction video. From the dif- fraction photograph, one or more multi-/hyperspectral characteristics may be determined.
In a third aspect, a method comprises receiving image information corresponding to at least a part of a diffraction photograph obtained using a photographic digital camera with means for dispersing incident light for providing a diffraction image for the diffraction photograph. The means may be, for example, a diffraction grating or an apparatus in accordance with the first aspect or any combination of its embodiments. The method further comprises determining at least one multi-/hyperspectral characteristic from the image information using a computational algorithm. In certain embodiments, this can be done by calculation using the laws of physics, and optics in particular, as the diffraction grating has spatially spread different wavelength components in a deterministic manner. The image information can be the diffraction photograph as such, but it can also be a modified or reduced part of the diffraction photograph, for example.
In an embodiment, the computational algorithm comprises a machine learning algorithm. In accordance with the disclosure, it has been discovered and verified that machine learning can be adapted to determine multi-/hyperspectral characteristics from the image information, which can notably improve the speed and capabilities of the computational algorithm. The machine learning algorithm can be adapted to use dispersion of incident light in the diffraction photograph to create a multi-/hyperspectral image from the image information. The machine learning algorithm may be a deep neural network. It may comprise an input layer and an output layer, but also multiple layers in between. The machine learning algorithm may be trained on image pairs, where each pair comprises a first image corresponding to a multi-/hyperspectral photograph captured by a multi-/hyperspectral camera and a second image corresponding to a diffraction photograph captured by a photographic digital camera. The photographs are captured at substantially the same location, so that the first image can be used as ground truth.
In a further embodiment, the computational algorithm comprises a convolutional neural network (CNN). This has been found to provide an efficient algorithm for various different situations.
In an embodiment, the computational algorithm is adapted to utilize one or more dilations for determining multi-/hyperspectral characteristics from the image information. Dilation models, as such, are known to a person skilled in the art. In accordance with the present disclosure, it has been found, however, that a dilation model may be used in conjunction with the diffraction photograph to determine multi-/hyperspectral characteristics from the diffraction photograph, where frequencies are spatially spread in a constant manner. For example, the one or more dilations may correspond to spatial displacements for selecting pixels from the image information. The spatial displacements may correspond to pixel differences in a diffraction pattern in the image information. The spatial displacements may be determined with one or more limiting points corresponding to a maximum of the diffraction pattern. For example, a first dilation may correspond to a pixel difference between a zeroth-order point of the diffraction pattern in the image information and a first diffraction component, corresponding to a first-order maximum of the diffraction pattern for a first frequency. A second dilation may then correspond to a pixel difference between a zeroth-order point of the diffraction pattern in the image information and a second diffraction component, corresponding to a first-order maximum of the diffraction pattern for a second frequency, wherein the first frequency is larger than the second frequency. The largest frequency may correspond to the smallest dilation and/or the smallest frequency may correspond to the largest dilation. This allows a dilation model to be used for mapping the diffraction pattern to the image information for extracting one or more multi-/hyperspectral characteristics from the diffraction image. The one or more dilations may be used to define indices for pixels in the image information. When the computational algorithm comprises a CNN, the one or more dilations may be used for dilated convolutions in the CNN. The dilated convolutions may be one- or more-dimensional, for example two-dimensional.
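As an illustrative sketch of the dilation idea described above (the toy array, the offsets and the single horizontal dispersion axis are all hypothetical, not values from the disclosure), a dilation can be read as a fixed pixel offset from the zeroth-order point:

```python
import numpy as np

def sample_at_dilation(image: np.ndarray, zeroth: tuple, dilation: int) -> float:
    # Read the pixel at a fixed offset (the dilation) from the zeroth-order point
    # along a horizontal dispersion axis; as stated in the text, higher frequencies
    # (shorter wavelengths) diffract less and so sit at smaller offsets.
    row, col = zeroth
    return float(image[row, col + dilation])

# Toy diffraction photograph: zeroth order at (4, 10), two first-order maxima.
img = np.zeros((8, 32))
img[4, 10] = 1.0   # zeroth-order (undiffracted) point
img[4, 14] = 0.7   # first-order maximum of a higher-frequency band (smaller dilation)
img[4, 18] = 0.5   # first-order maximum of a lower-frequency band (larger dilation)

high_freq_band = sample_at_dilation(img, (4, 10), dilation=4)  # 0.7
low_freq_band = sample_at_dilation(img, (4, 10), dilation=8)   # 0.5
```

In a dilated convolution, the same offsets appear as the spacing between kernel taps, which is why per-frequency dilations map naturally onto a CNN as described above.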
In a further embodiment, the computational algorithm comprises a machine learning algorithm associated with a first resolution used for training the machine learning algorithm. The computational algorithm further comprises determining a resolution corresponding to the image information and scaling the one or more dilations if the resolution corresponding to the image information is different from the first resolution used for training of the machine learning algorithm. In this way, dilations can be used to adapt the computational algorithm to process also image information having a different resolution than that corresponding to the underlying machine learning algorithm. This allows the computational algorithm to be used flexibly with different types of digital cameras. For example, if the first resolution is smaller than the resolution corresponding to the image information, e.g. the resolution of the diffraction photograph, the one or more dilations can be scaled up. The one or more dilations may, for example, be scaled by a constant factor corresponding to the resolution corresponding to the image information divided by the first resolution.
In an embodiment, the method comprises using the at least one multi-/hyperspectral characteristic for any combination of the following: to produce a multi-/hyperspectral image from the image information, to segment one or more regions from the image information or to characterize one or more material properties from the image information.
In a fourth aspect, a method for generating a computational algorithm for multi-/hyperspectral imaging using a photographic digital camera is disclosed. The method comprises using a photographic digital camera coupled to an apparatus to obtain a diffraction photograph corresponding to a scene from a first location. The apparatus may be an apparatus in accordance with the first aspect or any combination of its embodiments. The method also comprises using a multi-/hyperspectral camera to obtain a multi-/hyperspectral photograph corresponding to the scene from a second location, wherein the second location is substantially the same as the first location. The diffraction photograph and the multi-/hyperspectral photograph can be obtained in any order, and photographs can be obtained from a single scene or from different scenes, optionally with varying conditions such as lighting conditions. The method further comprises using a computational algorithm to produce a multi-/hyperspectral image by inverting diffraction from the diffraction photograph, and determining a difference value for a measure of difference between the multi-/hyperspectral image, as obtained using the photographic digital camera, and the multi-/hyperspectral photograph. Finally, the method comprises modifying one or more parameters of the computational algorithm to reduce the difference value. The computational algorithm may comprise any features described in connection with the third aspect or any of its embodiments.
As disclosed in accordance with any of the aspects or embodiments above, the multi-/hyperspectral information may, at least, be provided at any wave- length range within 400-1000 nanometers.
The upper limit may be reduced, for example, to 700 nanometers when the photographic digital camera comprises a fre- guency filter blocking wavelengths above 700 nanome- ters, but even in these cases the operation range may be extended if the filter is removed from the camera.
The diffraction photograph may be used to produce a multi-/hyperspectral image by inverting dif- fraction from the diffraction photograph.
However, multi-/hyperspectral information contained in the dif- fraction photograph may also be used without first producing the complete multi-/hyperspectral image.
One or more multi-/hyperspectral characteristics may be determined directly from the diffraction photograph and then used, for example to segment one or more spa- tial regions in the scene captured in the diffraction photograph.
Alternatively or additionally, they may be 2 used in material characterization i.e. to characterize N one or more material properties from the scene cap- = 30 tured in the diffraction photograph.
The diffraction 3 photograph may thus be used for long-range imaging, =E even satellite surveying, and/or quality control.
It * may also be used for medical imaging to diagnose bio- 2 logical subjects and/or for recognizing counterfeit 3 35 material.
Other possible uses include food quality as- S surance, gas and/or oil exploration and applications in agriculture.
It is noted that as the provision of the diffraction photograph allows multi-/hyperspectral imaging without forming the full multi-/hyperspectral image, the apparatus and methods as described herein may allow marked improvement in speed and efficiency for many specific applications, such as those involving segmentation and/or characterization.
It is to be understood that the aspects and embodiments described above may be used in any combination with each other. Several of the aspects and embodiments may be combined together to form a further embodiment.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding and constitute a part of this specification, illustrate embodiments and together with the description help to explain the principles of the invention. In the drawings:

Figs. 1a and 1b illustrate an apparatus according to an embodiment in a perspective view and an exploded perspective view, respectively,

Fig. 2 illustrates a method according to an embodiment,

Figs. 3a and 3b illustrate an example procedure for multi-/hyperspectral imaging, and

Fig. 4 illustrates another method according to an embodiment.
Like references are used to designate equivalent or at least functionally equivalent parts in the accompanying drawings.
DETAILED DESCRIPTION

The detailed description provided below in connection with the appended drawings is intended as a description of the embodiments and is not intended to represent the only forms in which the embodiment may be constructed or utilized. However, the same or equivalent functions and structures may be accomplished by different embodiments.
Figures 1a and 1b show an example of an apparatus 100 for enabling a photographic digital camera 110 to be used for multi-/hyperspectral imaging. The apparatus 100 can be an accessory, which may be configured for one or more different types of digital cameras 110. The apparatus 100 comprises a frame 120, which can be made of, for example, metal, plastic or cardboard. The frame 120 is configured so that it can be coupled with a photographic digital camera 110, for example so that it may be removably attached to the digital camera 110. Specifically, the frame 120 may be adapted to be coupled with an objective 112 of the digital camera. The frame 120 may comprise coupling means such as a snap-on clip, a screw assembly, an adhesive surface or an adapter for coupling the apparatus 100 with the photographic digital camera 110 or the objective 112 in particular. In some embodiments, the frame 120 may be adapted to enclose the digital camera 110 so that the digital camera 110 is to be positioned inside the frame 120 when the apparatus 100 is in use. In other embodiments, the frame 120 may be smaller than the digital camera 110, allowing the apparatus 100 to function as a compact accessory.

The apparatus 100 further comprises a diffraction grating 130, which is configured to disperse incident light towards the objective 112 of the digital camera 110, when the apparatus is in use, to provide a diffraction image on the objective 112, so that the diffraction image can be captured by the digital camera 110 to produce a diffraction photograph adapted for use in multi-/hyperspectral imaging. The diffraction grating 130 is coupled to the frame 120, for example by removable or fixed attachment. The diffraction grating 130 may be a one-dimensional or two-dimensional diffraction grating. In particular, using a two-dimensional grating allows producing a two-dimensional diffraction pattern, which may, at least in some situations, make it easier to determine multi-/hyperspectral characteristics from the diffraction image. The diffraction grating 130 may be a transmissive or reflective diffraction grating. A transmissive diffraction grating 130 may be configured to be positioned substantially perpendicularly with respect to the optical axis of the objective 112 of the digital camera 110, whereas a reflective diffraction grating 130 may, for example, be configured to be positioned substantially parallel with respect to the optical axis of the objective 112. This allows controlling the angle between the scene to be photographed and the digital camera 110. For this purpose but also for other purposes, the apparatus 100 may comprise one or more additional optical elements for directing the incident light before it arrives at the objective 112 of the digital camera 110. However, in many embodiments these are not necessary and the apparatus 100 may be adapted so that the diffraction grating alone provides sufficient optical manipulation of the incident light to enable multi-/hyperspectral imaging with the digital camera 110.
The diffraction grating 130 is adapted to spread a plurality of wavelength bands of the incident light to produce a diffraction image, which can be captured in a diffraction photograph for multi-/hyperspectral imaging. The diffraction grating 130 may have a grating constant larger than e.g. 350 or 500 nanometers. It has been noted that in some embodiments the grating constant needs to be smaller than 2000 nanometers to appropriately spread the spectrum of incident light for multi-/hyperspectral imaging.
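As context for these bounds, the angular spread produced by a grating follows the standard grating equation (sin θ = λ/Λ at normal incidence, first order). This is background physics rather than part of the application, and the numbers below are purely illustrative:

```python
import math

def first_order_angle_deg(wavelength_nm, grating_constant_nm):
    """First-order diffraction angle from the grating equation sin(theta) = lambda/d.

    Returns None when no first-order maximum exists (lambda > d).
    """
    s = wavelength_nm / grating_constant_nm
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

# A 1000 nm grating constant spreads 400-700 nm light over a wide angular range...
blue = first_order_angle_deg(400, 1000)   # ~23.6 degrees
red = first_order_angle_deg(700, 1000)    # ~44.4 degrees
# ...whereas a 350 nm constant cannot produce a first-order maximum at 700 nm at
# all, consistent with the text's suggestion of constants above ~350-500 nm.
none_case = first_order_angle_deg(700, 350)
```

Larger grating constants compress the spread, which is one way to read the 2000-nanometer upper bound noted above.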
The diffraction grating 130 may be rectangular but itmay also be of another shape.
The diffraction grating 130 may be adapted to provide a diffraction image which, when captured in a diffraction photograph, co- vers at least 50-80 percent of the sensor area of the digital camera 110. In particular, it may be adapted to provide a diffraction image which comprises a dif- fraction pattern which, when captured in a diffraction photograph, covers at least 50-80 percent of the width and/or height of the sensor area of the digital camera 110, when the extent of the diffraction in one dimen- sion corresponds to the distance between the two op- posing first order diffraction maxima for a threshold wavelength.
The threshold wavelength may be 700-1000 nanometers so that the sensor area of the digital camera 110 can be optimally covered, but for some applications the threshold wavelength may be smaller, allowing the apparatus 100 to be adapted for multi-/hyperspectral imaging focused on wavelengths below 700 nanometers.
To allow improved use of the sensor area of the digital camera 110, the apparatus 100 may be adapted for the diffraction pattern to cover, as defined above, at least 90 percent of the width and/or height of the sensor area of the digital camera 110 or substantially the whole width and/or height of the sensor area.
The diffraction grating may be substantially the same size as the surface 114 of the objective 112 of the digital camera 110 or smaller, the surface 114 of the objective 112 being the outer surface of the objective 112 adapted to receive incident light into the digital camera 110 for photographing, for example the surface of a lens.
The frame 120 is configured for positioning the diffraction grating 130 in front of an objective 112 of the digital camera 110, when the apparatus 100 is in use.
There, the diffraction grating 130 may be substantially on the surface 114 of the objective 112. In some embodiments, the diffraction grating 130 may be positioned further away from the surface 114 of the objective 112, for example when additional optical elements such as lenses and/or filters are used between the diffraction grating 130 and the digital camera
110. Coupling means, which may be part of the frame 120, may be adapted to align the diffraction grating 130 with the objective 112 of the digital camera 110 so that the photographic digital camera 110 can be used for multi-/hyperspectral imaging. Optionally, the apparatus 100 comprises one or more borders 140, which are configured to limit the amount of incident light on the diffraction grating
130. The one or more borders may be substantially opaque. The one or more borders 140 may be adapted to produce one or more spatial regions in the diffraction image, and consequently in the diffraction photograph, where the intensity of non-dispersed light is attenuated or where non-dispersed light is absent altogether. The one or more borders 140 may define a hole 142, for example in the frame 120, for admitting incident light at the diffraction grating 130. In an embodiment, the hole 142 is a square hole, but in other embodiments it may be, for example, rectangular, round or oval. The hole 142, or the one or more borders 140, may be adapted to face the diffraction grating 130, for example coaxially. The hole 142, or the one or more borders 140, may be adapted to be positioned coaxially with the optical axis of the objective 112 of the digital camera 110, when the apparatus 100 is in use. The hole 142 may have width and/or height smaller than that of the diffraction grating 130. For example, the hole 142 may have width and/or height smaller than 1-2 centimeters. The hole 142 may have width and/or height of at least 3 millimeters to prevent the diffraction from the hole 142 itself from adversely affecting the diffraction image, e.g. by blurring the diffraction image. Consequently, the hole 142 may, for example, be at least 3 millimeters x 3 millimeters in size in both dimensions. It has been noted that the hole 142 may be relatively large, for example when the digital camera is a system camera such as a DSLR, so that a hole 142 of size 20 millimeters times 20 millimeters may be used at least for some embodiments.
The apparatus 100 may comprise a chamber 122 for receiving incident light and directing it to the diffraction grating 130. The chamber 122 may be part of the frame 120 or it may be separately coupled to the frame 120, for example by releasable attachment.
The chamber 122 may be cylindrical but other shapes are possible as well. The chamber 122 may comprise one or more holes 142 as described above and the one or more holes 142 may act as the sole source of incident light on the diffraction grating 130. In one embodiment, the chamber 122 comprises exactly one hole 142, which can be a rectangular hole, for admitting incident light at the diffraction grating 130, the hole being adapted to be substantially coaxially aligned with both the diffraction grating 130 and the optical axis of the objective 112 of the digital camera 110. The chamber 122 may have a length of, for example, 1-50 centimeters. The chamber 122 may be adapted to the digital camera 110, in particular to the type of the digital camera 110. For example, when the digital camera 110 is a camera phone, the chamber 122 may have a length of 2-10 centimeters, whereas when the digital camera 110 is a system camera, such as a DSLR, the chamber 122 may have a length of 5-30 centimeters. The length corresponds to the axial distance between the diffraction grating 130 and the point of entry for the incident light into the chamber 122, e.g. the border 140 or the hole 142.
The digital camera 110 may be adapted to capture photographs when a shutter release 116 is activated. The shutter release 116 may be, for example, a
Figure 2 shows an example of a method for providing and using multi-/hyperspectral image infor- mation. The method comprises several parts which may be performed independently from each other. To produce a diffraction photograph for mul- ti-/hyperspectral imaging, an ordinary digital camera (a photographic digital camera 110) can be used. The photographic digital camera is coupled 210 with a dif- fraction grating, which may be the diffraction grating 130 as described above. In turn, the diffraction grat- ing may be part of an apparatus 100 as described above. The diffraction grating may be attached to the photographic digital camera for example by removable attachment. The attachment, such as a removable at- tachment, may be done, for example, by coupling means, such as a snap-on clip, a screw assembly, an adhesive surface or an adapter. The coupling means may be adapted to align the diffraction grating with an ob- jective of the digital camera so that the photographic digital camera can be used for multi-/hyperspectral = imaging. After the photographic digital camera is cou- N pled with the diffraction grating, the digital camera = 30 can be used to capture 220a scene to be photographed 2 through the diffraction grating to produce a diffrac- Ek tion photograph of the scene. This allows the scene to * be captured as a snapshot image. It is specifically 2 noted that the diffraction photograph for multi- 3 35 /hyperspectral imaging may be captured with the in- S trinsic image capturing mechanism of the digital cam- era, e.g. with a press of a button. This allows sub-
stantially instantaneous capture of two-dimensional diffraction photographs for multi-/hyperspectral imag- ing.
For this purpose, the digital camera may be adapted to be operated with a shutter release, for example a physical or a virtual button, to capture a photograph.
The scene may comprise one or more targets for which multi-/hyperspectral information is to be determined.
For example, the scene may comprise one or more objects for which one or more material parameters are to be determined from multi-/hyperspectral information.
Alternatively or in addition, the scene may comprise one or more surfaces for which one or more positions and/or one or more segments are to be determined from multi-/hyperspectral information.
It is further noted that to determine the at least one multi-/hyperspectral characteristic, no information other than information derivable directly from the diffraction photograph is required, not even information regarding the intrinsic configuration of the diffraction grating and/or its configuration with respect to the digital camera.
Instead, the at least one multi- /hyperspectral characteristic can be determined solely based on the diffraction photograph.
For example, resolution of the image information corresponding to the diffraction photograph may be determined directly from the image information itself.
The image information may thus include only the pixels corresponding to the at least part of the diffraction photograph.
However, it is naturally possible also to provide supplemental information to the computational algorithm for determining the at least one multi-/hyperspectral characteristic.
Such supplemental information may include, for example, information about lighting conditions and/or any information obtained by the digital camera
pertaining to the diffraction photograph.
Multi-/hyperspectral image information such as one or more multi-/hyperspectral characteristics can be produced from a diffraction photograph.
This may be performed using a computer-implemented method.
For this purpose, a computing device comprising a processor may be used, where the computing device has at least one memory comprising computer program code.
The at least one memory and the computer program code can be configured to, with the at least one processor, cause the system to determine at least one multi-/hyperspectral characteristic from image information received by the computing device.
The computing device may be, for example, a computer server, an accessory to the digital camera or a mobile computing device such as a mobile phone.
The process of producing multi-/hyperspectral information from the diffraction photograph may also be performed by distributed computing with multiple computing devices.
First, image information corresponding to at least a part of a diffraction photograph is received 230, for example by any of the computing devices as described above.
The image information is obtained using a photographic digital camera with means for dispersing incident light for providing a diffraction image for the diffraction photograph.
The means may be a diffraction grating or an apparatus 100 as described above.
The image information corresponds to a two-dimensional image having a diffraction pattern, for example comprising one or more diffraction maxima and/or minima. Then, at least one multi-/hyperspectral characteristic is determined 240 from the image information using a computational algorithm.

There exist various ways to determine multi-/hyperspectral characteristics from the image information, when the image information corresponds to at least a part of a diffraction photograph.
In the context of this disclosure, it has been found that one particularly advantageous way to perform this, in conjunction with many embodiments, is to use a machine learning algorithm such as a CNN as the computational algorithm.
These algorithms, as such, are known to a person skilled in the art so that they are readily available for use.
As a specific example, it has been found that one or more dilations such as dilated convolutions may be used to determine 240 the at least one multi-/hyperspectral characteristic.
Dilated convolutions, as such, are known by a person skilled in the art and a dilated convolution may be defined as a convolution where the elements of the kernel of the convolution are dilated by a dilation factor d, where only every (d+1):th element is taken as input into the convolution operation, starting from the middle of the kernel.
For example, a dilation factor of d=0 would correspond to an unmodified convolution, and a dilation factor of d=1 would take every second element as input, whereas for d=2, the convolution would take every third element as input.
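The element-selection convention above can be sketched as follows. Note that this text's d is offset by one from the `dilation` argument of common deep-learning libraries such as PyTorch, where a dilation of 1 denotes an unmodified convolution; the helper name is illustrative:

```python
def dilated_taps(kernel_size, d):
    """Offsets (relative to the kernel centre) sampled by a convolution dilated
    with factor d, following the text's convention: only every (d+1)-th element
    is taken, starting from the middle, so d=0 is an ordinary convolution."""
    half = kernel_size // 2
    return [i * (d + 1) for i in range(-half, half + 1)]

# A 3-tap kernel with d=0 reads adjacent pixels; with a large d it reads pixels
# far to each side, reaching a distant diffraction component while still using
# only three weights.
near = dilated_taps(3, 0)     # [-1, 0, 1]
second = dilated_taps(3, 1)   # [-2, 0, 2], i.e. every second element
far = dilated_taps(3, 100)    # [-101, 0, 101]
```

This is what makes large dilation factors attractive here: the spatial reach of the filter grows with d while the parameter count stays fixed.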
To determine multi-/hyperspectral characteristics, with or without convolutions, even large dilations may be used, so that the corresponding dilation factors may be at least 10, 50 or even larger.
As an example, dilation factors of 70-130 may be used, where smaller dilations correspond to smaller wavelengths in the diffraction photograph and larger dilations correspond to larger wavelengths in the diffraction photograph.
Additionally, the dilations may depend on the size or resolution of an image.
If the image is scaled down or up, also the dilations may be scaled down or up.
The scaling factor for the dilations may be equal to the scaling factor for the image.
Using a dilation model, optionally with large dilations, has been found to significantly improve the image recognition performance for the diffraction photograph.

When at least one multi-/hyperspectral characteristic has been determined from the image information, the characteristic may be used 250 in various ways, depending on the specific application.
This may also be performed using a computer-implemented method.
For this purpose, the same computing device or a similar computing device as described above may be used.
For example, the at least one multi-/hyperspectral characteristic may be used to produce a multi-/hyperspectral image.
A multi-/hyperspectral image may be produced by inverting diffraction from a diffraction photograph or from image information corresponding to at least a part of a diffraction photograph.
The results may even be arranged to be displayed on a display of the digital camera 110, e.g. when the digital camera 110 is a mobile phone.
This can also be done when the multi-/hyperspectral characteristic is used to segment one or more regions from the image information and/or to characterize one or more material properties from the image information.
Alternatively or in addition, the results may be used in a segmentation and/or characterization device.
Figures 3a and 3b illustrate examples for determining one or more multi-/hyperspectral characteristics 310, 320 from a diffraction photograph 300 (alternatively, this may be image information corresponding to at least a part of a diffraction photograph). As one example, an at least three-dimensional multi-/hyperspectral tensor 310 is produced, where a first and a second dimension correspond to the two spatial dimensions in the plane of the diffraction photograph 300 and a third dimension is a spectral dimension corresponding to wavelength of incident light.
This illustrates also the multi-/hyperspectral characteristics 310, 320, which may be considered as the image values, i.e. amount of light captured in the diffraction photograph, in the spectral dimension.
For multi- and/or hyperspectral imaging, there are naturally more than three of such image values in the spectral dimension for each spatial point, e.g. more than ten or even more than one hundred.
For example, having six or more of such image values in the spectral dimension may be used in some applications for multispectral imaging.
However, for hyperspectral imaging the number of such image values may be much larger, e.g. 50-300. A plurality of consecutive wavelength ranges (λ1, λ2, ..., λn), each of which may be very small, then corresponds to consecutive values in the spectral dimension of the multi-/hyperspectral tensor.
One wavelength range (e.g. λ1) corresponds to one slice 320 in the multi-/hyperspectral tensor.
The wavelength ranges (λ1, λ2, ..., λn) may be equally large.
The production of the multi-/hyperspectral tensor 310, or one or more slices 320 thereof, then corresponds to determining the at least one multi-/hyperspectral characteristic.
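A minimal stand-in for such a tensor, with purely illustrative sizes (a hyperspectral cube would typically use 50-300 bands rather than eight):

```python
# Nested-list model of the multi-/hyperspectral tensor 310: shape
# (height, width, bands), where the third axis holds the spectral image values
# for the consecutive wavelength ranges lambda_1 ... lambda_n.
H, W, BANDS = 4, 4, 8   # eight bands -> multispectral ("six or more" values)

tensor = [[[0.0] * BANDS for _ in range(W)] for _ in range(H)]

def spectral_slice(t, band):
    """One slice 320 of the tensor: the monochrome image for one wavelength range."""
    return [[pixel[band] for pixel in row] for row in t]

def spectrum_at(t, y, x):
    """The spectrum stored for a single spatial point."""
    return t[y][x]

one_slice = spectral_slice(tensor, 3)
one_spectrum = spectrum_at(tensor, 1, 2)
```

In a real pipeline the same layout would be held in an array library, but the indexing convention (two spatial axes plus one spectral axis) is the point here.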
In figure 3b, one example for producing information for a multi-/hyperspectral tensor 310 is illustrated.
In this example, dilated convolutions (D1, D2, ..., DN) are used.
The dilated convolutions are adapted to select correct values from the diffraction photograph 300 and deliver them forward in the computational algorithm such as a machine learning algorithm comprising a neural network (CNN). In the leftmost dashed region, it is illustrated how a dilated convolution may be used to select image values of the pixels in the diffraction photograph 300 (or at least a part of it). The selected image values are multiplied by weight factors corresponding to the dilated convolutions and they are used to produce a new sequence of convoluted image values, which may differ in length from the number of selected image values.
The weight factors of the dilated convolutions are parameters, which may be determined, for example, by a machine learning algorithm.
In this case, they can be learned by calculating differences between a multi-/hyperspectral photograph taken by a multi-/hyperspectral camera and a diffraction photograph
In the rightmost dashed region, it is illustrated how the dilated convolutions may be used in the computational algorithm.
Several dilated convo- lutions are used and their number may be determined based on the resolution of the diffraction photograph.
In an embodiment, nine or more dilated convolutions are used allowing at least one multi-/hyperspectral characteristic to be determined.
In principle, there is no upper limit for the number of dilated convolu- tions but the largest pixel number corresponding to a single spatial dimension may be used as a practical upper limit.
The convoluted image values are concate- nated after which they may be fed into one or more re- sidual block modules or elements performing corre- sponding operations.
The operations to be performed include batch normalization and summation, for exampleas illustrated in the lower right part of Fig. 3a.
Figure 4 illustrates how a computational al- gorithm such as machine learning algorithm for multi- /hyperspectral imaging using a photographic digital camera may be generated.
This involves using a photo- graphic digital camera 110 coupled to an apparatus 100 as described above to obtain a diffraction photograph corresponding to a scene 410, e.g. as described above, from a first location.
The scene 410 may be set on a dark and/or single-colored background 410 to improve = contrast.
Alternatively or additionally, one or more N frames 450 may be used between the first location and = 30 the scene 410 to block stray light.
Before and/or af- 2 ter this, a multi-/hyperspectral camera 420 is used to =E obtain a multi-/hyperspectral photograph corresponding * to the scene 410 from a second location, the second 2 location being substantially the same as the first lo- 3 35 cation.
For this purpose the digital camera 110 and/or S the multi-/hyperspectral camera 420 may be coupled to a positioning device 430 such as a slide for aligningthem at substantially the same location. The digital camera 110 may be used to capture one or more diffrac- tion photographs of the scene 410. Also, the multi- /hyperspectral camera 420 may be used to capture one or more multi-/hyperspectral photographs of the scene
410. A computational algorithm is used to produce a multi-/hyperspectral image by inverting diffraction from the diffraction image. This may be done, for ex- ample, as described above. Then a difference value is determined for a measure of difference between the multi-/hyperspectral image and the multi- /hyperspectral photograph. The computational algorithm can then be optimized by minimizing this difference value, which involves modifying one or more parameters of the computational algorithm to reduce the differ- ence value. This may be performed repeatedly, for ex- ample until a threshold value for difference is reached. In the following, further detailed examples are provided. The computational algorithm may be a ma- chine learning algorithm. The computational algorithm may be based on a CNN used for complex image-to-image visual tasks, such as single-image super resolution (SISR). Multiple concurrent convolutions may be used, optionally with very large dilations to allow main- taining a large spatial range for convolution filters while using few parameters. It has been found that 2 such filters accurately model the underlying phenomena N of diffraction, and present a way of automatically de- = 30 tecting the filter dilatations based on empirical im- 2 age data. The computational algorithm can be adapted =E to use diffraction images of much higher spatial reso- * lution than ones seen during training, to output imag- 2 es of markedly improved quality while keeping training 3 35 feasible. S The functioning of the diffraction grating can be visualized by a narrow-band laser being pro-
jected at an object so that the resulting image is taken through a two-dimensional diffraction grating as incident light.
The laser is projected at a single point in the image, but due to the diffraction grating the first order diffraction pattern is diffracted to eight other positions as well, one in each major di- rection.
The specific location of these additional po- sitions depends on the wavelength of the light, but since in this example case only a narrow band of the spectrum is emitted from the laser, the diffraction pattern components are located according to the wave length of the laser.
In a first layer, multiple dilat- ed 2D convolutions may be used, for example ranging from a dilation of 70 to 130. For each dilation, one or more filters, e.g. 5 filters of size k, may be used, where k = 3, for example.
The resulting feature maps can then be concatenated in the channel dimension.
This is equivalent to having a single layer with multiple sizes of dilation that each produces a subset of the output channels.
The example with k = 3 captures the first order diffraction components (the zeroth ones being in the middle), but not the subsequent ones, as they simply repeat the first components with lower intensity.
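Which pixels such a 3x3 dilated kernel actually reads can be sketched as follows; here `dilation` denotes the tap spacing in pixels (under the text's d convention this spacing equals d+1), and the function name is illustrative:

```python
def sampled_positions(cy, cx, dilation):
    """Pixel coordinates read by a 3x3 convolution centred at (cy, cx): the
    centre (the zeroth-order spot) plus eight positions, one in each major
    direction, where the first-order diffraction components fall."""
    offs = (-dilation, 0, dilation)
    return [(cy + dy, cx + dx) for dy in offs for dx in offs]

# With a spacing of 100 pixels the kernel reaches diffraction components
# 100 pixels away horizontally, vertically and diagonally from the centre.
pos = sampled_positions(200, 200, 100)
```

This matches the laser visualization above: the nine taps land on the zeroth-order point and the eight surrounding first-order components, while the higher orders fall outside the kernel.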
Since the diffraction pattern is around the actual point of interest in the image, a slightly larger image may be used than the one generated in order to capture the diffraction pattern for all parts of the image.
In such a case, the input to the computational algorithm can be slightly larger than the output of the computational algorithm.
After the first dilated convolutions, the feature maps can be cropped to the real output size, while the width and height of the image may be kept constant using zero-padding.
This has the advantage of not having to lose information in the process of making the feature maps smaller in the computational algorithm.
The resulting channels of feature maps, e.g. 300 channels,
are then forwarded to a residual network that consists of four blocks of 2D convolutional layers and batch normalization, along with an additive residual connection.
The residual blocks may be modelled as in single-image super-resolution, e.g. as disclosed in Ledig, Christian, et al. "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network." CVPR.
Vol. 2. No. 3. 2017. There, the input of the computational algorithm is a low resolution image and the output is a spatially higher resolution image, but the input and output may here have substantially the same spatial resolution.
However, the output may have a larger spectral resolution.
The residual blocks can learn to correct diffraction artefacts that otherwise leak into the output image, improving visual quality.
To optimize for the quality of the reconstructed multi-/hyperspectral images, similarity metrics for RGB images, well-known in the art, may be mixed with a similarity metric for multi-/hyperspectral data.
The quality of the multi-/hyperspectral image may be evaluated with respect to an image produced by an actual multi-/hyperspectral camera ("ground truth"). One or more of the following targets may be used.
For the first target, each depth slice of the output should match the ground truth visually, as a monochrome image.
For the second target, the resulting spectrum in each pixel of the output should match the ground-truth one as closely as possible.
For the third target, the resulting spectrum should be as smooth and non-noisy as the ones taken with the multi-/hyperspectral camera.
For applications of multi-/hyperspectral images relying on the distinct spectral signatures of different materials, the second target may be emphasized.
For example, the Canberra distance measure may be used between the spectra of each pixel to make sure they match as closely as possible, e.g. as disclosed in Deborah, Hilda, Noël Richard, and Jon Yngve Hardeberg. "A comprehensive evaluation of spectral distance functions and metrics for hyperspectral image processing." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8.6 (2015): 3224-3234. To optimize with respect to the first target, the structural similarity measure (SSIM) may be used, for example as disclosed in Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE Transactions on Image Processing 13.4 (2004): 600-612. In some embodiments, SSIM may work extremely well visually in terms of quality of detail, although it may fail to reconstruct the appropriate colors if used
SSIM may be applied separately for each depth slice in the input.
Finally, to smooth and reduce the noisiness of the spectra, pixels may be regularized by computing the absolute error of subsequent spectral components and taking the mean.
An example of a loss function can be obtained by subtracting the SSIM from the Canberra distance and adding the regularized mean, wherein the regularized mean may be further multiplied by a scaling factor, e.g. 0.02. The computational algorithm may be trained in a supervised, straightforward manner, for example with Adamax as the optimization algorithm.
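The loss described above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the exact one used: the mean SSIM term is assumed to be supplied by an external routine (e.g. `skimage.metrics.structural_similarity` applied per depth slice), and the function names are hypothetical; only the structure (Canberra distance minus SSIM plus a scaled smoothness regularizer, scaling 0.02) comes from the text.

```python
import numpy as np

def canberra_distance(pred, target, eps=1e-8):
    """Mean Canberra distance between predicted and ground-truth
    spectra; pred/target have shape (H, W, bands)."""
    num = np.abs(pred - target)
    den = np.abs(pred) + np.abs(target) + eps  # eps guards against 0/0
    return float(np.mean(np.sum(num / den, axis=-1)))

def spectral_smoothness(pred):
    """Mean absolute error of subsequent spectral components
    (the regularized mean of the text)."""
    return float(np.mean(np.abs(np.diff(pred, axis=-1))))

def example_loss(pred, target, mean_ssim, reg_scale=0.02):
    """Canberra distance minus SSIM plus the scaled regularizer."""
    return (canberra_distance(pred, target)
            - mean_ssim
            + reg_scale * spectral_smoothness(pred))
```

A perfect reconstruction (zero Canberra distance, zero roughness, SSIM of 1) would give a loss of -1, the minimum of this formulation.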
A dilation range can be specified based on the range of the diffraction.
The dilation range can be used in convolutions, for example those of a CNN. The range of the diffraction may depend on the imaging setup, such as on the camera, lens, chosen resolution, and on the specific diffraction grating used. The dilation range may be adapted to be wide enough to cover the extent of the diffraction pattern, e.g. at least up to a first order maximum. The dilation range may also be limited to prevent introducing excess weights into the CNN.
One or more dilations may be determined visually from a diffraction photograph of a broadband but spatially narrow light source. A suitable lamp, such as an incandescent filament lamp, may be placed behind a small opening to reveal the extent of diffraction.
The first dilation may then be determined as the pixel difference between the light source, i.e. the zeroth order point of the diffraction pattern, and the first diffraction component of the first order diffraction pattern, corresponding to the smallest interesting wavelength, e.g. of the range 400-1000 nm. In an RGB photograph this diffraction component would correspond to the blue color. The last dilation may be determined as the pixel difference between the light source, i.e. the zeroth order point of the diffraction pattern, and the last diffraction component of the first order diffraction pattern, corresponding to the largest interesting wavelength, e.g. of the range 400-1000 nm, or substantially 700 nm.
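The pixel-difference reading above can be sketched on a synthetic 1D cut through such a point-source photograph. The peak positions (zeroth order at 10, a "blue" first-order component at 42, a "red" one at 88) are invented for illustration, not measured values.

```python
import numpy as np

# Synthetic 1D cut through a diffraction photograph of a point-like
# broadband source behind a small opening (positions hypothetical).
profile = np.zeros(128)
profile[10], profile[42], profile[88] = 1.0, 0.5, 0.4

zeroth = int(np.argmax(profile))  # brightest peak: zeroth order point
peaks = [i for i in range(1, 127)
         if profile[i] > 0
         and profile[i] >= profile[i - 1]
         and profile[i] >= profile[i + 1]
         and i != zeroth]
first_dilation = min(peaks) - zeroth  # to the shortest-wavelength component
last_dilation = max(peaks) - zeroth   # to the longest-wavelength component
```

A real photograph would need more robust peak detection, but the dilations are exactly these pixel differences.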
The dilation range may also be determined from the power cepstrum C(I) of the diffraction photograph.
A brief description and history of the use of the cepstrum can be found in Oppenheim, Alan V., and Ronald W. Schafer. "From frequency to quefrency: A history of the cepstrum." IEEE Signal Processing Magazine 21.5 (2004): 95-106. The power cepstrum C(I), for example of a 2D image, can be obtained by a Fast Fourier Transform (FFT): First, an FFT is taken of an input image to transform each color channel of the input image to the Fourier frequency domain; then a logarithm of the magnitude of the previous is taken to transform products into log sums, where periodic frequency domain components are represented as sums; and finally, an inverse FFT of the previous is taken to map the periodic frequency domain components into peaks in the quefrency domain.
The result of this is the power cepstrum C(I). The diffraction photograph comprises convolutive components of the scene, where each component can be thought of as being a convolution of a narrow spectral range of the scene. These convolutive components can further be thought of as echoes of the original scene, shifted in one or more spatial dimensions, e.g. in two dimensions and in a total of 8 major directions when a two-dimensional diffraction grating is used.
The echoes appear as periodic signals in the frequency domain, allowing the periodicity to be extracted using the cepstrum. This information can be used to determine one or more dilations, for example for a CNN.
A logarithm of the magnitude of the power cepstrum C(I) may be taken to extract periodic components from the frequency domain, the result referred to hereafter as the logarithmic magnitude quefrency Clm(I). An average of the logarithmic magnitude quefrency Clm(I) over a sufficient number of photographs can be taken to reduce the effect of noise for easy visual identification of the dilation range. The number of photographs to average over depends on the noise characteristics of the photographs. The computational cost of estimating the dilation range from the power cepstrum C(I) can be low, as clear candidates for dilation ranges may be visible from as low a number of images as five.
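The FFT → log-magnitude → inverse-FFT pipeline above can be sketched in 1D. A smooth pulse plus an attenuated copy of itself, shifted by a known number of pixels, stands in for a scene with one diffraction "echo"; the shift then reappears as a peak in the quefrency domain. All signal parameters here are synthetic.

```python
import numpy as np

def power_cepstrum(signal, eps=1e-12):
    """|IFFT(log |FFT(x)|)|: periodic frequency-domain structure
    (echoes of the scene) shows up as peaks in the quefrency domain."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + eps)  # products -> log sums
    return np.abs(np.fft.ifft(log_mag))

# Smooth pulse plus a half-strength copy shifted by 32 pixels,
# mimicking a first-order diffraction echo of the scene.
n = np.arange(256)
pulse = np.exp(-0.5 * ((n - 100) / 2.0) ** 2)
photo = pulse + 0.5 * np.roll(pulse, 32)

ceps = power_cepstrum(photo)
# Skip the low quefrencies dominated by the pulse's own smooth spectrum.
shift = int(np.argmax(ceps[8:128])) + 8
```

For a 2D diffraction photograph the same steps apply with `fft2`/`ifft2` per color channel, and averaging the log-magnitude quefrency over several photographs suppresses noise as described above.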
As an additional or an alternative method for determining the dilation range, point spread functions (PSF) for the individual spectral components of the multi-/hyperspectral image may be estimated.
This may be performed, for example, by training a CNN comprising a PSF filter, which can be equal to the size of the multi-/hyperspectral image slice for each spectrum component. The CNN may further comprise a final convolutional layer that performs a weighted sum to produce the final diffraction photograph. The CNN can be trained by minimizing the sum of mean SSIM and L1 loss for the estimated diffraction image against a known diffraction image. A Fast Fourier Transform may be employed when calculating the convolution between the multi-/hyperspectral image slices and PSFs.
The end result is a CNN that estimates the diffraction photograph given a multi-/hyperspectral photograph. In addition, learned PSFs are obtained, one for each spectral component. The PSFs reveal the diffraction pattern for each spectral component, where the dilation from the center is consecutively larger for consecutively larger wavelengths, i.e. from the blue spectral range to the red spectral range.
To determine the range of the dilations, a sum of the absolute values of all the PSFs can be taken. Compared to employing the power cepstrum for the dilation range estimation, the PSF estimation method may be more costly, despite efficient implementation of the FFT. Both methods may be used to produce approximately the same results by visual inspection for the dilation ranges. Estimation via the power cepstrum can be performed independent of the multi-/hyperspectral photograph, thus it is not limited by the resolution of the multi-/hyperspectral photograph. The dilation range may therefore, in some cases, be estimated for higher resolution diffraction photographs from the power cepstrum, which can be used to adjust the dilation range for higher resolution diffraction photographs.
The PSFs for each wavelength can be modeled as a convolution on the multi-/hyperspectral scene. For this, the disclosure of Okamoto, Takayuki, and Ichirou Yamaguchi. "Simultaneous acquisition of spectral image information." Optics Letters 16.16 (1991): 1277-1279 and/or Descour, Michael, and Eustace Dereniak. "Computed-tomography imaging spectrometer: experimental calibration and reconstruction results." Applied Optics 34.22 (1995): 4817-4826 may be used.
The resulting diffraction photograph is an integral over the spectrum weighted by the spectral sensitivity of the digital camera. This convolution and integration process may result in loss of information, albeit some of the information has been transformed from the spectral domain to the spatial domain. Given data that has undergone a convolution, the original data can be recovered by means of deconvolution. The components of a weighted sum of convolutions, however, may not be recoverable by means of deconvolution.
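The forward model just described (each spectral slice convolved with its per-wavelength PSF, then summed with spectral-sensitivity weights) can be sketched with FFT-based circular convolution. The PSFs below are pure single-pixel shifts, a toy stand-in for learned diffraction PSFs; all sizes and weights are invented for illustration.

```python
import numpy as np

def fft_convolve2d(image, psf):
    """Circular 2D convolution via the FFT (convolution theorem)."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

def render_diffraction(cube, psfs, weights):
    """Weighted sum over spectral slices, each convolved with the PSF
    of its wavelength; the weights mimic the camera's spectral
    sensitivity."""
    return sum(w * fft_convolve2d(cube[..., i], psf)
               for i, (psf, w) in enumerate(zip(psfs, weights)))

# Two-slice toy scene: a single bright point in both spectral slices.
h = w = 16
cube = np.zeros((h, w, 2))
cube[8, 8, 0] = 1.0  # shorter-wavelength slice
cube[8, 8, 1] = 1.0  # longer-wavelength slice

# Toy PSFs: shifts of 2 and 5 columns (larger shift for the longer
# wavelength, as in a first-order diffraction pattern).
psf_a = np.zeros((h, w)); psf_a[0, 2] = 1.0
psf_b = np.zeros((h, w)); psf_b[0, 5] = 1.0

photo = render_diffraction(cube, [psf_a, psf_b], [0.7, 0.3])
```

The point reappears displaced by each PSF's shift and scaled by the sensitivity weight, which is exactly why the summed photograph cannot in general be undone by a single deconvolution.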
The computational method may be adapted to run inference on higher resolution images than the ones the model was trained on. For a scale factor s, this may be achieved by increasing the dilations, e.g. those of the first layer, by the scale factor. The scale factor may be larger than 1, for example 2-3 or even larger. It may be constant. For example, the set of dilations may be denoted as d1, d2, ..., dn. To determine multi-/hyperspectral characteristics on diffraction photographs that are s times bigger than the ones trained on, the trained parameters may be used and the dilations may be multiplied by the scale factor, i.e. scaled to sd1, sd2, ..., sdn. This allows increasing the visual quality of the images, for example in terms of sharpness. Additionally, closely aligning the original image pairs may remove artefacts such as color bleeding.

A multi-/hyperspectral image may be produced from the image information corresponding to at least a part of a diffraction photograph by inverting diffraction from the diffraction photograph. For visual inspection, e.g. for quality assurance of the method, RGB reconstruction of an image from a multi-/hyperspectral image may be used, for example by determining a standard weighted sum over the different wavelengths. For example, color matching function (CMF) values may be used for this purpose. These can be used for red, green and blue for the visible spectrum and they are based on the intensity perception of a particular wavelength for a particular cone cell type (long, medium and short) for a typical human observer. For this, the techniques outlined in Fairchild, Mark D. "Color appearance models." John Wiley & Sons, 2013, may be used. This allows viewing the multi-/hyperspectral photograph as a single color image, which can thus be an RGB image, as opposed to viewing multiple monochrome slices of the multi-/hyperspectral tensor.
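The RGB preview described above is a weighted sum over the spectral axis. The sketch below uses a tiny made-up 4-band weight matrix in place of real CIE color matching function values; the band-to-wavelength mapping in the comments is likewise illustrative only.

```python
import numpy as np

def hyperspectral_to_rgb(cube, cmf):
    """Weighted sum over the spectral axis: cube (H, W, bands) times
    cmf (bands, 3) -> RGB preview (H, W, 3), normalized to [0, 1]."""
    rgb = cube @ cmf
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb

# Toy 4-band "CMF": columns are R, G, B responses per band. Real use
# would tabulate CIE color matching functions at the band wavelengths.
cmf = np.array([[0.0, 0.0, 1.0],   # ~450 nm -> blue
                [0.0, 1.0, 0.2],   # ~550 nm -> green
                [0.5, 0.5, 0.0],   # ~600 nm
                [1.0, 0.0, 0.0]])  # ~650 nm -> red

cube = np.zeros((2, 2, 4))
cube[0, 0, 3] = 1.0  # pixel with purely long-wavelength energy
rgb = hyperspectral_to_rgb(cube, cmf)
```

A pixel whose energy sits entirely in the longest-wavelength band comes out pure red, matching the cone-response intuition in the text.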
The computing device as described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The application logic, software or instruction set may be maintained on any one of various conventional computer-readable media. A "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
The exemplary embodiments can store information relating to various processes described herein. This information can be stored in one or more memories, such as a hard disk, optical disk, magneto-optical disk, RAM, and the like.
One or more databases can store the information used to implement the exemplary embodiments of the present inventions. The databases can be organized using data structures (e.g., records, tables, arrays, fields, graphs, trees, lists, and the like) included in one or more memories or storage devices listed herein. The databases may be located on one or more devices comprising local and/or remote devices such as servers. The processes described with respect to the exemplary embodiments can include appropriate data structures for storing data collected and/or generated by the processes of the devices and subsystems of the exemplary embodiments in one or more databases.
All or a portion of the exemplary embodiments can be implemented using one or more general purpose processors, microprocessors, digital signal processors, micro-controllers, and the like, programmed according to the teachings of the exemplary embodiments, as will be appreciated by those skilled in the computer and/or software art(s). Appropriate software can be readily prepared by programmers of ordinary skill based on the teachings of the exemplary embodiments, as will be appreciated by those skilled in the software art. In addition, the exemplary embodiments can be implemented by the preparation of application-specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be appreciated by those skilled in the electrical art(s). Thus, the exemplary embodiments are not limited to any specific combination of hardware and/or software.
The different functions discussed herein may be performed in a different order and/or concurrently with each other.
Any range or device value given herein may be extended or altered without losing the effect sought, unless indicated otherwise. Also any embodiment may be combined with another embodiment unless explicitly disallowed.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.
The term 'comprising' is used herein to mean including the method, blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification.

Claims (17)

1. An apparatus (100) for enabling a photographic digital camera (110) to be used for multi- and/or hyperspectral imaging, the apparatus (100) comprising: a frame (120) configured for coupling with the digital camera (110); and a diffraction grating (130) coupled to the frame (120) and configured for dispersing incident light, when the frame (120) is coupled to the digital camera (110), towards an objective (112) of the digital camera (110) for providing a diffraction image for a diffraction photograph (300) for use in multi- and/or hyperspectral imaging.
2. The apparatus (100) according to claim 1, wherein the diffraction grating (130) has a grating constant smaller than 2000 nanometers.
3. The apparatus (100) according to any preceding claims, wherein the diffraction grating (130) is adapted to produce a diffraction pattern, where a diffraction angle between first order diffraction maxima, for at least part of a wavelength range 400-1000 nm corresponding to a wavelength of the incident light, is equal or smaller in comparison to a field-of-view angle of the digital camera (110).
4. The apparatus (100) according to any preceding claims, wherein the diffraction grating (130) is configured to disperse incident light in two dimensions.
5. The apparatus (100) according to any preceding claims, wherein the frame (120) is configured for the diffraction grating (130) to be positioned within 1 centimeter of the surface (114) of the objective (112) of the digital camera (110), when the apparatus (100) is in use.
6. The apparatus (100) according to any preceding claims comprising a border (140) configured to limit the amount of incident light on the diffraction grating (130) to reduce overlap in the diffraction image.
7. The apparatus (100) according to any preceding claims, wherein the frame (120) is made of cardboard.
8. The apparatus (100) according to any preceding claims, comprising a frequency filter positioned to filter the incident light before it arrives at the objective (112) of the digital camera (110).
9. Using a photographic digital camera (110) together with a diffraction grating (130) for dispersing incident light towards an objective (112) of the digital camera (110) to provide a diffraction image for a diffraction photograph (300) for use in multi- and/or hyperspectral imaging.
10. A method comprising: receiving (230) image information corresponding to at least a part of a diffraction photograph (300) obtained using a photographic digital camera (110) with means (100) for dispersing incident light for providing a diffraction image for the diffraction photograph (300); and determining (240) at least one multi- and/or hyperspectral characteristic (310, 320) from the image information using a computational algorithm.
11. The method according to claim 10, wherein the computational algorithm comprises a machine learning algorithm.
12. The method according to claim 10, wherein the computational algorithm comprises a convolutional neural network.
13. The method according to any of claims 10-12, wherein the computational algorithm is adapted to utilize one or more dilations for determining multi- and/or hyperspectral characteristics (310, 320) from the image information.
14. The method according to claim 13, wherein the computational algorithm comprises a machine learning algorithm associated with a first resolution used for training the machine learning algorithm, and the computational algorithm further comprises: determining a resolution corresponding to the image information; and scaling the one or more dilations if the resolution corresponding to the image information is different from the first resolution.
15. A method according to any of claims 10-14, the method comprising using (250) the at least one multi- and/or hyperspectral characteristic (310, 320) for any combination of the following: to produce a multi- and/or hyperspectral image from the image information, to segment one or more regions from the image information or to characterize one or more material properties from the image information.
16. A method (400) for generating a computational algorithm for multi- and/or hyperspectral imaging using a photographic digital camera (110), the method comprising: using a photographic digital camera (110) coupled to an apparatus (100) according to any of claims 1-8 to obtain a diffraction photograph corresponding to a scene (410) from a first location; using a multi- and/or hyperspectral camera (420) to obtain a multi- and/or hyperspectral photograph corresponding to the scene (410) from a second location, wherein the second location is substantially the same as the first location; using a computational algorithm to produce a multi- and/or hyperspectral image by inverting diffraction from the first image; determining a difference value for a measure of difference between the multi- and/or hyperspectral image and the multi- and/or hyperspectral photograph; and modifying one or more parameters of the computational algorithm to reduce the difference value.
17. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any of claims 10-15.
FI20186038A 2018-12-03 2018-12-03 Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging FI20186038A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20186038A FI20186038A1 (en) 2018-12-03 2018-12-03 Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging
PCT/FI2019/050859 WO2020115359A1 (en) 2018-12-03 2019-12-02 Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20186038A FI20186038A1 (en) 2018-12-03 2018-12-03 Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging

Publications (1)

Publication Number Publication Date
FI20186038A1 true FI20186038A1 (en) 2020-06-04

Family

ID=68887061

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20186038A FI20186038A1 (en) 2018-12-03 2018-12-03 Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging

Country Status (2)

Country Link
FI (1) FI20186038A1 (en)
WO (1) WO2020115359A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11644682B2 (en) * 2020-06-11 2023-05-09 Carnegie Mellon University Systems and methods for diffraction line imaging
EP4405646A1 (en) * 2021-09-23 2024-07-31 Sony Group Corporation Apparatuses and methods for computed tomography imaging spectrometry
CN115790850A (en) * 2023-02-09 2023-03-14 浙江大学 High dynamic range high resolution split frame snapshot type hyperspectral imaging system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8947656B2 (en) * 2013-01-04 2015-02-03 The Board Of Trustees Of The University Of Illinois Smartphone biosensor
US9599533B2 (en) * 2014-05-22 2017-03-21 Abl Ip Holding Llc Accessory to configure portable device with camera (E.G. smartphone) as lighting meter
DE202015006402U1 (en) * 2015-09-11 2015-10-05 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Spectroscope, kit for its manufacture and combination of the spectroscope with a camera device
CN106840398B (en) * 2017-01-12 2018-02-02 南京大学 A kind of multispectral light-field imaging method

Also Published As

Publication number Publication date
WO2020115359A1 (en) 2020-06-11

Similar Documents

Publication Publication Date Title
Levin et al. 4D frequency analysis of computational cameras for depth of field extension
Krig Computer vision metrics
Bando et al. Extracting depth and matte using a color-filtered aperture
Krig Computer vision metrics: Survey, taxonomy, and analysis
US9142582B2 (en) Imaging device and imaging system
TWI579540B (en) Multi-point spectral system
Yakhdani et al. Quality assessment of image fusion techniques for multisensor high resolution satellite images (case study: IRS-P5 and IRS-P6 satellite images)
KR102139858B1 (en) Hyperspectral Imaging Reconstruction Method Using Prism and System Therefor
Imamoglu et al. Hyperspectral image dataset for benchmarking on salient object detection
FI20186038A1 (en) Apparatus for enabling a photographic digital camera to be used for multi- and/or hyperspectral imaging
TWI505693B (en) Image capture device
US20230306558A1 (en) Frequency domain-based method for removing periodic noise from reconstructed light field image
DE102013003778A1 (en) FAST AUTOFOCUS TECHNOLOGIES FOR DIGITAL CAMERAS
CN111386549A (en) Method and system for reconstructing mixed type hyperspectral image
Dümbgen et al. Near-infrared fusion for photorealistic image dehazing
JP6034197B2 (en) Image processing apparatus, three-dimensional imaging apparatus, image processing method, and image processing program
Kim et al. Aperture-encoded snapshot hyperspectral imaging with a lensless camera
Kınlı et al. Modeling the lighting in scenes as style for auto white-balance correction
US20240147032A1 (en) Imaging device and optical element
Sandoval Orozco et al. Source identification for mobile devices, based on wavelet transforms combined with sensor imperfections
Torkildsen et al. Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane
EP3043314B1 (en) Integrated extended depth of field (edof) and light field photography
Bradbury et al. Multi-spectral material classification in landscape scenes using commodity hardware
Varjo et al. Comparison of near infrared and visible image fusion methods
WO2022162801A1 (en) Imaging device and optical element

Legal Events

Date Code Title Description
FD Application lapsed