WO2021115550A1 - Separation of first and second image data - Google Patents
Classifications
All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T5/00—Image enhancement or restoration; G06T5/77—Retouching; Inpainting; Scratch removal
- G06T5/00—Image enhancement or restoration; G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10024—Color image
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10048—Infrared image
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20084—Artificial neural networks [ANN]
Abstract
An imaging system for capturing an input image of a first object (O1), the imaging system comprising an imaging sensor (1) adapted for capturing imaging sensor data within a first electromagnetic spectrum, a hyperspectral sensor (2) adapted for capturing hyperspectral sensor data within a second electromagnetic spectrum different from the first electromagnetic spectrum, and a processing arrangement (3). The imaging sensor data comprises first image data (I1) and second image data (I2), the first image data (I1) comprising data originating from the first object (O1), the second image data (I2) comprising data originating from a second object (O2). The processing arrangement (3) is adapted for deducting the second image data (I2) from the imaging sensor data, converting the input image to a first output image comprising only data originating from the first object (O1). The use of a hyperspectral sensor allows image data from many discrete and narrow spectrums to be registered and used in order to estimate a correct output image.
Description
SEPARATION OF FIRST AND SECOND IMAGE DATA
TECHNICAL FIELD
The disclosure relates to an imaging system for capturing an input image of a first object, imaging sensor data comprising first image data originating from the first object and second image data originating from a second object.
BACKGROUND
There are great difficulties associated with capturing images through a transparent and reflective surface such as a window, since the window will reflect other objects such as interior ceiling lamps. Such an image comprises a front-view layer, containing the image of the actual object, and a reflection layer, containing e.g. reflections of fluorescent lighting, the two layers being superposed on top of each other.
In order to generate a correct output image, the two layers need to be separated. Prior art reflection removal solutions can be divided into two categories: single-image-based solutions and multiple-image-based solutions. Though there has been remarkable progress in the area, prior art methods are still limited in that they fail to correctly separate the structures in the input image. Furthermore, very bright or very dark reflections cause overexposed and underexposed (clipped) regions in the image, which results in saturated channels.
SUMMARY
It is an object to provide an improved imaging system. The foregoing and other objects are achieved by the features of the independent claim. Further implementation forms are apparent from the dependent claims, the description, and the figures.
According to a first aspect, there is provided an imaging system for capturing an input image of a first object, the imaging system comprising an imaging sensor adapted for capturing imaging sensor data within a first electromagnetic spectrum, a hyperspectral sensor adapted for capturing hyperspectral sensor data within a second electromagnetic spectrum different from the first electromagnetic spectrum, and a processing arrangement, the imaging sensor data comprising first image data and second image data, the first image data comprising data originating from the first object, the second image data comprising data originating from a second object, the processing arrangement being adapted for deducting the second image data from the imaging sensor data, converting the input image to a first output image comprising only data originating from the first object.
The use of a hyperspectral sensor allows image data from many discrete and narrow spectrums to be registered and used in order to estimate a correct output image. The hyperspectral image is not limited to producing the spectrum of the object, but may cover a wide range of spectrums from the visible spectrum to the longwave infrared spectrum. Since reflections from, e.g., fluorescent lighting have no information in the infrared spectrum, it is possible to easily estimate the image without any reflections and increase the accuracy of the image processing.
In a possible implementation form of the first aspect, the processing arrangement generates a first estimate of the first image data by means of the hyperspectral sensor data, the second image data being deducted from the imaging sensor data by means of the first estimate. The hyperspectral sensor data lacks information about the reflection, wherefore such a solution facilitates a simple and reliable way of deducting the second image data from the input image.
In a further possible implementation form of the first aspect, the first electromagnetic spectrum is within the range of 400-700 nm, allowing regular camera sensors to be used as a part of the imaging system.
In a further possible implementation form of the first aspect, the second electromagnetic spectrum is within the range of 800-2500 nm. Electromagnetic waves in this spectrum have the best combination of energy, sensitivity, and absorption in order to be useful for quantitative measurements of solid materials.
In a further possible implementation form of the first aspect, the hyperspectral sensor is a near-infrared sensor.
In a further possible implementation form of the first aspect, the processing arrangement utilizes at least one of machine learning, deep neural networks, or signal estimation in order to generate the first estimate, allowing use of algorithms along with the previously mentioned sensors for estimating the reflections.
In a further possible implementation form of the first aspect, the input image of the first object is captured through a transparent surface, and the second image data comprises data generated by means of a reflection of a second object in the transparent surface.
In a further possible implementation form of the first aspect, the imaging system further comprises a second imaging sensor capturing a second input image of the first object, the processing arrangement estimating a first depth map by means of a first distance between the first imaging sensor and the first object, and a second depth map by means of a second distance between the second imaging sensor and the first object, the processing arrangement generating a third estimate of the first image data by means of the first depth map and a first function f(d1), as well as the second depth map and a second function f(d2), the processing arrangement deducting the third estimate from the first output image, generating a second output image. This use of stereo imaging further improves the image processing, since any remaining erroneous information, such as infrared leakage from the object, can be removed.
According to a second aspect, there is provided an electronic device comprising the imaging system according to the above, facilitating a compact and portable device comprising a camera which produces correct images even when taken through reflective and transparent surfaces such as windows.
These and other aspects will be apparent from the embodiments described below.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following detailed portion of the present disclosure, the aspects, embodiments and implementations will be explained in more detail with reference to the example embodiments shown in the drawings, in which:
Fig. 1 shows a schematic illustration of an imaging system in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
Fig. 1 shows an imaging system for capturing an input image of a first object O1. The imaging system comprises an imaging sensor 1, adapted for capturing imaging sensor data within a first electromagnetic spectrum, a hyperspectral sensor 2, adapted for capturing hyperspectral sensor data within a second electromagnetic spectrum different from the first electromagnetic spectrum, and a processing arrangement 3.
The hyperspectral sensor 2 registers narrow spectral bands over a continuous spectral range, producing the spectra of all pixels in the scene. A sensor with only 20 bands would be considered hyperspectral if it covered the complete range from 500 to 700 nm with 20 contiguous bands, each band 10 nm wide. By contrast, a sensor with 20 discrete bands spread over the visible, near-infrared, short-wavelength infrared, mid-wavelength infrared, and long-wavelength infrared spectrums would be considered multispectral.
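The distinction above (contiguous narrow bands versus widely spaced discrete bands) can be sketched as a simple rule of thumb. The function name and the one-band-width gap criterion are illustrative assumptions and not part of the disclosure:

```python
import numpy as np

def classify_sensor(band_centers_nm, band_width_nm):
    """Illustrative classification: narrow bands that tile a continuous
    range with no gaps suggest a hyperspectral sensor; widely spaced
    discrete bands suggest a multispectral one (assumed gap criterion)."""
    centers = np.sort(np.asarray(band_centers_nm, dtype=float))
    gaps = np.diff(centers)
    # Contiguous coverage: adjacent band centers at most one band width apart.
    return "hyperspectral" if np.all(gaps <= band_width_nm) else "multispectral"

# The text's example: 20 bands, each 10 nm wide, tiling 500-700 nm.
print(classify_sensor([505 + 10 * i for i in range(20)], 10.0))  # hyperspectral
```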
In one embodiment, the hyperspectral sensor 2 is a near-infrared sensor. The imaging sensor 1 may be an RGB sensor.
The first electromagnetic spectrum may be within the range of 400-700 nm. The second electromagnetic spectrum may be within the range of 800-2500 nm. Electromagnetic waves in this spectrum have the best combination of energy, sensitivity, and absorption in order to be useful for quantitative measurements of solid materials.
The imaging sensor data comprises first image data I1 and second image data I2. The first image data I1 comprises data originating from the first object O1, and the second image data I2 comprises data originating from a second object O2, as indicated in Fig. 1 by means of arrows.
In one embodiment, the input image of the first object O1 is captured through a transparent surface 5, and the second image data I2 comprises data generated by means of a reflection of a second object O2 in the transparent surface 5, the second object e.g. being fluorescent lighting.
The processing arrangement 3 is adapted for deducting the second image data I2 from the imaging sensor data, converting the input image to a first output image comprising only data originating from the first object O1.
In one embodiment, the processing arrangement 3 generates a first estimate of the first image data I1 by means of the hyperspectral sensor data, the second image data I2 being deducted from the imaging sensor data by means of the first estimate.
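A minimal numerical sketch of this embodiment follows. It assumes a single NIR band and a per-channel least-squares gain as the mapping from hyperspectral data to the first estimate; both are assumptions for illustration, since the disclosure does not fix the estimation method:

```python
import numpy as np

def first_output_image(rgb, nir):
    """Sketch: derive a first estimate of the object-only image data I1
    from the hyperspectral (NIR) data, which lacks the reflection, then
    deduct the implied second image data I2 from the imaging sensor data.
    The per-channel least-squares gain is an illustrative assumption."""
    estimate = np.empty_like(rgb, dtype=float)
    for c in range(rgb.shape[-1]):
        # Least-squares gain mapping NIR intensity to this RGB channel.
        gain = (rgb[..., c] * nir).sum() / (nir * nir).sum()
        estimate[..., c] = gain * nir                # first estimate of I1
    reflection = np.clip(rgb - estimate, 0.0, None)  # implied I2 (reflection layer)
    return np.clip(rgb - reflection, 0.0, None)      # first output image
```

Pixels where the captured data exceeds the NIR-based estimate are treated as reflection and deducted, leaving an image dominated by data originating from the first object.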
The processing arrangement 3 may utilize at least one of machine learning, deep neural networks, or signal estimation in order to generate the first estimate. This allows suitable algorithms to be used along with the sensors in order to estimate necessary image data.
In one embodiment, the imaging system further comprises a second imaging sensor 4 capturing a second input image of the first object O1. This use of stereo imaging improves the image processing, since any remaining erroneous information, such as infrared leakage from the first object O1, can be removed.
In this case, the processing arrangement 3 estimates a first depth map d1 by means of a first distance D1 between the first imaging sensor 1 and the first object O1, and a second depth map d2 by means of a second distance D2 between the second imaging sensor 4 and the first object O1. The processing arrangement 3 thereafter generates a third estimate of the first image data I1 by means of the first depth map d1 and a first function f(d1), as well as the second depth map d2 and a second function f(d2). The third estimate is deducted from the first output image, generating a second output image.
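The stereo refinement step can be sketched as follows. The inverse-distance form of the weighting functions f(d1) and f(d2) and the equal averaging of the two terms are assumptions for illustration, since the text does not specify the functions:

```python
import numpy as np

def second_output_image(first_output, d1, d2, f=lambda d: 1.0 / (1.0 + d)):
    """Sketch: combine the two depth maps d1 and d2 through the
    depth-dependent functions f(d1) and f(d2) into a third estimate of
    residual error (e.g. infrared leakage), then deduct it from the
    first output image, generating the second output image."""
    third_estimate = 0.5 * (f(d1) + f(d2))              # per-pixel residual estimate
    refined = first_output - third_estimate[..., None]  # deduct from first output
    return np.clip(refined, 0.0, None)                  # second output image
```

With the assumed inverse-distance form, distant scene points contribute a smaller residual, matching the intuition that leakage from a nearby object dominates.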
The present invention furthermore relates to an electronic device, such as a camera, smartphone, or tablet, comprising the imaging system described above.
The various aspects and implementations have been described in conjunction with various embodiments herein. However, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject-matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
The reference signs used in the claims shall not be construed as limiting the scope. Unless otherwise indicated, the drawings are intended to be read (e.g., cross-hatching, arrangement of parts, proportion, degree, etc.) together with the specification, and are to be considered a portion of the entire written description of this disclosure. As used in the description, the terms “horizontal”, “vertical”, “left”, “right”, “up” and “down”, as well as adjectival and adverbial derivatives thereof (e.g., “horizontally”, “rightwardly”,
“upwardly”, etc.), simply refer to the orientation of the illustrated structure as the particular drawing figure faces the reader. Similarly, the terms “inwardly” and “outwardly” generally refer to the orientation of a surface relative to its axis of elongation, or axis of rotation, as appropriate.
Claims
1. An imaging system for capturing an input image of a first object (O1), said imaging system comprising
-an imaging sensor (1) adapted for capturing imaging sensor data within a first electromagnetic spectrum,
-a hyperspectral sensor (2) adapted for capturing hyperspectral sensor data within a second electromagnetic spectrum different from said first electromagnetic spectrum, and
-a processing arrangement (3), said imaging sensor data comprising first image data (I1) and second image data (I2), said first image data (I1) comprising data originating from said first object (O1), said second image data (I2) comprising data originating from a second object (O2), said processing arrangement (3) being adapted for deducting said second image data (I2) from said imaging sensor data, converting said input image to a first output image comprising only data originating from said first object (O1).
2. The imaging system according to claim 1, wherein said processing arrangement (3) generates a first estimate of said first image data (I1) by means of said hyperspectral sensor data, said second image data (I2) being deducted from said imaging sensor data by means of said first estimate.
3. The imaging system according to at least one of the previous claims, wherein said first electromagnetic spectrum is within the range of 400-700 nm.
4. The imaging system according to at least one of the previous claims, wherein said second electromagnetic spectrum is within the range of 800-2500 nm.
5. The imaging system according to at least one of the previous claims, wherein said hyperspectral sensor (2) is a near-infrared sensor.
6. The imaging system according to at least one of the previous claims, wherein said processing arrangement (3) utilizes at least one of machine learning, deep neural networks, or signal estimation in order to generate said first estimate.
7. The imaging system according to at least one of the previous claims, wherein said input image of said first object (O1) is captured through a transparent surface (5), and said second image data (I2) comprises data generated by means of a reflection of a second object (O2) in said transparent surface (5).
8. The imaging system according to at least one of the previous claims, further comprising a second imaging sensor (4) capturing a second input image of said first object (O1), said processing arrangement (3) estimating a first depth map (d1) by means of a first distance (D1) between said first imaging sensor (1) and said first object (O1), and a second depth map (d2) by means of a second distance (D2) between said second imaging sensor (4) and said first object (O1), said processing arrangement (3) generating a third estimate of said first image data (I1) by means of said first depth map (d1) and a first function f(d1), as well as said second depth map (d2) and a second function f(d2), said processing arrangement (3) deducting said third estimate from said first output image, generating a second output image.
9. An electronic device comprising the imaging system according to any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| PCT/EP2019/084206 WO2021115550A1 (en) | 2019-12-09 | 2019-12-09 | Separation of first and second image data |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| PCT/EP2019/084206 WO2021115550A1 (en) | 2019-12-09 | 2019-12-09 | Separation of first and second image data |
Publications (1)
| Publication Number | Publication Date |
| --- | --- |
| WO2021115550A1 | 2021-06-17 |
Family
ID=68841117
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| PCT/EP2019/084206 WO2021115550A1 (en) | Separation of first and second image data | 2019-12-09 | 2019-12-09 |
Country Status (1)
| Country | Link |
| --- | --- |
| WO | WO2021115550A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN108564540A | 2018-03-05 | 2018-09-21 | 广东欧珀移动通信有限公司 | Image processing method, device and terminal device for removing lens reflections from an image |
- 2019-12-09: WO PCT/EP2019/084206 patent/WO2021115550A1/en, active Application Filing
Non-Patent Citations (2)
- SUN JUN ET AL: "Multi-Modal Reflection Removal Using Convolutional Neural Networks", IEEE Signal Processing Letters, vol. 26, no. 7, 1 July 2019, pages 1011-1015, ISSN: 1070-9908, DOI: 10.1109/LSP.2019.2915560
- YOSSRA H. ALI ET AL: "Reflection Removal Algorithms: A Review", Research Journal of Applied Sciences, vol. 14, no. 9, 15 September 2017, pages 347-351, ISSN: 2040-7459, DOI: 10.19026/rjaset.14.5075
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19817674; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19817674; Country of ref document: EP; Kind code of ref document: A1 |