EP3682201A1 - Device for capturing a hyperspectral image - Google Patents
Device for capturing a hyperspectral image
- Publication number
- EP3682201A1 (application EP18830502.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- diffracted
- intensity
- sensor
- diffracted image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/02—Details
- G01J3/0294—Multi-channel spectroscopy
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/12—Generating the spectrum; Monochromators
- G01J3/18—Generating the spectrum; Monochromators using diffraction elements, e.g. grating
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/11—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0075—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
- G01J2003/2826—Multispectral imaging, e.g. filter imaging
Definitions
- the present invention relates to a device for capturing a hyperspectral image.
- the invention finds a particularly advantageous application for on-board systems intended to acquire one or more hyperspectral images.
- the invention can be applied to all technical fields using hyperspectral images.
- the invention can be used in the medical field to achieve phenotyping; in the plant field for the detection of symptoms of stress, disease or species differentiation and in the field of chemical analysis, for concentration measurements.
- a hyperspectral image comprises three dimensions: two "spatial" dimensions and a third "spectral" dimension expressing the variations of the luminous reflectance for different wavelengths.
- a hyperspectral image is usually encoded as voxels.
- a voxel corresponds to a pixel of a focal plane of a scene observed for a particular wavelength.
- a voxel therefore has three coordinates: the abscissa x and the ordinate y (hereinafter named "spatial coordinates") illustrating the position of the pixel on the focal plane, and a wavelength λ (hereinafter named "spectral coordinate").
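For illustration (this sketch is not part of the patent; the array shape and names are hypothetical), such a hyperspectral image maps naturally to a three-dimensional array indexed by the two spatial coordinates and the spectral coordinate:

```python
import numpy as np

# Hypothetical hypercube: a 640x480 focal plane sampled at 64 wavelengths.
height, width, n_bands = 480, 640, 64
hypercube = np.zeros((height, width, n_bands), dtype=np.uint8)

# A voxel V(x, y, lambda) is one intensity value at spatial position
# (x, y) and at the spectral band index corresponding to wavelength lambda.
x, y, band = 10, 20, 5
hypercube[y, x, band] = 128
```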
- CTIS (Computed Tomography Imaging Spectrometer)
- This CTIS method also makes it possible to acquire instantly, that is to say in a single shot, an image containing all the information necessary to reconstruct the hyperspectral image.
- the digital sensor simultaneously acquires the original image and its diffractions, significantly reducing the number of pixels available for each of these elements.
- the spatial accuracy thus obtained is relatively low compared with the requirements of certain applications of hyperspectral imaging.
- this CTIS method is particularly complex to use because of the process of estimating the hyperspectral image from the diffractions. Indeed, the transfer function of the diffraction optics must be inverted to reconstitute the hyperspectral image. Unfortunately, the matrix of this transfer function is only partially defined, and the result can only be approached iteratively by inversion methods that are expensive in computation time.
- This CTIS method has been the subject of numerous research projects aimed at improving its implementation. Recently, the scientific publication "Practical Spectral Photography", published in Eurographics, volume 31 (2012) number 2, proposed an optimized implementation strategy in which the time to obtain a hyperspectral image is 11 minutes on a powerful computer with 16 processor cores.
- the implementations of the CTIS method do not make it possible to quickly obtain hyperspectral images that are precise from a spatial or spectral point of view.
- the traditional process of analysis involves acquiring the data in situ for later processing. This approach places many constraints on the acquisition procedures and the prior assessment of the quality of future hyperspectral images.
- the technical problem of the invention consists in improving the process of obtaining a hyperspectral image by diffraction of the focal plane.
- the present invention proposes to answer this technical problem by using the intrinsic non-linearity of a neural network to obtain the hyperspectral image resulting from the diffracted image.
- the invention proposes to couple the diffracted image with spatially accurate images obtained using separate chromatic filters. This "data fusion" improves the spatial accuracy of the hyperspectral image.
- the invention relates to a device for capturing a hyperspectral image, said device comprising:
- the invention is characterized in that the device comprises means for acquiring at least two non-diffracted images of said focal plane obtained with separate chromatic filters.
- Said construction means integrate a neural network configured to calculate an intensity of each voxel of said hyperspectral image according to:
- the invention thus makes it possible to correlate the information contained in the diffractions of the diffracted image with the information contained in the non-diffracted images.
- a voxel corresponds to a pixel of the focal plane of the scene observed for a particular wavelength.
- a voxel thus has three coordinates: the abscissa x and the ordinate y (hereinafter named "spatial coordinates") illustrating the position of the pixel on the focal plane, and a wavelength λ (hereinafter referred to as "spectral coordinate").
- the relevant pixels of the non-diffracted images are searched according to the voxel's coordinates.
- Spatial coordinates can be used directly, while the spectral coordinate can weight the interest of the pixels. This weighting is performed as a function of the distance between the desired wavelength and that of the chromatic filter used.
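A minimal sketch of such a weighting (the Gaussian form, the filter centre wavelengths and the sigma parameter are illustrative assumptions, not taken from the patent):

```python
import math

# Illustrative centre wavelengths (nm) of the chromatic filters.
FILTER_CENTRES = {"blue": 450.0, "green": 550.0, "red": 650.0, "ir": 850.0}

def spectral_weights(wavelength_nm, sigma=60.0):
    """Weight each non-diffracted image by the spectral distance between
    the desired wavelength and its filter centre; Gaussian fall-off assumed."""
    raw = {name: math.exp(-((wavelength_nm - centre) / sigma) ** 2)
           for name, centre in FILTER_CENTRES.items()}
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

weights = spectral_weights(550.0)
```

At 550 nm the green filter receives the largest weight, so the green image dominates the spatial contribution for that voxel.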
- the invention has several methods for determining the relevant pixels.
- training of the neural network makes it possible to indicate the position of each voxel in one or more diffractions.
- gradient backpropagation learning, or one of its derivatives, based on calibration data can be used.
- the invention thus makes it possible to obtain each voxel of the hyperspectral image more rapidly than the iterative matrix resolution methods of the state of the art.
- the determination of each voxel can be done independently. Since there is no interdependence between the estimates, the invention can perform all the calculations in parallel. This facilitates the implementation of the hyperspectral image capture device in an embedded system.
- the invention makes it possible to obtain a hyperspectral image in real time between two acquisitions of the focal plane of the observed scene. In doing so, it is no longer necessary to defer the processing of the diffracted images and it is no longer necessary to store these diffracted images after obtaining the hyperspectral image.
- the intensity of each voxel is sought in eight chromatic representations at the position given by the following relation: x_n = x + offsetX_n + λ · A_sliceX_n (and likewise y_n = y + offsetY_n + λ · A_sliceY_n)
- said intensity of the pixel in each diffraction of the diffracted image is sought by producing a convolution product between the intensity of the pixel of said diffracted image and the intensity of its near neighbors in said diffractions of the diffracted image.
- This embodiment makes it possible to limit the impact of the accuracy of the detection of the positioning of the pixel in the different diffractions.
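The neighbourhood convolution described above can be sketched as follows (the 3×3 averaging kernel and the border handling are assumptions for illustration):

```python
import numpy as np

def local_intensity(diffraction, x, y):
    """Intensity at (x, y) averaged over its 3x3 neighbourhood, which
    limits the impact of small errors in the predicted pixel position."""
    h, w = diffraction.shape
    acc = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy = min(max(y + dy, 0), h - 1)  # clamp at image borders
            xx = min(max(x + dx, 0), w - 1)
            acc += diffraction[yy, xx]
    return acc / 9.0

flat = np.full((5, 5), 10.0)
value = local_intensity(flat, 2, 2)  # a flat image averages to itself
```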
- said diffracted image and said non-diffracted images are obtained by a set of semi-transparent mirrors so as to capture said focal plane on several sensors simultaneously. This embodiment makes it possible to instantly capture identical planes.
- said diffracted image and said non-diffracted images are obtained by several juxtaposed sensors, each sensor incorporating a preprocessing step for extracting a focal plane present on all the sensors.
- This embodiment makes it possible to conserve optical power with respect to the embodiment with the semi-transparent mirrors.
- three non-diffracted images are obtained by an RGB type sensor. This embodiment makes it possible to obtain several non-diffracted images with a single sensor.
- a non-diffracted image is obtained by an infrared sensor. This embodiment makes it possible to obtain information that is invisible to the human eye.
- According to one embodiment, a non-diffracted image is obtained by a sensor sensitive to wavelengths between 10,000 and 20,000 nanometers. This embodiment makes it possible to obtain information on the temperature of the observed scene.
- According to one embodiment, a non-diffracted image is obtained by a sensor sensitive to wavelengths between 0.001 and 10 nanometers. This embodiment makes it possible to obtain information on the X-rays present in the observed scene.
- According to one embodiment, said diffracted image is obtained by a sensor comprising:
- a first converging lens configured to focus the information of a scene on an aperture
- a collimator configured to capture the rays passing through said opening and to transmit these rays on a diffraction grating
- a second convergent lens configured to focus the rays coming from the diffraction grating onto a capture surface.
- This embodiment is particularly simple to implement and can be adapted to an existing sensor.
- FIGS. 1 to 4 represent:
- FIG. 1: a schematic front view of a hyperspectral image capture device according to one embodiment of the invention;
- FIG. 2: a schematic structural representation of the elements of the device of FIG. 1;
- FIG. 3: a schematic representation of the diffracted image obtained by the device of FIG. 1;
- FIG. 4: a schematic representation of the architecture of the neural network of FIG. 2.
DESCRIPTION OF THE INVENTION
- FIG. 1 illustrates a device 10 for capturing a hyperspectral image 15 comprising three juxtaposed sensors 11-13.
- a first sensor 11 makes it possible to obtain a diffracted image 14' of a focal plane P11' of an observed scene.
- this first sensor 11 comprises a first convergent lens 30 which focuses the focal plane P11' on an aperture 31.
- a collimator 32 captures the rays passing through the aperture 31 and transmits these rays to a diffraction grating 33.
- a second convergent lens 34 focuses these rays coming from the diffraction grating 33 onto a pick-up surface 35.
- This optical network is relatively similar to that described in the scientific publication "Computed tomography imaging spectrometer: experimental calibration and reconstruction results", published in APPLIED OPTICS, volume 34 (1995) number 22.
- This optical structure makes it possible to obtain a diffracted image 14, illustrated in FIG. 3, having several diffractions R0-R7 of the focal plane P11' arranged around a small non-diffracted image.
- the diffracted image 14 has eight distinct diffractions R0-R7 obtained with two diffraction axes of the diffraction grating 33.
- three diffraction axes may be used on the diffraction grating 33 so as to obtain a diffracted image 14 with sixteen diffractions.
- the capture surface 35 may correspond to a CCD sensor ("charge-coupled device", i.e. a charge transfer device), to a CMOS sensor ("complementary metal-oxide-semiconductor", a technology for manufacturing electronic components), or to any other known sensor.
- For example, the scientific publication "Practical Spectral Photography", published in Eurographics, volume 31 (2012) number 2, proposes to associate this optical structure with a standard digital camera to capture the diffracted image.
- each pixel of the diffracted image 14 is coded on 8 bits thus making it possible to represent 256 colors.
- a second sensor 12 makes it possible to obtain a non-diffracted image 17' of a focal plane P12' of the same observed scene, but with an offset induced by the shift between the first sensor 11 and the second sensor 12.
- This second sensor 12 corresponds to an RGB sensor, that is to say a sensor for coding the influence of the three colors of Red, Green and Blue of the focal plane P12 '. It makes it possible to account for the influence of the use of a blue filter F1, a green filter F2 and a red filter F3 on the observed scene.
- This sensor 12 can be realized by a CMOS or CCD sensor associated with a Bayer filter. Alternatively, any other sensor can be used to acquire this RGB image 17'. Preferably, each color of each pixel of the RGB image 17' is encoded on 8 bits; each pixel of the RGB image 17' is thus coded on 3 × 8 bits.
- a third sensor 13 makes it possible to obtain an infrared image 18', IR, of a third focal plane P13' of the same observed scene, also with an offset relative to the first sensor 11 and the second sensor 12. This sensor 13 makes it possible to account for the influence of the use of an infrared filter F4 on the observed scene.
- each pixel of the IR image 18' is encoded on 8 bits.
- the distance between the three sensors 11-13 may be less than 1 cm so as to obtain a large overlap of the focal planes P11'-P13' by the three sensors 11-13.
- the topology and the number of the sensors can vary without changing the invention.
- the sensors 11-13 can acquire an image of the same scene observed by using semi-transparent mirrors to transmit the information of the observed scene to the different sensors 11-13.
- FIG. 1 illustrates a device 10 comprising three sensors 11-13.
- other sensors may be mounted on the device 10 to increase the information contained in the hyperspectral image.
- the device 10 can integrate a sensor sensitive to wavelengths between 0.001 and 10 nanometers or a sensor sensitive to wavelengths between 10,000 and 20,000 nanometers.
- the device 10 also comprises means 16 for constructing a hyperspectral image 15 from the different diffractions R0-R7 of the diffracted image 14 and non-diffracted images 17, 18.
- a preprocessing step is carried out in order to extract the focal plane P11-P13 present on each of the images 14', 17'-18' acquired by the three sensors 11-13.
- This preprocessing consists, for each focal plane P11'-P13', in isolating the common part of the focal planes P11'-P13' and then extracting this common part to form the image 14, 17-18 of each focal plane P11-P13 observed by the corresponding sensor 11-13.
- the part of each image 14', 17'-18' to be isolated 25 can be defined directly in a memory of the device 10 according to the relative positioning choices of the sensors 11-13, or a learning step can be used to identify the part to be isolated 25.
- the images 17'-18' from the RGB and IR sensors are cropped using a two-dimensional cross-correlation.
- the extraction of the focal plane of the diffracted image 14' is calculated by interpolation of the x and y offsets between the sensors 12-13, brought to the position of the sensor 11 of the diffracted image, knowing the distance between each of the sensors 11-13.
- This preprocessing step is not always necessary, especially when the sensors 11-13 are configured to capture the same focal plane, for example with the use of semi-transparent mirrors.
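The two-dimensional cross-correlation used to align the RGB and IR images can be sketched as follows (an FFT-based implementation is assumed here; the patent does not specify one, and all names are illustrative):

```python
import numpy as np

def register_offset(reference, moving):
    """Estimate the (dy, dx) translation taking `moving` onto `reference`
    from the peak of their circular 2-D cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    offsets = []
    for p, n in zip(peak, corr.shape):
        offsets.append(p - n if p > n // 2 else p)  # wrap to a signed shift
    return tuple(offsets)

rng = np.random.default_rng(0)
image = rng.random((32, 32))
shifted = np.roll(image, (3, -2), axis=(0, 1))  # simulate sensor offset
offset = register_offset(shifted, image)
```

Once the offset is known, the common part of the two focal planes can be cropped from each image.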
- the construction means 16 implement a neural network 20 to form a hyperspectral image 15 from the information of these three images 14, 17-18.
- This neural network 20 aims to determine the intensity I(x, y, λ) of each voxel V(x, y, λ) of the hyperspectral image 15.
- the neural network 20 comprises an input layer 40, able to extract the information from the images 14, 17-18, and an output layer 41, able to process this information so as to produce the intensity of the voxel V(x, y, λ) considered.
- the first neuron of the input layer 40 makes it possible to extract the intensity I_IR(x, y) of the IR image 18 as a function of the x and y coordinates of the desired voxel V(x, y, λ). For example, if the IR image 18 is 8-bit coded, this first neuron transmits to the output layer 41 the 8-bit value of the pixel of the IR image 18 at the desired x and y coordinates.
- the second neuron of the input layer 40 performs the same task for the red color of the RGB image 17, extracting the desired intensity I_R(x, y). The third neuron searches for the green intensity I_G(x, y) in the same way, and the fourth neuron searches for the blue intensity I_B(x, y).
- the following neurons of the input layer 40 are more complex because each of the following neurons is associated with an R0-R7 diffraction of the diffracted image 14.
- This relation between the three coordinates of the voxel V(x, y, λ) and the position in x and y can be encoded in a memory during the integration of the neural network 20.
- a learning phase makes it possible to define this relation by using a known model whose parameters are sought from representations of known objects.
- An example of a model is defined by the following relation:
- a learning phase thus makes it possible to define the parameters A_sliceX_n, A_sliceY_n, offsetX_n and offsetY_n so that each neuron can quickly find the intensity of the corresponding pixel.
- other models are possible, in particular depending on the nature of the diffraction grating 33 used.
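A minimal sketch of such a parametric model (the affine form in the wavelength and all parameter names are assumptions for illustration; actual values would come from the calibration learning phase):

```python
def diffraction_position(x, y, lam, n, params):
    """Pixel position, in diffraction n of the diffracted image, that
    carries the voxel (x, y, lam); assumed affine in the wavelength."""
    px = round(x + params["offsetX"][n] + lam * params["A_sliceX"][n])
    py = round(y + params["offsetY"][n] + lam * params["A_sliceY"][n])
    return px, py

# Illustrative calibration parameters for a single diffraction order.
params = {"offsetX": [120.0], "offsetY": [-40.0],
          "A_sliceX": [0.5], "A_sliceY": [0.0]}
pos = diffraction_position(10, 20, 40.0, 0, params)
```

Each input-layer neuron would evaluate this relation for its diffraction order n, then read (or convolve around) the pixel at the returned position.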
- the information related to the intensity of the pixel I_n(x, y) sought by each neuron can be determined by a convolution product between the intensity of the pixel of the diffracted image 14 and its near neighbors in the different diffractions R0-R7.
- the output of these neurons from the input layer 40 is also coded on 8 bits.
- this output neuron 41 associates a weight with each piece of information as a function of the wavelength λ of the desired voxel. Following this modulation of the contributions of each image 17-18 and each diffraction R0-R7, this output neuron 41 can sum the contributions to determine an average intensity which forms the intensity I(x, y, λ) of the voxel V(x, y, λ) sought, for example coded on 8 bits. This process is repeated for all coordinates of the voxel V(x, y, λ) so as to obtain a hypercube containing all the spatial and spectral information from the non-diffracted images 17-18 and each diffraction R0-R7.
- the output neuron 41 uses the spatial information of the non-diffracted images obtained with the blue F1 and green F2 filters and the information of the different diffractions R0-R7 as a function of the wavelength considered. It is possible to configure the neural network 20 so as not to take certain diffractions R0-R7 into account, so as to limit the calculation time of the sum of the contributions. In the example of FIG. 3, the third diffraction R2 is not considered by the neuron of the output layer 41.
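The behaviour of the output neuron can be sketched as a wavelength-dependent weighted average (the weights and values below are illustrative only, not calibration data from the patent):

```python
def output_neuron(contributions, weights):
    """Weighted mean of the contributions from the non-diffracted images
    and the diffractions R0-R7; a zero weight excludes a contribution."""
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(c * w for c, w in zip(contributions, weights)) / total

# 4 filter images plus 8 diffractions; one diffraction is ignored
# (weight 0), as with diffraction R2 in the example of FIG. 3.
contributions = [100, 110, 90, 95] + [105] * 8
weights = [0.2, 0.8, 0.1, 0.0] + [1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
voxel_intensity = output_neuron(contributions, weights)
```

Since every input lies between 90 and 110, the weighted mean necessarily falls in the same range.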
- the weight of each contribution as a function of the wavelength λ of the desired voxel V(x, y, λ) can also be defined during the implementation of the neural network 20 or determined by a learning phase.
- Learning can be achieved by using known scenes captured by the three sensors 11-13 and determining the weights of each contribution for each wavelength λ so that the reconstructed information corresponds to the information contained in the known scenes.
- This learning can be performed independently or simultaneously with the learning of the relations between the three coordinates of the voxel V(x, y, λ) and the position in x and y on the diffracted image 14.
- This neural network 20 can be implemented in an embedded system so as to process in real time the images from the sensors 11-13 to define and store a hyperspectral image 15 between two acquisitions of the sensors 11-13.
- the embedded system may include a power supply for the sensors 11-13, a processor configured to perform the calculations of the neurons of the input layer 40 and the output layer 41 and a memory integrating the weights of each neuron of the input layer 40 as a function of the wavelength ⁇ .
- the different processing operations can be performed independently on several electronic circuits without changing the invention.
- an acquisition circuit can acquire and transmit information from the neurons of the first layer 40 to a second circuit which contains the neuron of the second layer 41.
- the invention thus makes it possible to obtain a hyperspectral image 15 rapidly and with great discretization in the spectral dimension.
- the use of a neural network 20 makes it possible to limit the complexity of the operations to be performed during the analysis of the diffracted image 14.
- the neural network 20 also allows the association of the information of this diffracted image 14 with the non-diffracted images 17-18 to improve accuracy in the spatial dimension.
Landscapes
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
- Spectrometry And Color Measurement (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1758396A FR3071124B1 (en) | 2017-09-12 | 2017-09-12 | DEVICE FOR CAPTURING A HYPERSPECTRAL IMAGE |
PCT/FR2018/052215 WO2019053364A1 (en) | 2017-09-12 | 2018-09-11 | Device for capturing a hyperspectral image |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3682201A1 true EP3682201A1 (en) | 2020-07-22 |
Family
ID=61258301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18830502.3A Withdrawn EP3682201A1 (en) | 2017-09-12 | 2018-09-11 | Device for capturing a hyperspectral image |
Country Status (4)
Country | Link |
---|---|
US (1) | US20210250526A1 (en) |
EP (1) | EP3682201A1 (en) |
FR (1) | FR3071124B1 (en) |
WO (1) | WO2019053364A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020242485A1 (en) * | 2019-05-30 | 2020-12-03 | Hewlett-Packard Development Company, L.P. | Particle imaging |
FR3098962B1 (en) | 2019-07-17 | 2021-12-10 | Carbon Bee | Hyperspectral peculiarity detection system |
EP3922099A1 (en) | 2020-06-09 | 2021-12-15 | Carbon BEE | Device for controlling a hydraulic circuit of an agricultural machine for dispensing a liquid product for spraying |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2374040C (en) * | 1999-07-02 | 2010-10-19 | Hypermed Imaging, Inc. | Integrated imaging apparatus |
US9117133B2 (en) * | 2008-06-18 | 2015-08-25 | Spectral Image, Inc. | Systems and methods for hyperspectral imaging |
-
2017
- 2017-09-12 FR FR1758396A patent/FR3071124B1/en active Active
-
2018
- 2018-09-11 WO PCT/FR2018/052215 patent/WO2019053364A1/en unknown
- 2018-09-11 EP EP18830502.3A patent/EP3682201A1/en not_active Withdrawn
- 2018-09-11 US US16/645,142 patent/US20210250526A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
FR3071124A1 (en) | 2019-03-15 |
WO2019053364A1 (en) | 2019-03-21 |
US20210250526A1 (en) | 2021-08-12 |
FR3071124B1 (en) | 2019-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Koundinya et al. | 2D-3D CNN based architectures for spectral reconstruction from RGB images | |
Arad et al. | Sparse recovery of hyperspectral signal from natural RGB images | |
Oh et al. | Do it yourself hyperspectral imaging with everyday digital cameras | |
EP3714399A1 (en) | Hyperspectral detection device | |
KR20190075057A (en) | Restore supersponse data from images | |
EP3387824B1 (en) | System and method for acquiring visible and near infrared images by means of a single matrix sensor | |
Kaya et al. | Towards spectral estimation from a single RGB image in the wild | |
Jia et al. | Fourier spectral filter array for optimal multispectral imaging | |
WO2020127422A1 (en) | Hyperspectral detection device | |
FR3071124B1 (en) | DEVICE FOR CAPTURING A HYPERSPECTRAL IMAGE | |
EP3356800A1 (en) | Method for determining the reflectance of an object and associated device | |
WO2020182840A1 (en) | Agricultural treatment control device | |
EP3763116B1 (en) | Process of reconstructing a color image acquired by an image sensor covered with a color filter mosaic | |
WO2011131898A1 (en) | Digital processing for compensating signals emitted by photosites of a colour sensor | |
CA2778676A1 (en) | Device and method for adjusting the raised pattern of hyper-spectral images | |
FR3093613A1 (en) | AGRICULTURAL TREATMENT CONTROL DEVICE | |
EP3956712B1 (en) | Device for hyperspectral holographic microscopy by sensor fusion | |
FR3098962A1 (en) | System for detecting a hyperspectral feature | |
FR3091380A1 (en) | Hyperspectral detection device by fusion of sensors | |
FR2966257A1 (en) | METHOD AND APPARATUS FOR CONSTRUCTING A RELIEVE IMAGE FROM TWO-DIMENSIONAL IMAGES | |
Pushparaj et al. | Reconstruction of hyperspectral images from RGB images | |
FR3091382A1 (en) | HYPERSPECTRAL ACQUISITION DETECTION DEVICE | |
FR2895823A1 (en) | Human, animal or object surface`s color image correcting method, involves forcing output of neuron of median layer in neural network, during propagation steps of learning phase, to luminance value of output pixel | |
EP3427225A1 (en) | Method for processing images | |
Monakhova | Physics-Informed Machine Learning for Computational Imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200305 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
RIC1 | Information provided on ipc code assigned before grant |
Ipc: A61B 5/00 20060101ALI20211208BHEP Ipc: G01J 3/18 20060101ALI20211208BHEP Ipc: G01J 3/02 20060101ALI20211208BHEP Ipc: G01J 3/28 20060101AFI20211208BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20220124 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20220604 |