NL2015804B1 - Hyperspectral 2D imaging device. - Google Patents
- Publication number
- NL2015804B1
- Authority
- NL
- Netherlands
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/02—Details
- G01J3/0205—Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows
- G01J3/021—Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows using plane or convex mirrors, parallel phase plates, or particular reflectors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
- A61B5/1455—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/02—Details
- G01J3/0205—Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/30—Measuring the intensity of spectral lines directly on the spectrum itself
- G01J3/36—Investigating two or more bands of a spectrum by separate detectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/11—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
- G01J2003/2826—Multispectral imaging, e.g. filter imaging
Abstract
The invention provides an imaging device comprising: a dichroic prism assembly configured to receive light from an object image through an entrance face of the dichroic prism assembly and to disperse said light through at least three exit faces, wherein a first exit face of the dichroic prism assembly is provided with an imaging sensor suitable for visible light and at least a second exit face and a third exit face of the dichroic prism assembly are each provided with a hyperspectral imaging sensor. The invention also provides a method for obtaining a hyperspectral image in an imaging device.
Description
Hyperspectral 2D imaging device Field of the invention [0001] The present invention relates to an imaging device for delivering hyperspectral datacubes at real-time speed. A hyperspectral datacube has two spatial dimensions and one wavelength/frequency dimension. The imaging device enables real-time spectral image processing, delivering full-resolution visible structural images together with overlays derived from image data in the wavelength range of 400 - 1700 nm. The imaging device is suitable for use in a medical apparatus, such as a laparoscope or endoscope.
Background of the invention [0002] All natural objects absorb and reflect light in one way or another, based on their physical properties. Sensing the reflected light frequencies can therefore tell a lot about an object and what it is made of. Spectral or hyperspectral imaging has shown great capability in visualizing the "status" of the object under observation. For example, when satellites carrying a hyperspectral push broom scanner were brought into space, the light from the earth could be observed in many wavelength ranges, and vegetation differences, urbanization and other effects could be visualized and studied.
[0003] The same holds in the medical field, where oxygenation and blood flow can be measured by looking at blood cells: oxyhemoglobin and (deoxy)hemoglobin have spectrally different signatures, so by measuring the right frequencies the oxygenation can be determined. Such a method for determining oxygenation is described in Pittman R.N., "Regulation of Tissue Oxygenation" (San Rafael (CA): Morgan & Claypool Life Sciences; 2011), chapter 10 "Measurement of Oxygen", which is hereby incorporated by reference.
[0004] Hyperspectral imaging is the process of scanning, processing and displaying images within a spectral range of 200 to above 2500 nm. A hyperspectral datacube, also known as a hypercube (both terms may be used interchangeably throughout this description), is a spectral cube or spectral volume that includes spatial and spectral data. In the present disclosure, a datacube refers to a structure with two spatial dimensions and one spectral dimension. It can be seen as a set of 2D images layered on top of one another, wherein each image represents one particular wavelength band. Each pixel in a hyperspectral image thus contains a spectrum over an appropriate spectral region or wavelength interval.
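Purely by way of illustration, such a datacube can be held as a three-dimensional array whose first axis is spectral and whose remaining axes are spatial, so that indexing one spatial position yields the spectrum of that pixel. The sketch below assumes a Python/NumPy representation with hypothetical dimensions and band centres:

```python
import numpy as np

# Hypothetical datacube: 41 spectral bands over a 500 x 500 spatial grid.
n_bands, height, width = 41, 500, 500
datacube = np.zeros((n_bands, height, width), dtype=np.float32)

# Assumed band centres, for illustration only.
band_centres_nm = np.linspace(400, 1000, n_bands)

# One "layer" of the cube is a 2D image at a single wavelength band ...
band_image = datacube[10]            # shape (500, 500)

# ... while one spatial position holds a full spectrum.
spectrum = datacube[:, 120, 340]     # shape (41,)
print(band_centres_nm[10], band_image.shape, spectrum.shape)
```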
[0005] Known applications and spectral imaging techniques are based on point measurements (spectrometers), push broom technology (1D hyperspectral line scanners which scan over a surface), and filter wheel techniques, where a sequence of images is made by changing the filter wheel in front of the imager to take images at different wavelengths (frequencies of light). It is also known to employ pixel filtering techniques, where filters are placed in front of pixels (either through a filter on a substrate or directly on the pixels) to form a type of mosaic. These result in either a small number of specific wavelengths, or a very large mosaic with a low spatial resolution, or specific optical solutions which create super pixels that are then optically dispersed and projected onto a 2D imaging sensor (an example is disclosed in WO 2008/103918 A1), which results in either a lower spatial resolution or a lower number of frequency bands.
[0006] As a result, the current state of the art for 2D hyperspectral imaging has limitations that result in low spatial or spectral resolution: known systems deliver a reduced dataset (hypercube) at a reduced resolution (e.g. 100 x 100 pixels x 40 bands) and at a reduced framerate (4-5 fps), and increasing one of these will decrease the other.
[0007] All of the above implementations or techniques use various means of technology, but they all come down to either changing the frequency of the light being projected on the object to a specific wavelength, as in patent publication WO 2010/019515 A2, or filtering the wavelengths reaching the sensor. Various technologies have been designed based on gratings and lenses that project multiple object points onto a slit, generating a super pixel, which is then run through a slit and prism to project a spectral line of this "super pixel", as in the earlier mentioned publication WO 2008/103918 A1.
[0008] Other patents rely on changing the frequency of the light projected onto the object and taking a plurality of images which are then placed in sequence, as in patent US 8320996 B2. This results in time-difference measurements, which produce accuracy problems, motion artifacts and low-framerate image streams.
[0009] All of these technologies are directly affected by the relation between wavelength and frequency. In WO 2008/103918 A1, blue is captured at a higher spectral resolution and red at a lower spectral resolution, and the overall spectral range is limited by the length of the slit: when the range is increased, the number of super pixels that can be "sensed" is reduced, as is the number of frequencies detected. The framerate is similarly affected, as are movement artifacts, which limit the ability to spectrally sense the required wavelengths in order to obtain the right data.
[0010] One additional big drawback of these techniques is that they are limited to the 400 - 1000 nm range, or another wavelength range (e.g. 900 - 1700 nm), but they cannot cover a complete spectral range all together; i.e. the range 400 - 2500 nm cannot be covered in a single optical axis setup, as this requires different sensor materials in order to be sensitive to different wavelengths. Another problem is that with increasing frequency resolution, the pixel grids become too big to be handled effectively.
[0011] Furthermore, even in the range of 400 - 1000 nm, the earlier mentioned mosaic filtering techniques are limited to either the 400 - 650 nm range (VIS) or the 650 - 1000 nm range (NIR), due to the difficulty of manufacturing filter coatings that isolate a single wavelength over this complete range. That is, it is currently impossible to create a single mosaic filter capable of sensing 10 nm bands over the full range of 400 - 1000 nm. The VIS filters (e.g. sensing 450 nm ± 5 nm) are also sensitive in the NIR range (i.e. 700 - 1000 nm), making it impossible to isolate the 450 nm band, as such a filter also passes light of 700 nm and above.
[0012] It is a goal of the invention to provide a more convenient and/or cost-effective system for sensing a 3D matrix (datacube with two spatial and one spectral dimension). Another goal of the invention is to provide a system which can operate at a higher framerate than known systems, along with a comparable or higher spatial and spectral resolution.
Summary of the invention [0013] The invention provides an imaging device comprising a dichroic prism assembly configured to receive light from an object image through an entrance face of the dichroic prism assembly and to disperse said light through at least three exit faces, wherein a first exit face of the dichroic prism assembly is provided with an imaging sensor suitable for visible light and at least a second exit face and a third exit face of the dichroic prism assembly are each provided with a hyperspectral imaging sensor.
[0014] An imaging device according to the invention enables single-light-input setups with multiple optical paths and multiple sensors, in particular with one sensor for a high resolution visible image and two hyperspectral sensors for measuring datacubes at a lower spatial resolution. The advantage of a single light input is that it is much easier to align the different sensor outputs to the same spatial coordinates. If needed, software is used to correct lens deformation, multi-aperture angles of view, etc.
[0015] In recent times, new technology has been developed that further improves pixel filtering for hyperspectral sensors. For example, putting a wavelength filter on a pixel, as was done with the original Bayer filters, makes each pixel sensitive to a specific narrow wavelength band. Spectral filter structures may be provided in a mosaic layout, where pixels are grouped in a matrix or "Bayer-like" array and each pixel within the matrix is sensitive to a certain wavelength, e.g. a 4 x 4 or 5 x 5 matrix, generating a "super pixel". In this context, a super pixel is understood to refer to a pixel with sub-pixels sampled at different wavelengths.
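By way of a minimal sketch (in Python/NumPy, assuming an n x n mosaic repeated over the whole sensor; the band ordering and sensor size are hypothetical), a frame from such a mosaic sensor can be rearranged into one band image per mosaic position, trading spatial resolution for spectral bands:

```python
import numpy as np

def demosaic_superpixels(raw, n=4):
    """Rearrange an n x n mosaic-filtered frame into an (n*n, H/n, W/n) cube.

    Each super pixel contains n*n sub-pixels sampled at different
    wavelengths; band ordering follows the row-major position inside
    the super pixel.
    """
    h, w = raw.shape
    h, w = h - h % n, w - w % n                 # whole number of super pixels
    raw = raw[:h, :w]
    cube = np.empty((n * n, h // n, w // n), dtype=raw.dtype)
    for r in range(n):
        for c in range(n):
            cube[r * n + c] = raw[r::n, c::n]   # one band per mosaic position
    return cube

# Example: a 2000 x 2000 sensor with a 4 x 4 mosaic yields a 16-band,
# 500 x 500 datacube.
frame = np.random.randint(0, 4096, (2000, 2000), dtype=np.uint16)
cube_vis = demosaic_superpixels(frame, n=4)     # shape (16, 500, 500)
```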
[0016] Versions of mosaic filters exist that are sensitive in the visible range and the near infrared (NIR), and they will most likely also be extended to the shortwave infrared (SWIR) range. However, no versions exist that are sensitive to the whole range of 400 - 1000 nm, as for narrow band filters (e.g. 10 nm) this would require 60 different filters, resulting in an 8 x 8 matrix creating one super pixel. This setup would have a very low 2D spatial resolution, or require a very large image sensor, and filtering techniques are not able to create individual pixel filters that are sensitive to just the visible light. Common knowledge of filtering techniques shows that most visible filters are also sensitive to light from the NIR region, as explained above. Furthermore, even if it were possible, reaching a 2000 x 2000 image resolution would require a 16000 x 16000 image sensor. At 5 um per pixel, the physical size of the silicon chip would be impractical for most applications, and the framerate would be very low or would require a vast number of electrical interfaces. Hence, the downside of this technique is that as the number of wavelengths increases, the resolution of the system goes down.
[0017] When using, for instance, 4 wavelengths, this results in a 2 x 2 mosaic matrix, each cell having its own wavelength; in this case the sensor resolution is divided by 4. Currently 4 x 4 and 5 x 5 matrices for visible and near infrared (NIR) light are available, but they reduce the optical resolution significantly, and the complete VIS and NIR range cannot be covered by a single chip-mosaic combination. Therefore, the multiple light paths in the present imaging device allow for the simultaneous use of multiple mosaic-based sensors.
[0018] Even then, too much detail in the visible range may be lost by the mosaic-based sensors. That is why the first light path is connected to a "standard" visible light sensor that provides a high resolution spatial image which can be used by the user (e.g. a medical specialist). This allows the invention to be used as a surgical tool, since the specialist would otherwise be incapable of using the images as a replacement for his or her "eyes".
[0019] In other words, the current invention overcomes these drawbacks and limitations in an inventive manner, allowing hyperspectral imaging at high resolution while still having the full resolution of a standard image sensor (sensor S1) available as the main sensor for general imaging. As an example, in a medical application, a standard high resolution sensor may be used like any other imaging device a surgeon is used to and accepts, while at the same time other image sensors "grab" or capture a datacube image at a slightly lower resolution, synchronized with and at the same frame rate as the high resolution image. In this way, it is possible to maintain a high number of wavelength channels (40+) at full video framerates, and at the same time to have software detect different features in the datacube and overlay these over an exactly matched image (sensor S1) that can be used for standard imaging (e.g. high resolution imaging for surgery, or stereo imaging feature mapping).
[0020] The present invention further provides a method for obtaining a hyperspectral image in an imaging device, the method comprising: receiving light from an object image by a prism or prism assembly through an entrance face, and dispersing light through at least three exit faces, wherein the first exit face is provided with an imaging sensor suitable for visible light and at least the second exit face and the third exit face are each provided with a hyperspectral imaging sensor.
[0021] The method and device according to the present invention use a filtering technique with a single optical path, a beam splitting prism module and a way of aligning the sensors such that, depending on the number of channels, the resolution can be increased while images with larger hyperspectral datacubes are generated, along with an extension into the SWIR range that other solutions currently do not provide. This results in high resolution hyperspectral datacubes with a predetermined bandwidth that is consistent over the whole range of frequencies, while providing high resolution super pixels over the complete wavelength range of 400 - 1000 nm or 400 - 1700 nm, and at the same time providing high resolution images that can be used for general image processing or as a means of interpolation. This overcomes one of the biggest downsides of the current state of the art, opening up many new applications in the medical field as well as in food and agro applications.
Brief description of the Figures [0022] On the attached drawing sheets, • figure 1 schematically shows an imaging device comprising a prism for splitting light in accordance with an embodiment of the present invention; • figure 2 shows diagrams of wavelength detection for different sensors in a prism according to an embodiment of the present invention; • figure 3 shows a diagram of a method for image creation in accordance with an embodiment of the present invention; • figure 4 shows a flow diagram for obtaining a hyperspectral image in accordance with an embodiment of the present invention; • figure 5 schematically shows a prism for splitting light in accordance with another embodiment of the present invention; • figure 6 shows a flow diagram for obtaining a hyperspectral image in accordance with another embodiment of the present invention; and • figure 7 schematically shows a processing device according to an embodiment of the present invention.
Detailed description [0023] Figure 1 shows a prism assembly for splitting light in accordance with an embodiment of the present invention. The light enters the prism through the entrance face. C1 and C2 are (different) optical coatings placed on two intermediate faces of the prism, each of them having the same or a different reflectance and wavelength sensitivity. S1-S3 are imaging sensors that receive the light coming from within the prism. According to an embodiment of the present invention, S1 receives light reflected by coating C1, S2 receives light reflected by coating C2, and S3 receives light that has traversed the prism unhindered.
[0024] An exemplary imaging device of the present invention uses several pixel-coated sensors (or sensors with a pixel-coated pattern on a glass substrate aligned over the sensor to create a similar solution), optionally sensitive to different frequency ranges of light (e.g. VIS, NIR, SWIR), and aligns them in various ways to obtain the optimal resolution required for the particular application, with full movie speed (> 15 fps) imaging capabilities, along with a high resolution color or monochrome image. Furthermore, the device of the present invention uses a "super pixel" hypercube where at least two independent pixels in the "super pixel" are sensitive to different frequencies of light and are perfectly spatially overlaid.
[0025] The device of the present invention is based on splitting light into a plurality of beams by means of beam splitters or dichroic prisms. When a light beam enters through a single optical lens, it enters the prism and is dispersed into different directions according to wavelength or energy. The dispersed light then enters the detectors (sensors), resulting in a 2D image, wherein one dimension represents the spectral axis and the other dimension represents the spatial information. Imaging sensors are attached to the prism surfaces through which light components exit the prism. Behind the lens, before the prism and/or before the imaging sensors, specific cleanup filters can be placed, depending on the application requirements and filter specifications.
[0026] The device of the present invention uses a sensor alignment mechanism that provides increased resolution depending on the number of channels.
[0027] The sensors in the device may be separate monochrome sensors such as Charge-Coupled Devices (CCD), Complementary Metal-Oxide Semiconductor (CMOS), scientific CMOS sensors (sCMOS) or other sensors, such as InGaAs sensors.
[0028] The imaging device can be used in a laparoscope, endoscope or open lens system to acquire the object image. The object image is then sent through the prism before reaching the sensors. The optical coatings C1 and C2 may have the same or different characteristics. A first configuration according to a first embodiment of the present invention uses a 3-channel beam-split prism, such as the prism of figure 1, where the energy of the light is split evenly. The coating C1 may present a reflectance of approximately 33% and C2 a reflectance of approximately 50%, both over a range between 400 and 1000 nm. This configuration results in C1 reflecting 33% of the incident light, which exits the prism through the first exit face and is detected by image sensor S1. The 66% of the light that passes through C1 is then split at C2, wherein 50% is reflected, exiting the prism through the second exit face and being detected by S2. The 33% of the light that passes through C2 exits the prism through the third exit face and is detected by S3.
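As a quick check of this even split, the fraction of the incident light reaching each sensor follows directly from the two coating reflectances (a small illustrative calculation using the approximate values given above):

```python
# Approximate reflectances of C1 (~33 %) and C2 (~50 %) over 400 - 1000 nm.
r_c1, r_c2 = 1 / 3, 1 / 2

frac_s1 = r_c1                      # reflected at C1 -> first exit face
frac_s2 = (1 - r_c1) * r_c2         # transmitted by C1, reflected at C2
frac_s3 = (1 - r_c1) * (1 - r_c2)   # transmitted by both coatings

print(frac_s1, frac_s2, frac_s3)    # ~0.33, ~0.33, ~0.33
```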
[0029] According to an embodiment of the present invention, S1 may be a standard high resolution imaging sensor suitable for detecting visible light (VIS), S2 may be a hyperspectral sensor suitable for detecting visible light, and S3 may be a hyperspectral sensor suitable for detecting near infrared (NIR) light. With this configuration, the hyperspectral datacubes created from the hyperspectral sensors can be used to complement the high resolution image created by the high resolution sensor, providing a high resolution image that may range from 400 to 1000 nm.
[0030] According to another embodiment of the present invention, the coating C1 may present a reflectance of approximately 33% in a range between 400 and 1000 nm, while C2 may present a reflectance of approximately 100% in a range between 400 and 650 nm and a reflectance of approximately 0% in a range between 650 and 1000 nm. C1 therefore reflects 33% of the incident light within the whole range 400 - 1000 nm, which exits the prism through the first exit face and is detected by image sensor S1. The remaining 66% of the light passes through C1, wherein the light in the range between 400 and 650 nm is completely reflected by C2, exiting the prism through the second exit face and being detected by S2. The light in the range between 650 and 1000 nm passes completely through C2, exits the prism through the third exit face and is detected by S3.
[0031] Filters may be placed before the appropriate sensors to narrow the wavelength band reaching each sensor, providing a clean, delimited frequency band and therefore a cleaner image. According to an embodiment of the present invention, the high resolution color image sensor may receive a color pass filter F1 (for instance a 400 - 650 nm bandpass), the VIS hyperspectral sensor may receive an equivalent filter F2, while the NIR sensor receives a 650 nm longpass filter F3. It should be noted that the present invention is not limited to this configuration, and that other configurations with dichroic coatings can also be designed to split the light energy in a smart manner. It should also be noted that, for clarity, a gap is shown between the prisms, the filters F1-F3 and the sensors S1-S3, but in most practical embodiments these items will be tightly connected to each other.
[0032] In the base configuration, a high resolution filter sensor S1, sensitive in the visible range and having a 4 x 4 "Bayer-like" spectral mosaic array that covers 16 spectral bands, is placed on the first exit face, and a 5 x 5 mosaic NIR sensor S2 is placed on the second exit face. For each sensor, sensitive to a different frequency range, a hyperspectral datacube is created. This results in a hyperspectral datacube of 41 bands being acquired at the full image speed of the sensors. On the third exit face a standard monochrome or color sensor S3 can be placed, so that this high resolution image sensor delivers a color or monochrome image. It is understood that sensors with different mosaic patterns and bands can be placed. Furthermore, the 4 x 4 image can be downsampled or the 5 x 5 image upsampled such that both sensors provide an evenly sized image.
[0033] The imaging device therefore creates a standard image from the high resolution sensor S1 and a hyperspectral datacube image from each of the hyperspectral sensors S2 and S3.
[0034] The imaging device includes a display output and a user interface. Information derived from the hyperspectral datacubes may be displayed, such as specific ratios that can be used to calculate parameters like blood flow or perfusion. A user may select, through the user interface, specific ratios or quantities from the hyperspectral datacube image. In this way, a number N of channels can be selected from among the datacube channels whose information is relevant for a specific measurement. The selected ratios or quantities will then be mapped to create an intermediate two-dimensional image, and a final image will be created wherein the intermediate image overlays the standard image from the high resolution sensor. A user, for example a medical specialist, may in this way select the channels that contain information of interest from the whole wavelength range and overlay the selected information on the standard image to provide a detailed and descriptive image. A detailed description of the method of creating and displaying the image is given below in relation to figure 3.
[0035] Figure 2 shows how the hyperspectral sensors S2 and S3 detect different wavelengths, that is, they receive different spectral images, but they receive the same 2D spatial image. Let the wavelengths detected by S2 be {λ1_S2, λ2_S2, ..., λ16_S2} and the wavelengths detected by S3 be {λ1_S3, λ2_S3, ..., λ25_S3}. In order to obtain an optimized resolution, λ1_S2 must see the same spatial image as λ1_S3, λ2_S2 must see the same spatial image as λ2_S3, and so forth. In this manner it is guaranteed that S2 and S3 see the same images, and are properly aligned and synchronized.
[0036] The device of the present invention therefore provides high resolution while not requiring relatively large matrices that make the size of the system impractical.
[0037] The generated datacube with 41 bands will be rescaled and oversampled so that all images will have the same resolution, and interpolation or decimation may be applied.
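A minimal sketch of this step (in Python/NumPy, assuming a 16-band cube from the 4 x 4 sensor and a 25-band cube from the 5 x 5 sensor; the nearest-neighbour resampling and the target resolution are arbitrary choices) brings both cubes to a common spatial grid and stacks them into one 41-band datacube:

```python
import numpy as np

def resample_nearest(cube, out_hw):
    """Nearest-neighbour resampling of a (bands, H, W) cube to out_hw."""
    b, h, w = cube.shape
    rows = np.arange(out_hw[0]) * h // out_hw[0]
    cols = np.arange(out_hw[1]) * w // out_hw[1]
    return cube[:, rows[:, None], cols[None, :]]

def merge_datacubes(cube_vis, cube_nir, out_hw=(500, 500)):
    """Stack the 16-band VIS cube and 25-band NIR cube into a 41-band cube."""
    return np.concatenate(
        [resample_nearest(cube_vis, out_hw), resample_nearest(cube_nir, out_hw)],
        axis=0,
    )

cube_41 = merge_datacubes(np.zeros((16, 500, 500)), np.zeros((25, 400, 400)))
print(cube_41.shape)   # (41, 500, 500)
```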
[0038] Figure 3 schematically illustrates a method for image creation in accordance with an embodiment of the present invention. The imaging device creates a standard image 31 from the high resolution sensor S1. From each of the hyperspectral sensors S2 and S3, a hyperspectral datacube image 32, 33 is created. According to an embodiment of the present invention, the hyperspectral datacube created from S2 comprises images within a wavelength range between 400 and 650 nm, and the hyperspectral datacube created from S3 comprises images within a wavelength range between 650 and 1000 nm.
[0039] When a user selects specific channels or quantities from a hyperspectral datacube to be displayed, a method 34 is applied to create an intermediate image 35 by combining the selected channels. For example, the user may select channels λ1_S2 and λ2_S2 from the S2 hyperspectral datacube, and channel λ2_S3 from the S3 hyperspectral datacube. These channels are combined and down- or upsampled so that they all have the same spatial resolution, and an intermediate 2D image can be created. In an embodiment according to the invention, 8 channels are selected from the hyperspectral datacubes in order to provide sufficient information for the measurement of blood flow and perfusion. In an embodiment according to the present invention, the selected channels may be a subset of 8 channels out of 470 nm, 500 nm, 510 nm, 520 nm, 540 nm, 550 nm, 560 nm, 570 nm, 580 nm, 590 nm and 600 nm, chosen so that the best wavelength discrimination between filters is achieved and overlap is reduced. In an embodiment according to the present invention, the wavelengths used are 470 nm, 540 nm, 550 nm, 560 nm, 570 nm, 580 nm, 590 nm and 600 nm. It is noted that the wavelength numbers given above do not refer to exact wavelengths, but to central wavelengths in an appropriate wavelength range. The exact wavelength range will depend on e.g. available filter materials and can be determined by a skilled person so as to include the above given central wavelengths.
[0040] In any case, the above central wavelengths are only one example, and a skilled person will understand that a different number of channels with different central wavelengths may be selected.
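A minimal sketch of the channel selection (Python/NumPy, with assumed band centres and a placeholder combination; the actual ratios used for blood flow or perfusion are not prescribed here) picks the datacube bands nearest to the chosen central wavelengths and combines them into an intermediate 2D map:

```python
import numpy as np

# Hypothetical 41-band datacube and assumed band centres (nm); the real
# centres depend on the mosaic filters actually used.
cube = np.random.rand(41, 500, 500)
band_centres = np.linspace(400, 1000, 41)

def select_channels(cube, centres, wanted_nm):
    """Pick, for each requested central wavelength, the nearest cube band."""
    idx = [int(np.argmin(np.abs(centres - w))) for w in wanted_nm]
    return cube[idx]

def intermediate_image(cube, centres,
                       wanted_nm=(470, 540, 550, 560, 570, 580, 590, 600)):
    """Combine the selected channels into a single 2D map.

    The normalised ratio below is a placeholder; an actual blood flow or
    perfusion estimator would apply its own channel arithmetic.
    """
    sel = select_channels(cube, centres, wanted_nm).astype(float)
    return sel[0] / (sel[1:].mean(axis=0) + 1e-6)

overlay_map = intermediate_image(cube, band_centres)   # shape (500, 500)
```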
[0041] The standard image and the intermediate image are then displayed together 36, wherein the intermediate image overlays the standard image, therefore providing additional information about specific useful parameters. The images may be displayed in an RGB color display, so that the different channels can be easily identified and the information can efficiently be interpreted.
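The display step can be sketched in the same illustrative way (the nearest-neighbour upsampling, the blending factor and the use of the red channel are assumptions, not prescribed choices): the intermediate map is brought to the resolution of the standard image and alpha-blended over it:

```python
import numpy as np

def overlay_on_standard(rgb, param_map, alpha=0.5):
    """Blend a 2D parameter map over the high resolution colour image.

    rgb:       (H, W, 3) float image in [0, 1] from the standard sensor.
    param_map: (h, w) map derived from the hyperspectral datacube; it is
               resampled to the RGB resolution and shown in the red channel
               purely for illustration (any colour mapping could be used).
    """
    h, w, _ = rgb.shape
    rows = np.arange(h) * param_map.shape[0] // h
    cols = np.arange(w) * param_map.shape[1] // w
    up = param_map[rows[:, None], cols[None, :]]
    up = (up - up.min()) / (np.ptp(up) + 1e-6)        # normalise to [0, 1]

    out = rgb.astype(float).copy()
    out[..., 0] = (1 - alpha) * out[..., 0] + alpha * up
    return np.clip(out, 0.0, 1.0)

blended = overlay_on_standard(np.zeros((1080, 1920, 3)), np.random.rand(500, 500))
```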
[0042] Figure 4 shows a flow diagram for obtaining a hyperspectral datacube in accordance with an embodiment of the present invention. The light enters 41 the prism and, depending on the wavelength of the light and on the characteristics of coating C1, part of the light energy is reflected at a first surface and reaches 42 sensor S1. Another part of the light energy that passes through the first surface is reflected at a second surface and reaches 43 sensor S2. The light that traverses the prism unhindered reaches 44 sensor S3. Once the light has been detected by the sensors, a standard image is created 45 from the first high resolution image sensor S1, and one hyperspectral datacube image is obtained 46 from sensors S2 and S3. S2 may be a 4 x 4 mosaic sensor sensitive to visible light and S3 may be a 5 x 5 mosaic sensor sensitive to NIR. The first frequency band of S2 "sees" the same spatial image as the first frequency band of S3, the second frequency band of S2 "sees" the same spatial image as the second frequency band of S3, and so forth. In this manner, the images obtained by the two hyperspectral sensors are spatially perfectly aligned. Specific channels or quantities may then be selected from the resulting hyperspectral datacube of 41 bands, so that specific information or specific ratios for calculating key parameters can be obtained. Perfusion and blood flow can, for example, be measured using this type of ratios. The selected channels are then combined and an intermediate image is created 47. This intermediate image is then displayed 48 spatially overlaying the standard image created from S1, being rescaled and oversampled if needed so that all the images have the same resolution. For perfusion or blood flow measurement, the calculated ratios can be displayed on top of the high resolution color image obtained by S1.
[0043] Figure 5 schematically shows a prism for splitting light in accordance with a second embodiment of the present invention. In this configuration, a 5-channel prism is used, where each sensor receives 20% of the light, i.e. the light is split evenly. The two additional channels relative to the first embodiment can be used to purposely misalign the image sensors with a 2 x 2 offset in the X and Y direction, in such a way that the resulting images have a higher resolution. With a 4 x 4 matrix the resolution is divided by 4 in each direction, so assuming an image sensor of 2000 x 2000 pixels, the resulting image will be a 500 x 500 resolution image. By intentionally misaligning the image with an offset of 2 x 2, the resolution is divided by 2 instead of 4, resulting in a 1000 x 1000 image. Since the pixels in between are now "closer together" than in the first embodiment, interpolation is possible without generating excessively large artifacts. Even a 1000 x 1000 image datacube of 40 bands is much larger than what current state of the art hyperspectral imagers deliver.
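A sketch of how two such deliberately offset band images could be combined (illustrative only; the diagonal half-super-pixel offset and the simple neighbour averaging are assumptions rather than the claimed implementation) interleaves the two 500 x 500 images onto a 1000 x 1000 grid and fills the remaining positions by interpolation:

```python
import numpy as np

def interleave_offset_pair(band_a, band_b):
    """Combine two 500 x 500 band images from mosaic sensors offset by
    half a super pixel (2 x 2 sensor pixels).

    Assumes a diagonal offset: sensor A's samples land on the even
    (row, col) sites of a 1000 x 1000 grid and sensor B's on the odd
    sites (a quincunx pattern); the remaining sites are filled by a
    simple horizontal average of sampled neighbours.
    """
    h, w = band_a.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = band_a
    out[1::2, 1::2] = band_b
    out[0::2, 1::2] = (band_a + np.roll(band_a, -1, axis=1)) / 2
    out[1::2, 0::2] = (band_b + np.roll(band_b, 1, axis=1)) / 2
    return out

hires_band = interleave_offset_pair(np.random.rand(500, 500), np.random.rand(500, 500))
print(hires_band.shape)   # (1000, 1000)
```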
[0044] Figure 6 shows a flow diagram for obtaining a hyperspectral datacube in accordance with the second embodiment of the present invention. The light enters 61 the prism and is then split into different paths depending on its wavelength and on the characteristics of the optical coatings. In steps 62 and 63, part of the light energy is reflected as in the first embodiment of the present invention and reaches sensors S1 and S2, respectively. Part of the light is then reflected in steps 64 and 65 and reaches sensors S3 and S4, which have a 2 x 2 offset in the X and Y directions. Lastly, the part of the light that traverses the prism unhindered reaches 66 sensor S5. Once the light has been detected by the sensors, a standard image is created 67 from the first high resolution image sensor S1, and one hyperspectral datacube is created 68 from sensors S2, S3, S4 and S5. S2 may be a 4 x 4 mosaic sensor sensitive to visible light and S3 may be a 5 x 5 mosaic sensor sensitive to NIR. S4 and S5 may be sensors similar to S2 and S3 but with a 2 x 2 offset in the X and Y directions. The first frequency band detected by S1 "sees" the same spatial image as the first frequency band detected by S2, the second frequency band detected by S1 "sees" the same spatial image as the second frequency band detected by S2, and so forth. In this manner, the images obtained by S1 and S2 are spatially perfectly aligned. Specific images may be selected from the resulting hyperspectral datacube, so that specific information or specific ratios for calculating key parameters can be obtained. The selected channels are then combined and an intermediate image is created 69. This intermediate image is then displayed 610 spatially overlaying the standard image created from S1, being rescaled and oversampled if needed so that all the images have the same resolution.
[0045] As these datacubes can involve a large amount of data, which is difficult to transport in real time, the proposed device preferably has onboard processing to process the datacubes based on the desired measurement. For example, blood flow and oxygenation can be measured at the same time by processing the datacube and outputting an image which can be overlaid on top of the visible image to show the amount of perfusion.
[0046] Figure 7 schematically shows a processing device comprised in the imaging device according to an embodiment of the present invention. The sensor units 71, 72 and 73, which contain imaging sensors S1, S2 and S3, output an analog signal that is digitized by Analog-to-Digital Converters (ADCs) 74, 75 and 76, respectively. The digital signals are then analyzed by processing unit 77. In the processing unit, the standard image from the high resolution sensor S1 is created, and the hyperspectral datacubes from each of the hyperspectral sensors are also created. The processing unit is connected to a display output 78 and a user interface unit 79, through which a user, for example a medical specialist, may select specific ratios or channels from the hyperspectral datacubes to be used for measuring specific parameters, such as blood flow or perfusion. The processing unit may create an intermediate two-dimensional image by combining the selected channels, and the intermediate image can then be displayed overlaying the standard image from the high resolution sensor, so that for instance the amount of perfusion can be displayed overlaying the visible image, in this way aiding the doctor or specialist to better analyze and interpret the information.
[0047] The imaging device according to an embodiment of the present invention may also include a camera, and the prism assembly may be located inside the camera. In an embodiment according to the invention, the imaging device may also include a broad-beam light source.
[0048] In an advantageous embodiment of the present invention, the device comprises three sensors: two hyperspectral sensors for two different wavelength ranges, and one color sensor providing a high resolution (real-time) color image to guide the surgeon. At the same time, ratios in the hyperspectral datacubes can be used to calculate key parameters. For example, it is known that perfusion and blood flow can be measured using these types of ratios. The calculated ratios can be shown on top of the color image obtained by the color sensor.
[0049] With respect to the perfusion and blood flow example, it may be noted that in known systems perfusion and blood flow are measured using just 3 specific wavelengths. Although this gives an estimate of perfusion, it is the only quantity that can be measured and it requires a very controlled environment. Furthermore, such systems have no ability to overlay the information on a color image that can be used for e.g. surgery. Secondly, the perfusion/oxygenation is measured from 3 small bands at specific wavelengths, which reduces accuracy. The device proposed in the present invention provides a vast improvement over such a system, since the hyperspectral sensors used on the prism assembly can provide a datacube wherein the wavelength range 600 - 900 nm is divided into bands of roughly 15 nm, giving an exact measurement of the hemoglobin absorption curve and thereby allowing an exact measurement of oxygenation. Furthermore, from the datacube the information from 8 specific bands in the visible region (440 - 600 nm) can be used to perform a similar oxygenation measurement, and both results can be used to increase the accuracy of the total measurement.
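One generic way such band-wise absorption data might be turned into an oxygenation estimate is per-pixel least-squares unmixing against oxy- and deoxyhemoglobin reference spectra; the sketch below uses entirely made-up extinction values and is not the calculation prescribed above:

```python
import numpy as np

def oxygenation_map(absorbance_cube, ext_hbo2, ext_hb):
    """Per-pixel least-squares fit of A(lambda) ~ c1*eps_HbO2 + c2*eps_Hb.

    absorbance_cube: (bands, H, W) absorbance for the 600 - 900 nm bands.
    ext_hbo2, ext_hb: (bands,) reference extinction spectra (placeholders;
    real values would come from published hemoglobin data).
    Returns an estimated oxygen saturation map c1 / (c1 + c2).
    """
    b, h, w = absorbance_cube.shape
    design = np.stack([ext_hbo2, ext_hb], axis=1)           # (bands, 2)
    flat = absorbance_cube.reshape(b, -1)                   # (bands, H*W)
    coeffs, *_ = np.linalg.lstsq(design, flat, rcond=None)  # (2, H*W)
    c1, c2 = np.clip(coeffs, 0, None)
    return (c1 / (c1 + c2 + 1e-9)).reshape(h, w)

# Dummy inputs: 20 bands covering 600 - 900 nm in roughly 15 nm steps.
sto2 = oxygenation_map(np.random.rand(20, 64, 64),
                       np.random.rand(20), np.random.rand(20))
```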
[0050] In the foregoing description of the figures, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the scope of the invention as summarized in the attached claims.
[0051] In particular, combinations of specific features of various aspects of the invention may be made. An aspect of the invention may be further advantageously enhanced by adding a feature that was described in relation to another aspect of the invention.
[0052] It is to be understood that the invention is limited by the annexed claims and its technical equivalents only. In this document and in its claims, the verb "to comprise" and its conjugations are used in their non-limiting sense to mean that items following the word are included, without excluding items not specifically mentioned. In addition, reference to an element by the indefinite article "a" or "an" does not exclude the possibility that more than one of the element is present, unless the context clearly requires that there be one and only one of the elements. The indefinite article "a" or "an" thus usually means "at least one".
Claims (34)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2015804A NL2015804B1 (en) | 2015-11-17 | 2015-11-17 | Hyperspectral 2D imaging device. |
PCT/NL2016/050804 WO2017086788A1 (en) | 2015-11-17 | 2016-11-17 | Hyperspectral 2d imaging device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2015804A NL2015804B1 (en) | 2015-11-17 | 2015-11-17 | Hyperspectral 2D imaging device. |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2015804B1 true NL2015804B1 (en) | 2017-06-02 |
Family
ID=55802423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2015804A NL2015804B1 (en) | 2015-11-17 | 2015-11-17 | Hyperspectral 2D imaging device. |
Country Status (2)
Country | Link |
---|---|
NL (1) | NL2015804B1 (en) |
WO (1) | WO2017086788A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020123722A1 (en) | 2018-12-14 | 2020-06-18 | Spectral Md, Inc. | System and method for high precision multi-aperture spectral imaging |
BR112021011132A2 (en) | 2018-12-14 | 2021-08-31 | Spectral Md, Inc. | MACHINE LEARNING SYSTEMS AND METHODS FOR WOUND ASSESSMENT, PREDICTION AND WOUND TREATMENT |
CN114397255B (en) * | 2021-11-12 | 2023-09-01 | 中国科学院西安光学精密机械研究所 | Wide-spectrum high-resolution video spectrum imaging system and method |
CN116630148B (en) * | 2023-07-25 | 2023-09-26 | 芯视界(北京)科技有限公司 | Spectral image processing method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6215597B1 (en) * | 1999-11-17 | 2001-04-10 | Duncan Technologies, Inc. | Apparatus for forming a plurality of subimages having different characteristics |
US20090021598A1 (en) * | 2006-12-06 | 2009-01-22 | Mclean John | Miniature integrated multispectral/multipolarization digital camera |
WO2014007625A1 (en) * | 2012-07-05 | 2014-01-09 | Quest Photonic Devices B.V. | Method and device for detecting fluorescence radiation |
NL2010883C2 (en) * | 2013-05-29 | 2014-12-08 | Quest Photonic Devices B V | Two-dimensional imaging system, device comprising an imaging system, and method for calculating a parameter for a two-dimensional range. |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2631564A1 (en) | 2004-11-29 | 2006-06-01 | Hypermed, Inc. | Medical hyperspectral imaging for evaluation of tissue and tumor |
US8315692B2 (en) | 2007-02-22 | 2012-11-20 | Sheinis Andrew I | Multi-spectral imaging spectrometer for early detection of skin cancer |
WO2010019515A2 (en) | 2008-08-10 | 2010-02-18 | Board Of Regents, The University Of Texas System | Digital light processing hyperspectral imaging apparatus |
Also Published As
Publication number | Publication date |
---|---|
WO2017086788A1 (en) | 2017-05-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| MM | Lapsed because of non-payment of the annual fee | Effective date: 20181201 |