WO2018085841A1 - Calibration method and apparatus for active pixel hyperspectral sensors and cameras - Google Patents

Calibration method and apparatus for active pixel hyperspectral sensors and cameras

Info

Publication number
WO2018085841A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
pixel array
imaging system
hyperspectral imaging
semiconductor pixel
Prior art date
Application number
PCT/US2017/060409
Other languages
French (fr)
Inventor
Rik FRANSENS
Kurt CORNELIS
Original Assignee
BioSensing Systems, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BioSensing Systems, LLC filed Critical BioSensing Systems, LLC
Priority to EP17867023.8A priority Critical patent/EP3535553A4/en
Publication of WO2018085841A1 publication Critical patent/WO2018085841A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/12 Generating the spectrum; Monochromators
    • G01J3/26 Generating the spectrum; Monochromators using multiple reflection, e.g. Fabry-Perot interferometer, variable interference filters
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/67 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N25/671 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
    • H04N25/673 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources

Definitions

  • Embodiments of the present application relate broadly to calibration of hyperspectral imaging systems, and more specifically to methods and apparatus for calibrating active pixel (such as but not limited to CMOS based) hyperspectral sensors and cameras.
  • Hyperspectral imaging systems have been used in various applications for many decades. Typically, such systems have been bulky and expensive, which has limited their wide adoption. More recently, sensors based on mounting or depositing Fabry-Perot filters directly onto CMOS digital imaging arrays have been developed, reducing dramatically the size and weight of the hyperspectral cameras based on such new sensors. Because such sensors can be manufactured directly onto silicon wafers using photolithographic processes, there is the potential to drastically reduce the cost by increasing the manufacturing volume.
  • The line-scan, "tiled", and "mosaic" sensor arrangements are described in patents US2014/0175265 (A1) for line-scan, US2014267849 (A1) for tiled, and US2015276478 (A1) for mosaic.
  • the IMEC LS100-600-1000 line-scan sensor is based on a CMOS chip designed by CMOSIS, which has 2048 pixels by 1088 pixels.
  • IMEC deposits Fabry-Perot filters onto each wafer, to provide about 100 spectral bands ranging from 600 nm to 1000 nm. The filter for each band is positioned above a "row" of 8 pixels by 2048 pixels.
  • An example of cameras that incorporate the IMEC sensor is manufactured by Ximea.
  • the camera is small and light, for example having dimensions of 26x26x21 millimeters (mm) and weighing 32 grams, and provides a USB3.0 digital output capable of providing 170 frames per second.
  • An example of camera systems which can incorporate the Ximea camera based on the IMEC sensor is the IMEC Evaluation System, which also includes halogen lights, a linear translation stage controlled by computer software to move the camera or the sample being scanned, and calibration and processing software to acquire images and perform certain transformations of the raw data to demonstrate various applications.
  • the hyperspectral sensors can provide much finer spectral resolution, which can be used to distinguish between objects that may appear the same to a human eye despite differences in their chemical or biochemical make up.
  • hyperspectral imaging during brain surgery, to help the surgeon distinguish between various tissues while performing the surgery.
  • software is needed to quickly and reliably process the raw hyperspectral data and provide a virtual image that can be superimposed in the field of view of the operating microscope, providing extra information to the surgeon.
  • Small changes in the oxygen concentration in small capillaries can be magnified and even the boundaries between a tumor and healthy tissues can be more clearly visible when the output of the hyperspectral system can superimpose additional information that would not be readily visible to the naked human eye.
  • Embodiments of the present invention combine several measurements which need to be performed to best calibrate the hyperspectral imaging system. While one can perform only some of the measurements described herein, the combination of many, most or all of the measurements can provide information that, when used by the image processing software, reduces the variances introduced by the hyperspectral measurement system and improves the accuracy and the benefits of the processed information produced by the hyperspectral analysis software.
  • Embodiments of the invention provide a systematic way to measure non-ideal characteristics of hyperspectral camera systems, otherwise referred to herein as hyperspectral imaging systems, that use an active pixel sensor based on Fabry-Perot filters applied onto the active pixel array.
  • the deviation of the actual performance of a given system from the ideal performance (i.e., that of a theoretical system that would perform perfectly and consistently)
  • This pre-processing software uses the calibration information to apply a correction to the raw data recorded using the camera system, reducing the variance or noise introduced by the actual system as compared to the ideal system.
  • the sources of variance can be within a given sensor; or can be due to external factors affecting a given sensor, such as its temperature.
  • the sensor may also respond differently to light rays arriving along the lens's axis and rays arriving from other directions.
  • the pairing of a given sensor with the same model lens may yield different results because the lenses are not identical.
  • a given sensor paired with a specific lens may provide different raw data than another sensor and lens of identical model and specifications, simply because of the small manufacturing variances between the nominally identical items.
  • Any digital camera is programmed to convert electromagnetic radiation (photons) into digital frames through the reading of the semiconductor pixel array based on the lens aperture and the integration time (a longer integration time implies that more photons are collected before the conversion into electrons is read into a digital reading for each pixel).
  • the goal of embodiments of the invention is to minimize all of the above sources of variance due to the hyperspectral system, so that the processing software can more efficiently be trained to recognize specific "spatial-spectral signatures" in a robust and reliable way. This is especially important if many camera systems are used to scan hundreds or thousands of acres to detect a crop disease, or if many surgeons rely on such a system to perform surgeries and to compare data with that of other colleagues.
  • Embodiments of the invention comprise multiple measurements to quantify several non-ideal characteristics of such a system.
  • the measurements can be taken according to the sequence presented here, or can be taken in an alternate sequence. Some of the measurements can be skipped or all of the measurements can be combined in a single calibration process.
  • Some of the measurements are more relevant for sensors with Fabry-Perot filters mounted on a semiconductor pixel array or deposited directly onto the chip, such as CMOS digital pixel arrays with Fabry-Perot filters, than for older and more conventional hyperspectral systems based on other technologies and techniques.
  • Figure 1 illustrates a variation of dark current levels at the center of a semiconductor pixel array including one or more Fabry-Perot filters as a function of the integration time and sensor temperature, in accordance with certain aspects of the present disclosure.
  • Figure 2 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 0.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 3 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 1.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 4 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.0 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 5 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array, in accordance with certain aspects of the present disclosure.
  • Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm, in accordance with certain aspects of the present disclosure.
  • Figure 8 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels, in accordance with certain aspects of the present disclosure.
  • Figure 9 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels and an integrating sphere, in accordance with certain aspects of the present disclosure.
  • Figure 10 illustrates an estimated modulation map M1, in accordance with certain aspects of the present disclosure.
  • Figure 11 illustrates reconstructions (up to scale) of identical light spectra using a HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor, in accordance with certain aspects of the present disclosure.
  • Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientation, in accordance with certain aspects of the present disclosure.
  • Figure 13 illustrates the estimated halogen light front glass transmittance spectrum, in accordance with certain aspects of the present disclosure.
  • Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference.
  • Figure 15 illustrates the estimated modulation M2, in accordance with certain aspects of the present disclosure.
  • Figure 16 illustrates a light source spectrum reconstruction with and without M2 modulation correction, in accordance with certain aspects of the present disclosure.
  • Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC, in accordance with certain aspects of the present disclosure.
  • Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present, in accordance with certain aspects of the present disclosure.
  • Figure 19 illustrates a composition matrix after application of calibration information such as a spatial modulation compensation parameter generated using techniques including those described herein, in accordance with certain aspects of the present disclosure.
  • Figure 20 illustrates an example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice significant differences above 875 nm, in accordance with certain aspects of the present disclosure.
  • Figures 21A and 21B show spectral measurement of Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI line-scan camera using a 600-1000 nm sensor), in accordance with certain aspects of the present disclosure.
  • Figures 22A and 22B show spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration, in accordance with certain aspects of the present disclosure.
  • Figures 23A, 23B, 23C, and 23D illustrate flow diagrams of calibration operations, in accordance with certain aspects of the present disclosure.
  • Figures 24A, 24B, and 24C illustrate examples of extensive use of the IMEC sensor, Ximea camera, and IMEC Evaluation System, including the use of the hyperspectral system in agricultural fields, with the camera positioned on a linear stage between two tripods or onto a small drone, in accordance with certain aspects of the present disclosure.
  • Figures 25A, 25B, and 25C illustrate filter groups, in accordance with certain aspects of the present disclosure.
  • a full system calibration system and method for hyperspectral imaging (HSI) cameras and sensors which addresses multiple aspects, including any one or more of knowledge of both sensor related aspects such as dark current, sensor non- linearity and quantum efficiency (QE), and system related aspects such as lens aberrations, lens transmission and camera-lens interactions.
  • the full system calibration method and system of the present invention removes intra- and inter-sensor variations, and renders the HSI camera invariant to the effects of, for example, temperature and specific lens settings.
  • the full system calibration system and method of embodiments of the present invention addresses both the camera and the lens(es), which provides particular advantage since both sensor and system related aspects are addressed. This innovative approach removes intra- and inter-sensor variations. Correct spectral measurements are obtained in a repeatable manner, irrespective of where the object appears in the camera's field of view (FOV) or which particular camera is being used, and independent from changing lighting conditions and operational mode of the particular HSI system.
  • the full system calibration system and method of embodiments of the present invention are well suited for critical applications, for example including the medical or life science domain, which demand reliable spectral measurements, for applications which apply trained classifiers to hyperspectral image data, or applications which apply carefully chosen thresholds to specific hyperspectral data attributes. More specifically, embodiments of the calibration system and method of the present invention are configured to enable an HSI camera system to operate robustly in one or more of the following situations: changes in lens settings; switches of HSI camera system or optics; altering light conditions, e.g. caused by decaying lighting systems; and/or variable operational conditions, e.g. changes in environmental temperature.
  • A hyperspectral imaging system may include one or more active pixel sensors, otherwise referred to herein as a semiconductor pixel array.
  • Each of the one or more active pixel sensors includes Fabry-Perot filters mounted on a semiconductor pixel array or deposited directly onto the pixel array.
  • the semiconductor pixel array including the one or more Fabry-Perot filters may be formed to have any capture and filter arrangement including but not limited to, a linescan arrangement, a snapshot tiled arrangement, and a snapshot mosaic arrangement using techniques including those known in the art.
  • the hyperspectral imaging system may also include one or more lenses used to project an image on each of the one or more semiconductor pixel arrays.
  • one lens may be used to project an image on more than one semiconductor pixel array.
  • the hyperspectral imaging system may also include one or more filters and other components for capturing images.
  • the hyperspectral imaging system is configured to generate one or more images or raw output data, including five or a greater number of spectral bands, each having a nominal spectral bandwidth of less than 100 nm; for example, 8 bands of less than 50 nm, 10 bands of less than 40 nm, 20 bands of less than 30 nm, 40 bands of less than 20 nm, or more than 50 bands each with a nominal bandwidth of less than 15 nm. Please note that such bands may be adjacent, or may be selected from a plurality of bands based on a careful selection of the most important bands identified during a classification training process.
  • a hyperspectral imaging system is configured to generate one or more images or raw output data, including five or greater spectral bands having a spectral bandwidth of up to 1000 nanometers (nm).
  • An embodiment of an apparatus for calibrating a hyperspectral imaging system includes an evaluation system configured to generate calibration information for the hyperspectral imaging system, the calibration information generated based on at least two or more calibration parameters, including: a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array; a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array; a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency; and a spectral modulation parameter based on a pixel response variation due to light wavelengths outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array, using techniques including those described herein.
  • Such calibration can be done at different levels of the hyperspectral imaging system and not necessarily at the very end using the complete final system.
  • the calibration can be done at the (1) wafer level, (2) chip level, (3) chip + PC board level, or (4) chip + PC board + lens (full system) level.
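The correction chain implied by these calibration parameters can be sketched as follows. This is a minimal illustration, not the patent's implementation: all model interfaces (`dark_model`, `gain_inverse`, `spatial_gain`, `spectral_matrix`) are hypothetical placeholders, and the final reshape assumes a line-scan layout in which each spectral band occupies a contiguous block of sensor rows.

```python
import numpy as np

def correct_raw_frame(raw, dark_model, gain_inverse, spatial_gain, spectral_matrix,
                      temperature, integration_time):
    """Apply the four calibration corrections, in order, to one raw frame.

    Hypothetical interfaces (placeholders, not the patent's API):
      dark_model(t, ti) -> per-pixel dark-current prediction (2-D array)
      gain_inverse(x)   -> inverse of the non-linear gain (vectorized)
      spatial_gain      -> per-pixel spatial modulation map (2-D array)
      spectral_matrix   -> band-unmixing correction matrix (bands x bands)
    """
    # 1. Dark-current compensation: subtract the predicted dark frame.
    frame = raw.astype(np.float64) - dark_model(temperature, integration_time)
    # 2. Non-linearity compensation: linearize the pixel response.
    frame = gain_inverse(frame)
    # 3. Spatial modulation compensation: divide by the per-pixel gain map.
    frame = frame / spatial_gain
    # 4. Spectral modulation compensation: unmix out-of-band leakage per band
    #    (assumes each band is a contiguous block of rows, as in a line-scan sensor).
    bands = frame.reshape(spectral_matrix.shape[0], -1)
    return spectral_matrix @ bands
```

With identity models the chain leaves the data unchanged, which makes the ordering of the four steps easy to verify in isolation.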
  • Dark current is an electrical effect that causes a semiconductor pixel array to return non-zero digital values for each pixel even when there is no light (i.e. no external electromagnetic radiation in the spectral range of the sensor) reaching the sensor.
  • a conventional approach to measuring the dark current involves covering the lens with a cap to prevent any light from entering it, acquiring a series of digital images, and averaging the values of each pixel to produce an average dark reference image. Using a conventional approach, such as the approach implemented in the IMEC Evaluation System software, this dark reference is then subtracted from the actual raw data for each image. Based on measurements with actual sensors from IMEC, the inventors have concluded that a novel and more comprehensive approach is necessary. Embodiments of the invention assume not only that the dark current varies across the area of the sensor, but also that its magnitude varies as a function of the sensor's temperature and integration time.
  • Dark current can be measured by taking images when the lens is blocked with a metal lens cap. To eliminate the effect of noise, many images can be taken successively, and averaged on a pixel-by-pixel basis. The effect of temperature and integration time variation on dark current can be directly measured by taking dark frames at a sparse selection of temperature- integration time points. Bilinear interpolation of the resulting lookup table would result in highly accurate dark current predictions. However, the required size of the lookup table, which is typically stored in non-volatile memory inside the camera or camera system, could be too large in practice.
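The pixel-by-pixel averaging and the bilinear interpolation of a sparse temperature/integration-time lookup table can be sketched as below. This is an illustrative sketch under stated assumptions (the function names and the grid layout are hypothetical, not the patent's code):

```python
import numpy as np

def average_dark_frame(frames):
    """Average a stack of dark frames pixel-by-pixel to suppress temporal noise."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

def interp_dark(lookup, temps, itimes, t, ti):
    """Bilinearly interpolate a per-pixel dark-frame lookup table.

    lookup: array of shape (len(temps), len(itimes), rows, cols) holding the
    averaged dark frames recorded at a sparse grid of temperature and
    integration-time points.
    """
    # Locate the grid cell containing (t, ti), clamped to the table edges.
    i = np.clip(np.searchsorted(temps, t) - 1, 0, len(temps) - 2)
    j = np.clip(np.searchsorted(itimes, ti) - 1, 0, len(itimes) - 2)
    wt = (t - temps[i]) / (temps[i + 1] - temps[i])
    wi = (ti - itimes[j]) / (itimes[j + 1] - itimes[j])
    # Blend the four surrounding dark frames.
    return ((1 - wt) * (1 - wi) * lookup[i, j]
            + wt * (1 - wi) * lookup[i + 1, j]
            + (1 - wt) * wi * lookup[i, j + 1]
            + wt * wi * lookup[i + 1, j + 1])
```

The storage cost is what the text warns about: the table holds one full frame per grid point, which motivates the per-pixel polynomial model described next.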
  • the storage space requirements can be reduced by replacing the lookup tables per pixel by a polynomial model.
  • a third order polynomial for each pixel proved sufficient for reliable dark current predictions.
  • for each pixel, a separate polynomial model is fitted through its recorded data.
  • the dark current response is measured again at several temperature - integration time points, to have data which was generated independently from the training phase.
  • at each recorded temperature-integration time point, the difference is determined between (a) the actual recorded pixel response and (b) the response predicted by the polynomial model for that pixel.
  • Figures 2 to 5 present a view on the magnitude of these differences across the sensor. Each Figure shows the entire image sensor array in which a pixel is either rendered as white or as black. For a selected threshold value, if the difference between actual recorded pixel value and predicted value is less than the mentioned threshold, for all temperature - integration time points for that pixel, it is rendered as a white pixel in the figure. If not, it is rendered as a black pixel.
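A possible realization of the per-pixel third-order polynomial fit and the threshold validation that produces the white/black maps of Figures 2 to 5 is sketched below. The exact polynomial basis in temperature T and integration time I is an assumption (the patent only states a third-order polynomial per pixel):

```python
import numpy as np

def design_matrix(temps, itimes):
    """All third-order polynomial terms in temperature T and integration time I."""
    T, I = np.asarray(temps, float), np.asarray(itimes, float)
    return np.stack([np.ones_like(T), T, I, T * I, T**2, I**2,
                     T**2 * I, T * I**2, T**3, I**3], axis=1)

def fit_pixel_models(A, dark_values):
    """Least-squares fit of one polynomial per pixel.

    dark_values: (num_points, num_pixels) dark responses at the training grid.
    Returns coefficients of shape (10, num_pixels).
    """
    coeffs, *_ = np.linalg.lstsq(A, dark_values, rcond=None)
    return coeffs

def validate(A_test, dark_test, coeffs, threshold):
    """White/black map as in Figures 2-5: True (white) where |actual - predicted|
    stays below the threshold at every held-out test point for that pixel."""
    residual = np.abs(dark_test - A_test @ coeffs)
    return np.all(residual < threshold, axis=0)
```

Because the validation uses independently recorded test points, a pixel is rendered white only if the model generalizes, not merely if it fits the training grid.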
  • various embodiments are configured to generate a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array.
  • Non-linearity is an electrical effect that causes the response (i.e. digital output) of a sensor pixel to be non-linearly related to the amount of incident light reaching it.
  • embodiments of the systems and methods described herein measure and take into account other variables leading to a greater amount of non-linearity than accounted for by current systems. For example, a non-linear effect is present across the full dynamic range (and not only in the range of the most significant bit of each pixel), and this effect varies with sensor temperature and integration time.
  • Embodiments of the invention use monochromatic light and optical filters interposed between the light source and the sensor, to create pairs of measurements (e.g., digital readings from each pixel) from incident light beams that are in a precisely known intensity ratio.
  • embodiments are configured to measure sensor non-linearity by taking image pairs of monochromatic light at a wide range of unknown light intensities. The difference between the first and second image of each pair is that the second image is taken with light at a precisely known intensity fraction of the first image.
  • the known intensity fraction can be generated by placing an optical filter in front of the light source with precisely known transmittance at the monochromatic wavelength.
  • the monochromatic light can be generated with a broadband light source filtered through a narrowband filter.
  • such a calibration system is configured to use narrowband filters with a full width at half maximum (FWHM) of less than 3 nm.
  • a narrowband filter is mounted in front of the camera such that all of the light entering the camera is of the same monochromatic wavelength or closely thereto.
  • the system is configured to prevent the camera from observing light that has not passed through the known optical filter in front of the light source. For example, this can be achieved with adequate shielding of stray light and/or recording an extra image in which only the stray light is measured.
  • the light is measured without a lens, from a sufficient distance to make sure that the light rays transmitted through the narrowband filter are approximately parallel. This is necessary because the central wavelength of optical narrowband filters varies with the angle of incident light.
  • I0 and I1 represent the (dark current subtracted) pixel responses without and with the additional known optical filter
  • B is the response to only the stray light
  • T is the known filter transmittance at the wavelength of the monochromatic light.
  • p represents the parameter vector of the non-linearity correction function
  • the system is configured to calculate a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array.
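One way to estimate such a non-linear compensation parameter from the image pairs is sketched below, under the assumption (not stated in the patent) that the linearizing correction function is a cubic polynomial f(x) = x + a·x² + b·x³. With that form, the constraint that the corrected responses of each stray-light- and dark-current-corrected pair be in the known ratio T becomes linear in (a, b), so ordinary least squares suffices:

```python
import numpy as np

def estimate_nonlinearity(I0, I1, T):
    """Estimate polynomial non-linearity parameters from image pairs.

    Assumed model (an illustration, not the patent's exact form): the
    linearized response is f(x) = x + a*x**2 + b*x**3, and every pair of
    corrected readings must satisfy f(I1) = T * f(I0), with T the known
    filter transmittance at the monochromatic wavelength. Expanding gives
    (I1 - T*I0) + a*(I1**2 - T*I0**2) + b*(I1**3 - T*I0**3) = 0,
    which is linear in (a, b).
    """
    I0, I1 = np.ravel(I0).astype(float), np.ravel(I1).astype(float)
    M = np.stack([I1**2 - T * I0**2, I1**3 - T * I0**3], axis=1)
    rhs = -(I1 - T * I0)
    (a, b), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return a, b

def linearize(x, a, b):
    """Apply the estimated correction to corrected raw readings."""
    return x + a * x**2 + b * x**3
```

Note that the absolute light intensities never enter the estimate; only the known ratio T between the two images of each pair is used, which is the point of the pair-based measurement.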
  • Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array.
  • the non-linearity estimation results, as illustrated in Figure 6, were determined at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm as determined using a system according to embodiments described herein.
  • Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm.
  • a non-linear correction can reduce the variance due to this non-linear effect by a factor of 3 to 5.
  • embodiments of the evaluation system are configured to generate a non-linear compensation parameter, such as a correction function, that varies with these conditions as well, to compensate the raw data.
  • the evaluation system is configured to repeat the measurements at a plurality of pre-defined sensor temperatures to generate one or more non-linear compensation parameters, and to use interpolation techniques to correct images (the raw data) taken at different temperatures.
  • a similar technique can be used to accommodate a range of different integration times.
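Interpolating the correction parameters between the calibrated temperatures (and, analogously, integration times) might look like the following sketch; linear interpolation and the parameter layout are assumptions, not the patent's specification:

```python
import numpy as np

def interp_correction_params(calib_temps, calib_params, temperature):
    """Linearly interpolate non-linear correction parameters between the
    sensor temperatures at which they were calibrated.

    calib_temps:  increasing 1-D sequence of calibration temperatures.
    calib_params: array of shape (num_temps, num_params), one parameter
                  vector per calibrated temperature (layout hypothetical).
    """
    calib_temps = np.asarray(calib_temps, float)
    calib_params = np.asarray(calib_params, float)
    # Interpolate each parameter independently along the temperature axis.
    return np.array([np.interp(temperature, calib_temps, calib_params[:, k])
                     for k in range(calib_params.shape[1])])
```

The same one-dimensional interpolation can be nested over integration time to cover the full temperature-integration time grid.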
  • Figure 8 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels.
  • Such an embodiment includes an intensity reduction filter configured to be moved in and out automatically (e.g., via a filter wheel), such as under the control of a processor or computer, to gather image pairs at two light intensities, of which the second intensity level is a known fraction of the first.
  • the narrow band pass filter is configured to automatically switch between several different filter bands (e.g., filter wheel) under the control of a processor or computer to measure the camera responses at different filter bands.
  • the evaluation system also includes a halogen light DC controller under the control of a processor or computer to gather image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired response range, such as the entire response range of the semiconductor pixel array.
  • a PID temperature controller under the control of a processor or computer to automatically sample the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array.
  • a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system.
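The acquisition cycle described above might be sketched as below. All hardware handles (`camera`, `pid`, `wheel`, `light`) and their method names are hypothetical stand-ins for real device drivers:

```python
import itertools

class _Stub:
    """Minimal stand-in for real hardware drivers (illustrative only)."""
    def __getattr__(self, name):
        return lambda *a, **k: 0.0   # every call succeeds; grab() returns 0.0

def acquire_pairs(camera, pid, wheel, light, temperatures, times, levels):
    """Cycle through all settings and record a raw image pair at each one."""
    records = []
    for temp, t_int, level in itertools.product(temperatures, times, levels):
        pid.set_temperature(temp)        # wait for the PID loop to settle
        camera.set_integration_time(t_int)
        light.set_level(level)           # DC-controlled halogen base intensity
        wheel.move_out()                 # full intensity
        full = camera.grab()
        wheel.move_in()                  # known fraction of full intensity
        reduced = camera.grab()
        records.append((temp, t_int, level, full, reduced))
    return records

records = acquire_pairs(_Stub(), _Stub(), _Stub(), _Stub(),
                        temperatures=[20, 30], times=[5e-3], levels=[0.5, 1.0])
```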
  • a hyperspectral imaging (“HSI”) system includes but is not limited to, one or more of any of a semiconductor pixel array, a lens, a filter, a housing, and other optical or electrical components.
  • Figure 9 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels and an integrating sphere.
  • the evaluation system is configured to reduce the intensity of a light source, such as a halogen light, using a DC controller.
  • the evaluation system is configured to measure the reduced intensity by a spectrometer to gather raw data of the hyperspectral imaging system for image pairs for at least two light intensities, of which the second intensity level is a known fraction of the first intensity level.
  • the evaluation system is configured to use many image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired response range, such as the entire response range of the HSI system.
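As an illustration of how image pairs with a known intensity ratio expose sensor non-linearity, the sketch below assumes a pure power-law response; this model is an assumption for demonstration only, and a real correction curve would be fitted non-parametrically over many base intensity levels:

```python
import numpy as np

# Synthetic example: a sensor with a power-law non-linearity (gamma unknown).
rng = np.random.default_rng(0)
true_gamma, fraction = 1.05, 0.5          # fraction: known filter transmittance
signal = rng.uniform(100, 3000, 500)      # true irradiance levels
full = signal ** true_gamma               # raw response at full intensity
reduced = (fraction * signal) ** true_gamma   # response behind the filter

# For a power-law response, reduced/full == fraction**gamma everywhere,
# so gamma follows directly from the measured ratio of each image pair.
gamma_est = np.median(np.log(reduced / full) / np.log(fraction))
linearized = full ** (1.0 / gamma_est)    # corrected (linear) response
```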
  • the evaluation system also includes a monochromator configured to be under control of a processor or a computer to measure the HSI system responses at one or more filter bands. Further, the evaluation system includes a PID temperature controller under the control of a processor or computer to automatically sample the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array.
  • a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system.
  • the evaluation system is configured to use a non-monochrome light source to illuminate spectrally uniform diffuse grey reflectance targets (i.e., with a known, flat spectral reflectance). These targets could be observed with a lens mounted on the HSI system, and all pixels imaging the targets could then be organized in intensity tuples (with as many entries as reflectance targets). This has the potential to greatly simplify the optical measuring set-up.
  • the reflectance properties of the targets need to be known with sufficient accuracy.
  • Embodiments of the present invention are based on the experimental observation that the semiconductor pixel array, such as a CMOS sensor array, with Fabry-Perot filters exhibits a very significant response modulation between sensor pixel columns (as much as a 20% variance between columns on the left and on the right of the midline) and that such variation can be corrected by measuring the M_1 modulation map and applying the required correction to the images.
  • the response variation across each row is measured while observing the same physical point.
  • Such a measurement is performed by taking a series of images of an illuminated white tile, with the camera moving sideways on a computer-controlled linear translation stage.
  • the translation speed and the geometry of the translation stage are selected so that the modulation map can be straightforwardly computed by dividing each measurement by its corresponding measurement on the central column.
  • the measurement is repeated for different physical points, and the modulation map is computed by estimating, via linear least squares, the optimal per-pixel correction factor that maps each pixel's response onto the corresponding average of the N central columns.
  • the N central columns should correspond with those columns used to measure the response composition matrix for the specific CMOS sensor (see definition of response composition matrix described herein).
  • the reflectance of the white tile is approximately Lambertian.
  • the translation speed and camera frame rate are precisely tuned so that the image of the white tile moves horizontally over a fixed integer number of pixels between any two consecutive images.
  • the optical axis of the camera is perpendicular to the white tile surface.
  • the translation is parallel with the sensor rows.
  • the lens distortion is either negligible or corrected for.
  • the integration time of the camera is the same for all frames.
  • the illumination spectrum and intensity remain the same throughout the scanning process.
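The per-pixel least-squares scale estimation described above can be sketched as follows, using synthetic data; a real implementation would iterate over all pixels and frames of the white-tile scan:

```python
import numpy as np

# Synthetic stack of frames: each row of `pixels` holds one pixel's responses
# to K physical points; `central` holds the matching central-column references.
rng = np.random.default_rng(1)
K = 50
central = rng.uniform(500, 3000, K)             # reference responses
true_mod = np.array([0.85, 1.0, 1.18])          # per-pixel modulation (toy)
pixels = true_mod[:, None] * central[None, :]   # modulated observations

# Per-pixel least-squares scale s minimizing sum (s*y - x)^2 over frames:
#   s = sum(x*y) / sum(y*y)
scale = (pixels * central).sum(axis=1) / (pixels ** 2).sum(axis=1)
modulation_map = 1.0 / scale                    # modulation relative to center
```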
  • a frame filling uniform light source (e.g. generated by an integrating sphere) can be observed. This entails the following assumption: the spectrum and intensity of the light source are the same across the camera's field of view.
  • Figure 10 illustrates an estimated modulation map M_1.
  • the modulation ranges between 0.80 and 1.19. This suggests an HSI system without calibration information applied to the raw output of the system may be distorted by as much as 20 percent when measured at the left or right end of the sensor.
  • Figure 11 illustrates reconstructions (up to scale) of identical light spectra using an HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor.
  • Embodiments of the invention use images to measure the reflectance from a white tile that is approximately Lambertian, and multiple points on the tile are observed and measured as the linear translation stage moves the camera at a speed and frame rate that are precisely selected so that the images move vertically over a fixed integer number of pixels between consecutive frames.
  • the reflectance of the white tile is approximately Lambertian.
  • the translation speed and camera frame rate are precisely tuned so that the image of the white tile moves vertically over a fixed integer number of pixels between any two consecutive images.
  • the optical axis of the camera is perpendicular to the white tile surface.
  • the translation is parallel with the sensor columns.
  • the lens distortion is either negligible or corrected for.
  • the spectrum of the light source is the same across the camera's field of view.
  • the intensity of the light source is constant across the camera's field of view.
  • corresponding responses in IMEC's response composition matrix calibration set-up can be expressed as a sensor row-dependent scaling on the central sensor columns.
  • the measurements are processed by the software according to the following methodology:
  • b indicates the spectral band on the sensor for which we describe the response
  • f(b, λ) is the spectral response function of the sensor band
  • M_2(b) is the intensity modulation induced by the camera-lens system at the location of the band.
  • f(b) is the Q-element row vector on the row of the response composition matrix corresponding with the observed band
  • T_L ⊗ L is the Q-element row vector obtained by point-wise multiplying the lens transmittance vector with the light spectrum.
  • I_i(b) = M_2(b) (f(b) · (T_L ⊗ T_i ⊗ R ⊗ L)), where R is the spectral reflectance vector of the physical point on the white tile. Given that we know R (for a white tile, this should be approximately constant across the spectrum), we obtain the same non-linear optimization problem as described above for the uniform light source case. In practice, we average the measurements over many physical points of the white tile to improve the signal-to-noise ratio.
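The measurement model above can be evaluated numerically as a sanity check. All vectors below are illustrative placeholders with a toy Q = 5 spectral sampling, not calibration data:

```python
import numpy as np

Q = 5
f_b  = np.array([0.0, 0.2, 0.6, 0.2, 0.0])   # one band's row of the F-matrix
T_L  = np.full(Q, 0.9)                       # lens transmittance
T_i  = np.full(Q, 0.8)                       # additional filter transmittance
R    = np.full(Q, 0.95)                      # white-tile reflectance (flat)
L    = np.array([1.0, 1.1, 1.2, 1.1, 1.0])   # light spectrum
M2_b = 0.97                                  # modulation at this band location

# Predicted response: I_i(b) = M_2(b) * (f(b) . (T_L ⊗ T_i ⊗ R ⊗ L)),
# with ⊗ denoting point-wise multiplication.
predicted = M2_b * f_b.dot(T_L * T_i * R * L)
```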
  • the transmission curve of the front glass was derived from accurate spectral radiant flux measurements of our halogen lamp, measured by
  • FIG. 12 shows the reconstructed light source spectrum with and without the M_2 modulation correction, demonstrating the importance of this calibration step and of the correction applied as a result of it.
  • Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientation. These measurements were used for estimating the transmittance of the front glass element, and are accurate up to 0.67 percent.
  • Figure 13 illustrates the estimated halogen light front glass transmittance spectrum.
  • Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference (from Wikipedia).
  • Figure 15 illustrates the estimated modulation M_2.
  • Figure 16 illustrates a light source spectrum reconstruction with and without M_2 modulation correction. In both cases the spectral correction matrix computed by IMEC was used for spectral correction.
  • an evaluation system is configured to use an image-side telecentric lens, or direct measurement of the response composition matrix of the HSI system using a monochromator set-up. Further, the evaluation system is configured to avoid possible inaccuracies inherent in the optical filter-based measuring set-up by replacing the filters with observation of diffuse reflectance targets with accurately known spectral reflectance.
  • if all pixels of the same filter group receive the same spectral radiance, then changing the spectral content of this uniform radiance does not change the ratio of the responses between any two pixels belonging to the same filter group.
  • the spatial variation between pixels of the same filter group can then be modeled as only a variation in magnitude of response which needs to be calibrated.
  • Reasons for this can e.g. be:
  • Pre-calibration is available but not valid for all pixels, e.g., only a part of the sensor has been pre-calibrated and the other pixels were assumed to behave similarly but do not, e.g., because the spectral quantum efficiency curve per pixel changes depending on its location on the sensor.
  • Differences arise in the spectral content received by pixels normally designed to receive the same spectral content, e.g., because the lens introduces chromatic aberrations, causes light to have a different angle of incidence for pixels at different locations (shifting the spectral filtering), or causes location-dependent multiple reflections between sensor and lens.
  • Magnitude calibration comprises two consecutive parts, specifically:
  • Let B be the values obtained by applying the pre-calibrated spectral response curves to the input light spectra.
  • Inter-filter calibration determines these differences in magnitude in order to correct B so that it matches A, the real recorded responses.
  • Intra-filter calibration is also provided in some embodiments. Under all prior assumptions and previously performed calibrations (dark current + non-linearity), pixels within the same filter group, but lying at different locations on the sensor, will now give the same response value up to an unknown scale. The goal is to determine these scale differences in order to obtain completely identical responses for all pixels of the same filter group, when an object is viewed by the system, irrespective of the location on the sensor for these pixels. Therefore the physical setup below is designed to provide all pixels within the same filter group with the same spectral radiance. This can be achieved in several ways as described below.
  • the camera system is physically translated with respect to a calibration target e.g. a white tile.
  • the calibration target is designed to reflect light as a Lambertian surface such that from a point p on this calibration target, the same spectral radiance is emitted in all directions.
  • the manner of translation therefore depends on the layout of the pixels. E.g. if all pixels of a single filter group are within the same sensor row then the translation direction will be parallel to this row such that the same calibration point p is seen consecutively by all pixels on the same row.
  • the translations must also be according to the directions of this grid and in sync with the grid spacing such that the image of the same physical calibration target point visits all pixels of that filter group.
  • the camera system is looking into an integrating sphere.
  • the purpose of the integrating sphere is to present very low spatial variation of emitted radiance in the entire field of view of the camera system. As such, all pixels of the same filter group can be assumed to receive the same spectral radiance. Such a system would not require translations but puts more stringent constraints on the spatial uniformity required of the integrating sphere.
  • Inter-filter calibration is provided. After all prior calibrations, all pixels within the same filter group should now behave identically. Also assumed is that the spectral response curve of each of these groups has been pre-calibrated, e.g. by the manufacturer, and that these spectral response curves are available.
  • the inter-filter calibration takes care of changes in scale which can be seen between actual recorded values of each filter group and those predicted by the pre-calibration when viewing input light spectra. These differences in scaling can e.g. be due to the vignetting of the lens or other changes which have been introduced into the system after the pre-calibration.
  • the changes in scale can be determined directly by comparing the recorded response of each filter group with those predicted by applying the pre-calibrated spectral response curves of each filter group to the known input light spectrum. This requires prior knowledge of the spectrum emitted by a calibrated light source and the spectral response curve of each additional element introduced into the path between the light source and the pre-calibrated system, such as a lens which was added or changed. If prior calibration data is available for all elements, this approach is feasible.
  • Shape of input light spectrum is determined by fewer parameters, P, than there are filter groups, N.
  • P the number of parameters per light source
  • Making K recordings, each at a different setting of the light source, we then obtain KxN recorded filter group responses and have N+KxP unknowns.
  • the unknowns come from the N unknown scales to calibrate for and the KxP unknowns coming from the different setting of the light source per recording.
  • the equations to retrieve the unknowns can then be solved as soon as we have enough independent light sources (e.g., halogen lights driven at different current levels).
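The counting argument above reduces to a simple inequality, sketched below. This ignores the global scale ambiguity and assumes the K recordings give independent equations:

```python
def solvable(N, P, K):
    """Check that K recordings give at least as many equations as unknowns.

    Each recording yields N filter-group responses (equations); the unknowns
    are the N calibration scales plus P light-source parameters per recording.
    """
    return K * N >= N + K * P

# With N=100 filter groups and P=2 light-source parameters, two recordings
# at independent light-source settings already determine the system.
ok = solvable(N=100, P=2, K=2)
```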
  • Shape of input light is unknown but can be modeled with P parameters which are kept fixed during K recordings; between each recording, an object is placed in the path between the light source and the camera system, of which the introduced spectral change is known a priori.
  • a set of filters with known calibrated transmittance curves, or a set of diffuse calibrated Lambertian reflectance tiles.
  • the number of recorded filter group responses is KxN, but the number of unknowns now remains fixed at N+P.
  • N for the unknown scales to calibrate for and P to model the unknown fixed light source.
  • the physical embodiment comprises the same elements as the embodiment described for intra-filter calibration, but with the following changes:
  • Alternative embodiments are further provided and comprise the same elements as the current embodiment but with different known reflective tiles between recordings, or a combination of this with known filters.
  • the alternative embodiment described above may be used for intra-filter calibration, but in which variation between the K recordings is introduced by inserting K different known filters somewhere in the light path, e.g. between light source and integrating sphere at the entrance port of the integrating sphere, or e.g. in front of the camera lens.
  • the envisioned embodiment is to use an integrating sphere + monochromator + spectrometer/photodiode to perform full spectral calibration.
  • the purpose of the integrating sphere is to provide the camera system with as spatially uniform a spectrum as possible to all pixels on the sensor.
  • the purpose of the monochromator is to scan through the wavelength range and determine the response of each pixel individually for each wavelength in the range.
  • the purpose of the spectrometer/photodiode is to look inside the integrating sphere in order to know the spectral radiance of the light being directed towards the camera system. This knowledge, together with the recorded responses for each individual pixel, allows determining the spectral response curve for each individual pixel.
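Under the assumption that the sphere radiance is known at each monochromator step, the per-pixel spectral response follows by division. The sketch below uses synthetic data with illustrative dimensions:

```python
import numpy as np

# Synthetic monochromator sweep: at each wavelength step the spectrometer
# reports the sphere radiance and the camera records every pixel's response.
rng = np.random.default_rng(2)
n_pixels, n_wavelengths = 4, 601           # e.g. 400-1000 nm in 1 nm steps
radiance = rng.uniform(0.5, 2.0, n_wavelengths)          # measured in sphere
true_resp = rng.uniform(0.0, 1.0, (n_pixels, n_wavelengths))
recorded = true_resp * radiance[None, :]                 # raw camera responses

# Per-pixel spectral response curve: divide out the known radiance per step.
response_curves = recorded / radiance[None, :]
```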
  • the response composition matrix (or F-matrix) describes the response of each of the hyperspectral camera's bands to monochromatic light at the full range of wavelengths to which the sensor is sensitive. It is an N x Q matrix, in which each of the N rows corresponds with a specific band of the sensor, and each column corresponds with a certain wavelength. Typically, there are around 105 bands for an IMEC line-scan sensor, and the wavelengths cover the range from 400 to 1000 nm in 1 nm steps. Right-multiplying the F-matrix with a column vector representing a light spectrum produces an N-dimensional vector representing the expected response of each of the sensor's bands to the light.
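The right-multiplication can be illustrated with a toy F-matrix (3 bands, 6 wavelength samples; a real line-scan sensor has about 105 bands over 601 wavelengths):

```python
import numpy as np

# Toy F-matrix: each row is one band's response to monochromatic light.
F = np.array([
    [0.0, 0.8, 0.1, 0.0, 0.0, 0.0],   # band 0 peaks at wavelength index 1
    [0.0, 0.1, 0.7, 0.2, 0.0, 0.0],   # band 1: non-ideal, overlapping peak
    [0.0, 0.0, 0.0, 0.1, 0.9, 0.1],   # band 2
])
spectrum = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])   # incoming light (column)

# Right-multiplying F with the spectrum gives each band's expected response.
responses = F @ spectrum
```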
  • IMEC has measured the F-matrix by exposing the naked sensor (without the camera enclosure and additional bandpass filter) to a monochromatic light beam from a light source (produced by a monochromator). Sensor readings (images) were taken at each of the
  • Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC. Response curves for different sensor bands are plotted in different colors. The composition matrix was measured without an additional 600 - 1000 nm bandpass filter.
  • Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present.
  • Figure 19 illustrates a composition matrix after application of calibration information such as a spatial modulation compensation parameter generated using techniques including those described herein.
  • Spectral correction refers to the process of correcting a spectral signal captured by a non-ideal filter bank (i.e., filters consisting of non-ideal peaks or multiple peaks). Since the response of the filter bands on Ximea's hyperspectral cameras with the IMEC sensor strongly exhibits such imperfections, spectral correction is necessary to obtain reliable estimates of the incoming light spectrum.
  • Let N_s and N_v be the number of sensor filters and the number of virtual filters, respectively.
  • Let F_s (N_s x Q) and F_v (N_v x Q) be the response composition matrices of the sensor filters and virtual filters, respectively. Construct a new N_v x N_s correction matrix as follows:
  • the Wiener optimization can be tuned to optimally reconstruct plant spectra by using a training set that provide a range of plant spectra, or it can be tuned to optimally reconstruct the spectra of brain tissues by using a training set that provides a range of spectra typically found in such tissues.
  • the covariance matrix K_c can be modeled as a first-order Markov process covariance matrix, which depends on the correlation parameter ρ ∈ [0,1] between neighboring values of the spectrum. This model essentially assumes that the spectra that need to be corrected belong to the family of smooth curves.
  • K_c can be computed as the second order moment matrix (i.e. covariance matrix about zero) of a set of training spectra.
  • the training spectra should be selected as representative for the intended application (e.g., vegetation and soil reflectance spectra illuminated by sunlight for agricultural applications).
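A Wiener-style correction matrix under the first-order Markov prior might be sketched as follows. The band shapes, ρ, and noise variance below are illustrative assumptions, not the actual sensor calibration values:

```python
import numpy as np

def wiener_correction(F_s, F_v, rho=0.99, noise_var=1e-4):
    """N_v x N_s Wiener correction matrix under a first-order Markov prior."""
    Q = F_s.shape[1]
    idx = np.arange(Q)
    K_c = rho ** np.abs(idx[:, None] - idx[None, :])   # smooth-spectrum prior
    A = F_s @ K_c @ F_s.T + noise_var * np.eye(F_s.shape[0])
    return F_v @ K_c @ F_s.T @ np.linalg.inv(A)

# Toy sensor with 4 non-ideal bands over Q=50 wavelengths, corrected onto
# 4 idealized (Gaussian) virtual filters.
Q = 50
lam = np.arange(Q)
centers = np.array([10, 20, 30, 40])
F_v = np.exp(-0.5 * ((lam[None, :] - centers[:, None]) / 2.0) ** 2)
F_s = F_v + 0.1 * np.roll(F_v, 15, axis=1)     # add spurious second peaks
C = wiener_correction(F_s, F_v)
corrected = C @ (F_s @ np.ones(Q))             # correct a flat test spectrum
```

Replacing `K_c` with the second-order moment matrix of application-specific training spectra gives the trained variant described above.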
  • Figure 20 illustrates an example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice the significant differences above 875 nm.
  • one camera may be used to measure the illumination spectrum, while the other camera measures the spectrum reflected from a sample.
  • it is necessary to apply white balancing to the measurements of the second camera, based on the illumination spectrum measured by the first camera.
  • An interesting practical problem is the transfer of spectral measurements from a camera with mounted diffuser system to a regular hyperspectral camera with a lens.
  • the camera with diffuser may be configured in an upwards viewing orientation to sample the hemispherical illumination spectrum, while the camera with lens may be used in a downward orientation to measure the light reflected from a sample.
  • intensity level calibration may be achieved by comparing a spectrally calibrated measurement of an illuminated white tile produced by the camera-lens system to a calibrated illumination spectrum measurement with the diffuser system placed at the position of the white tile, facing the light source.
  • the scale factor between both cameras can be straightforwardly deduced from the intensity difference between both measurements.
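The deduction of the scale factor, and its use in white balancing, can be sketched with illustrative per-band numbers (these are placeholder values, not real measurements):

```python
import numpy as np

# Spectrally calibrated measurements of the same illuminated white tile:
# one from the camera-lens system, one from the diffuser system placed at
# the tile position, facing the light source.
lens_measurement = np.array([120.0, 150.0, 180.0])      # per band
diffuser_measurement = np.array([400.0, 500.0, 600.0])  # per band

# With matching spectral shapes, a single scale factor relates the cameras.
scale = np.median(diffuser_measurement / lens_measurement)

# White balancing a sample measurement from the camera-lens system against
# the diffuser system's illumination estimate:
reflected = np.array([60.0, 100.0, 150.0])
reflectance = reflected / (scale * lens_measurement)
```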
  • the calibration process steps are performed sequentially (for spatial variation calibration there is a split in the process). Each step generates its own calibration data which will already be employed in the further calibration steps. In the end all generated calibration data is tagged for traceability, digitally signed for authenticity, and further compressed to minimize footprint for ease of exchange or storage in non-volatile memory of the camera system.
  • a simple compression technique consists of subsampling the dimensions, in essence throwing part of the data away and relying on interpolation to represent the discarded data accurately enough.
  • a main aim of compression is to have a small memory footprint which is useful when storing this data in non-volatile memory of a camera system or when exchanging this data as files.
  • when the data is used, it will be decompressed (and perhaps even expanded beyond its original size) to a format which consumes more memory but is more efficient for processing.
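Subsampling-plus-interpolation compression of a smooth calibration curve can be sketched as below; a square-root curve stands in for a real look-up table:

```python
import numpy as np

# A dense calibration curve (e.g. a 4096-entry non-linearity LUT).
x_full = np.arange(4096)
lut = np.sqrt(x_full.astype(float))        # smooth stand-in curve

# Compress: keep every 64th sample for non-volatile storage.
x_sub = x_full[::64]
lut_sub = lut[::64]

# Decompress: linear interpolation back to full resolution before use.
lut_restored = np.interp(x_full, x_sub, lut_sub)
max_err = np.abs(lut_restored - lut).max()
```

The interpolation is exact at the retained samples; the residual error concentrates where the curve bends most sharply, which is what bounds how aggressively a real LUT can be subsampled.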
  • the calibration usage software will verify signature and camera system IDs to guarantee authenticity and extract traceability data which can be added to all processed image content such that it is traceable.
  • because the electronic camera device can be queried, it is possible to automatically verify its ID and its settings to ensure that the calibration data is valid for this camera and its settings.
  • Some elements in the camera system are passive and cannot be automatically queried for their actual settings.
  • the lens used by an industrial camera is often a passive device, but e.g. its aperture setting can have an influence on how the calibration is performed. Since these settings cannot be automatically verified, this type of setting has to be explicitly set and confirmed by the user.
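A minimal sketch of signing calibration data and verifying authenticity and camera ID follows. HMAC-SHA256 is an illustrative choice here; the application does not specify a particular signature scheme, and the key name and data fields are hypothetical:

```python
import hashlib, hmac, json

SECRET_KEY = b"factory-secret"   # hypothetical key held by the calibration lab

def sign_calibration(data: dict) -> dict:
    """Tag calibration data with an authenticity signature."""
    payload = json.dumps(data, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"data": data, "signature": sig}

def verify_calibration(blob: dict, camera_id: str) -> bool:
    """Check the signature and that the data matches this camera's ID."""
    payload = json.dumps(blob["data"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, blob["signature"])
            and blob["data"]["camera_id"] == camera_id)

blob = sign_calibration({"camera_id": "CAM-0042", "dark_current": [1.2, 1.3]})
ok = verify_calibration(blob, "CAM-0042")
```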
  • Traceable calibrated image content can be generated without additionally passing it through the spectral correction step.
  • the traceable calibrated image content is independent of the application whereas the spectral corrected data can be application specific since spectral correction can use prior knowledge of typical spectra to expect from the objects which are seen in the data. This is one main reason for making a distinction between both types of generated data.
  • Figures 21A and 21B show spectral measurements of Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI line-scan camera using a 600-1000 nm sensor).
  • Figures 22A and 22B show spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration performed according to embodiments of the present invention.
  • the inventors have demonstrated that the raw images acquired with a CMOS imaging sensor with Fabry-Perot filters can be corrected to take into account multiple sources of variance. These corrections are based on the measurements obtained with the novel calibration apparatus and methods, and can be applied to the raw images using calibration files embedded into the camera system (e.g., inside the camera module and in the processing computer). These calibration files are unique to each sensor, lens, and camera system, and enable the pre-processing of the raw hyperspectral images, so that the processing software is more efficient, robust and reliable in extracting useful information.

Abstract

The present invention relates to the calibration of hyperspectral sensors and camera systems. More specifically, the invention relates to the apparatus and methods to be used to measure the characteristics of camera systems based on active pixel sensors with Fabry-Perot filters deposited directly onto the active pixel array, to improve the usage of such systems in various applications, including agriculture, medicine, and other fields of use that benefit from a better calibrated hyperspectral system. The use of the calibration information enhances the efficiency of the software processing of the raw images collected with such a sensor, and the quality and usefulness of the processed output of such a camera system.

Description

CALIBRATION METHOD AND APPARATUS FOR ACTIVE PIXEL
HYPERSPECTRAL SENSORS AND CAMERAS
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims benefit of priority to U.S. Provisional Application
No. 62/418,755, filed November 7, 2016, the contents of which are expressly incorporated herein by reference in their entirety.
BACKGROUND
1. FIELD OF THE DISCLOSURE
[0002] Embodiments of the present application relate broadly to calibration of hyperspectral imaging systems, and more specifically to methods and apparatus for calibrating active pixel (such as but not limited to CMOS based) hyperspectral sensors and cameras.
2. DESCRIPTION OF RELATED ART
[0003] Hyperspectral imaging systems have been used in various applications for many decades. Typically, such systems have been bulky and expensive, which has limited their wide adoption. More recently, sensors based on mounting or depositing Fabry-Perot filters directly onto CMOS digital imaging arrays have been developed, reducing dramatically the size and weight of the hyperspectral cameras based on such new sensors. Because such sensors can be manufactured directly onto silicon wafers using photolithographic processes, there is the potential to drastically reduce the cost by increasing the manufacturing volume.
[0004] An example of such a sensor is the IMEC sensor, which is available since 2014 in various configurations ("line-scan", "tiled" and "mosaic"), see patents US2014/0175265 (Al) for line-scan, US2014267849 (Al) for tiled and US2015276478 (Al) for mosaic. For example, the IMEC LS100-600-1000 line-scan sensor is based on a CMOS chip designed by CMOSIS, which has 2048 pixels by 1088 pixels. IMEC deposits Fabry-Perot filters onto each wafer, to provide about 100 spectral bands ranging from 600 nm to 1000 nm. The filter for each band is positioned above a "row" of 8 pixels by 2048 pixels.
[0005] An example of cameras that incorporate the IMEC sensor is manufactured by Ximea. The camera is small and light, for example having a dimension of 26x26x21 millimeters (mm) and weighing 32 grams, and provides a USB3.0 digital output capable of providing 170 frames per second.
[0006] An example of camera systems which can incorporate the Ximea camera based on the IMEC sensor is the IMEC Evaluation System, which includes also halogen lights, a linear translation stage controlled by a computer software to move the camera or the sample being scanned, and calibration and processing software to acquire images and perform certain transformation of the raw data to demonstrate various applications.
[0007] The potential applications for these technologies are diverse, ranging from their use in agriculture (for example, in drones detecting crop diseases, as described in detail in United States Patent application serial number 15/001,112 titled APPARATUSES AND METHODS FOR BIO-SENSING USING UNMANNED AERIAL VEHICLES, filed on January 19, 2016, the entire disclosure of which is hereby incorporated by reference), to medicine (for example, in distinguishing between various tissues during surgery). Any such application can potentially enable better, faster and cheaper analysis of the chemical or biochemical conditions of various objects, because the spectral reflectance, combined with the spatial variations within and between objects observed with the camera system, can provide "signatures" of various molecules which absorb or reflect light differently at various wavelengths.
[0008] Unlike the human eye, which sees colors based on the output of only three types of photoreceptor cones (each with a relatively broad spectral response, but with peak sensitivities around red, green and blue wavelengths), the hyperspectral sensors can provide much finer spectral resolution, which can be used to distinguish between objects that may appear the same to a human eye despite differences in their chemical or biochemical makeup.
[0009] For example, spinach crops often get contaminated with mildew fungi. This type of disease quickly changes the metabolism of the leaves, which then produce spores dispersed by the wind. It is not unusual for many acres to be lost in a few days, at great economic cost. Human inspection of the crops is often too slow, because the initial infection is not visible to the human eye, and the disease progresses very quickly. By the time the mildew lesions are clearly visible to the inspector, the whole field may already have been contaminated. The present inventors have designed a system that leverages the IMEC sensor and Ximea camera to detect mildew on spinach leaves, enabling drones to inspect more acres faster, detecting the diseased leaves at an earlier stage. This type of application requires a significant amount of software processing of the hyperspectral raw images, so that small changes in the leaves that are correlated with the early onset of mildew can be detected quickly.
[0010] Another example is the use of hyperspectral imaging during brain surgery, to help the surgeon distinguish between various tissues while performing the surgery. Here again, software is needed to quickly and reliably process the raw hyperspectral data and provide a virtual image that can be superimposed in the field of view of the operating microscope, providing extra information to the surgeon. Small changes in the oxygen concentration in small capillaries can be magnified and even the boundaries between a tumor and healthy tissues can be more clearly visible when the output of the hyperspectral system can superimpose additional information that would not be readily visible to the naked human eye.
[0011] It is well known to those skilled in the art of using hyperspectral imaging systems that many applications, including the examples above, require a significant amount of software processing. However, it is not always fully appreciated that the calibration of the sensor and the camera system is critical to the pre-processing of the raw digital output of the active pixel hyperspectral sensor. Without precise calibration, the processing is much more complex and less reliable, as small differences between sensors, lenses, lighting conditions, temperature, etc. can introduce variations that prevent or hinder the detection of the targeted "signatures".
[0012] Because of their manufacturing processes, there are inherent variations within each sensor chip; between chips from the same wafer or from different wafers; between cameras and lenses; and between lighting conditions. These variations introduce significant "noise" into the software processing pipeline and reduce the ability to reliably detect the "signal" used to classify objects and make decisions. In short, without correcting for such variations by using the information measured during a comprehensive calibration procedure, the potential of such new technologies may not be realized.
[0013] Thus, there is an important need for a novel apparatus and method to calibrate such hyperspectral systems.
[0014] Through extensive use of the IMEC sensor, Ximea camera, and IMEC Evaluation System, including the use of the hyperspectral system in agricultural fields, with the camera positioned on a linear stage between two tripods or onto a small drone (see Figures 24A-C), it was found that the existing software provided with these products does not provide adequate calibration methods and results, and that the ability to detect mildew on spinach leaves could be greatly enhanced by the present invention.
SUMMARY
[0015] Through extensive use of the IMEC sensor, Ximea camera, and IMEC Evaluation System, including the use of the hyperspectral system in agricultural fields, with the camera positioned on a linear stage between two tripods or onto a small drone (see figures), it was found that the existing software provided with these products does not provide adequate calibration methods and results, and that the ability to detect mildew on spinach leaves could be greatly enhanced by the present invention. The benefits of embodiments of the present invention are of course not limited to such applications, and one skilled in the art will readily be able to apply the invention to other active pixel Fabry-Perot sensors, cameras, fields of use, and applications.
[0016] Embodiments of the present invention combine several measurements which need to be performed to best calibrate the hyperspectral imaging system. While one can perform only some of the measurements described herein, the combination of many, most or all of the measurements can provide information that, when used by the image processing software, reduces the variances introduced by the hyperspectral measurement system and improves the accuracy and the benefits of the processed information produced by the hyperspectral analysis software.
[0017] Embodiments of the invention provide a systematic way to measure non-ideal characteristics of the hyperspectral camera systems, otherwise referred to herein as a
hyperspectral imaging system, that use an active pixel sensor based on Fabry-Perot filters applied onto the active pixel array. The deviation of the actual performance of a given system from the ideal performance (i.e. that of a theoretical system that would perform perfectly and consistently) is measured and the information is then used by the pre-processing software. This pre-processing software uses the calibration information to apply a correction to the raw data recorded using the camera system, reducing the variance or noise introduced by the actual system as compared to the ideal system.
[0018] The sources of variance can be within a given sensor, or can be due to external factors affecting a given sensor, such as its temperature. The sensor may also respond differently to light rays coming through the lens along its axis and rays coming from other directions. The pairing of a given sensor with the same model lens may yield different results because the lenses are not identical. A given sensor paired with a specific lens may provide different raw data than another sensor and lens of identical model and specifications, simply because of the small manufacturing variances between the nominally identical items. Any digital camera is programmed to convert electromagnetic radiation (photons) into digital frames through the reading of the semiconductor pixel array based on the lens aperture and the integration time (a longer integration time implies that more photons are collected before the conversion into electrons is read into a digital reading for each pixel).
[0019] The goal of embodiments of the invention is to minimize all of the above sources of variance due to the hyperspectral system, so that the processing software can more efficiently be trained to recognize specific "spatial - spectral signatures" in a robust and reliable way. This is especially important if many camera systems are used to scan hundreds or thousands of acres to detect a crop disease, or if many surgeons rely on such systems to perform surgeries and to compare data with that of their colleagues.
[0020] Embodiments of the invention comprise multiple measurements to quantify several non-ideal characteristics of such a system. The measurements can be taken according to the sequence presented here, or in an alternate sequence. Some of the measurements can be skipped, or all of the measurements can be combined in a single calibration process. One skilled in the art will recognize that each of these measurements poses different challenges, and requires novel techniques, when applied to a sensor based on a semiconductor pixel array with Fabry-Perot filters mounted on the pixel array or deposited directly onto the chip (such as a CMOS digital pixel array with Fabry-Perot filters), compared with older and more conventional hyperspectral systems based on other technologies and techniques.
BRIEF DESCRIPTION OF THE FIGURES
[0021] Figure 1 illustrates a variation of dark current levels at the center of a semiconductor pixel array including one or more Fabry-Perot filters as a function of the integration time and sensor temperature, in accordance with certain aspects of the present disclosure.
[0022] Figure 2 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 0.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
[0023] Figure 3 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 1.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
[0024] Figure 4 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.0 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.

[0025] Figure 5 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
[0026] Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array, in accordance with certain aspects of the present disclosure.
[0027] Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm, in accordance with certain aspects of the present disclosure.
[0028] Figure 8 illustrates an embodiment of an evaluation system including a proportional- integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels, in accordance with certain aspects of the present disclosure.
[0029] Figure 9 illustrates an embodiment of an evaluation system including a proportional- integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels and an integrating sphere, in accordance with certain aspects of the present disclosure.
[0030] Figure 10 illustrates an estimated modulation map M1, in accordance with certain aspects of the present disclosure.

[0031] Figure 11 illustrates reconstructions (up to scale) of identical light spectra using an HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor, in accordance with certain aspects of the present disclosure.
[0032] Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientation, in accordance with certain aspects of the present disclosure.
[0033] Figure 13 illustrates the estimated halogen light front glass transmittance spectrum, in accordance with certain aspects of the present disclosure.
[0034] Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference.
[0035] Figure 15 illustrates the estimated modulation M2, in accordance with certain aspects of the present disclosure.
[0036] Figure 16 illustrates a light source spectrum reconstruction with and without M2 modulation correction, in accordance with certain aspects of the present disclosure.
[0037] Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC, in accordance with certain aspects of the present disclosure.
[0038] Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present, in accordance with certain aspects of the present disclosure.
[0039] Figure 19 illustrates a composition matrix after application of calibration information such as a spatial modulation compensation parameter generated using techniques including those described herein, in accordance with certain aspects of the present disclosure.
[0040] Figure 20 illustrates an example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice significant differences above 875 nm, in accordance with certain aspects of the present disclosure.
[0041] Figures 21A and 21B show spectral measurement of Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI linescan camera using a 600-1000 nm sensor), in accordance with certain aspects of the present disclosure.
[0042] Figures 22A and 22B show spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration performed, in accordance with certain aspects of the present disclosure.
[0043] Figures 23A, 23B, 23C, and 23D illustrate flow diagrams of calibration operations, in accordance with certain aspects of the present disclosure.
[0044] Figures 24A, 24B, and 24C illustrate examples of extensive use of the IMEC sensor, Ximea camera, and IMEC Evaluation System, including the use of the hyperspectral system in agricultural fields, with the camera positioned on a linear stage between two tripods or onto a small drone, in accordance with certain aspects of the present disclosure.
[0045] Figures 25A, 25B, and 25C illustrate filter groups, in accordance with certain aspects of the present disclosure.
DETAILED DESCRIPTION
[0046] In some embodiments, a full system calibration system and method for hyperspectral imaging (HSI) cameras and sensors is provided which addresses multiple aspects, including any one or more of sensor related aspects, such as dark current, sensor non-linearity and quantum efficiency (QE), and system related aspects, such as lens aberrations, lens transmission and camera-lens interactions. In some embodiments, the full system calibration method and system of the present invention removes intra- and inter-sensor variations, and renders the HSI camera invariant to the effects of, for example, temperature and specific lens settings. The full system calibration system and method of embodiments of the present invention addresses both the camera and the lens(es), which provides particular advantage since both sensor and system related aspects are addressed. Correct spectral measurements are obtained in a repeatable manner, irrespective of where the object appears in the camera's field of view (FOV) or which particular camera is being used, and independent of changing lighting conditions and the operational mode of the particular HSI system.
[0047] The full system calibration system and method of embodiments of the present invention are well suited for critical applications, for example in the medical or life science domain, which demand reliable spectral measurements, for applications which apply trained classifiers to hyperspectral image data, or for applications which apply carefully chosen thresholds to specific hyperspectral data attributes. More specifically, embodiments of the calibration system and method of the present invention are configured to enable an HSI camera system to operate robustly in one or more of the following situations: changes in lens settings; switches of HSI camera system or optics; altering light conditions, e.g. caused by decaying lighting systems; and/or variable operational conditions, e.g. changes in environmental temperature.
[0048] More particularly, embodiments of a calibration method and apparatus for active pixel hyperspectral sensors and cameras, otherwise referred to herein as a hyperspectral imaging system, are described herein. A hyperspectral imaging system may include one or more active pixel sensors, otherwise referred to herein as a semiconductor pixel array. Each of the one or more active pixel sensors includes Fabry-Perot filters mounted on a semiconductor pixel array or deposited directly onto the pixel array. The semiconductor pixel array including the one or more Fabry-Perot filters may be formed to have any capture and filter arrangement including, but not limited to, a linescan arrangement, a snapshot tiled arrangement, and a snapshot mosaic arrangement, using techniques including those known in the art.
[0049] The hyperspectral imaging system may also include one or more lenses used to project an image on each of the one or more semiconductor pixel arrays. In addition, one lens may be used to project an image on more than one semiconductor pixel array. The hyperspectral imaging system may also include one or more filters and other components for capturing images.
[0050] For various embodiments the hyperspectral imaging system is configured to generate one or more images or raw output data including five or more spectral bands, each having a nominal spectral bandwidth less than 100 nm; for example, 8 bands of less than 50 nm; 10 bands of less than 40 nm; 20 bands of less than 30 nm; 40 bands of less than 20 nm; or more than 50 bands each with less than 15 nm nominal bandwidth. Note that such bands may be adjacent, or may be selected from a plurality of bands based on a careful selection of the most important bands identified during a classification training process. For various embodiments, a hyperspectral imaging system is configured to generate one or more images or raw output data including five or more spectral bands having a spectral bandwidth of up to 1000 nanometers (nm).
[0051] An embodiment of an apparatus for calibrating a hyperspectral imaging system includes an evaluation system configured to generate calibration information for the hyperspectral imaging system, the calibration information generated based on at least two or more calibration parameters, including: a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array; a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array; a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency; and a spectral modulation parameter based on a pixel response variation due to light wavelengths outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array, using techniques including those described herein. Such calibration can be done at different levels of the hyperspectral imaging system, and not necessarily at the very end using the complete final system. For example, the calibration can be done at the (1) wafer level, (2) chip level, (3) chip + PC board level, or (4) chip + PC board + lens (full system) level.
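The four calibration parameters described above can be pictured as a single bundle applied to each raw frame. The following is an illustrative sketch only, not part of the disclosure: the container fields, array shapes, and the correction order (dark subtraction, then linearization via a lookup table, then division by the spatial modulation map) are assumptions made for demonstration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CalibrationInfo:
    """Illustrative container for the calibration parameters (names assumed)."""
    dark_coeffs: np.ndarray         # per-pixel dark-current model coefficients
    nonlinear_gain: np.ndarray      # linearization lookup table over the digital range
    spatial_modulation: np.ndarray  # per-pixel gain map under the nominal filters
    spectral_modulation: np.ndarray # out-of-band response description per filter

def correct_frame(raw, dark, calib):
    """One possible correction order: subtract the predicted dark frame,
    linearize the response through a lookup table, then divide out the
    per-pixel spatial modulation."""
    idx = np.clip(raw - dark, 0, len(calib.nonlinear_gain) - 1).astype(int)
    linear = calib.nonlinear_gain[idx]
    return linear / calib.spatial_modulation
```

With an identity lookup table and a uniform modulation map, the correction reduces to plain dark subtraction, which makes the pipeline easy to sanity-check.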
Dark Current:
[0052] In embodiments of the invention, one measures the dark current in the sensor. Dark current is an electrical effect that causes a semiconductor pixel array to return non-zero digital values for each pixel even when there is no light (i.e. no external electromagnetic radiation in the spectral range of the sensor) reaching the sensor. A conventional approach to measuring the dark current involves covering the lens with a cap to prevent any light from entering it, acquiring a series of digital images, and averaging the values of each pixel to produce an average dark reference image. Using a conventional approach, such as the approach implemented in the IMEC Evaluation System software, this dark reference is then subtracted from the actual raw data for each image. Based on measurements with actual sensors from IMEC, the inventors have concluded that a novel and more comprehensive approach is necessary. Embodiments of the invention assume not only that the dark current varies across the area of the sensor, but also that its magnitude varies as a function of the sensor's temperature and integration time.
[0053] Dark current can be measured by taking images when the lens is blocked with a metal lens cap. To eliminate the effect of noise, many images can be taken successively, and averaged on a pixel-by-pixel basis. The effect of temperature and integration time variation on dark current can be directly measured by taking dark frames at a sparse selection of temperature- integration time points. Bilinear interpolation of the resulting lookup table would result in highly accurate dark current predictions. However, the required size of the lookup table, which is typically stored in non-volatile memory inside the camera or camera system, could be too large in practice.
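The averaging and lookup-table approach described above can be sketched as follows. This is an illustrative NumPy sketch, not part of the disclosure; the function names and the lookup-table layout (temperature axis first, integration-time axis second, then the pixel array) are assumptions.

```python
import numpy as np

def average_dark_frame(frames):
    """Average a stack of dark frames pixel-by-pixel to suppress shot noise."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

def interp_dark_current(lut, temps, times, T, t_int):
    """Bilinearly interpolate a per-pixel dark-frame lookup table.

    lut   -- array of shape (nT, nt, H, W): averaged dark frames recorded at
             each (temperature, integration time) grid point
    temps -- sorted 1-D array of the nT recorded temperatures
    times -- sorted 1-D array of the nt recorded integration times
    """
    i = np.clip(np.searchsorted(temps, T) - 1, 0, len(temps) - 2)
    j = np.clip(np.searchsorted(times, t_int) - 1, 0, len(times) - 2)
    u = (T - temps[i]) / (temps[i + 1] - temps[i])
    v = (t_int - times[j]) / (times[j + 1] - times[j])
    return ((1 - u) * (1 - v) * lut[i, j]     + u * (1 - v) * lut[i + 1, j]
          + (1 - u) * v       * lut[i, j + 1] + u * v       * lut[i + 1, j + 1])
```

As the text notes, a dense lookup table of this kind is accurate but can be too large for the camera's non-volatile memory, which motivates the polynomial model described next in the document.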
[0054] According to embodiments of the invention, the storage space requirements can be reduced by replacing the per-pixel lookup tables with a polynomial model. Experimentally, it was found that a third order polynomial for each pixel proved sufficient for reliable dark current predictions. For example, in some embodiments, during a training phase, the dark current response of each pixel is measured at several temperature - integration time points. Per pixel, a separate polynomial model is fitted through its recorded data. During an evaluation phase, for each pixel the dark current response is measured again at several temperature - integration time points, to have data which was generated independently from the training phase. Per pixel, and at each recorded temperature - integration time point, the difference is determined between (a) the actual recorded pixel response and (b) the response predicted by the polynomial model for that pixel. Figures 2 to 5 present a view of the magnitude of these differences across the sensor. Each Figure shows the entire image sensor array in which a pixel is rendered as either white or black. For a selected threshold value, if the difference between the actual recorded pixel value and the predicted value is less than the threshold, for all temperature - integration time points for that pixel, it is rendered as a white pixel in the figure. If not, it is rendered as a black pixel. When all pixels are rendered as white, it therefore means that the entire sensor can be modeled by a polynomial model per pixel with a difference between actual and predicted values less than the threshold, for all recorded temperature - integration time points. For a threshold value of 2.5 the presented Figure is practically completely white.
The total digital range is 1024, and therefore a threshold difference of 2.5 corresponds to an approximation error, introduced by the use of a third order polynomial model, of less than 0.25% of the total range. Current active pixel array manufacturers recommend using these sensors only up to a maximum response range of 512. In this scenario the approximation error introduced by the use of a third order polynomial model per pixel is less than 0.5% of the used range. This reduces the size of the model to more manageable levels (around 80 Mbytes for the IMEC line-scan sensor described above), as illustrated in the Figures.
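The per-pixel third order polynomial model and the white/black threshold maps of Figures 2 to 5 can be sketched as follows. This is illustrative code, not the disclosed implementation: it assumes a bivariate cubic basis in (temperature, integration time), and all function names are inventions for demonstration.

```python
import numpy as np

def cubic_design(T, t):
    """Monomial basis T**a * t**b with a + b <= 3 (10 terms per sample)."""
    T, t = np.asarray(T, float), np.asarray(t, float)
    return np.stack([T**a * t**b for a in range(4) for b in range(4 - a)], axis=-1)

def fit_dark_model(T, t, dark):
    """Fit a third-order polynomial in (temperature, integration time) per pixel.

    dark -- shape (n_samples, H, W): averaged dark frames at each grid point.
    Returns coefficients of shape (10, H, W)."""
    A = cubic_design(T, t)                                  # (n_samples, 10)
    n, H, W = dark.shape
    coef, *_ = np.linalg.lstsq(A, dark.reshape(n, -1), rcond=None)
    return coef.reshape(-1, H, W)

def predict_dark(coef, T, t):
    """Evaluate the per-pixel polynomial at the given setting points."""
    A = cubic_design(T, t)
    H, W = coef.shape[1:]
    return (A @ coef.reshape(coef.shape[0], -1)).reshape(len(A), H, W)

def within_threshold_map(coef, T, t, dark, thresh):
    """White (True) where |measured - predicted| < thresh at ALL grid points,
    mirroring the rendering rule used for Figures 2 to 5."""
    err = np.abs(predict_dark(coef, T, t) - dark)
    return np.all(err < thresh, axis=0)
```

A threshold of 2.5 on a 1024-count range corresponds to the 0.25% approximation error quoted in the text.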
[0055] Figure 1 illustrates a variation of dark current levels at the center of a semiconductor pixel array including one or more Fabry-Perot filters as a function of the integration time and sensor temperature.
[0056] Figure 2 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 0.5 for all recorded temperature - integration time points, generated using techniques described herein.
[0057] Figure 3 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 1.5 for all recorded temperature - integration time points, generated using techniques described herein.
[0058] Figure 4 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.0 for all recorded temperature - integration time points, generated using techniques described herein.
[0059] Figure 5 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.5 for all recorded temperature - integration time points, generated using techniques described herein.
[0060] Based on the techniques described above various embodiments are configured to generate a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array.
Non-linearity:
[0061] Non-linearity is an electrical effect that causes the response (i.e. digital output) of a sensor pixel to be non-linearly related to the amount of incident light reaching it. The greater the amount of non-linearity, the more important it is to take it into account in processing the raw images received from the hyperspectral camera.
[0062] In current evaluation systems, software assumes that the non-linear effect is negligible below the most significant bit (e.g., when capturing the pixel response with 10 bits, the values below 511 are assumed to be linearly related to the incident radiation and only the values above 511 are assumed to exhibit a non-linear saturation effect). These assumptions are used to map the raw image data by adjusting the integration time to avoid saturation and by interpolating linearly between the value corresponding to the dark current and the value obtained with a white calibration tile.
[0063] In contrast to current systems, embodiments of the systems and methods described herein measure and take into account other variables leading to a greater amount of non-linearity than accounted for by the current systems. For example, a non-linear effect is present across the full dynamic range (and not only in the range of the most significant bit of each pixel), and this effect varies with sensor temperature and integration time.
[0064] Embodiments of the invention use monochromatic light and optical filters interposed between the light source and the sensor to create pairs of measurements (e.g., digital readings from each pixel) from incident light beams that are in a precisely known intensity ratio.
[0065] Specifically, embodiments are configured to measure sensor non-linearity by taking image pairs of monochromatic light at a wide range of unknown light intensities. The difference between the first and second image of each pair is that the second image is taken with light at a precisely known intensity fraction of the first image. The known intensity fraction can be generated by placing an optical filter in front of the light source with precisely known transmittance at the monochromatic wavelength. The monochromatic light can be generated with a broadband light source filtered through a narrowband filter. For various embodiments, such a calibration system is configured to use narrowband filters with FWHM (full width at half maximum) less than 3 nm. Since the spectrum of the broadband light source does not need to be known, we can generate image pairs at various light intensities by using, for example, a halogen light source of which the electrical current is regulated with a DC power supply.

[0066] For these embodiments, a narrowband filter is mounted in front of the camera such that all of the light entering the camera is of the same monochromatic wavelength, or close to it. Furthermore, in the second image, the system is configured to prevent the camera from observing light that has not passed through the known optical filter in front of the light source. For example, this can be achieved with adequate shielding of stray light and/or by recording an extra image in which only the stray light is measured.
[0067] The light is measured without a lens, from a sufficient distance to make sure that the light rays transmitted through the narrowband filter are approximately parallel. This is necessary, because the central wavelength of optical narrowband filters varies with the angle of incident light.
[0068] Assume, for example, that one records the digital output from pixels that are located behind a given spectral filter, in response to monochromatic light at a wavelength corresponding to the center of the filter's narrow bandpass, and that the responses of these pixels are recorded with and without the presence of an additional filter with a known transmittance ("T") interposed in the incoming light beam. If the sensor response were perfectly linear, the transmittance would equal:

T = (I1 - B) / (I0 - B)

where I0 and I1 represent the (dark current subtracted) pixel responses without and with the additional known optical filter, B is the response to only the stray light, and T is the known filter transmittance at the wavelength of the monochromatic light.
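The ideal-sensor relation above can be checked numerically. The sketch below is illustrative only: it recovers the known transmittance exactly when the pixel response is linear, and uses an invented toy gain curve (not a measured one) to show how non-linearity biases the estimate, which is precisely the bias the calibration procedure is designed to detect.

```python
def estimated_transmittance(I0, I1, B):
    """Transmittance recovered from a stray-light-corrected image pair,
    valid only for a perfectly linear sensor: T = (I1 - B) / (I0 - B)."""
    return (I1 - B) / (I0 - B)

def toy_nonlinear_response(x):
    """Illustrative (not measured) non-linear sensor gain, used here only to
    demonstrate how non-linearity distorts the recovered transmittance."""
    return x + 5e-5 * x * x
```

Passing the three readings through the toy gain curve before forming the ratio shifts the recovered transmittance away from the true value, so the equation above no longer holds, as the next paragraph discusses.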
[0069] If the sensor response is not linear, then the equation above will not hold for all light intensities, and we aim to estimate a correction function Ic = f(I) so that
(f(I1) - f(B)) / (f(I0) - f(B)) = T

To estimate the full non-linearity correction function, we measure image pairs at many intensity levels, model the non-linearity function as a polynomial, and perform the following non-linear least squares optimization:

min over p of: Σi Wi [ (f(p, I1,i) - f(p, Bi)) / (f(p, I0,i) - f(p, Bi)) - T ]²

where p represents the parameter vector of the non-linearity correction function, and Wi is a weighting factor for each pixel. Since the non-linearity correction function can only be known up to an arbitrary scale factor, we add the extra constraints f(p,0)=0 and f(p,511)=511.
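The constrained least squares problem above can be sketched as follows. This is an illustrative implementation, not the disclosed one: it uses a plain Gauss-Newton loop with a numeric Jacobian instead of a library solver, and it enforces the constraints f(p,0)=0 and f(p,511)=511 by construction of the polynomial basis, which is one possible parameterization choice among several.

```python
import numpy as np

def f_corr(p, x):
    """Polynomial correction with f(0)=0 and f(511)=511 built into the basis:
    f(x) = x + sum_k p[k] * 511 * (u**(k+2) - u), with u = x / 511."""
    x = np.asarray(x, float)
    u = x / 511.0
    y = x.copy()
    for k, pk in enumerate(p):
        y = y + pk * 511.0 * (u**(k + 2) - u)
    return y

def fit_nonlinearity(I0, I1, B, T, w=None, degree=3, iters=20):
    """Gauss-Newton minimization of sum_i w_i * r_i**2 with
    r_i = (f(p, I1_i) - f(p, B)) / (f(p, I0_i) - f(p, B)) - T."""
    I0, I1, B = np.asarray(I0, float), np.asarray(I1, float), np.asarray(B, float)
    w = np.ones_like(I0) if w is None else np.asarray(w, float)
    sw = np.sqrt(w)

    def residuals(p):
        num = f_corr(p, I1) - f_corr(p, B)
        den = f_corr(p, I0) - f_corr(p, B)
        return sw * (num / den - T)

    p = np.zeros(degree - 1)       # degrees 2..degree; the linear term is fixed
    for _ in range(iters):
        r = residuals(p)
        eps = 1e-6                 # forward-difference Jacobian
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += eps
            J[:, j] = (residuals(dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

After the fit, applying f_corr to the raw readings makes the image-pair ratios match the known transmittance T, which is the defining property of the correction function.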
[0070] Based on the above techniques, the system according to embodiments is configured to calculate a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array.
[0071] Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array. The non-linearity estimation results, as illustrated in Figure 6, were determined at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm as determined using a system according to embodiments described herein.
[0072] Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm.

[0073] As shown in Figures 6 and 7, there exists a significant non-linear response (up to 3.5%) across the full dynamic range of the pixels' response, and a non-linear correction can reduce the variance due to this non-linear effect by a factor of 3 to 5.
[0074] Because the raw data generated by a semiconductor pixel array is temperature dependent, embodiments of the evaluation system are configured to generate a non-linear compensation parameter, such as a correction function, that varies with temperature as well. For various embodiments, the evaluation system is configured to repeat the measurements at a plurality of pre-defined sensor temperatures to generate one or more non-linear compensation parameters, and to use interpolation techniques to correct images, i.e. the raw data, taken at different temperatures. A similar technique can be used to accommodate a range of different integration times.
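The interpolation between corrections fitted at a few pre-defined temperatures can be sketched as follows. This is an illustrative NumPy sketch under the assumption that linear interpolation of the fitted correction coefficients, clamped at the ends of the calibrated range, is adequate; the function name and the coefficient layout are inventions for demonstration.

```python
import numpy as np

def interp_correction(temps, coeffs, T):
    """Linearly interpolate correction coefficients measured at a few
    calibration temperatures to an arbitrary operating temperature T.

    temps  -- sorted 1-D array of calibration temperatures
    coeffs -- array (n_temps, n_params) of fitted correction parameters
    Temperatures outside the calibrated range are clamped to the nearest end.
    """
    temps = np.asarray(temps, float)
    coeffs = np.asarray(coeffs, float)
    i = np.clip(np.searchsorted(temps, T) - 1, 0, len(temps) - 2)
    u = np.clip((T - temps[i]) / (temps[i + 1] - temps[i]), 0.0, 1.0)
    return (1 - u) * coeffs[i] + u * coeffs[i + 1]
```

The same one-dimensional scheme can be applied along the integration-time axis, mirroring the paragraph's note that a similar technique accommodates different integration times.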
[0075] Figure 8 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels. Such an embodiment includes an intensity reduction filter configured to be moved in and out automatically (e.g., a filter wheel), such as under the control of a processor or computer, to gather image pairs at two light intensities, of which the second level is a known fraction of the first intensity level. The narrow band pass filter is configured to automatically switch between several different filter bands (e.g., a filter wheel) under the control of a processor or computer to measure the camera responses at different filter bands. The evaluation system also includes a halogen light DC controller under the control of a processor or computer to gather image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired response range, such as the entire response range of the semiconductor pixel array. A PID temperature controller under the control of a processor or computer automatically samples the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array. For various embodiments, a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system. For example, a hyperspectral imaging ("HSI") system includes, but is not limited to, one or more of any of a semiconductor pixel array, a lens, a filter, a housing, and other optical or electrical components.
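The control program cycling through all relevant settings can be sketched as a nested sweep. Every device interface below is hypothetical: real camera, filter-wheel, lamp-supply, and PID-controller drivers will expose different APIs, and the stub class exists only so the sketch is self-contained and runnable.

```python
class StubDevice:
    """Minimal stand-in for the real instrument drivers (hypothetical APIs)."""
    def set_temperature(self, T): self.T = T
    def wait_stable(self): pass                 # a real PID loop would block here
    def set_integration_time(self, t): self.t = t
    def set_current(self, amps): self.amps = amps
    def select(self, name): self.name = name
    def grab_raw(self): return 0                # a real camera returns a pixel array

def run_calibration_sweep(camera, filter_wheel, light_dc, pid,
                          temps, int_times, intensities):
    """Cycle through every (temperature, integration time, base intensity)
    setting and record the image pair (with and without the known intensity
    reduction filter) at each one."""
    frames = {}
    for T in temps:
        pid.set_temperature(T)       # PID-controlled Peltier element
        pid.wait_stable()            # wait until the sensor temperature settles
        for t_int in int_times:
            camera.set_integration_time(t_int)
            for level in intensities:
                light_dc.set_current(level)      # halogen lamp DC supply
                for filt in ("open", "known_nd"):
                    filter_wheel.select(filt)
                    frames[(T, t_int, level, filt)] = camera.grab_raw()
    return frames
```

The recorded dictionary of frames is exactly the raw material the non-linearity and dark-current fits operate on: one image pair per base intensity, repeated across the temperature and integration-time grid.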
[0076] Figure 9 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels and an integrating sphere. For such an embodiment, the evaluation system is configured to reduce the intensity of a light source, such as a halogen light, using a DC controller. The evaluation system is configured to measure the reduced intensity with a spectrometer to gather raw data of the hyperspectral imaging system for image pairs at at least two light intensities, of which the second intensity level is a known fraction of the first intensity level. For various embodiments, the evaluation system is configured to use many image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired response range, such as the entire response range of the HSI system. The evaluation system also includes a monochromator configured to be under the control of a processor or a computer to measure the HSI system responses at one or more filter bands. Further, the evaluation system includes a PID temperature controller under the control of a processor or computer to automatically sample the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array. For various embodiments, a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system.
[0077] For some embodiments, instead of using a monochromatic light set-up with known filters as described herein, the evaluation system is configured to use a non-monochrome light source to illuminate spectrally uniform diffuse grey reflectance targets (i.e., targets with a known, flat spectral reflectance). These targets can be observed with a lens mounted on the HSI system, and all pixels imaging the targets can then be organized in intensity tuples (with as many entries as reflectance targets). This has the potential to greatly simplify the optical measuring set-up, but the reflectance properties of the targets need to be known with sufficient accuracy.
Modulation between sensor columns (M_1):
[0078] Current evaluation systems ignore modulation between sensor columns of a line-scan sensor (i.e., a collection of pixels from one end of the chip to the other in the direction perpendicular to the Fabry-Perot filters). This leads the processing software to erroneously assume that incident light with a certain spectral content will produce the same response irrespective of which column of pixels is sampled to measure it.
[0079] Embodiments of the present invention are based on the experimental observation that a semiconductor pixel array, such as a CMOS sensor array, with Fabry-Perot filters exhibits a very significant response modulation between sensor pixel columns (as much as a 20% variance between columns on the left and on the right of the mid line), and that such variation can be corrected by measuring the M_1 modulation map and applying the required correction to the images.
[0080] According to embodiments of the invention, the response variation across each row is measured while observing the same physical point. This measurement is performed by taking a series of images of an illuminated white tile, with the camera moving sideways on a computer controlled linear translation stage. The translation speed and the geometry of the translation stage are selected so that the modulation map can be straightforwardly computed by dividing each measurement by its corresponding measurement on the central column.
[0081] For greater precision and noise tolerance, the measurement is repeated for different physical points, and the computation of the modulation map is performed by estimating, using linear least squares, the optimal correction factor per pixel for mapping its response onto the corresponding average of the N central columns. Ideally, the N central columns should correspond with those columns used to measure the response composition matrix for the specific CMOS sensor (see the definition of the response composition matrix described herein).
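The per-column least-squares correction factor onto the average of the central columns admits a closed form, sketched below for a single sensor row. The array layout, function name, and numpy implementation are illustrative assumptions, not part of the disclosure; inputs are assumed already corrected for dark current and non-linearity:

```python
import numpy as np

def modulation_map_columns(samples, n_central=8):
    """Estimate per-column M1 correction factors for one sensor row.

    samples: (P, C) array -- P physical points, each observed by every
             one of the C sensor columns.
    Returns a length-C vector g such that g[c] * samples[:, c] best
    matches the average response of the n_central middle columns,
    in the linear least-squares sense (closed-form per-column scale).
    """
    P, C = samples.shape
    mid = C // 2
    ref = samples[:, mid - n_central // 2: mid + n_central // 2].mean(axis=1)
    # argmin_g ||g * x - ref||^2 has the closed form sum(x*ref)/sum(x*x)
    num = (samples * ref[:, None]).sum(axis=0)
    den = (samples ** 2).sum(axis=0)
    return num / den
```

Applying `g` column-wise to subsequent images removes the left/right modulation relative to the central reference columns.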
[0082] Such measurements assume that all recorded images have been corrected by dark current subtraction and non-linearity correction. Other assumptions include the following:
The reflectance of the white tile is approximately Lambertian.
The translation speed and camera frame rate are precisely tuned so that the image of the white tile moves horizontally over a fixed integer number of pixels between any two consecutive images.
The optical axis of the camera is perpendicular to the white tile surface.
The translation is parallel with the sensor rows.
The lens distortion is either negligible or corrected for.
The integration time of the camera is the same for all frames.
The illumination spectrum and intensity remain the same throughout the scanning process.
[0083] In other embodiments, a frame-filling uniform light source (e.g., generated by an integrating sphere) can be observed. This entails the assumption that the spectrum and intensity of the light source are the same across the camera's field of view.
[0084] Figure 10 illustrates an estimated modulation map M_1. The modulation ranges between 0.80 and 1.19. This suggests that the raw output of an HSI system, without calibration information applied, may be distorted by as much as 20 percent when measured at the left or right end of the sensor.
[0085] Figure 11 illustrates reconstructions (up to scale) of identical light spectra using an HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor.
Modulation between sensor rows (M_2):
[0086] Current evaluation systems typically ignore the effect of response modulation between pixel rows of a line-scan sensor, and assume that a uniform light source of a given spectrum will produce the same response on the central pixel columns of the sensor (except for a constant scaling factor) when observed with a naked sensor or with the sensor integrated with a lens mounted on the camera.
[0087] Embodiments of the invention use images to measure the reflectance from a white tile that is approximately Lambertian; multiple points on the tile are observed and measured as the linear translation stage moves the camera at a speed and frame rate precisely selected so that the images move vertically over a fixed integer number of pixels between consecutive frames.
[0088] We can generate images in which each sensor pixel on the same column was illuminated with an external light source of the same spectrum, but filtered by the camera-lens system. In practice, this may be achieved in two ways:
[0089] By taking a series of images of an illuminated white tile, with the camera moving along its vertical image axis on a linear translation stage. This entails the following assumptions:
o The reflectance of the white tile is approximately Lambertian.
o If multiple points on the white tile are observed to reduce noise, they must reflect light of the same spectrum (but not necessarily the same intensity). This implies that the illumination must be spectrally uniform across the white tile.
o The translation speed and camera frame rate are precisely tuned so that the image of the white tile moves vertically over a fixed integer number of pixels between any two consecutive images.
o The optical axis of the camera is perpendicular to the white tile surface.
o The translation is parallel with the sensor columns.
o The lens distortion is either negligible or corrected for.
o The integration time of the camera is the same for all frames.
o The illumination spectrum and intensity remain the same throughout the scanning process.
[0090] By observing a frame-filling uniform light source (e.g., generated by an integrating sphere). This entails the following assumptions:
o The spectrum of the light source is the same across the camera's field of view.
o The intensity of the light source is constant across the camera's field of view.
[0091] For both ways, we must additionally:
a. have accurate knowledge of the spectral transmittance of the lens;
b. know the spectrum of the light source, or repeat the same measurements with different optical filters with known spectral transmittance mounted in front of the lens (alternatively, we can use diffuse reflectance targets with known spectral reflectance); and
c. know the response composition matrix of the sensor.
[0092] The mapping of full-system (i.e., camera + lens) spectral responses onto the corresponding responses in IMEC's response composition matrix calibration set-up can be expressed as a sensor row-dependent scaling on the central sensor columns.
[0093] In embodiments of the present invention, the measurements are processed by the software according to the following methodology:
Assume the sensor is illuminated by a frame-filling uniform light source with spectrum L(λ), observed through a lens with spectral transmittance T_L(λ). Then the linearized sensor response on the central column can be written as
I(b) = M_2(b) ∫ T_L(λ) L(λ) f(b, λ) dλ ,
where b indicates the spectral band on the sensor for which we describe the response, f(b, λ) is the spectral response function of the sensor band, and M_2(b) is the intensity modulation induced by the camera-lens system at the location of the band. In discretized form, we have
I(b) = M_2(b) (f(b) · (T_L ⊗ L)) ,
where f(b) is the Q-element row vector on the row of the response composition matrix corresponding with the observed band, and T_L ⊗ L is the Q-element vector obtained by point-wise multiplying the lens transmittance vector with the light spectrum.
Typically, the spectrum ranges from 400 to 1000 nm in 1 nm steps, so Q=601.
[0094] If both the light spectrum and the lens transmittance are known, we can compute the modulation directly through the point-wise division
M_2(b) = I(b) / (f(b) · (T_L ⊗ L)) .
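With the discretized model of paragraph [0093], the point-wise division of [0094] can be written directly. The function signature is an illustrative assumption:

```python
import numpy as np

def m2_direct(I, F, T_L, L):
    """M2(b) via point-wise division when the lens transmittance T_L
    and the light spectrum L are both known.

    I:      (N,) measured linearized band responses on the central columns
    F:      (N, Q) response composition matrix (row f(b) per band)
    T_L, L: (Q,) lens transmittance and light spectrum on the same grid
    """
    predicted = F @ (T_L * L)      # f(b) . (T_L (x) L) for every band b
    return I / predicted
```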
[0095] If the light spectrum is unknown, then we have N (number of bands) equations with 2N unknowns (the N entries of M_2 plus the N entries of F (T_L ⊗ L)), and the system is under-constrained.
[0096] We can resolve this by recording K images with spectrally independent light sources L_i, i ∈ [1, K], and parameterizing the light sources with parameter vectors p_i of dimension P < N. E.g., if we model the light sources as perfect black body radiators, the number of parameters per light source is P=2 (temperature and intensity). We then obtain K*N equations with N+K*P unknowns, which can be solved as soon as we have enough independent light sources (e.g., halogen lights driven at different current levels).
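One possible numerical treatment of this parameterized problem is sketched below. It assumes the K black-body temperatures lie on a known grid, eliminates the per-light intensities through band ratios, and keeps the temperature combination with the smallest residual. The grid search and all names are illustrative assumptions, not the claimed method:

```python
import numpy as np
from itertools import product

H_PLANCK = 6.626e-34   # J*s
C_LIGHT = 2.998e8      # m/s
K_BOLTZ = 1.381e-23    # J/K

def planck(T, lam_nm):
    """Ideal black-body spectrum (Planck law) on a wavelength grid in nm."""
    lam = lam_nm * 1e-9
    return (2 * H_PLANCK * C_LIGHT**2 / lam**5) / np.expm1(
        H_PLANCK * C_LIGHT / (lam * K_BOLTZ * T))

def fit_m2_blackbody(I, F, lam_nm, T_grid):
    """Recover the M_2 modulation from K recordings under black-body
    illuminants of unknown temperature and intensity (P = 2 parameters
    per light source), via grid search over the K temperatures.

    Model: I[k, b] = M_2[b] * a[k] * (F @ planck(T_k))[b], with a[0] := 1.
    """
    K = I.shape[0]
    best_resid, best_m2 = np.inf, None
    for temps in product(T_grid, repeat=K):
        pred = np.stack([F @ planck(T, lam_nm) for T in temps])   # (K, N)
        a = np.ones(K)
        for k in range(1, K):   # eliminate intensities via band ratios
            a[k] = np.median(I[k] * pred[0] / (I[0] * pred[k]))
        m2 = I[0] / pred[0]
        resid = ((I - m2 * a[:, None] * pred) ** 2).sum()
        if resid < best_resid:
            best_resid, best_m2 = resid, m2
    return best_m2
```

A continuous optimizer over temperature would replace the grid in practice; the sketch only shows how the P=2 parameterization makes the system solvable.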
[0097] Alternatively, we can generate different light source spectra by filtering the light with a selection of K different optical filters of which we know the spectral transmittances. This way, we obtain a set of images
I_i(b) = M_2(b) (f(b) · (T_L ⊗ T_i ⊗ L)) ,
with T_i the spectral transmittance of filter i. The number of equations is still K*N, but the number of unknowns now remains fixed at N+Q (or N+P if we parameterize the light source). If P < N, we can solve this non-linear optimization problem with at least two different filters (or one filtered plus one unfiltered recording).
[0098] Using a linear translation stage, we can simulate the uniform light source by making a scan of an illuminated white tile, with translation according to the vertical sensor direction. The translation speed must be fixed at eight pixels per frame (assuming the spectral bands of the sensor are eight pixels high). For each physical point of the white tile scanned in this way, we obtain the set of equations
I_i(b) = M_2(b) (f(b) · (T_L ⊗ T_i ⊗ R ⊗ L)) , where R is the spectral reflectance vector of the physical point on the white tile. Given that we know R (for a white tile, this should be approximately constant across the spectrum), we obtain the same non-linear optimization problem as described above for the uniform light source case. In practice, we average the measurements over many physical points of the white tile to improve the signal-to-noise ratio.
[0099] If a halogen light source with an aluminum-coated reflector is used, a good parametric approximation for the light spectrum is an ideal black body radiator (Planck law). In our experiments, the light bulb spectrum was even more accurately modeled as a black body radiation spectrum (generated by the tungsten filament), filtered through a clear glass element (the front glass of the halogen light source). The transmission curve of the front glass was derived from accurate spectral radiant flux measurements of our halogen lamp, performed by Laser2000 in the Netherlands. Two such measurements were made, both at an electrical current of 3 A, but one in the horizontal orientation and one in the vertical orientation. The measurements exhibited a significant spectral difference, both in color temperature and in intensity, which may be due to differences in convection inside the light bulb between the two orientations. Assuming the optical properties of the front glass element did not change between the two measurements, we can mathematically describe them as
L_H(λ) = P(T_H, λ) · T_G(λ)
L_V(λ) = P(T_V, λ) · T_G(λ) ,
where P(T, λ) describes the ideal black body radiation spectrum of the filament at temperature T, and T_G(λ) is the spectral transmittance of the front glass element. The latter was then estimated by fixing T_V at 3000 K and optimizing for T_H.
[00100] The figures below show the reconstructed light source spectrum with and without the M_2 modulation correction, demonstrating the importance of this calibration step and of the correction applied as a result of it. Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientations. These measurements were used for estimating the transmittance of the front glass element, and are accurate up to 0.67 percent.
[00101] Figure 13 illustrates the estimated halogen light front glass transmittance spectrum. Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference (from Wikipedia).
[00102] Figure 15 illustrates the estimated modulation map M_2. Figure 16 illustrates a light source spectrum reconstruction with and without M_2 modulation correction. In both cases, the spectral correction matrix computed by IMEC was used for spectral correction.
[00103] For embodiments, an evaluation system is configured to use an image-side telecentric lens, or direct measurement of the response composition matrix of the HSI system using a monochromator set-up. Further, the evaluation system is configured to avoid possible inaccuracies inherent in the optical filter-based measuring set-up by replacing the filters with observation of diffuse reflectance targets with accurately known spectral reflectance.
Spatial Variation Calibration:
[00104] The previously mentioned modulations between sensor columns (M_1) and rows (M_2) are specific examples of spatial variation calibration, explained here in more general terms.
[00105] If an ideal camera system looked at an ideal scene consisting of completely uniform spectral radiance, then two pixels belonging to the same filter group should give identical responses, irrespective of their location on the sensor. In reality, such pixels can have different responses due to differences that are sensor location dependent, hence the term spatial variation. Here, the process is split between two types of spatial variation.
[00106] Magnitude calibration: Requires the following two priors:
a. When the camera system is presented with an ideal scene of uniform spectral radiance, changing the spectral content of this uniform radiance does not change the ratio of the responses between any two pixels belonging to the same filter group. The spatial variation between pixels of the same filter group can then be modeled as only a variation in magnitude of response, which needs to be calibrated. Reasons for this can be, e.g.:
i. a vignetting effect caused by the lens in the system,
ii. or a change in the global quantum efficiency of the sensor pixels across the sensor.
b. A part of the system has been spectrally pre-calibrated before, meaning that the spectral response curve of each pixel or filter group is known beforehand (e.g., calibrated by the manufacturer), but new elements or changes introduced to the system (such as adding a lens or changing the aperture) caused changes in the magnitude of the response across the sensor.
[00107] Full spectral calibration: This calibration is required when the priors of magnitude calibration are not fulfilled, e.g., because:
a. A pre-calibration of a part of the system is not available.
b. Any phenomenon responsible for the following: when the camera system is presented with an ideal scene of uniform spectral radiance, changing the spectral content of this uniform radiance does change the ratio of the responses between any two pixels belonging to the same filter group. E.g.:
i. Pre-calibration is available but not valid for all pixels, e.g., only a part of the sensor has been pre-calibrated and the other pixels were assumed to behave similarly but do not, because the spectral quantum efficiency curve per pixel changes depending on its location on the sensor.
ii. New parts or changes introduced to the system cause a difference in spectral content received by pixels normally designed to receive the same spectral content, e.g., because the lens introduces chromatic aberrations, causes light to have a different angle of incidence for pixels at different locations (shifting the spectral filtering), or causes location-dependent multiple reflections between sensor and lens.
[00108] Magnitude calibration consists of two consecutive parts, specifically:
o Intra-filter calibration: All pixels within the same filter group should give identical response values after intra-filter calibration, irrespective of their location on the sensor.
o Inter-filter calibration: Given prior knowledge of real input light spectra and the pre-calibrated spectral response curve of each filter group, the following data can be generated:
A: the real recorded response of each filter group
B: the values obtained by applying the pre-calibrated spectral response curves to the input light spectra.
Ideally they should be identical, but in reality there can be a difference in magnitude. Inter-filter calibration determines these differences in magnitude in order to correct B so that it matches A, the real recorded responses.
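The magnitude correction matching B to A admits a simple least-squares form over multiple recordings. This sketch and its names are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def inter_filter_scales(A, B):
    """Per-filter-group scale s minimizing ||A - s * B||^2 over the
    recordings, used to correct the predicted responses B so they match
    the real recorded responses A.

    A: (K, N) real recorded responses of the N filter groups
    B: (K, N) responses predicted from the pre-calibrated curves
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # closed-form per-column least-squares scale
    return (A * B).sum(axis=0) / (B ** 2).sum(axis=0)
```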
Intra-Filter Calibration
[00109] Intra-filter calibration is also provided in some embodiments. Under all prior assumptions and previously performed calibrations (dark current + non-linearity), pixels within the same filter group, but lying at different locations on the sensor, will now give the same response value up to an unknown scale. The goal is to determine these scale differences in order to get completely identical responses for all pixels of the same filter group when an object is viewed by the system, irrespective of the pixels' locations on the sensor. Therefore, the physical setup below is designed to provide all pixels within the same filter group with the same spectral radiance. This can be achieved in several ways, as described below.
[00110] In the current embodiment, the camera system is physically translated with respect to a calibration target, e.g., a white tile. The calibration target is designed to reflect light as a Lambertian surface, such that from a point p on this calibration target the same spectral radiance is emitted in all directions. By properly translating the camera system, the same target point p can be viewed by all different pixels of the same filter group. The manner of translation therefore depends on the layout of the pixels. For example, if all pixels of a single filter group are within the same sensor row, the translation direction will be parallel to this row so that the same calibration point p is seen consecutively by all pixels on that row. If the sensor has a mosaic pattern, meaning that all pixels of a single filter group are located on a grid with fixed pixel spacing in the vertical and horizontal directions, then the translations must also follow the directions of this grid, in sync with the grid spacing, so that the image of the same physical calibration target point visits all pixels of that filter group.
[00111] In an alternative embodiment, the camera system is looking into an integrating sphere. The purpose of the integrating sphere is to present very low spatial variation of emitted radiance across the entire field of view of the camera system. As such, all pixels of the same filter group can be assumed to receive the same spectral radiance. Such a system would not require translations, but puts more stringent constraints on the spatial uniformity required of the integrating sphere.
[00112] Further, a combination of the current embodiment and the previous alternative embodiment may be used, in which the tile of the current embodiment is replaced by the integrating sphere.
[00113] To reduce noise during the measurements, multiple images are taken from the same vantage point and averaged. This principle is valid for all calibration steps.
[00114] The difference in scale between the responses of different pixels within the same filter group can be deduced from the difference in their recorded responses (after dark current and non-linearity have already been calibrated for).
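A minimal sketch of deducing these scale differences, assuming each physical target point has been seen by every pixel of the group (names, shapes, and the least-squares formulation are illustrative assumptions):

```python
import numpy as np

def intra_filter_scales(responses, ref_index=0):
    """Per-pixel scale factors within one filter group.

    responses: (P, G) array -- dark-current/non-linearity corrected
               responses of the G pixels of one filter group, each row
               being one physical target point seen by every pixel of
               the group in turn.
    Returns s (length G) with s[ref_index] == 1, such that
    s[g] * responses[:, g] matches the reference pixel in the
    least-squares sense.
    """
    ref = responses[:, ref_index]
    s = (responses * ref[:, None]).sum(0) / (responses ** 2).sum(0)
    return s / s[ref_index]
```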
Inter-filter calibration
[00115] In another embodiment, inter-filter calibration is provided. After all prior calibrations, all pixels within the same filter group should now behave identically. Also assumed is that the spectral response curve of each of these groups has been pre-calibrated, e.g., by the manufacturer, and that these spectral response curves are available.
[00116] The inter-filter calibration takes care of changes in scale which can be seen between the actual recorded values of each filter group and those predicted by the pre-calibration when viewing input light spectra. These differences in scaling can, e.g., be due to the vignetting of the lens or other changes which have been introduced into the system after the pre-calibration.
[00117] We make a distinction between the following approaches to deduce these scaling differences.
a. If the input light spectrum is exactly known a priori, the changes in scale can be determined directly by comparing the recorded response of each filter group with that predicted by applying the pre-calibrated spectral response curve of each filter group to the known input light spectrum. This requires prior knowledge of the spectrum emitted by a calibrated light source and the spectral response curve of each additional element introduced into the path between the light source and the pre-calibrated system, such as a lens which was added or changed. If prior calibration data is available for all elements, this approach is feasible.
b. If the input light is unknown, we can distinguish the following approaches, which can be combined.
i. The shape of the input light spectrum is determined by fewer parameters, P, than there are filter groups, N. E.g., if we use light source(s) which can be modeled accurately as black body radiators, the number of parameters per light source is P=2 (black body temperature and intensity). Making K recordings, each at a different setting of the light source, we then obtain K×N recorded filter group responses and have N+K×P unknowns: the N unknown scales to calibrate for and the K×P unknowns coming from the different setting of the light source per recording. The equations to retrieve the unknowns can then be solved as soon as we have enough independent light sources (e.g., halogen lights driven at different current levels).
ii. The shape of the input light is unknown but can be modeled with P parameters which are kept fixed during K recordings; between each recording, an object of which the introduced spectral change is known a priori (e.g., a filter with a known calibrated transmittance curve, or a diffuse calibrated Lambertian reflectance tile) is placed in the path between the light source and the camera system. In this case the number of recorded filter group responses is K×N, but the number of unknowns now remains fixed at N+P: N for the unknown scales to calibrate for and P to model the unknown fixed light source.
[00118] In one example, the physical embodiment comprises the same current embodiment as mentioned for intra-filter calibration, but with the following changes:
a. Between the K recordings, a different known filter is introduced in the path from light source to camera system.
b. The translations are now such that the same target point p is viewed each time by a pixel from a different filter group instead of pixels within the same filter group.
[00119] Alternative embodiments are further provided, comprising the same as the current embodiment but with different known reflective tiles between recordings, or a combination of this with known filters.
[00120] Additionally, the alternative embodiment described above may be used for intra-filter calibration, in which variation between the K recordings is introduced by inserting K different known filters somewhere in the light path, e.g., between the light source and the integrating sphere at the entrance port of the integrating sphere, or in front of the camera lens.
Full spectral calibration
[00121] There is no current embodiment, but the envisioned embodiment uses an integrating sphere, a monochromator, and a spectrometer/photodiode to perform full spectral calibration. The purpose of the integrating sphere is to provide the camera system with a spectrum that is as spatially uniform as possible across all pixels on the sensor. The purpose of the monochromator is to scan through the wavelength range and determine the response of each pixel individually for each wavelength in the range.
[00122] The purpose of the spectrometer/photodiode is to look inside the integrating sphere in order to know the spectral radiance of the light being directed towards the camera system. This knowledge, together with the recorded responses of each individual pixel, allows determining the spectral response curve for each individual pixel.
[00123] In this way spatial variation can be determined for each individual pixel. Data compression techniques become important to handle the high amount of data generated.
[00124] The system can be calibrated for each setting which can change the spatial variation. E.g., when the aperture of the lens is changed, the collection of light ray directions which fall onto the pixels changes, and these changes are different for pixels at different locations. The change in light directions can bring a change in spectral response and therefore causes a different change in the spectral response curves for different pixel locations, requiring full spectral calibration as presented above.
Response Composition Matrix (F-matrix):
[0125] The response composition matrix (or F-matrix) describes the response of each of the hyperspectral camera's bands to monochromatic light at the full range of wavelengths to which the sensor is sensitive. It is an N × Q matrix, in which each of the N rows corresponds with a specific band of the sensor, and each column corresponds with a certain wavelength. Typically, there are around 105 bands for an IMEC line-scan sensor, and the wavelengths cover the range from 400 to 1000 nm in 1 nm steps. Right-multiplying the F-matrix with a column vector representing a light spectrum produces an N-dimensional vector representing the expected response of each of the sensor's bands to the light.
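The two operations described in this paragraph and the next — right-multiplying the F-matrix by a spectrum, and adding a bandpass filter by row-wise element-wise multiplication — can be sketched as follows (illustrative only; function names are assumptions):

```python
import numpy as np

def band_responses(F, spectrum):
    """Expected response of each sensor band to a light spectrum.

    F:        (N, Q) response composition matrix (rows = bands,
              columns = wavelengths, e.g., 400..1000 nm in 1 nm steps).
    spectrum: (Q,) light spectrum sampled on the same wavelength grid.
    """
    return F @ spectrum

def apply_bandpass(F, T_bp):
    """Composition matrix for the sensor with an added bandpass filter:
    element-wise multiply each row by the filter transmittance T_bp."""
    return F * T_bp[None, :]
```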
[0126] IMEC has measured the F-matrix by exposing the naked sensor (without the camera enclosure and additional bandpass filter) to a monochromatic light beam from a light source (produced by a monochromator). Sensor readings (images) were taken at each of the wavelengths in the measuring range. The response on the central columns of the sensor was averaged per sensor band and entered into the rows of the F-matrix. The response composition matrix for the sensor with an additional bandpass filter is then computed by element-wise multiplying each row of the matrix with the spectral transmittance of the bandpass filter.
[0127] Methodology of embodiments of the present invention: We have noticed that there are certain artifacts present in the structure of the F-matrix that indicate flaws in the measurement procedure followed by IMEC. One of these artifacts is the presence of sudden drops or increases in the response levels of the sensor bands when passing certain wavelengths. The jumps produce discontinuities in the filter characteristics of the bands, and are followed by all bands simultaneously. The most plausible explanation for these artifacts is a change in gain (or light intensity, or integration time) when the measurement process crosses certain wavelengths, possibly to compensate for decreased signal-to-noise ratios.
[0128] We have been able to remove some of these artifacts by multiplying the affected columns of the F-matrix with an experimentally deduced gain factor. Admittedly, this is a very ad-hoc procedure, and the validity can only be evaluated by measuring well-known spectra.
[0129] Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC. Response curves for different sensor bands are plotted in different colors. The composition matrix was measured without an additional 600 - 1000 nm bandpass filter.
[0130] Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present. Figure 19 illustrates a composition matrix after application of calibration information, such as a spatial modulation compensation parameter generated using techniques including those described herein.
Spectral correction:
[0131] Spectral correction refers to the process of correcting a spectral signal captured by a non-ideal filter bank (i.e., filters consisting of non-ideal peaks or multiple peaks). Since the response of the filter bands on Ximea's hyperspectral cameras with the IMEC sensor strongly exhibits such imperfections, spectral correction is necessary to obtain reliable estimates of the incoming light spectrum.
[0132] It is desirable to express the spectral correction in a number of bands that is close to the number of sensor bands. To this end, let us introduce a set of virtual filters. These are ideal filters, i.e., having a single peak with no side lobes and a fixed bandwidth:
[0133] Let N_s and N_v be the number of sensor filters and the number of virtual filters, respectively. Let F_s (N_s × Q) and F_v (N_v × Q) be the response composition matrices of the sensor filters and virtual filters, respectively. Construct a new N_v × N_s correction matrix as follows:
S = F_v F_s^+
[0134] The corrected spectrum I_v then becomes:
I_v = S I_s = F_v F_s^+ I_s ≈ F_v L ,
where I_s = F_s L is the vector of measured sensor band responses.
[0135] The latter only approximately holds because F_s^+ F_s does not reduce to a Q × Q identity matrix.
[0136] The current systems compute the right-side pseudo-inverse as follows:
F_s^+ = F_s^T (F_s F_s^T)^(-1)
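With the right-side pseudo-inverse of paragraph [0136], the correction matrix S and the corrected spectrum can be computed as below. This is an illustrative numpy sketch assuming F_s has full row rank; function names are assumptions:

```python
import numpy as np

def correction_matrix(F_v, F_s):
    """Spectral correction matrix S = F_v F_s^+ using the right-side
    pseudo-inverse F_s^+ = F_s^T (F_s F_s^T)^-1.

    Requires F_s (N_s x Q) to have full row rank so that F_s F_s^T is
    invertible.
    """
    F_s_pinv = F_s.T @ np.linalg.inv(F_s @ F_s.T)
    return F_v @ F_s_pinv

def correct_spectrum(S, I_s):
    """Map raw sensor band responses to virtual (ideal) band responses."""
    return S @ I_s
```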
[0137] In embodiments of the present invention, it was found via experimentation that instead of using a band-pass filter for the range 600 to 1000 nm, better results were obtained by using a box filter from 602 to 970 nm, and applying a sigmoid function to smooth the lower and upper edges.
[0138] According to embodiments of the invention, better results are obtained if a covariance matrix is applied and a non-linear Wiener optimization is used. This type of optimization assumes that the unknown spectra that need to be estimated from the measured spectra belong to a family of spectral curves that are reasonably smooth and can be extracted from a broader set of representative samples. Thus, the optimum is tuned to the desired application: the Wiener optimization can be tuned to optimally reconstruct plant spectra by using a training set that provides a range of plant spectra, or it can be tuned to optimally reconstruct the spectra of brain tissues by using a training set that provides a range of spectra typically found in such tissues.
[0139] With Wiener estimation, the unknown spectrum is assumed to be a sample of a vector random process with a known mean and covariance matrix K_c. Given a zero mean, the pseudo-inverse becomes:

F_s* = K_c F_s^T (F_s K_c F_s^T + K_n)^(-1)
[0140] The covariance matrix K_c can be modeled as a first-order Markov process covariance matrix, which depends on the correlation parameter ρ ∈ [0, 1] between neighboring values of the spectrum. This model essentially assumes that the spectra that need to be corrected belong to the family of smooth curves. Alternatively, K_c can be computed as the second-order moment matrix (i.e., the covariance matrix about zero) of a set of training spectra. The training spectra should be selected as representative of the intended application (e.g., vegetation and soil reflectance spectra illuminated by sunlight for agricultural applications). Observation noise can be modeled as a white noise process, with the measured noise variance per sensor band stored in the diagonal noise covariance matrix K_n.

[0141] Note that when K_c = I and K_n = 0, this procedure is identical to the right-side pseudo-inverse mentioned earlier.
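The Wiener pseudo-inverse with a first-order Markov prior can be sketched as follows (NumPy; the correlation value ρ = 0.95 and the noise variances are illustrative assumptions):

```python
import numpy as np

def markov_covariance(Q, rho=0.95):
    """First-order Markov covariance K_c[i, j] = rho**|i - j| (smooth-spectra prior)."""
    idx = np.arange(Q)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def wiener_pseudo_inverse(F_s, K_c, K_n):
    """Zero-mean Wiener estimator: F_s* = K_c F_s^T (F_s K_c F_s^T + K_n)^(-1)."""
    return K_c @ F_s.T @ np.linalg.inv(F_s @ K_c @ F_s.T + K_n)

rng = np.random.default_rng(1)
Q, N_s = 100, 25
F_s = rng.random((N_s, Q))                # simulated sensor filter responses
K_c = markov_covariance(Q, rho=0.95)      # smoothness prior on the unknown spectrum
K_n = np.diag(np.full(N_s, 1e-4))         # per-band noise variances (illustrative)
W = wiener_pseudo_inverse(F_s, K_c, K_n)  # Q x N_s correction operator
```

With K_c = I and K_n = 0 this reduces exactly to the plain right-side pseudo-inverse, as noted in paragraph [0141].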
[0142] We compiled a set of example spectra from a spectral reflectance database of vegetation and plants. The database contains 200 samples. We extracted a subset of 31 based on variation of the spectral profile. More specifically, we selected those curves with a standard deviation larger than 0.2, as we do not want to introduce a bias towards overly smooth curves.
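The subset selection described above (keeping only curves whose standard deviation exceeds 0.2, to avoid biasing towards overly smooth curves) could be sketched as:

```python
import numpy as np

def select_varied_spectra(spectra, std_threshold=0.2):
    """Keep spectra whose per-curve standard deviation exceeds the threshold,
    avoiding a bias towards overly smooth training curves."""
    spectra = np.asarray(spectra, dtype=float)
    return spectra[spectra.std(axis=1) > std_threshold]

# Toy database: one flat curve (std 0) and one strongly varying curve.
flat = np.full(50, 0.5)
varied = np.linspace(0.0, 1.0, 50)   # std of a uniform ramp is ~0.29
subset = select_varied_spectra(np.stack([flat, varied]))
```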
[0143] To evaluate our proposed spectral correction methods, we applied spectral correction and white-balancing to simulated data based on the example database.
[0144] Compared to the spectral correction matrix in the calibration file provided by IMEC, we were able to reduce the reconstruction error to roughly one third of the original error using the improved pseudo-inverse, as shown in Table 1 below.
TABLE 1
[Reconstruction error comparison; the table was rendered as an image in the original publication.]
[0145] Figure 20: Example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice significant differences above 875 nm.
Measurement transfer between cameras
[0146] In some cases, it is necessary to transfer measurements from one hyperspectral camera to another. For example, one camera may be used to measure the illumination spectrum, while the other camera measures the spectrum reflected from a sample. In order to estimate the reflectance spectrum of the sample, it is necessary to apply white balancing to the measurements of the second camera, based on the illumination spectrum measured by the first camera.
IMEC's approach
[0147] Since IMEC only calibrates the central column responses of the naked sensor, it is only possible to compare measurements taken under these same conditions (i.e., measured on the central columns of the naked sensors without camera body and lens). We have already shown that in a full camera-lens setup, enough modulation effects exist within and between sensors to make direct transfer of measurements (with IMEC's spectral correction) between cameras highly inaccurate under these conditions.
Our approach: methodology of embodiments of the present invention
[0148] After full calibration of a camera-lens system using the methods described in the previous sections, we are able to reconstruct good approximations of the actual incoming light spectrum from the measurement signals of a hyperspectral sensor. This implies that the reconstructed spectra can be directly transferred between cameras, provided that they were calibrated to the same light intensity level (otherwise, the reconstructions are only transferable up to scale).
[0149] An interesting practical problem is the transfer of spectral measurements from a camera with mounted diffuser system to a regular hyperspectral camera with a lens. The camera with diffuser may be configured in an upwards viewing orientation to sample the hemispherical illumination spectrum, while the camera with lens may be used in a downward orientation to measure the light reflected from a sample. In this case, intensity level calibration may be achieved by comparing a spectrally calibrated measurement of an illuminated white tile produced by the camera-lens system to a calibrated illumination spectrum measurement with the diffuser system placed at the position of the white tile, facing the light source. The scale factor between both cameras can be straightforwardly deduced from the intensity difference between both measurements.
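A hedged sketch of deducing the scale factor from the intensity difference between the two measurements (the least-squares ratio estimator used here is an assumption; the text only states that the factor can be straightforwardly deduced):

```python
import numpy as np

def intensity_scale_factor(lens_tile_spectrum, diffuser_spectrum):
    """Scale factor between two calibrated cameras, deduced from a white-tile
    measurement (camera-lens) and an illumination measurement (camera-diffuser)
    of the same light source. Least-squares ratio over all bands."""
    a = np.asarray(lens_tile_spectrum, dtype=float)
    b = np.asarray(diffuser_spectrum, dtype=float)
    return float(a @ b / (b @ b))

# Example: the lens measurement is 0.8x the diffuser measurement in every band.
b = np.linspace(1.0, 2.0, 50)
a = 0.8 * b
scale = intensity_scale_factor(a, b)
```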
Generation of Calibration Data
[0150] The calibration process steps are performed sequentially (for spatial variation calibration, the process splits into two branches). Each step generates its own calibration data, which is then employed in the subsequent calibration steps. Finally, all generated calibration data is tagged for traceability, digitally signed for authenticity, and compressed to minimize its footprint for ease of exchange or storage in the non-volatile memory of the camera system.
[0151] Where data compression is mentioned, different variants and levels are possible. The simplest is no compression at all, so that the recorded/modelled data becomes in essence a lookup table, which can be queried using nearest-neighbor or other interpolation techniques.
[0152] A simple compression technique consists of subsampling the dimensions: in essence, throwing part of the data away and relying on interpolation to represent the discarded data accurately enough.
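The subsample-and-interpolate scheme can be sketched as follows (1-D case; linear interpolation is one possible choice of reconstruction):

```python
import numpy as np

def compress_by_subsampling(table, step):
    """Keep every `step`-th sample of a 1-D calibration table (lossy)."""
    return table[::step]

def decompress_by_interpolation(subsampled, step, full_length):
    """Reconstruct the full table by linear interpolation of the kept samples."""
    kept_x = np.arange(0, full_length, step)[: len(subsampled)]
    return np.interp(np.arange(full_length), kept_x, subsampled)

# A smooth calibration curve survives 4x subsampling with small error.
x = np.linspace(0.0, 1.0, 101)
curve = np.sin(2 * np.pi * x)
small = compress_by_subsampling(curve, 4)
restored = decompress_by_interpolation(small, 4, len(curve))
```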
[0153] Compression using spatial correlations differs from the general compression techniques mentioned above. In the former, we exploit the fact that we know the origin of the data source: for example, all pixels from the same pixel group likely demonstrate related behavior, and since we know the layout of the pixel groups we can compress per group, or we can rely on the fact that neighboring pixels on the sensor may have similar responses. General compression techniques do not employ prior knowledge of the data to compress, e.g., applying a generic zip operation to the data.
[0154] A main aim of compression is a small memory footprint, which is useful when storing this data in the non-volatile memory of a camera system or when exchanging it as files. When the data is used, however, it is decompressed (and may even expand beyond its original size) to a format that consumes more memory but is more efficient to process.
Usage of Calibrated Camera System
[0155] The idea is that a calibration usage software component can be licensed to be
incorporated into the frame grabber software or image processing software of other vendors.
[0156] The calibration usage software will verify signature and camera system IDs to guarantee authenticity and extract traceability data which can be added to all processed image content such that it is traceable.
[0157] Because the electronic camera device can be queried, it is possible to automatically verify its ID and its settings to ensure that the calibration data is valid for this camera and its settings. Some elements in the camera system, however, are passive and cannot be automatically queried for their actual settings. For example, the lens used by an industrial camera is often a passive device, yet its aperture setting, for instance, can influence how the calibration is performed. Since these settings cannot be automatically verified, they have to be explicitly set and confirmed by the user.
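A hedged sketch of such a verification flow (the record layout, field names, and HMAC-based signature scheme are illustrative assumptions, not the actual calibration file format of the invention):

```python
import hashlib
import hmac

def validate_calibration(calib, camera_id, camera_settings, user_confirmed, secret_key):
    """Validate a calibration record against a live camera (illustrative sketch).

    `calib` is a hypothetical dict with 'camera_id', 'settings',
    'passive_settings', 'payload', and 'signature' keys.
    """
    # 1. Authenticity: verify the digital signature over the calibration payload.
    expected = hmac.new(secret_key, calib["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, calib["signature"]):
        return False
    # 2. Queryable elements: match the electronic camera ID and active settings.
    if calib["camera_id"] != camera_id or calib["settings"] != camera_settings:
        return False
    # 3. Passive elements (e.g. lens aperture): require explicit user confirmation.
    return all(user_confirmed.get(k) == v for k, v in calib["passive_settings"].items())

key = b"demo-key"
payload = b"calibration-bytes"
calib = {
    "camera_id": "CAM-001",
    "settings": {"integration_time_us": 500},
    "passive_settings": {"aperture": "f/2.8"},
    "payload": payload,
    "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
}
```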
[0158] Traceable calibrated image content can be generated without additionally passing it through the spectral correction step. The traceable calibrated image content is independent of the application, whereas the spectrally corrected data can be application-specific, since spectral correction can use prior knowledge of the typical spectra to expect from the objects seen in the data. This is one main reason for making a distinction between both types of generated data.
[0159] Figures 21A and 21B show a spectral measurement of Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI line-scan camera using a 600-1000 nm sensor). Figures 22A and 22B show a spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration performed according to embodiments of the present invention.
Summary:
[0160] When the calibration apparatus and methods of the present invention are used, the inventors have demonstrated that the raw images acquired with a CMOS imaging sensor with Fabry-Perot filters can be corrected to take into account multiple sources of variance. These corrections are based on the measurements obtained with the novel calibration apparatus and methods, and can be applied to the raw images using calibration files embedded in the camera system (e.g., inside the camera module and in the processing computer). These calibration files are unique to each sensor, lens, and camera system, and enable the pre-processing of the raw hyperspectral images so that the processing software is more efficient, robust, and reliable in extracting useful information.
[0161] Exemplary embodiments of the present invention have been presented. The invention is not limited to these examples, and these examples are presented herein for purposes of illustration and not limitation. Alternatives (including equivalents, extensions, variations, deviations, etc.) will be apparent to persons skilled in the relevant art(s) based on teachings contained herein.

Claims

What is claimed is:
1. An apparatus for calibrating a hyperspectral imaging system comprising:
an evaluation system configured to:
receive the hyperspectral imaging system including a semiconductor pixel array having one or more Fabry-Perot filters;
characterize the hyperspectral imaging system to determine variations between one or more pixels of the semiconductor pixel array having one or more Fabry-Perot filters; and
generate calibration information for the hyperspectral imaging system, the calibration information generated based on at least two or more calibration parameters including a dark-current compensation parameter based on integration time and temperature response of the semiconductor pixel array, a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array, a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency, and a spectral modulation parameter based on a pixel response variation due to light wavelength outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array.
2. The apparatus of claim 1, further comprising the evaluation system configured to transmit the calibration information to the memory of the hyperspectral imaging system.
3. The apparatus of claim 1, further comprising the evaluation system configured to generate one or more calibration files, the one or more calibration files including one or more of any of calibration information, a serial number of the semiconductor pixel array, a serial number of a camera system including the semiconductor pixel array, a lens of the hyperspectral imaging system, a calibration date, a calibration location, an operator identifier, a reference identifier, a digital signature, secure communication information, and a calibration version number.
4. A method for calibrating a hyperspectral imaging system comprising:
characterizing the hyperspectral imaging system including a semiconductor pixel array having one or more Fabry-Perot filters;
generating calibration information for the hyperspectral imaging system, the calibration information generated based on at least two or more calibration parameters including a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array, a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array, a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency, and a spectral modulation parameter based on a pixel response variation due to light wavelength outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array.
5. A hyperspectral imaging system comprising:
a semiconductor pixel array including one or more Fabry-Perot filters, the semiconductor pixel array configured to generate raw image data; and a memory configured to store calibration information generated based on at least two or more calibration parameters, the calibration information including a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array, a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array, a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency, and a spectral modulation parameter based on a pixel response variation due to light wavelength outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array.
6. A method for processing hyperspectral images comprising:
receiving one or more sets of raw image data from a hyperspectral imaging system including a semiconductor pixel array including one or more Fabry-Perot filters;
receiving calibration information for the hyperspectral imaging system for each set of the one or more sets of raw image data, the calibration information generated based on at least two or more calibration parameters including a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array, a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array, a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency, and a spectral modulation parameter based on a pixel response variation due to light wavelength outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array; and for each of the one or more sets of raw image data, generating output image data to correct spectral imperfections of the semiconductor pixel array based on the calibration information.
7. An apparatus for calibrating a hyperspectral imaging system configured to generate one or more calibration files, the one or more calibration files including one or more of any of calibration information, a serial number of the semiconductor pixel array, a serial number of a camera system including the semiconductor pixel array, a lens of the hyperspectral imaging system, a calibration date, a calibration location, an operator identifier, a reference identifier, a digital signature, secure communication information, and a calibration version number based on output from an evaluation system configured to calibrate a hyperspectral imaging system.
8. A data structure for a hyperspectral imaging system comprising one or more of any of calibration information, a serial number of the semiconductor pixel array, a serial number of a camera system including the semiconductor pixel array, a lens of the hyperspectral imaging system, a calibration date, a calibration location, an operator identifier, a reference identifier, a digital signature, secure communication information, and a calibration version number based on output from an evaluation system configured to calibrate a hyperspectral imaging system.
9. A method for generating one or more calibration files including one or more of any of calibration information, a serial number of the semiconductor pixel array, a serial number of a camera system including the semiconductor pixel array, a lens of the hyperspectral imaging system, a calibration date, a calibration location, an operator identifier, a reference identifier, a digital signature, secure communication information, and a calibration version number based on output from an evaluation system configured to calibrate a hyperspectral imaging system.
PCT/US2017/060409 2016-11-07 2017-11-07 Calibration method and apparatus for active pixel hyperspectral sensors and cameras WO2018085841A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17867023.8A EP3535553A4 (en) 2016-11-07 2017-11-07 Calibration method and apparatus for active pixel hyperspectral sensors and cameras

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662418755P 2016-11-07 2016-11-07
US62/418,755 2016-11-07

Publications (1)

Publication Number Publication Date
WO2018085841A1 true WO2018085841A1 (en) 2018-05-11

Family

ID=62076383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/060409 WO2018085841A1 (en) 2016-11-07 2017-11-07 Calibration method and apparatus for active pixel hyperspectral sensors and cameras

Country Status (2)

Country Link
EP (1) EP3535553A4 (en)
WO (1) WO2018085841A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112444502B (en) * 2020-11-19 2021-09-24 哈尔滨理工大学 Lead ion/bacterium monitoring double-parameter optical fiber sensing device and implementation method
CN112444503B (en) * 2020-11-19 2021-09-24 哈尔滨理工大学 Copper ion/bacterium monitoring dual-parameter optical fiber sensing device and implementation method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140257113A1 (en) * 2006-06-30 2014-09-11 Svetlana Panasyuk Oxyvu-1 hyperspectral tissue oxygenation (hto) measurement system
US20150018645A1 (en) * 2013-07-15 2015-01-15 Daniel Farkas Disposable calibration end-cap for use in a dermoscope and other optical instruments
DE102014002514B4 (en) * 2014-02-21 2015-10-29 Universität Stuttgart Device and method for multi- or hyperspectral imaging and / or for distance and / or 2-D or 3-D profile measurement of an object by means of spectrometry
US20160155882A1 (en) * 2007-04-18 2016-06-02 Invisage Technologies, Inc. Materials, systems and methods for optoelectronic devices


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3535553A4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11092677B2 (en) * 2017-09-29 2021-08-17 Sony Semiconductor Solutions Corporation Time measurement device and time measurement unit
CN109459135A (en) * 2018-12-07 2019-03-12 中国科学院合肥物质科学研究院 A kind of CCD imaging spectrometer image bearing calibration
CN112304904A (en) * 2019-07-15 2021-02-02 松山湖材料实验室 Silicon wafer reflectivity detection method based on filter array
CN112304904B (en) * 2019-07-15 2023-11-03 松山湖材料实验室 Silicon wafer reflectivity detection method based on filter array
EP4001867A1 (en) * 2020-11-23 2022-05-25 Thermo Fisher Scientific (Bremen) GmbH Diagnostic testing method for a spectrometer
AU2021273542B2 (en) * 2020-11-23 2023-01-05 Thermo Fisher Scientific (Bremen) Gmbh Diagnostic testing method for a spectrometer
WO2023041566A1 (en) 2021-09-15 2023-03-23 Trinamix Gmbh Method for calibrating a spectrometer device
CN114785965A (en) * 2022-04-20 2022-07-22 四川九洲电器集团有限责任公司 Hyperspectral image automatic exposure method and system based on COPOD algorithm
CN114785965B (en) * 2022-04-20 2023-09-05 四川九洲电器集团有限责任公司 Automatic hyperspectral image exposure method and system based on COPOD algorithm

Also Published As

Publication number Publication date
EP3535553A1 (en) 2019-09-11
EP3535553A4 (en) 2020-09-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17867023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017867023

Country of ref document: EP

Effective date: 20190607