EP3535553A1 - Calibration method and apparatus for hyperspectral active pixel sensors and cameras - Google Patents

Calibration method and apparatus for hyperspectral active pixel sensors and cameras

Info

Publication number
EP3535553A1
Authority
EP
European Patent Office
Prior art keywords
calibration
pixel array
imaging system
hyperspectral imaging
semiconductor pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17867023.8A
Other languages
English (en)
French (fr)
Other versions
EP3535553A4 (de)
Inventor
Rik FRANSENS
Kurt CORNELIS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Biosensing Systems LLC
Original Assignee
Biosensing Systems LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Biosensing Systems LLC filed Critical Biosensing Systems LLC

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 - Investigating the spectrum
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/12 - Generating the spectrum; Monochromators
    • G01J3/26 - Generating the spectrum; Monochromators using multiple reflection, e.g. Fabry-Perot interferometer, variable interference filters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 - Investigating the spectrum
    • G01J3/2823 - Imaging spectrometer
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from visible and infrared light wavelengths
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N25/671 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
    • H04N25/673 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources

Definitions

  • Embodiments of the present application relate broadly to calibration of hyperspectral imaging systems, and more specifically to methods and apparatus for calibrating active pixel (such as but not limited to CMOS based) hyperspectral sensors and cameras.
  • Hyperspectral imaging systems have been used in various applications for many decades. Typically, such systems have been bulky and expensive, which has limited their wide adoption. More recently, sensors based on mounting or depositing Fabry-Perot filters directly onto CMOS digital imaging arrays have been developed, reducing dramatically the size and weight of the hyperspectral cameras based on such new sensors. Because such sensors can be manufactured directly onto silicon wafers using photolithographic processes, there is the potential to drastically reduce the cost by increasing the manufacturing volume.
  • IMEC offers such sensors in three formats: line-scan, "tiled", and "mosaic".
  • These formats are described in patent publications US2014/0175265 (A1) for line-scan, US2014267849 (A1) for tiled, and US2015276478 (A1) for mosaic.
  • For example, the IMEC LS100-600-1000 line-scan sensor is based on a CMOS chip designed by CMOSIS, which has 2048 pixels by 1088 pixels.
  • IMEC deposits Fabry-Perot filters onto each wafer to provide about 100 spectral bands ranging from 600 nm to 1000 nm. The filter for each band is positioned above a "row" of 8 pixels by 2048 pixels.
  • An example of a camera that incorporates the IMEC sensor is manufactured by Ximea.
  • The camera is small and light, for example having dimensions of 26x26x21 millimeters (mm) and weighing 32 grams, and provides a USB 3.0 digital output capable of delivering 170 frames per second.
  • An example of a camera system which can incorporate the Ximea camera based on the IMEC sensor is the IMEC Evaluation System, which also includes halogen lights, a linear translation stage controlled by computer software to move the camera or the sample being scanned, and calibration and processing software to acquire images and perform certain transformations of the raw data to demonstrate various applications.
  • Compared to conventional cameras, hyperspectral sensors can provide much finer spectral resolution, which can be used to distinguish between objects that may appear the same to a human eye despite differences in their chemical or biochemical makeup.
  • One example is hyperspectral imaging during brain surgery, to help the surgeon distinguish between various tissues while performing the surgery.
  • In this application, software is needed to quickly and reliably process the raw hyperspectral data and provide a virtual image that can be superimposed on the field of view of the operating microscope, providing extra information to the surgeon.
  • Small changes in the oxygen concentration in small capillaries can be magnified, and even the boundaries between a tumor and healthy tissue can be made more clearly visible when the output of the hyperspectral system superimposes additional information that would not be readily visible to the naked human eye.
  • Embodiments of the present invention combine several measurements which need to be performed to best calibrate the hyperspectral imaging system. While one can perform only some of the measurements described herein, the combination of many, most or all of the measurements can provide information that, when used by the image processing software, reduces the variances introduced by the hyperspectral measurement system and improves the accuracy and the benefits of the processed information produced by the hyperspectral analysis software.
  • Embodiments of the invention provide a systematic way to measure non-ideal characteristics of hyperspectral camera systems, otherwise referred to herein as a hyperspectral imaging system, that use an active pixel sensor with Fabry-Perot filters applied onto the active pixel array.
  • These measurements characterize the deviation of the actual performance of a given system from the ideal performance, i.e., that of a theoretical system that would perform perfectly and consistently.
  • This pre-processing software uses the calibration information to apply a correction to the raw data recorded using the camera system, reducing the variance or noise introduced by the actual system as compared to the ideal system.
  • The sources of variance can be within a given sensor, or can be due to external factors affecting a given sensor, such as its temperature.
  • The sensor may also respond differently to light rays coming through the lens along its axis and to rays coming from other directions.
  • The pairing of a given sensor with different units of the same lens model may yield different results because the lenses are not identical.
  • Likewise, a given sensor paired with a specific lens may provide different raw data than another sensor and lens of identical model and specifications, simply because of the small manufacturing variances between the nominally identical items.
  • Any digital camera is programmed to convert electromagnetic radiation (photons) into digital frames through the reading of the semiconductor pixel array, based on the lens aperture and the integration time (a longer integration time means that more photons are collected before the conversion into electrons is read out as a digital value for each pixel).
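As a simple illustration of this idealized photon-to-digital conversion (this is not the patent's model; the function name and all parameter values are hypothetical):

```python
def ideal_pixel_reading(photon_flux, integration_time_ms,
                        quantum_efficiency=0.5, gain=0.01, adc_max=1023):
    """Idealized (perfectly linear) pixel model: photons arriving over the
    integration time are converted to electrons via the quantum efficiency,
    scaled to a digital number, and clipped to the ADC range (10-bit here).
    All parameter values are illustrative, not taken from any real sensor."""
    electrons = photon_flux * integration_time_ms * quantum_efficiency
    return min(int(electrons * gain), adc_max)

# A longer integration time collects more photons before readout:
low = ideal_pixel_reading(1.0e5, 1.0)    # short exposure -> 500 counts
high = ideal_pixel_reading(1.0e5, 2.0)   # twice the exposure -> 1000 counts
```

A real sensor deviates from this linear model; the calibration measurements described below quantify those deviations.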
  • The goal of embodiments of the invention is to minimize all of the above sources of variance due to the hyperspectral system, so that the processing software can be trained more efficiently to recognize specific "spatial-spectral signatures" in a robust and reliable way. This is especially important if many camera systems are used to scan hundreds or thousands of acres to detect a crop disease, or if many surgeons rely on such systems to perform surgeries and to compare data with that of other colleagues.
  • Embodiments of the invention comprise multiple measurements to quantify several non-ideal characteristics of such a system.
  • The measurements can be taken according to the sequence presented here, or in an alternate sequence. Some of the measurements can be skipped, or all of the measurements can be combined in a single calibration process.
  • These techniques are particularly relevant for sensors with Fabry-Perot filters mounted on a semiconductor pixel array or deposited directly onto the chip, such as CMOS digital pixel arrays with Fabry-Perot filters, more so than for older and more conventional hyperspectral systems based on other technologies and techniques.
  • Figure 1 illustrates a variation of dark current levels at the center of a semiconductor pixel array including one or more Fabry-Perot filters as a function of the integration time and sensor temperature, in accordance with certain aspects of the present disclosure.
  • Figure 2 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 0.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 3 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 1.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 4 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.0 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 5 illustrates the distribution of pixels across the sensor for which the difference between actual recorded pixel value and predicted value using a third-degree polynomial model is less than 2.5 for all recorded temperature - integration time points, generated using techniques described herein, in accordance with certain aspects of the present disclosure.
  • Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array, in accordance with certain aspects of the present disclosure.
  • Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm, in accordance with certain aspects of the present disclosure.
  • Figure 8 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels, in accordance with certain aspects of the present disclosure.
  • Figure 9 illustrates an embodiment of an evaluation system including a PID-controlled Peltier element used to maintain the sensor temperature at the desired levels, together with an integrating sphere, in accordance with certain aspects of the present disclosure.
  • Figure 10 illustrates an estimated modulation map M1, in accordance with certain aspects of the present disclosure.
  • Figure 11 illustrates reconstructions (up to scale) of identical light spectra using a HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor, in accordance with certain aspects of the present disclosure.
  • Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientation, in accordance with certain aspects of the present disclosure.
  • Figure 13 illustrates the estimated halogen light front glass transmittance spectrum, in accordance with certain aspects of the present disclosure.
  • Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference.
  • Figure 15 illustrates the estimated modulation M2, in accordance with certain aspects of the present disclosure.
  • Figure 16 illustrates a light source spectrum reconstruction with and without M2 modulation correction, in accordance with certain aspects of the present disclosure.
  • Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC, in accordance with certain aspects of the present disclosure.
  • Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present, in accordance with certain aspects of the present disclosure.
  • Figure 19 illustrates a composition matrix after application of calibration information such as a spatial modulation compensation parameter generated using techniques including those described herein, in accordance with certain aspects of the present disclosure.
  • Figure 20 illustrates an example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice significant differences above 875 nm, in accordance with certain aspects of the present disclosure.
  • Figures 21A and 21B show spectral measurements of a Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI line-scan camera using a 600-1000 nm sensor), in accordance with certain aspects of the present disclosure.
  • Figures 22A and 22B show spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration performed, in accordance with certain aspects of the present disclosure.
  • Figures 23A, 23B, 23C, and 23D illustrate flow diagrams of calibration operations, in accordance with certain aspects of the present disclosure.
  • Figures 24A, 24B, and 24C illustrate examples of extensive use of the IMEC sensor, Ximea camera, and IMEC Evaluation System, including the use of the hyperspectral system in agricultural fields, with the camera positioned on a linear stage between two tripods or onto a small drone, in accordance with certain aspects of the present disclosure.
  • Figures 25A, 25B, and 25C illustrate filter groups, in accordance with certain aspects of the present disclosure.
  • Provided is a full system calibration system and method for hyperspectral imaging (HSI) cameras and sensors, which addresses multiple aspects, including any one or more of sensor-related aspects such as dark current, sensor non-linearity, and quantum efficiency (QE), and system-related aspects such as lens aberrations, lens transmission, and camera-lens interactions.
  • The full system calibration method and system of the present invention removes intra- and inter-sensor variations, and renders the HSI camera invariant to the effects of, for example, temperature and specific lens settings.
  • The full system calibration system and method of embodiments of the present invention addresses both the camera and the lens(es), which provides particular advantage since both sensor- and system-related aspects are addressed. This approach removes intra- and inter-sensor variations. Correct spectral measurements are obtained in a repeatable manner, irrespective of where the object appears in the camera's field of view (FOV) or which particular camera is being used, and independent of changing lighting conditions and the operational mode of the particular HSI system.
  • The full system calibration system and method of embodiments of the present invention are well suited for critical applications, for example in the medical or life-science domain, which demand reliable spectral measurements, for applications which apply trained classifiers to hyperspectral image data, or applications which apply carefully chosen thresholds to specific hyperspectral data attributes. More specifically, embodiments of the calibration system and method of the present invention are configured to enable an HSI camera system to operate robustly in one or more of the following situations: changes in lens settings; switches of HSI camera system or optics; altering light conditions, e.g. caused by decaying lighting systems; and/or variable operational conditions, e.g. changes in environmental temperature.
  • A hyperspectral imaging system may include one or more active pixel sensors, otherwise referred to herein as a semiconductor pixel array.
  • Each of the one or more active pixel sensors includes Fabry-Perot filters mounted on a semiconductor pixel array or deposited directly onto the pixel array.
  • The semiconductor pixel array including the one or more Fabry-Perot filters may be formed to have any capture and filter arrangement, including but not limited to a line-scan arrangement, a snapshot tiled arrangement, and a snapshot mosaic arrangement, using techniques including those known in the art.
  • The hyperspectral imaging system may also include one or more lenses used to project an image on each of the one or more semiconductor pixel arrays.
  • Alternatively, one lens may be used to project an image on more than one semiconductor pixel array.
  • The hyperspectral imaging system may also include one or more filters and other components for capturing images.
  • The hyperspectral imaging system is configured to generate one or more images or raw output data including five or more spectral bands, each having a nominal spectral bandwidth of less than 100 nm; for example, 8 bands of less than 50 nm, 10 bands of less than 40 nm, 20 bands of less than 30 nm, 40 bands of less than 20 nm, or more than 50 bands each with less than 15 nm nominal bandwidth. Note that such bands may be adjacent, or may be selected from a plurality of bands based on a careful selection of the most important bands identified during a classification training process.
  • In some embodiments, a hyperspectral imaging system is configured to generate one or more images or raw output data including five or more spectral bands having a spectral bandwidth of up to 1000 nanometers (nm).
  • An embodiment of an apparatus for calibrating a hyperspectral imaging system includes an evaluation system configured to generate calibration information for the hyperspectral imaging system, the calibration information generated based on at least two or more calibration parameters, including: a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array; a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array; a spatial modulation compensation parameter based on a response variation of pixels positioned under a filter having the nominal central frequency; and a spectral modulation parameter based on a pixel response variation due to light wavelengths outside the nominal width of the spectral filter in front of that pixel in the semiconductor pixel array, using techniques including those described herein.
  • Such calibration can be done at different levels of the hyperspectral imaging system, and not necessarily at the very end using the complete final system.
  • For example, the calibration can be done at the (1) wafer level, (2) chip level, (3) chip + PC board level, or (4) chip + PC board + lens (full system) level.
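As an illustration of how the first three calibration parameters might be chained when correcting raw data, consider the sketch below. The function names, the ordering shown, and the identity gain model are assumptions for illustration only; the patent does not prescribe an implementation:

```python
def apply_calibration(raw, dark, inverse_gain, spatial_modulation):
    """Chain three of the calibration parameters for a single pixel:
    1) subtract the dark current predicted for the current temperature and
       integration time, 2) undo the non-linear gain, and 3) divide out the
    spatial modulation for the pixel's position under its filter.
    The spectral modulation correction operates across bands and would be
    applied afterwards to whole band vectors, so it is omitted here."""
    linearized = inverse_gain(raw - dark)       # steps 1 and 2
    return linearized / spatial_modulation      # step 3

# Toy numbers: dark level of 10 counts, identity gain, 2% spatial gain excess.
value = apply_calibration(112.0, 10.0, lambda x: x, 1.02)
```

With these toy numbers the corrected value is 102 / 1.02 = 100 counts.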
  • Dark current is an electrical effect that causes a semiconductor pixel array to return non-zero digital values for each pixel even when there is no light (i.e. no external electromagnetic radiation in the spectral range of the sensor) reaching the sensor.
  • A conventional approach to measuring the dark current involves covering the lens with a cap to prevent any light from entering it, acquiring a series of digital images, and averaging the values of each pixel to produce an average dark reference image. Using a conventional approach, such as the approach implemented in the IMEC Evaluation System software, this dark reference is then subtracted from the actual raw data for each image. Based on measurements with actual sensors from IMEC, the inventors have concluded that a novel and more comprehensive approach is necessary. Embodiments of the invention assume not only that the dark current varies across the area of the sensor, but also that its magnitude varies as a function of the sensor's temperature and integration time.
  • Dark current can be measured by taking images when the lens is blocked with a metal lens cap. To eliminate the effect of noise, many images can be taken successively, and averaged on a pixel-by-pixel basis. The effect of temperature and integration time variation on dark current can be directly measured by taking dark frames at a sparse selection of temperature- integration time points. Bilinear interpolation of the resulting lookup table would result in highly accurate dark current predictions. However, the required size of the lookup table, which is typically stored in non-volatile memory inside the camera or camera system, could be too large in practice.
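A bilinear lookup in such a table can be sketched as follows. This is a minimal illustration with hypothetical grid values; a real table would hold one entry per pixel per temperature-integration time grid point:

```python
def bilinear_dark(lut, temps, times, T, t):
    """Bilinearly interpolate a dark-current lookup table sampled at a
    sparse grid of temperatures (`temps`) and integration times (`times`).
    `lut[i][j]` is the averaged dark value recorded at temps[i], times[j]."""
    # Locate the grid cell bracketing the query point (T, t).
    i = max(k for k in range(len(temps) - 1) if temps[k] <= T)
    j = max(k for k in range(len(times) - 1) if times[k] <= t)
    u = (T - temps[i]) / (temps[i + 1] - temps[i])
    v = (t - times[j]) / (times[j + 1] - times[j])
    # Weighted average of the four surrounding table entries.
    return ((1 - u) * (1 - v) * lut[i][j] + u * (1 - v) * lut[i + 1][j]
            + (1 - u) * v * lut[i][j + 1] + u * v * lut[i + 1][j + 1])

# Dark current for one pixel sampled at 20/30 degrees C and 1/10 ms:
lut = [[4.0, 8.0],
       [6.0, 12.0]]
d = bilinear_dark(lut, [20.0, 30.0], [1.0, 10.0], 25.0, 5.5)
```

Here the query point lies at the center of the grid cell, so the prediction is the average of the four recorded values, 7.5 counts.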
  • The storage space requirements can be reduced by replacing the per-pixel lookup tables with a polynomial model.
  • In practice, a third-order polynomial for each pixel proved sufficient for reliable dark current predictions.
  • For each pixel, a separate polynomial model is fitted through its recorded data.
  • To validate the model, the dark current response is measured again at several temperature-integration time points, so that the validation data is generated independently from the training phase.
  • At each recorded temperature-integration time point, the difference is determined between (a) the actual recorded pixel response and (b) the response predicted by the polynomial model for that pixel.
  • Figures 2 to 5 illustrate the magnitude of these differences across the sensor. Each figure shows the entire image sensor array, in which each pixel is rendered as either white or black. For a selected threshold value, if the difference between the actual recorded pixel value and the predicted value is less than that threshold for all temperature-integration time points for that pixel, it is rendered as a white pixel in the figure. If not, it is rendered as a black pixel.
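The white/black validation renderings described above can be reproduced, in principle, with a thresholding pass of the kind sketched below. This is a simplified illustration using plain Python lists; the real frame sizes and data layout are not specified here:

```python
def validation_mask(actual, predicted, threshold):
    """For each pixel, check whether |actual - predicted| stays below the
    threshold at *every* recorded temperature/integration-time point.
    `actual` and `predicted` map (temp, time) -> 2-D frame (list of rows).
    Returns a frame-sized mask: True (white) where the polynomial model is
    accurate within the threshold everywhere, False (black) elsewhere."""
    points = list(actual)
    rows, cols = len(actual[points[0]]), len(actual[points[0]][0])
    mask = [[True] * cols for _ in range(rows)]
    for p in points:
        for r in range(rows):
            for c in range(cols):
                if abs(actual[p][r][c] - predicted[p][r][c]) >= threshold:
                    mask[r][c] = False
    return mask

# Two validation points over a toy 1x3 sensor strip:
actual    = {(20, 1): [[5.0, 5.0, 5.0]], (30, 5): [[9.0, 9.0, 9.0]]}
predicted = {(20, 1): [[5.2, 5.6, 5.0]], (30, 5): [[9.1, 9.0, 7.0]]}
mask = validation_mask(actual, predicted, 0.5)
```

Only the first pixel stays within 0.5 counts at both points, so only it would be rendered white.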
  • Accordingly, various embodiments are configured to generate a dark-current compensation parameter based on an integration time and a temperature response of the semiconductor pixel array.
  • Non-linearity is an electrical effect that causes the response (i.e. digital output) of a sensor pixel to be non-linearly related to the amount of incident light reaching it.
  • Embodiments of the systems and methods described herein measure and take into account additional variables leading to a greater amount of non-linearity than is accounted for by current systems. For example, a non-linear effect is present across the full dynamic range (and not only in the range of the most significant bit of each pixel), and this effect varies with sensor temperature and integration time.
  • Embodiments of the invention use monochromatic light and optical filters interposed between the light source and the sensor to create pairs of measurements (e.g., digital readings from each pixel) from incident light beams that are in a precisely known intensity ratio.
  • Specifically, embodiments are configured to measure sensor non-linearity by taking image pairs of monochromatic light at a wide range of unknown light intensities. The difference between the first and second image of each pair is that the second image is taken with light at a precisely known intensity fraction of the first image.
  • The known intensity fraction can be generated by placing an optical filter with precisely known transmittance at the monochromatic wavelength in front of the light source.
  • The monochromatic light can be generated with a broadband light source filtered through a narrowband filter.
  • For example, such a calibration system is configured to use narrowband filters with a FWHM (full width at half maximum) of less than 3 nm.
  • A narrowband filter is mounted in front of the camera such that all of the light entering the camera is of the same monochromatic wavelength, or close to it.
  • The system is configured to prevent the camera from observing light that has not passed through the known optical filter in front of the light source. For example, this can be achieved with adequate shielding of stray light and/or by recording an extra image in which only the stray light is measured.
  • The light is measured without a lens, from a sufficient distance to ensure that the light rays transmitted through the narrowband filter are approximately parallel. This is necessary because the central wavelength of optical narrowband filters varies with the angle of incident light.
  • In the following, I0 and I1 represent the (dark-current-subtracted) pixel responses without and with the additional known optical filter;
  • B is the response to the stray light alone;
  • T is the known filter transmittance at the wavelength of the monochromatic light; and
  • p represents the parameter vector of the non-linearity correction function f, chosen so that, after correction, the filtered and unfiltered responses are in the known ratio: f(I1 - B; p) = T * f(I0 - B; p).
  • Based on these measurements, the system is configured to calculate a non-linear compensation parameter based on a non-linear gain over at least a portion of a transduction dynamic range of the semiconductor pixel array.
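To make the estimation concrete, the sketch below assumes a one-parameter quadratic correction function f(x; p) = x + p*x**2. The patent does not fix the form of the correction function, so this parameterization and all numbers are purely illustrative. With this form, the requirement that corrected responses satisfy f(I1 - B; p) = T * f(I0 - B; p) is linear in p, so a closed-form least-squares estimate exists:

```python
def estimate_nonlinearity(pairs, T, B=0.0):
    """Estimate p in the assumed correction f(x; p) = x + p*x**2 from image
    pairs (I0, I1), where I1 was taken through a filter of known
    transmittance T and B is the stray-light response (zero in this toy).
    The residual f(I1-B; p) - T*f(I0-B; p) is linear in p, giving a
    closed-form least-squares solution."""
    num = den = 0.0
    for i0, i1 in pairs:
        x0, x1 = i0 - B, i1 - B
        a = x1 - T * x0                  # residual if the sensor were linear
        b = x1 ** 2 - T * x0 ** 2        # sensitivity of the residual to p
        num += a * b
        den += b * b
    return -num / den

# Synthetic pairs from a sensor whose true response is r = x - 0.0001*x**2:
def sensor(x):
    return x - 0.0001 * x ** 2

pairs = [(sensor(x), sensor(0.5 * x)) for x in (100.0, 400.0, 800.0)]
p = estimate_nonlinearity(pairs, T=0.5)
```

On this synthetic data the estimate recovers a positive p of roughly the same order as the simulated non-linearity coefficient; since a quadratic correction only approximately inverts a quadratic distortion, the fit is least-squares rather than exact.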
  • Figure 6 illustrates filter transmission approximation for a semiconductor pixel array indicating the non-linearity of the sensor before application of a non-linear compensation parameter to the raw data generated by the semiconductor ("Original") and for the corrected output that applies a non-linear compensation parameter to the raw data to compensate for the non-linear gain response of the semiconductor pixel array.
  • The non-linearity estimation results illustrated in Figure 6 were obtained at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm, using a system according to embodiments described herein.
  • Figure 7 illustrates the estimated non-linear gain determined using an evaluation system as described herein at a sensor temperature of 30 degrees Celsius for a monochromatic light signal at 785 nm.
  • A non-linear correction can reduce the variance due to this non-linear effect by a factor of 3 to 5.
  • Because the non-linearity varies with sensor temperature, embodiments of the evaluation system are configured to generate a non-linear compensation parameter, such as a correction function, that varies accordingly to compensate the raw data.
  • The evaluation system is configured to repeat the measurements at a plurality of pre-defined sensor temperatures to generate one or more non-linear compensation parameters, and to use interpolation techniques to correct images (the raw data) taken at different temperatures.
  • A similar technique can be used to accommodate a range of different integration times.
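For instance, linear interpolation between parameters estimated at neighboring calibration temperatures is one plausible realization; the patent leaves the specific interpolation technique open, and the values below are illustrative:

```python
def interpolate_params(params_by_temp, T):
    """Linearly interpolate a non-linear compensation parameter estimated
    at a sparse, sorted set of sensor temperatures.  `params_by_temp` is a
    list of (temperature, parameter) tuples in ascending temperature order;
    outside the calibrated range the nearest endpoint is used."""
    temps = [t for t, _ in params_by_temp]
    if T <= temps[0]:
        return params_by_temp[0][1]
    if T >= temps[-1]:
        return params_by_temp[-1][1]
    for (t0, p0), (t1, p1) in zip(params_by_temp, params_by_temp[1:]):
        if t0 <= T <= t1:
            w = (T - t0) / (t1 - t0)    # fractional position in the interval
            return (1 - w) * p0 + w * p1

# Parameters estimated at 20, 30, and 40 degrees C; query at 25 degrees C:
p25 = interpolate_params([(20.0, 1.0e-4), (30.0, 1.4e-4), (40.0, 2.0e-4)], 25.0)
```

The same one-dimensional scheme extends naturally to integration time as a second interpolation axis.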
  • Figure 8 illustrates an embodiment of an evaluation system including a proportional- integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels.
  • PID proportional- integral-derivative
  • Such an embodiment includes an intensity reduction filter configured to be moved in and out automatically (e.g., a filter wheel), such as under the control of a processor or computer, to gather image pairs at two light intensities, of which the second intensity level is a known fraction of the first.
  • the narrow band pass filter is configured to automatically switch between several different filter bands (e.g., filter wheel) under the control of a processor or computer to measure the camera responses at different filter bands.
  • the evaluation system also includes a halogen light DC controller under the control of a processor or computer to gather image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired response range, such as the entire response range of the semiconductor pixel array.
  • a PID temperature controller under the control of a processor or computer to automatically sample the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array.
  • a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system.
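  The image-pair principle above (the second exposure a known fraction of the first) can be sketched numerically. This is a hedged illustration only: a simple power-law non-linearity is assumed purely for demonstration, and the values are synthetic, not measurements from the described apparatus:

```python
import numpy as np

# For a perfectly linear sensor, response2 / response1 equals the known
# intensity fraction k. Here a power-law non-linearity r = I**gamma is
# assumed for illustration, and gamma is recovered from many image pairs.
k = 0.5                                    # known intensity fraction
true_gamma = 0.9                           # simulated sensor non-linearity
base_intensities = np.linspace(0.1, 1.0, 10)

r1 = base_intensities ** true_gamma        # raw responses at full intensity
r2 = (k * base_intensities) ** true_gamma  # raw responses at reduced intensity

# Each pair gives log(r2/r1) = gamma * log(k); average over all pairs.
gamma_hat = np.mean(np.log(r2 / r1)) / np.log(k)

def linearize(raw):
    """Apply the estimated non-linear compensation to raw data."""
    return raw ** (1.0 / gamma_hat)

# After correction, the pair ratio matches the known fraction k.
ratio = linearize(r2[0]) / linearize(r1[0])
```

  In practice the correction curve would be tabulated over the full response range from many base intensity levels, as the passage describes, rather than fitted to a single parameter.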
  • a hyperspectral imaging (“HSI”) system includes but is not limited to, one or more of any of a semiconductor pixel array, a lens, a filter, a housing, and other optical or electrical components.
  • Figure 9 illustrates an embodiment of an evaluation system including a proportional-integral-derivative ("PID") controlled Peltier element used to maintain the sensor temperature at the desired levels and an integrating sphere.
  • the evaluation system is configured to reduce the intensity of a light source, such as Halogen light, using a DC controller.
  • the evaluation system is configured to measure the reduced intensity by a spectrometer to gather raw data of the hyperspectral imaging system for image pairs for at least two light intensities, of which the second intensity level is a known fraction of the first intensity level.
  • the evaluation system is configured to use many image pairs at different base intensity levels to compute a non-linear gain correction curve over a desired portion of the response range, such as the entire response range of the HSI system.
  • the evaluation system also includes a monochromator configured to be under control of a processor or a computer to measure the HSI system responses at one or more filter bands. Further, the evaluation system includes a PID temperature controller under the control of a processor or computer to automatically sample the raw data of a semiconductor pixel array over a temperature range, such as the entire operating temperature range of the semiconductor pixel array.
  • a control program on a controller or a computer can cycle automatically through all relevant settings (e.g., temperature, exposure time (integration time), recording mode), and can at each setting record all raw data for the image pairs used to compute the non-linear gain correction function over a desired response range of a hyperspectral imaging system.
  • the evaluation system is configured to use a non-monochrome light source to illuminate spectrally uniform diffuse grey reflectance targets (i.e., with a known, flat spectral reflectance). These targets could be observed with a lens mounted on the HSI system, and all pixels imaging the targets could then be organized in intensity tuples (with as many entries as reflectance targets). This has the potential to greatly simplify the optical measuring set-up.
  • the reflectance properties of the targets need to be known with sufficient accuracy.
  • Embodiments of the present invention are based on the experimental observation that a semiconductor pixel array, such as a CMOS sensor array, with Fabry-Perot filters exhibits a very significant response modulation between sensor pixel columns (as much as a 20% variance between columns on the left and on the right of the midline), and that such variation can be corrected by measuring the M_1 modulation map and applying the required correction to the images.
  • the response variation across each row is measured while observing the same physical point.
  • This measurement is performed by taking a series of images of an illuminated white tile, with the camera moving sideways on a computer-controlled linear translation stage.
  • the translation speed and the geometry of the translation stage are selected so that the modulation map can be straightforwardly computed by dividing each measurement by its corresponding measurement on the central column.
  • the measurement is repeated for different physical points, and the computation of the modulation map is performed by estimating the optimal correction factor per pixel for mapping its response onto the corresponding average of the N central columns using linear least squares.
  • the N central columns should correspond with those columns used to measure the response composition matrix for the specific CMOS sensor (see definition of response composition matrix described herein).
  • the reflectance of the white tile is approximately Lambertian.
  • the translation speed and camera frame rate are precisely tuned so that the image of the white tile moves horizontally over a fixed integer number of pixels between any two consecutive images.
  • the optical axis of the camera is perpendicular to the white tile surface.
  • the translation is parallel with the sensor rows.
  • the lens distortion is either negligible or corrected for.
  • the integration time of the camera is the same for all frames.
  • the illumination spectrum and intensity remains the same throughout the scanning process.
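  The modulation-map computation described above (dividing each measurement by its corresponding measurement on the central column) can be sketched as follows. The array sizes and modulation values are illustrative placeholders, not real sensor data:

```python
import numpy as np

# Minimal sketch of the M_1 modulation-map computation: each pixel's response
# to the same physical point is divided by the corresponding measurement on
# the central column.
rows, cols = 4, 7
true_modulation = np.linspace(0.8, 1.2, cols)           # varies across columns
flat_scene = 100.0                                      # uniform white tile
measurements = flat_scene * np.tile(true_modulation, (rows, 1))

center = cols // 2
m1 = measurements / measurements[:, center:center + 1]  # ratio to central column

def correct(image):
    """Divide by the modulation map to compensate column-dependent response."""
    return image / m1
```

  Applying `correct` to the measurements of a uniform scene recovers a flat image, which is the goal of this calibration step.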
  • a frame filling uniform light source (e.g. generated by an integrating sphere) can be observed. This entails the following assumptions: the spectrum and intensity of the light source is the same across the camera's field of view.
  • Figure 10 illustrates an estimated modulation map M_1.
  • the modulation ranges between 0.80 and 1.19. This suggests that an HSI system without calibration information applied to its raw output may be distorted by as much as 20 percent at the left or right end of the sensor.
  • Figure 11 illustrates reconstructions (up to scale) of identical light spectra using a HSI system without compensation for modulation between sensor columns. Clearly, the reconstructed spectrum varies strongly across the sensor.
  • Embodiments of the invention use images to measure the reflectance from a white tile that is approximately Lambertian; multiple points on the tile are observed and measured as the linear translation stage moves the camera at a speed and frame rate precisely selected so that the images move vertically over a fixed integer number of pixels between consecutive frames.
  • the reflectance of the white tile is approximately Lambertian.
  • the translation speed and camera frame rate are precisely tuned so that the image of the white tile moves vertically over a fixed integer number of pixels between any two consecutive images
  • the optical axis of the camera is perpendicular to the white tile surface.
  • the translation is parallel with the sensor columns.
  • the lens distortion is either negligible or corrected for.
  • the spectrum of the light source is the same across the camera's field of view.
  • the intensity of the light source is constant across the camera's field of view
  • corresponding responses in IMEC's response composition matrix calibration set-up can be expressed as a sensor row-dependent scaling on the central sensor columns.
  • the measurements are processed by the software according to the following methodology:
  • b indicates the spectral band on the sensor for which we describe the response
  • f(b, λ) is the spectral response function of the sensor band
  • M_2(b) is the intensity modulation induced by the camera-lens system at the location of the band.
  • f(b) is the Q-element row vector on the row of the response composition matrix corresponding with the observed band
  • T_L ⊗ L is the Q-element row vector obtained by point-wise multiplying the lens transmittance vector with the light spectrum.
  • I_i(b) = M_2(b) (f(b) · (T_L ⊗ T_i ⊗ R ⊗ L)), where R is the spectral reflectance vector of the physical point on the white tile. Given that we know R (for a white tile, this should be approximately constant across the spectrum), we obtain the same non-linear optimization problem as described above for the uniform light source case. In practice, we average the measurements over many physical points of the white tile to improve the signal-to-noise ratio.
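  The response model above, I_i(b) = M_2(b)(f(b) · (T_L ⊗ T_i ⊗ R ⊗ L)) with ⊗ as point-wise multiplication over Q wavelength samples, can be sketched numerically. All vectors below are illustrative placeholders, not calibrated quantities:

```python
import numpy as np

# Toy evaluation of the per-band response model; Q wavelength samples.
Q = 5
f_b = np.array([0.0, 0.2, 0.6, 0.2, 0.0])   # band response row vector f(b)
T_L = np.full(Q, 0.9)                        # lens transmittance vector
T_i = np.full(Q, 0.5)                        # filter transmittance vector
R   = np.full(Q, 1.0)                        # white-tile reflectance (flat)
L   = np.full(Q, 2.0)                        # light spectrum
M2_b = 1.1                                   # spatial modulation at this band

# Point-wise products inside the parentheses, then a dot product with f(b).
I_b = M2_b * np.dot(f_b, T_L * T_i * R * L)
```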
  • the transmission curve of the front glass was derived from accurate spectral radiant flux measurements of our halogen lamp, measured by
  • Figure 12 shows the reconstructed light source spectrum with and without the M_2 modulation correction, demonstrating the importance of this calibration step and of the correction applied as a result of it.
  • Figure 12 illustrates measured emission spectra of the halogen light source in the horizontal and vertical orientation. These measurements were used for estimating the transmittance of the front glass element, and are accurate up to 0.67 percent.
  • Figure 13 illustrates the estimated halogen light front glass transmittance spectrum.
  • Figure 14 illustrates the typical transmission spectrum of soda-lime glass provided for reference (from Wikipedia).
  • Figure 15 illustrates the estimated modulation M_2.
  • Figure 16 illustrates a light source spectrum reconstruction with and without M_2 modulation correction. In both cases the spectral correction matrix computed by IMEC was used for spectral correction.
  • an evaluation system is configured to use an image-side telecentric lens, or direct measurement of the response composition matrix of the HSI system using a monochromator set-up. Further the evaluation system is configured to avoid possible inaccuracies inherent in the optical filter-based measuring set-up by replacing the filters with observation of diffuse reflectance targets with accurately known spectral reflectance.
  • when all pixels of the same filter group receive the same uniform spectral radiance, changing the spectral content of this uniform radiance does not change the ratio of the responses between any two pixels belonging to the same filter group.
  • the spatial variation between pixels of the same filter group can then be modeled as only a variation in magnitude of response which needs to be calibrated.
  • Reasons for this can e.g. be:
  • Pre-calibration is available but not valid for all pixels, e.g. only a part of the sensor has been pre-calibrated and the other pixels were assumed to behave similarly but do not, for instance because the spectral quantum efficiency curve per pixel changes depending on its location on the sensor.
  • Differences in the spectral content received by pixels nominally designed to receive the same spectral content, e.g. because the lens introduces chromatic aberrations, causes light to have a different angle of incidence for pixels at different locations (shifting the spectral filtering), or causes location-dependent multiple reflections between sensor and lens.
  • Magnitude calibration comprises two consecutive parts, specifically:
  • B denotes the values obtained by applying the pre-calibrated spectral response curves to the input light spectra.
  • Inter-filter calibration determines these differences in magnitude in order to correct B so that it matches A, the real recorded responses.
  • Intra-filter calibration is also provided in some embodiments. Under all prior assumptions and previously performed calibrations (dark current + non-linearity), pixels within the same filter group, but lying at different locations on the sensor, will now give the same response value up to an unknown scale. The goal is to determine these scale differences in order to obtain completely identical responses for all pixels of the same filter group when an object is viewed by the system, irrespective of where those pixels lie on the sensor. Therefore the physical setup below is designed to provide all pixels within the same filter group with the same spectral radiance. This can be achieved in several ways as described below.
  • the camera system is physically translated with respect to a calibration target e.g. a white tile.
  • the calibration target is designed to reflect light as a Lambertian surface such that from a point p on this calibration target, the same spectral radiance is emitted in all directions.
  • the manner of translation therefore depends on the layout of the pixels. E.g. if all pixels of a single filter group are within the same sensor row then the translation direction will be parallel to this row such that the same calibration point p is seen consecutively by all pixels on the same row.
  • the translations must also be according to the directions of this grid and in sync with the grid spacing such that the image of the same physical calibration target point visits all pixels of that filter group.
  • the camera system is looking into an integrating sphere.
  • the purpose of the integrating sphere is to present very low spatial variation of emitted radiance across the entire field of view of the camera system. As such, all pixels of the same filter group can be assumed to receive the same spectral radiance. Such a system would not require translations but puts more stringent constraints on the spatial uniformity required of the integrating sphere.
  • Inter-filter calibration is provided. After all prior calibrations, all pixels within the same filter group should now behave identically. Also assumed is that the spectral response curve of each of these groups has been pre-calibrated, e.g. by the manufacturer, and that these spectral response curves are available.
  • the inter-filter calibration takes care of changes in scale which can be seen between actual recorded values of each filter group and those predicted by the pre-calibration when viewing input light spectra. These differences in scaling can e.g. be due to the vignetting of the lens or other changes which have been introduced into the system after the pre-calibration.
  • the changes in scale can be determined directly by comparing the recorded response of each filter group with those predicted by applying the pre-calibrated spectral response curves of each filter group to the known input light spectrum. This would require prior knowledge of the spectrum emitted by a calibrated light source and the spectral response curve of each additional element introduced into the path between light source and the pre-calibrated system, such as e.g. a lens which was added or changed. If prior calibration data is available for all elements, this approach is feasible.
  • Shape of the input light spectrum is determined by fewer parameters, P, than there are filter groups, N.
  • P is the number of parameters per light source
  • Making K recordings, each at a different setting of the light source, we then obtain KxN recorded filter group responses and have N+KxP unknowns.
  • the unknowns come from the N unknown scales to calibrate for and the KxP unknowns coming from the different setting of the light source per recording.
  • the equations to retrieve the unknowns can then be solved as soon as we have enough independent recordings (e.g., halogen lights driven at different current levels).
  • Shape of the input light is unknown but can be modeled with P parameters which are kept fixed during K recordings; between each recording an object is placed in the path between the light source and the camera system, of which the introduced spectral change is known a priori.
  • a set of filters with known calibrated transmittance curves, or a set of diffuse calibrated Lambertian reflectance tiles.
  • the number of recorded filter group responses is KxN, but the number of unknowns now remains fixed at N+P.
  • N for the unknown scales to calibrate for, and P to model the unknown fixed light source.
  • the physical embodiment comprises the same elements as the embodiment described for intra-filter calibration, but with the following changes:
  • Alternative embodiments are further provided, comprising the same elements as the current embodiment but with different known reflective tiles between recordings, or a combination of this with known filters.
  • the alternative embodiment described above may be used for intra-filter calibration, but in which variation between the K recordings is introduced by inserting K different known filters somewhere in the light path, e.g. between light source and integrating sphere at the entrance port of the integrating sphere, or e.g. in front of the camera lens.
  • the envisioned embodiment is to use an integrating sphere + monochromator + spectrometer/photodiode to perform full spectral calibration.
  • the purpose of the integrating sphere is to provide the camera system with as spatially uniform a spectrum as possible at all pixels on the sensor.
  • the purpose of the monochromator is to scan through the wavelength range and determine the response of each pixel individually for each wavelength in the range.
  • the purpose of the spectrometer/photodiode is to look inside the integrating sphere in order to know the spectral radiance of the light being directed towards the camera system. This knowledge, together with the recorded responses for each individual pixel, allows determining the spectral response curve for each individual pixel.
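  The per-pixel recovery just described reduces to dividing each pixel's recorded response by the known sphere radiance at each monochromator wavelength. A minimal sketch with purely illustrative numbers:

```python
import numpy as np

# The monochromator steps through wavelengths while the spectrometer reports
# the radiance inside the integrating sphere; dividing the pixel reading by
# the known radiance recovers the pixel's spectral response curve.
wavelengths = np.arange(600, 605)                       # nm, toy range
sphere_radiance = np.array([2.0, 2.1, 2.2, 2.1, 2.0])   # from spectrometer
true_response = np.array([0.1, 0.4, 0.9, 0.4, 0.1])     # per-pixel, unknown

pixel_readings = true_response * sphere_radiance        # what the camera records
estimated_response = pixel_readings / sphere_radiance   # recovered curve
```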
  • the response composition matrix (or F-matrix) describes the response of each of the hyperspectral camera's bands to monochromatic light at the full range of wavelengths to which the sensor is sensitive. It is a N X Q matrix, in which each of the N rows corresponds with a specific band of the sensor, and each column corresponds with a certain wavelength. Typically, there are around 105 bands for an IMEC line-scan sensor, and the wavelengths cover the range from 400 to 1000 nm in 1 nm steps. Right-multiplying the F-matrix with a column vector representing a light spectrum produces a N-dimensional vector representing the expected response of each of the sensor's bands to the light.
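  The right-multiplication described above can be sketched with a toy F-matrix (dimensions reduced from the roughly 105 x 601 of the real sensor to keep the example readable; the entries are hypothetical):

```python
import numpy as np

# Toy response composition matrix: N = 3 bands, Q = 4 wavelength samples.
F = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],    # a band with a double peak
              [0.0, 0.0, 0.0, 1.0]])
spectrum = np.array([1.0, 2.0, 3.0, 4.0])   # light spectrum as a Q-vector

# Right-multiplying F by the spectrum predicts each band's response.
expected_band_responses = F @ spectrum
```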
  • IMEC has measured the F-matrix by exposing the naked sensor (without the camera enclosure and additional bandpass filter) to a monochromatic light beam from a light source (produced by a monochromator). Sensor readings (images) were taken at each of the
  • Figure 17 illustrates a graphical representation of the composition matrix for a sensor as measured by IMEC. Response curves for different sensor bands are plotted in different colors. The composition matrix was measured without an additional 600 - 1000 nm bandpass filter.
  • Figure 18 illustrates a close-up of a spectral region of the composition matrix where a discontinuity artifact is present.
  • Figure 19 illustrates a composition matrix after application of calibration information such as a spatial modulation compensation parameter generated using techniques including those described herein.
  • Spectral correction refers to the process of correcting a spectral signal captured by a non-ideal filter bank (i.e., filters consisting of non-ideal peaks or multiple peaks). Since the response of the filter bands on Ximea's hyperspectral cameras with the IMEC sensor strongly exhibits such imperfections, spectral correction is necessary to obtain reliable estimates of the incoming light spectrum.
  • Let N_s and N_v be the number of sensor filters and the number of virtual filters, respectively.
  • Let F_s (N_s x Q) and F_v (N_v x Q) be the response composition matrices of the sensor filters and virtual filters, respectively. Construct a new N_v x N_s correction matrix as follows:
  • the Wiener optimization can be tuned to optimally reconstruct plant spectra by using a training set that provide a range of plant spectra, or it can be tuned to optimally reconstruct the spectra of brain tissues by using a training set that provides a range of spectra typically found in such tissues.
  • the covariance matrix K_c can be modeled as a first-order Markov process covariance matrix, which depends on the correlation parameter ρ ∈ [0,1] between neighboring values of the spectrum. This model essentially assumes that the spectra that need to be corrected belong to the family of smooth curves.
  • K_c can be computed as the second order moment matrix (i.e. covariance matrix about zero) of a set of training spectra.
  • the training spectra should be selected as representative for the intended application (e.g., vegetation and soil reflectance spectra illuminated by sunlight for agricultural applications).
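  A hedged sketch of a Wiener-style correction built from these ingredients follows. The filter matrices, noise level, and ρ below are small illustrative placeholders, not the patent's actual values, and the correction formula is one standard Wiener pseudo-inverse form consistent with the description, not necessarily the exact matrix the inventors construct:

```python
import numpy as np

# First-order Markov covariance: K_c[i, j] = rho ** |i - j|.
rho, Q = 0.95, 6
K_c = rho ** np.abs(np.subtract.outer(np.arange(Q), np.arange(Q)))

F_s = np.eye(3, Q)          # 3 toy sensor filters (ideal single peaks)
F_v = np.eye(3, Q, k=1)     # 3 virtual filters with shifted peaks
noise = 1e-3 * np.eye(3)    # regularizing noise covariance

# Wiener pseudo-inverse mapping sensor responses to virtual-filter responses.
W = F_v @ K_c @ F_s.T @ np.linalg.inv(F_s @ K_c @ F_s.T + noise)

# Apply the N_v x N_s correction matrix to a set of sensor responses
# produced by a flat (smooth) spectrum.
corrected = W @ (F_s @ np.ones(Q))
```

  For a smooth input spectrum the corrected values approximate the virtual-filter responses, which is exactly the smoothness assumption the Markov prior encodes.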
  • Figure 20 illustrates an example of spectral correction of a synthetic vegetation spectrum with IMEC's method and our Wiener pseudo-inverse without training on example spectra. Notice the significant differences above 875 nm.
  • one camera may be used to measure the illumination spectrum, while the other camera measures the spectrum reflected from a sample.
  • it is then necessary to apply white balancing to the measurements of the second camera, based on the illumination spectrum measured by the first camera.
  • An interesting practical problem is the transfer of spectral measurements from a camera with mounted diffuser system to a regular hyperspectral camera with a lens.
  • the camera with diffuser may be configured in an upwards viewing orientation to sample the hemispherical illumination spectrum, while the camera with lens may be used in a downward orientation to measure the light reflected from a sample.
  • intensity level calibration may be achieved by comparing a spectrally calibrated measurement of an illuminated white tile produced by the camera-lens system to a calibrated illumination spectrum measurement with the diffuser system placed at the position of the white tile, facing the light source.
  • the scale factor between both cameras can be straightforwardly deduced from the intensity difference between both measurements.
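  The two-camera transfer described above can be sketched as follows. All spectra are hypothetical three-band placeholders; the point is only the division-and-scale structure of the calibration:

```python
import numpy as np

# Camera A (diffuser, upward) measures the illumination spectrum; camera B
# (lens, downward) measures reflected light. A one-time scale factor between
# the cameras is deduced from a spectrally calibrated white-tile measurement.
illumination = np.array([10.0, 20.0, 30.0])    # camera A with diffuser
white_tile_by_B = np.array([4.0, 8.0, 12.0])   # camera B viewing the white tile

# Scale factor between both cameras, from the intensity difference.
scale = np.mean(white_tile_by_B / illumination)

# White-balanced reflectance of an arbitrary sample seen by camera B.
sample_by_B = np.array([2.0, 4.0, 3.0])
reflectance = sample_by_B / (scale * illumination)
```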
  • the calibration process steps are performed sequentially (for spatial variation calibration there is a split in the process). Each step generates its own calibration data, which is already employed in the further calibration steps. In the end, all generated calibration data is tagged for traceability, digitally signed for authenticity, and further compressed to minimize footprint for ease of exchange or storage in non-volatile memory of the camera system.
  • a simple compression technique consists of subsampling the dimensions, in essence throwing part of the data away and relying on interpolation to represent the discarded data accurately enough.
  • a main aim of compression is to have a small memory footprint which is useful when storing this data in non-volatile memory of a camera system or when exchanging this data as files.
  • when the data is used, it will be decompressed (and perhaps even expanded beyond its original size) to a format which consumes more memory but is more efficient to process.
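  The subsample-then-interpolate scheme just described can be sketched in a few lines. The calibration curve here is a synthetic linear placeholder, chosen so the reconstruction is exact; real calibration curves would incur a small, bounded interpolation error:

```python
import numpy as np

# Compress a calibration curve by keeping every k-th sample, then
# decompress by interpolation when the data is needed for processing.
x = np.arange(0, 100)
curve = 0.5 * x + 3.0                  # smooth (here linear) calibration curve

k = 10
x_kept, y_kept = x[::k], curve[::k]    # compressed representation (10x smaller)

decompressed = np.interp(x, x_kept, y_kept)    # expand back for use
# Compare only up to the last kept sample (np.interp clamps beyond it).
max_error = np.max(np.abs(decompressed[:91] - curve[:91]))
```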
  • the calibration usage software will verify signature and camera system IDs to guarantee authenticity and extract traceability data which can be added to all processed image content such that it is traceable.
  • since the electronic camera device can be queried, it is possible to automatically verify its ID and its settings to ensure that the calibration data is valid for this camera and its settings.
  • Some elements in the camera system are passive and cannot be automatically queried for their actual settings.
  • the lens used by an industrial camera is often a passive device, but e.g. its aperture setting can influence how the calibration is performed. Since these settings cannot be automatically verified, they have to be explicitly set and confirmed by the user.
  • Traceable calibrated image content can be generated without additionally passing it through the spectral correction step.
  • the traceable calibrated image content is independent of the application, whereas the spectrally corrected data can be application specific, since spectral correction can use prior knowledge of the typical spectra to expect from the objects seen in the data. This is one main reason for making a distinction between both types of generated data.
  • Figures 21A and 21B show spectral measurement of Tungsten-Halogen light before (Fig. 21A) and after (Fig. 21B) camera calibration performed according to embodiments of the present invention (HSI line-scan camera using a 600-1000 nm sensor).
  • Figures 22A and 22B show spectral measurement of Calibration Target CSTM-MC-010 before (Fig. 22A) and after (Fig. 22B) camera calibration performed according to embodiments of the present invention.
  • the inventors have demonstrated that the raw images acquired with a CMOS imaging sensor with Fabry-Perot filters can be corrected to take into account multiple sources of variance. These corrections are based on the measurements obtained with the novel calibration apparatus and methods, and can be applied to the raw images using calibration files embedded into the camera system (e.g., inside the camera module and in the processing computer). These calibration files are unique to each sensor, lens, and camera system, and enable the pre-processing of the raw hyperspectral images, so that the processing software is more efficient, robust and reliable in extracting useful information.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Spectrometry And Color Measurement (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
EP17867023.8A 2016-11-07 2017-11-07 Kalibrierverfahren und -vorrichtung für hyperspektrale aktivpixelsensoren und -kameras Withdrawn EP3535553A4 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662418755P 2016-11-07 2016-11-07
PCT/US2017/060409 WO2018085841A1 (en) 2016-11-07 2017-11-07 Calibration method and apparatus for active pixel hyperspectral sensors and cameras

Publications (2)

Publication Number Publication Date
EP3535553A1 true EP3535553A1 (de) 2019-09-11
EP3535553A4 EP3535553A4 (de) 2020-09-30

Family

ID=62076383

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17867023.8A Withdrawn EP3535553A4 (de) 2016-11-07 2017-11-07 Kalibrierverfahren und -vorrichtung für hyperspektrale aktivpixelsensoren und -kameras

Country Status (2)

Country Link
EP (1) EP3535553A4 (de)
WO (1) WO2018085841A1 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112444503A (zh) * 2020-11-19 2021-03-05 哈尔滨理工大学 一种监测铜离子/细菌双参量光纤传感装置及实现方法
CN112444502A (zh) * 2020-11-19 2021-03-05 哈尔滨理工大学 一种监测铅离子/细菌双参量光纤传感装置及实现方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112687713A (zh) * 2017-09-29 2021-04-20 索尼半导体解决方案公司 光检测器件
CN109459135A (zh) * 2018-12-07 2019-03-12 中国科学院合肥物质科学研究院 一种ccd成像光谱仪图像校正方法
CN112304904B (zh) * 2019-07-15 2023-11-03 松山湖材料实验室 基于滤波阵列的硅片反射率检测方法
GB2601182B (en) * 2020-11-23 2022-12-28 Thermo Fisher Scient Bremen Gmbh Diagnostic testing method for a spectrometer
WO2023041566A1 (en) 2021-09-15 2023-03-23 Trinamix Gmbh Method for calibrating a spectrometer device
CN114785965B (zh) * 2022-04-20 2023-09-05 四川九洲电器集团有限责任公司 基于copod算法的高光谱图像自动曝光方法及系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644911B1 (en) * 2006-06-30 2014-02-04 Hypermed Imaging, Inc. OxyVu-1 hyperspectral tissue oxygenation (HTO) measurement system
US7923801B2 (en) * 2007-04-18 2011-04-12 Invisage Technologies, Inc. Materials, systems and methods for optoelectronic devices
AU2014290137B2 (en) * 2013-07-15 2019-04-04 Daniel L. Farkas Disposable calibration end-cap for use in a dermoscope and other optical instruments
DE102014002514B4 (de) * 2014-02-21 2015-10-29 Universität Stuttgart Vorrichtung und Verfahren zur multi- oder hyperspektralen Bildgebung und / oder zur Distanz- und / oder 2-D oder 3-D Profilmessung eines Objekts mittels Spektrometrie

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112444503A (zh) * 2020-11-19 2021-03-05 哈尔滨理工大学 一种监测铜离子/细菌双参量光纤传感装置及实现方法
CN112444502A (zh) * 2020-11-19 2021-03-05 哈尔滨理工大学 一种监测铅离子/细菌双参量光纤传感装置及实现方法
CN112444503B (zh) * 2020-11-19 2021-09-24 哈尔滨理工大学 一种监测铜离子/细菌双参量光纤传感装置及实现方法
CN112444502B (zh) * 2020-11-19 2021-09-24 哈尔滨理工大学 一种监测铅离子/细菌双参量光纤传感装置及实现方法

Also Published As

Publication number Publication date
EP3535553A4 (de) 2020-09-30
WO2018085841A1 (en) 2018-05-11

Similar Documents

Publication Publication Date Title
WO2018085841A1 (en) Calibration method and apparatus for active pixel hyperspectral sensors and cameras
US11566941B2 (en) Systems and methods for calibrating, configuring and validating an imaging device or system for multiplex tissue assays
US11193830B2 (en) Spectrocolorimeter imaging system
JP2019070648A (ja) 分光器で支援された特別設計パターン閉ループ較正による高精度イメージング測色計
JP2016510408A5 (de)
US10514335B2 (en) Systems and methods for optical spectrometer calibration
US20140375994A1 (en) Measuring apparatus, measuring system, and measuring method
US8976240B2 (en) Spatially-varying spectral response calibration data
US10578487B2 (en) Calibration for fabry perot spectral measurements
US10323985B2 (en) Signal processing for tunable Fabry-Perot interferometer based hyperspectral imaging
CN107576395B (zh) 一种多光谱镜头、多光谱测量装置及其标定方法
Henriksen et al. Real-time corrections for a low-cost hyperspectral instrument
Vunckx et al. Accurate video-rate multi-spectral imaging using imec snapshot sensors
Dittrich et al. Extended characterization of multispectral resolving filter-on-chip snapshot-mosaic CMOS cameras
CN110174351B (zh) 颜色测量装置及方法
Gebejes et al. Color and image characterization of a three CCD seven band spectral camera
Darrodi et al. A ground truth data set for Nikon camera's spectral sensitivity estimation
CN108700462B (zh) 无移动部件的双光谱成像器及其漂移纠正方法
US11867615B2 (en) Field calibration for near real-time Fabry Perot spectral measurements
CN114930136A (zh) Method and device for determining the wavelength deviation of images captured by a multi-lens camera system
Ononye et al. Calibration of a fluorescence hyperspectral imaging system for agricultural inspection and detection
CN117368124A (zh) Radiometric calibration method, system, device, and medium for a hyperspectral camera
Hunt et al. The Commissioning of the Arcetri Near-Infrared Camera ARNICA: II. Broadband Astronomical Performance
Lenhard Monte-Carlo based determination of measurement uncertainty for imaging spectrometers

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190524

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RIC1 Information provided on ipc code assigned before grant

Ipc: G01J 3/45 20060101ALI20200525BHEP

Ipc: G01J 3/26 20060101AFI20200525BHEP

Ipc: G01J 3/28 20060101ALI20200525BHEP

Ipc: G01J 3/02 20060101ALI20200525BHEP

Ipc: H04N 5/357 20110101ALI20200525BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20200901

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/357 20110101ALI20200826BHEP

Ipc: G01J 3/02 20060101ALI20200826BHEP

Ipc: G01J 3/26 20060101AFI20200826BHEP

Ipc: G01J 3/45 20060101ALI20200826BHEP

Ipc: G01J 3/28 20060101ALI20200826BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20210330