WO2013009189A1 - Hyperspectral camera and method for acquiring hyperspectral data - Google Patents


Info

Publication number: WO2013009189A1
Authority: WIPO (PCT)
Prior art keywords: array, sensor, light mixing, light, camera
Application number: PCT/NO2012/050132
Other languages: French (fr)
Inventors: Gudrun Kristine HØYE, Andrei L. FRIDMAN
Original Assignee: Norsk Elektro Optikk AS
Application filed by Norsk Elektro Optikk AS
Priority to CA2839115A (priority patent CA2839115C)
Priority to EP12754107.6A (priority patent EP2729774B1)
Publication of WO2013009189A1
Priority to US14/140,598 (priority patent US9538098B2)


Classifications

    • H04N 5/30: Details of television systems; transforming light or analogous information into electric information
    • G01J 3/0216: Spectrometry details; optical elements using light concentrators or collectors or condensers
    • G01J 3/0229: Spectrometry details; optical elements using masks, aperture plates, spatial light modulators or spatial filters, e.g. reflective filters
    • G01J 3/2803: Investigating the spectrum using photoelectric array detector
    • G01J 3/2823: Imaging spectrometer

Definitions

  • A hyperspectral camera and a method for acquiring hyperspectral data are provided.
  • Hyperspectral cameras are used for aerial imaging, environmental monitoring, forensic science, forestry and agriculture, as well as in military and industrial applications.
  • Hyperspectral cameras normally also cover a wavelength range outside the visible, which makes the design of the optical system very challenging.
  • There are different principles of hyperspectral imagers, but in this patent application a "hyperspectral camera" is defined as a hyperspectral imager of the "push broom" type.
  • One important limiting factor of a hyperspectral imaging system is spatial misregistration as a function of wavelength.
  • each spatial pixel is supposed to contain the signals for different spectral bands captured from the same spatial area.
  • This type of misregistration occurs when either the position of a depicted area (corresponding to one pixel) changes as a function of wavelength (this error is commonly referred to as "keystone"), or the borders of objects in the depicted area are blurred differently for different spectral channels, which is caused by variation in the point spread function (PSF) as a function of wavelength.
  • PSF: point spread function.
  • Another limiting factor is a similar spectral misregistration commonly referred to as "smile". This effect will partly shift the spectrum of a depicted point.
  • This invention solves, or at least significantly reduces, the problem with keystone and PSF variations in a first embodiment, and the problem with keystone and smile in a second embodiment.
  • Several approaches have been tried in prior art to solve the problem, the two most common being hardware correction and resampling of the hyperspectral image data. A further description of prior art is given in the section below.
  • Push-broom spectrometers operate as follows: they project a one-dimensional image of a very narrow scene area onto a two-dimensional pixel array.
  • Due to a dispersing element (a diffraction grating or a prism), each column of the array contains the spectrum of one small scene area, as shown in figure 2.
  • The two-dimensional image of the scene is then obtained by scanning.
  • The final data form a so-called "datacube", which represents a two-dimensional image with the third dimension containing the spectral information for each spatial pixel.
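The datacube structure can be illustrated with a short sketch; the sizes below are arbitrary, not from the patent:

```python
import numpy as np

# Arbitrary sizes: 4 spatial pixels, 3 spectral bands, 5 scan lines.
n_lines, n_spatial, n_bands = 5, 4, 3

# Each scan step of a push-broom camera yields one 2-D frame:
# the spectra of one narrow scene strip.
frames = [np.random.rand(n_spatial, n_bands) for _ in range(n_lines)]

# Stacking frames along the scan direction forms the datacube:
# two spatial dimensions plus one spectral dimension.
cube = np.stack(frames, axis=0)

# The full spectrum of a single spatial pixel (line 2, column 1):
spectrum = cube[2, 1, :]
```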
  • However, the spectral information is not captured perfectly. Spectral misregistration occurs when a given pixel on the sensor is expected to receive light from a certain spectral band but instead receives light from a slightly different band.
  • Spatial misregistration occurs when a pixel on the sensor receives a certain spectral band but looks at a slightly wrong area of the scene. Misregistration is discussed in more detail in the article "Spectral and spatial uniformity in pushbroom imaging spectrometers" by Pantazis Mouroulis, 1999.
  • VNIR: the visible and near-infrared region.
  • The pixel count in hyperspectral cameras is very modest (compared to traditional imaging systems such as photo cameras), since it is not enough just to produce a reasonably sharp image: optical aberrations have to be corrected at subpixel level.
  • Optical design becomes extremely difficult if higher spatial resolution is required.
  • An Offner relay with a convex diffraction grating has magnification -1 and is often used in hyperspectral cameras ("Optical design of a compact imaging
  • Dyson relay is another example of an optical system used in hyperspectral imaging.
  • "Optical design of a coastal ocean imaging spectrometer", by Pantazis Mouroulis et al., 9 June 2008 / Vol. 16, No. 12 / OPTICS EXPRESS 9096. It has magnification -1 and uses a concave reflective grating as a dispersive element. There is also at least one refractive element.
  • the F-number can be significantly lower (faster) than in Offner systems.
  • the system can be quite small. However, extremely tight centration
  • The negative side is the fact that the residual spatial misregistration after resampling with an approx. 1x factor is quite large. In order to bring it down to an acceptable level, downsampling by a factor 2x or more is usually required. Therefore the full sensor resolution is not utilised. Also, the necessity to capture two times more pixels than in the case of hardware corrected systems may slow down the framerate and processing speed.
  • A variable filter camera is another way to capture hyperspectral data. The principle of operation is described in US Patent 5,790,188. This camera can be quite small compared to the other hyperspectral cameras. However, it may be necessary to resample the captured data, unless the scanning motion is stable with subpixel accuracy. Also, the spectral resolution becomes very limited if a reasonably low F-number is required.
  • In a hyperspectral camera it may be beneficial to split this range between two or more sensors, as described in the NASA PIDDP Final Report (May 1st 1996) "A Visible-Infrared Imaging Spectrometer for Planetary Missions." An example of such an imaging spectrometer is shown in Fig. 3 of this report. Since both sensors share foreoptics and slit, it is possible to get very good spatial coregistration in the along-the-track direction. However, in the across-the-track direction (where keystone is normally measured) it will be nearly impossible to achieve good (i.e. a few percent of a pixel) coregistration.
  • The necessity to correct spectral keystone to a very small fraction of a pixel is the principal driver restricting the spatial resolution of the existing hyperspectral cameras, possible design approaches, speed (i.e. light gathering ability), and the final data quality.
  • Resampling hyperspectral cameras, while being free of the first three restricting factors, normally require downsampling of the captured data by a large factor, which reduces the resolution (i.e. pixel count) of the final data.
  • The captured data have to be downsampled by a relatively large factor in order to match the data quality from a hardware corrected camera.
  • This downsampled data may still have higher pixel count than the data from a hardware corrected camera, but clearly the full resolution of the sensor will not be utilized.
  • Physically, in any given system keystone is a certain fraction of the image size. When one increases the pixel count for a given image size, keystone as a fraction of the pixel size will therefore increase.
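A quick arithmetic illustration of this scaling (the keystone fraction below is assumed for the example, not a value from the patent):

```python
# Illustrative only: keystone is a fixed fraction of the image size, so
# expressed in pixel units it grows linearly with the pixel count.
keystone_fraction_of_image = 0.0001   # assumed: 0.01 % of the image width

keystone_px = {n: keystone_fraction_of_image * n for n in (320, 640, 1280)}

# For the same optics, quadrupling the pixel count quadruples the keystone
# error measured as a fraction of a pixel (0.032 px at 320, 0.128 px at 1280).
assert abs(keystone_px[1280] / keystone_px[320] - 4.0) < 1e-12
```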
  • The current invention introduces new "light mixing" elements, a kind of imaginary pixels, combined with a novel approach to restore the image information corresponding to these imaginary pixels based on image data from a traditional image sensor.
  • Compared to existing solutions, the current invention has several advantages.
  • Foreoptics (2020) project the scene onto the slit plane (2040) .
  • The slit plane (2040) contains an optical component.
  • An example of such a component is an array of square chambers (3000) with reflective walls as shown in figures 1 and 3 a).
  • The number of chambers has to be at least a few percent lower than the pixel count of the sensor (2100) in the spatial direction.
  • Each chamber (3000) contains the radiance and spectral content of a small scene area which will be recorded as one pixel in the final captured data. The projection of this scene pixel onto the slit will hereafter be referred to as a "mixel".
  • A mixel or an array of mixels has feature sizes in the micron to tens of microns range for the most common wavelengths and pixel sizes; therefore the machining tolerances are stringent.
  • One way to manufacture the mixels or array of mixels is to use a high-aspect-ratio micromachining technique using deep X-ray lithography.
  • One feasible option is often called the "LIGA" process.
  • A publication describing such processes is "High-Aspect-Ratio Micromachining Via Deep X-Ray Lithography".
  • The slit with the mixel array can be protected from contaminants, such as dust particles and gas, by windows and/or lenses.
  • These windows can be placed far enough from the slit that any scratches, particles, and other imperfections on the surfaces of these windows would appear very blurry in the slit plane, sensor plane, or any other corresponding intermediate plane.
  • The slit enclosure formed by the windows and/or lenses can be filled with a gas such as nitrogen in order to prevent oxidation of the reflective surfaces of the mixel array, i.e., introducing a protective atmosphere.
  • This enclosure can be sealed or it can be purged with the gas in question.
  • Another alternative is to have vacuum in the enclosure.
  • the relay optics (2060) projects the image of the slit (3100) onto the sensor (2100) .
  • the image of each mixel has to be (at least a few percent) larger than one sensor pixel.
  • the dispersive element (2080) in the relay optics (2060) ensures that the image of the slit is dispersed approximately perpendicular to the slit (3100).
  • the processing unit restores the intensity content of each mixel (for each spectral channel) based on the sensor output.
  • S_n is the signal content of mixel #n.
  • S^R_m is the signal content recorded in pixel #m on the sensor.
  • q_mn is the fraction of the signal content of mixel #n that is recorded in pixel #m on the sensor.
  • N is the total number of mixels.
  • S^R is known (measured) and q_mn is known (can be calculated) for all m and n when the keystone for the system and the "shape" of the light distribution from the mixel (as it is projected onto the sensor) are known.
  • Equation 1 can be written in matrix form: S^R = q · S (equation 2).
  • The matrix q describes important properties of the camera and can be called the "camera model".
  • An optimisation method, such as for instance the least squares method, must be used to obtain the solution.
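As a concrete sketch of the restoring equations, the following minimal example builds a hypothetical camera model q for 4 mixels imaged onto 5 sensor pixels (flat light distribution, no blur, no keystone — all numbers illustrative, not from the patent) and recovers the mixel contents by least squares:

```python
import numpy as np

N, M = 4, 5                  # mixels, sensor pixels (M > N as required)
scale = M / N                # the image of each mixel covers 1.25 pixels

# Build the "camera model" q: q[m, n] is the fraction of mixel n's signal
# falling on pixel m, computed as the geometric overlap of intervals on
# the sensor (flat light distribution, perfectly sharp optics assumed).
q = np.zeros((M, N))
for n in range(N):
    lo, hi = n * scale, (n + 1) * scale        # image of mixel n, in pixel units
    for m in range(M):
        overlap = max(0.0, min(hi, m + 1) - max(lo, m))
        q[m, n] = overlap / scale              # each column sums to 1

S_true = np.array([1.0, 3.0, 2.0, 4.0])        # hypothetical mixel contents
S_R = q @ S_true                               # what the sensor records

# The system S_R = q . S is overdetermined (5 equations, 4 unknowns);
# least squares recovers the mixel contents.
S_est, *_ = np.linalg.lstsq(q, S_R, rcond=None)
assert np.allclose(S_est, S_true)
```

With noise-free data the restoration is exact; with real data the least-squares fit minimises the residual, as described in the optimisation bullets below.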
  • The process described above can be repeated for all spectral lines. This means that even though the spectral lines have different lengths (i.e. different keystone) they will all be converted to the same final grid with no loss of resolution with regard to mixel count. Of course, the number of mixels has to be lower than the number of sensor pixels for the method to work.
  • The signal passes through optics which 'smears' the signal somewhat. What happens when the signal from a mixel is smeared is that part of the signal leaks into the neighbouring mixels, see figure 11. If it is known how the signal is smeared (i.e., the 'shape' of the transition is known), then the original content (S_n) of the mixel can be restored as before.
  • Figure 11 shows the mixel with flat light distribution and signal content S_n (upper figure).
  • Figure 12 shows an example of a third order polynomial transition used to model the smear effect.
  • the figure shows an example of the details of such a transition.
  • The original signal content (before smearing) of mixel #1 is equal to y_1, and the original signal content of mixel #2 is equal to y_2 (the width of the mixel is set equal to 1).
  • a and c are two constants that can be determined from the boundary conditions y(x_1) = y_1 and y(x_2) = y_2 (or equivalently y'(x_1) = 0 and y'(x_2) = 0), which gives
  • The transition has odd symmetry about its centre (x_0, y_0), where x_0 = (x_1 + x_2)/2 and y_0 = (y_1 + y_2)/2.
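The cubic transition described above can be reproduced numerically. The sketch below assumes the transition spans x_1 = -0.2 to x_2 = 0.2 with levels y_1 = 1 and y_2 = 3 (illustrative values) and solves the four boundary conditions for the polynomial coefficients:

```python
import numpy as np

# Cubic y(x) = a*x**3 + b*x**2 + c*x + d joining the flat mixel levels
# smoothly: y(x1) = y1, y(x2) = y2, y'(x1) = 0, y'(x2) = 0.
x1, x2 = -0.2, 0.2          # transition extends 0.2 into each mixel (assumed)
y1, y2 = 1.0, 3.0           # illustrative mixel contents

A = np.array([
    [x1**3,    x1**2, x1,  1.0],   # y(x1)  = y1
    [x2**3,    x2**2, x2,  1.0],   # y(x2)  = y2
    [3*x1**2,  2*x1,  1.0, 0.0],   # y'(x1) = 0
    [3*x2**2,  2*x2,  1.0, 0.0],   # y'(x2) = 0
])
a, b, c, d = np.linalg.solve(A, [y1, y2, 0.0, 0.0])

# The transition has odd symmetry about its centre (x0, y0):
x0, y0 = (x1 + x2) / 2, (y1 + y2) / 2
assert np.isclose(a*x0**3 + b*x0**2 + c*x0 + d, y0)
```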
  • Figure 13 shows an example where four such mixels are recorded onto five (+ two) sensor pixels.
  • the mixels with transitions are recorded onto the sensor pixels.
  • the transition extends a fraction 0.2 into each side of the mixel.
  • the corresponding matrix equations are given by:
  • The keystone part can be solved by using reflective optics, which is (of course) completely keystone free. Now about the PSF variations.
  • With relay optics with magnification 0.2...0.5x (or -0.2...-0.5x) it is possible to achieve F-numbers as low as F1-F1.3 in the image space and at the same time have a very modest F-number in the slit plane (~F4-F6). Therefore:
  • the foreoptics can be designed and made diffraction limited.
  • The refractive optics in front of the array of mixing chambers may be implemented as a lens system or as one or more flats made of an optically transparent material, where one takes advantage of the longitudinal chromatic aberration of the material.
  • These flat(s) could be made of, but not limited to, any standard optical material.
  • A single mixel at one end of the array (figure 3b), which is separated from the rest of the array by a relatively large gap, would illuminate 2-3 pixels of the sensor for each spectral channel. If the PSF of the optics is known, the position of the sensor relative to the mixel array can be calculated.
  • This single mixel may have any shape; it may also be a pinhole, or a dedicated light source. The main requirement is to have known intensity
  • Instead of using a part of the main sensor for measuring the position of the reference mixels or light sources, it is possible to use small dedicated sensors beside the main sensor. Also, additional optics such as mirrors and/or prisms can be used to direct the light to these small sensors.
  • The PSF of the relay optics, i.e. the shape of the transition between every two adjacent mixels, should be known. This can be found directly in the optical design software, but it will not be very precise since the PSF is strongly affected by tolerances in the optical system. The best would be to measure the PSF directly on the assembled system for several wavelengths and field points.
  • A secondary array of mixels should be placed parallel to the primary array (figure 3d). Every two adjacent elements of the secondary array should have a sufficiently large gap in between, so that a single PSF can be clearly observed.
  • The simplest (but again not very precise) method of measuring the PSF would be to use the signal from the 3 consecutive sensor pixels where a single mixel from the secondary array is imaged. Much more precise information about the PSF can be obtained by taking several measurements from those sensor pixels and changing the position of the mixel relative to the sensor for each measurement. This can be done with a high resolution translation stage.
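A simulated sketch of this stepping procedure (the Gaussian PSF, its width and the step sizes below are stand-ins, not values from the patent):

```python
import numpy as np

# A single mixel from the secondary array is imaged onto ~3 sensor pixels.
# Stepping the mixel by sub-pixel amounts with a translation stage and
# re-reading those pixels yields an oversampled profile of the PSF.
def pixel_signals(shift, sigma=0.6, n_pixels=3, oversample=100):
    """Integrate an assumed Gaussian PSF, displaced by `shift`, over each pixel."""
    out = []
    for m in range(n_pixels):
        x = np.linspace(m, m + 1, oversample)          # pixel m covers [m, m+1]
        psf = np.exp(-((x - 1.5 - shift) ** 2) / (2 * sigma**2))
        out.append(psf.mean())                         # mean ~ integral over pixel
    return np.array(out)

steps = np.linspace(-0.5, 0.5, 11)                     # 11 sub-pixel positions
samples = [(1.5 + s, pixel_signals(s)[1]) for s in steps]

# Reading the centre pixel at each step samples the PSF at 11 points across
# one pixel width, instead of the single value a fixed mixel would give.
positions, values = zip(*samples)
assert len(values) == 11 and max(values) == values[5]  # peak at zero shift
```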
  • the system is overdetermined and must be solved by use of an optimisation method, such as for instance the least squares method.
  • The least squares method searches for the solution S* that gives the best fit to the recorded data S^R, i.e., that minimizes the error vector Δ, where
  • The figure shows the error Δ as a function of assumed relative position.
  • The minimum error vector Δ_min is found for the correct relative position x_0.
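This position search can be sketched numerically: solve the restoring equations for a range of assumed offsets of the mixel array relative to the sensor, and pick the offset with the smallest residual. The flat-mixel camera model and all numbers below are hypothetical:

```python
import numpy as np

def camera_model(offset, N=4, M=5):
    """Hypothetical q matrix for a flat-mixel array shifted by `offset` pixels."""
    scale = M / N
    q = np.zeros((M, N))
    for n in range(N):
        lo, hi = n * scale + offset, (n + 1) * scale + offset
        for m in range(M):
            q[m, n] = max(0.0, min(hi, m + 1) - max(lo, m)) / scale
    return q

true_offset = 0.07                        # unknown in practice
S_true = np.array([2.0, 1.0, 4.0, 3.0])   # hypothetical mixel contents
S_R = camera_model(true_offset) @ S_true  # recorded sensor data

# Solve equation 2 for each assumed offset; the residual (error vector)
# is smallest at the correct relative position.
candidates = np.linspace(0.0, 0.15, 16)   # 0.01-pixel search grid
errors = []
for off in candidates:
    q = camera_model(off)
    S_est, *_ = np.linalg.lstsq(q, S_R, rcond=None)
    errors.append(np.linalg.norm(q @ S_est - S_R))

best = candidates[int(np.argmin(errors))]
assert abs(best - true_offset) < 0.005    # the true offset lies on the grid
```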
  • the mixel array can be made two-dimensional and placed close to the sensor plane or in any intermediate image plane after the dispersive element. The most practical location for it is the sensor plane. This gives a possibility to correct not only keystone but also smile.
  • The array is distorted according to the smile and keystone of the optical system. Since the size of each mixel is slightly larger than the size of the sensor pixels, the light from each mixel will be distributed between 4 or more sensor pixels. Since the number of pixels is larger than the number of mixels, and the light distribution inside each mixel is known, it is possible to restore the intensity content of each mixel based on the sensor output, similarly to what is done in the one-dimensional case. The principle is shown in figure 15.
  • the restored intensity content of the mixels forms a smile and keystone free image as long as the geometry of the two-dimensional mixel array replicates the smile and keystone of the optical system.
  • The misregistration error caused by differences in PSF (for the optics before the two-dimensional mixel array) for different wavelengths will in this case remain uncorrected. Therefore, the optics does not need to be corrected for smile or keystone.
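Numerically, the two-dimensional restoration is the same overdetermined least-squares problem with the pixel and mixel grids flattened to vectors. The sketch below uses an undistorted, axis-aligned (hence separable) geometry, which is a simplification of the smile/keystone-matched array described above; all sizes are illustrative:

```python
import numpy as np

# 3x3 grid of mixels, each slightly larger than a sensor pixel, imaged
# onto a 4x4 pixel grid; flattening both turns it into S_R = q . S.
Nm, Np = 3, 4
scale = Np / Nm

def overlap1d(n, m):
    """Fraction of mixel n (in one axis) falling on pixel m."""
    lo, hi = n * scale, (n + 1) * scale
    return max(0.0, min(hi, m + 1) - max(lo, m)) / scale

# q for a separable geometry: 16 pixel equations x 9 mixel unknowns.
q = np.zeros((Np * Np, Nm * Nm))
for ny in range(Nm):
    for nx in range(Nm):
        for my in range(Np):
            for mx in range(Np):
                q[my * Np + mx, ny * Nm + nx] = overlap1d(ny, my) * overlap1d(nx, mx)

S_true = np.arange(1.0, 10.0)             # 9 mixel intensities
S_R = q @ S_true                          # 16 recorded pixel values

S_est, *_ = np.linalg.lstsq(q, S_R, rcond=None)
assert np.allclose(S_est, S_true)
```

A distorted (smile/keystone-matched) array would change only how q is computed, not the solution step.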
  • the PSF of the camera optics is measured for every extra mixel and/or pinhole (3150) (3170) (3190) (ref. figure 3) for one or more wavelengths.
  • The PSF for the secondary mixel array (3200) is measured for several wavelengths. Measurements of the PSF are done by moving the mixel array relative to the sensor (or vice versa) in small steps. This can be done with a commercially available or custom-made translation stage.
  • the secondary mixel array (3200) can be used to characterise the keystone of the camera optics.
  • the main mixel array (3100) (ref. figure 3d) should be obscured using a shutter or similar, and only the light passing through the secondary mixel array should be used.
  • the measured keystone and the PSF data should be used to form the matrix q in equation 2.
  • The secondary mixel array should now be obscured, and the main mixel array with the end pinholes/mixels (3150) (3170) (3190) should be opened. If the camera does not have the secondary mixel array, the keystone and the PSF data can be imported from optical design software such as ZEMAX.
  • The signal from the end pinholes/mixels (3150) (3170) (3190) and the PSF data for the corresponding field points are used to determine the position and/or length of the mixel array relative to the sensor.
  • This position and this length can be predetermined by the mechanics of the camera, but since it is desirable to know them with subpixel accuracy, the easiest way is normally to monitor the signal from the end pinholes/mixels.
  • Every spatial line will have the data about the current relative position and/or length of the mixel array relative to the sensor.
  • the information about the position and length of the mixel array relative to the sensor is used to adjust the coefficients of the matrix q in Eq. 2.
  • several versions of the matrix q for different relative positions and lengths can be generated in advance in order to reduce the amount of calculations for finding mixel values.
  • the mixel values for different wavelengths can now be found by solving the overdetermined equation 2 and optimising the solution by minimising the inequalities in the solved system.
  • Any or all of the described calibration steps can be skipped. Instead, the data from a natural scene can be restored for different assumed positions and/or lengths of the mixel array relative to the sensor, and/or PSFs and/or keystone.
  • the solution, where the inequalities in the equation 2 are at the minimum, is the most accurate.
  • If a sensor pixel is dead or hot, the inequalities in the solved equation 2 will increase. If there is a reason to suspect that one of the sensor pixels is dead or hot (for example, by inspecting a few images) it is possible to exclude that particular pixel from equation 2. The corresponding mixel can still be calculated because it is imaged onto at least two adjacent sensor pixels in each spectral band. Then it can be checked whether the inequalities in the solved equation 2 were lowered by excluding the suspected dead or hot pixel from the calculations. If it is not known which of the sensor pixels is hot or dead, it is possible to exclude one of them at a time from equation 2, and then solve that equation.
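This exclusion test amounts to dropping one row of the overdetermined system and comparing residuals. A sketch with a simulated dead pixel (the flat-mixel geometry and numbers are illustrative):

```python
import numpy as np

# Hypothetical flat-mixel geometry: 4 mixels imaged onto 6 sensor pixels.
N, M = 4, 6
scale = M / N
q = np.zeros((M, N))
for n in range(N):
    lo, hi = n * scale, (n + 1) * scale
    for m in range(M):
        q[m, n] = max(0.0, min(hi, m + 1) - max(lo, m)) / scale

S_true = np.array([2.0, 5.0, 3.0, 4.0])
S_R = q @ S_true
S_R[2] = 0.0                              # simulate: pixel #2 is dead

def residual(keep):
    """Residual of the least-squares fit using only the kept pixel rows."""
    S_est, *_ = np.linalg.lstsq(q[keep], S_R[keep], rcond=None)
    return np.linalg.norm(q[keep] @ S_est - S_R[keep])

all_rows = np.arange(M)
res_with = residual(all_rows)             # dead pixel included: large residual
res_without = residual(all_rows != 2)     # dead pixel excluded
assert res_without < res_with

# Each mixel still reaches at least two sensor pixels, so its value
# remains recoverable from the remaining five equations.
```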
  • In a hyperspectral camera, light in each spectral channel becomes partially coherent.
  • The consequence of this partial coherence is ripples - small (in size)
  • The PSF which is used for forming the matrix q in equation 2 includes both the PSF of the optics and the PSF introduced by the motion blur.
  • If a mixel is fully clogged, its signal is lost. If the input of a mixel is only partially clogged, it still mixes light, i.e. it outputs even illumination, but the intensity is lower than it is supposed to be according to the scene. This will appear in the image as consistently darker pixels. Knowing the attenuation coefficient (i.e. the amount of clogging) one can fully recover the original brightness in postprocessing, and the only penalty will be an increase in noise.
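A minimal numerical illustration of this attenuation correction (all values assumed for the example):

```python
# A partially clogged mixel transmits a known fraction of the light
# (measured in calibration, e.g. against an integrating sphere);
# dividing by that fraction restores the brightness.
attenuation = 0.8                  # assumed: mixel passes 80 % of the light
true_signal = 1000.0               # brightness that should have been recorded
read_noise = 2.5                   # same absolute noise on any measurement

measured = attenuation * true_signal + read_noise   # darker than it should be
recovered = measured / attenuation

# Brightness is fully recovered; the only penalty is that the noise term
# is amplified by the same 1/attenuation factor (2.5 -> 3.125 here).
assert abs((recovered - true_signal) - read_noise / attenuation) < 1e-9
```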
  • The amount of clogging can be measured by observing a uniform object (such as the sky, an integrating sphere, or an illuminated diffuse object placed between the foreoptics and the mixels), or using
  • Figure 19 a) shows a mixel 3000 partially clogged on its output by a particle 3030.
  • the mixel is imaged onto two sensor pixels 2100.
  • The shape of the clogged part can be measured similarly to the PSF: by looking at a uniform object (such as an integrating sphere or an illuminated diffuse object placed between the foreoptics and the mixel array) and moving the mixels in small steps parallel to the direction of the mixel array (Figure 19 a, b, c).
  • This scanning motion may not be necessary in case of large keystone. If there is a large enough keystone in the system (as large as the clogged part of the mixel or larger) and the uniform object is illuminated by a broadband source, then the shape of the clogged part can be found by comparing the illumination of the pixels where the part is imaged to the adjacent pixels in different spectral bands. In case of a large enough keystone this will be equivalent to moving the slit in small steps.
  • this mixel can be interpolated by using the data from the adjacent mixels.
  • This enables hyperspectral imaging where one of two sensors captures the highlight information about the scene, and the second one captures the shadows of the same scene. And it is all done with nearly perfect spatial coregistration between highlights and shadows.
  • A first embodiment has a linear array of mixels (3100) placed in the slit plane (2040) or in any of the other intermediate image planes.
  • This embodiment might be a standard one, ref. figure 3 a) and figure 7 a), or it might comprise other features as shown in figure 3 b, c or d. It could also have shapes as described in figure 7 b, c, d, e, f, g or h. A combination thereof, i.e. of features and/or shapes, will be apparent to the person skilled in the art.
  • A second embodiment comprises a two-dimensional array of light mixing elements (3500).
  • This two-dimensional array will be placed directly in front of the image sensor (2100) or in any other intermediate image plane after the dispersive element.
  • A variation of this second embodiment is to make the two-dimensional array fit the optical errors of the system, i.e. so that it matches the smile and keystone of the optics.
  • The processed or restored image data from this embodiment will be smile and keystone free.
  • the current invention is based on a novel principle of using light mixing elements constituting a kind of imaginary pixels which can be calculated based on data from image sensor pixels.
  • The two embodiments described here cover all optical components all the way to the image sensor.
  • After the image sensor, the electronics, including the data processing system, start. In this section this will be referred to as the "data processing system".
  • The tasks of the data processing system are to acquire and digitise data from the image sensor and do all necessary processing of and calculations on the data, so that restored image data corresponding to the mixels or imaginary pixels can be output and stored. It will also do all necessary "housekeeping" like controlling the image sensor, translation stages, etc.
  • The standard-component-based system (figure 16) is built using a standard camera head (2000) or industrial camera combined with a PC (5000) and an interface controller (5100) used for communication with the camera.
  • The processing will be done in the PC processor(s) or in additional processor boards (5050) which might comprise special processors suitable for the processing tasks.
  • These processors (5050) could be CPUs, DSPs and the like.
  • Custom software will be running on the PC, controlling the system and acquiring and processing the data.
  • A version of the camera data processing system in a self-contained embodiment is shown in figure 18. In this case the output is processed or corrected, and all this processing will be done by the electronics inside the camera so that it functions more or less like an ordinary camera.
  • The term "image processing unit" refers to a combination of software, firmware and hardware performing the required image processing tasks.
  • The image processing unit comprises an "equation solver unit", which provides the functionality described in the section "Restoring equations", and an "optimisation unit", which provides the functionality described in "The general equations". As can easily be understood, the equation solver unit solves the equation sets as given in the
  • Figure 1 shows the basic principle of the current invention. Foreoptics (2020) project the scene onto the slit plane (2040).
  • the slit plane (2040) contains an array of light mixing elements (3100).
  • Each 'mixel' (3000) represents one spatial pixel in the final hyperspectral data.
  • the relay optics (2060) projects the image of the slit (3100) onto the sensor (2100).
  • the dispersive element (2080) in the relay optics (2060) ensures that the image of the slit is dispersed approximately perpendicular to the slit.
  • Figure 2 shows the prior art. It illustrates the general principle of a push-broom hyperspectral camera.
  • the foreoptics (2020) create an image of the scene in the slit plane (2040) .
  • the slit (2200) acts as a field stop and cuts out a narrow portion of the image and transmits it further into the relay system (2060) .
  • The relay system (2060) images this narrow scene region onto the sensor (2100). Due to the dispersive element (2080) in the relay, the image of the slit is dispersed on the sensor, so that the different wavelengths (λ1, λ2) are imaged onto different parts of the sensor.
  • Figure 3 shows a slit (3100) with mixing chambers and the additional holes/chambers for calibration purposes.
  • a) A single array of mixels (3100) .
  • b) A single array of mixels with an additional mixel at one end of the slit (3150). This single mixel makes it possible to measure the relative position of the mixel array and the sensor or their relative length.
  • c) A single array of mixels with two additional mixels, one on each end of the mixel array (3150, 3170). These single mixels make it possible to measure the relative position of the mixel array and the sensor as well as their relative length.
  • d) The array of mixels (3100) like in a), and a secondary array of mixels (3200) with relatively large gaps between the mixels.
  • The secondary array allows measuring the PSF and keystone of the optics before taking the measurements.
  • The mixels in the array (3100) are not necessarily square.
  • The additional mixels (3190) at the ends of the slit are not necessarily square. Of course, all the features shown in figure 3 can be combined.
  • Figure 4 illustrates what a spectral keystone is.
  • the centres of gravity of the PSF for different wavelengths are not located on the same vertical line.
  • the light coming from the same area of the scene is recorded as coming from slightly different areas depending on the wavelength.
  • Figure 5 shows that a keystone-like effect may occur in the final data even if the actual keystone of the optical system is zero. This phenomenon is caused by variation of the PSF for different wavelengths.
  • Figure 6 shows various possible shapes of mixels.
  • Figure 7 shows that the mixel array can be straight (a), or curved (b, c) in order to compensate optical
  • Figure 9 shows a simple numerical example of how mixels are recorded on the sensor in the case of perfectly sharp optics.
  • Figure 10 illustrates that the mixel content can be perfectly restored from the recorded pixels.
  • Figure 11 shows how an image of a mixel may look if the relay optics is not perfectly sharp.
  • Figure 12 shows a possible representation of a transition between two mixels after they have been imaged onto the sensor by the relay optics.
  • Figure 13 shows how the mixel content can be restored after the mixel borders have been blurred in the relay optics, when the amount of blurring is known.
  • Figure 14 shows how the deviation between the calculated sensor pixel values (from the approximate solution for the mixel values) and the recorded sensor pixel values changes as a function of the assumed position of mixels relative to the sensor pixels.
  • At the correct assumed position the inequality is at its minimum.
  • This can be used for measuring the relative position of mixels and pixels by analysing the captured data.
  • the method can be extended for measuring more than one parameter simultaneously: for example both the relative position of mixels and pixels, and the relative length of the mixel array and the pixel array.
  • the PSF and/or keystone can be measured.
  • Figure 15a shows a two-dimensional mixel array (3500) on top of a two-dimensional pixel array or sensor (2100).
  • Figure 15b shows the same layout but this time the mixel array is distorted (3500) to fit the smile and keystone generated by the camera optics.
  • the restored mixel intensities will represent smile-free and keystone-free hyperspectral data.
  • Figure 16 shows a system set-up based on a computer (5000) communicating with and controlling the camera body (2000) with sensor (2100).
  • the camera body (2000) is connected to the spectrometer part (1000) with the entrance slit (3100).
  • the computer (5000) will control the camera body and the spectrometer, and will also receive data acquired from the sensor (2100) and process them to restore corrected data in the standard CPU system of the computer or utilising one or more optional signal or graphics processing board(s) (5050). Temporary and processed data may be stored on a permanent storage device (5200) like a hard disk.
  • Figure 17 shows a simplified sketch of the data flow and the process steps in the signal processing related to the restoring of data.
  • An analogue sensor signal (6000) is digitised, giving digitised raw data (6050) which may undergo an optional pre-processing (6100).
  • the raw data might be temporarily stored in a raw data storage (6200) for later processing, or immediately go further in the chain to data restoring (6300) and/or calibration (6400).
  • the output of this is restored or corrected data (6500) which might be displayed right away or be stored in the final data storage means (6600).
  • Figure 18 shows one possible implementation of the electronic system (7000) of a self-contained camera.
  • the figure shows the image sensor (2100) feeding analogue signals into an amplifier system (7100) whose output is input to an analogue-to-digital converter (7200).
  • the figure also shows electronics for camera control (7300) which will control a positioning stage and possibly the image sensor.
  • Electronics for I/O control (7400), like external sync signals, is also shown.
  • An interface controller (7500) is shown, and this might be controlling interface types like CameraLink, FireWire, USB, Ethernet, GigE etc.
  • a memory unit (7600) is shown, and this might comprise one or more of Flash/EEPROM (7620), DRAM (7640), SRAM (7660), buffer or FIFO circuits (7680) or any other types of memory devices. All this is connected to a
  • processor module (7700) comprising at least one processor (7780) which might be a general microprocessor, a microcontroller, a digital signal processor or an embedded processor in a field programmable gate array (FPGA).
  • the processor module (7700) might also comprise one or more additional processor units (7720).
  • These additional processor units (7720) might be custom modules optimised for tasks or sub-tasks related to equation solving and optimisation of the solution.
  • Figure 19 shows some examples of a partially clogged light mixing element.
  • Figure 19 a) shows a mixel (3000) partially clogged with matter (3030). The clogging matter is only present in the projection onto one of the pixels.
  • Figure 19 b) shows another scenario where the clogging matter has started to cross the border between pixels.
  • Figure 19 c) shows a case where the clogging matter is present in the projection onto both pixels.
  • Camera body comprising image sensor, control electronics, data acquisition system etc.
  • Interface controller in computer, like CameraLink, Ethernet, USB, FireWire or similar
  • Permanent storage device, like hard disk, solid state disk and the like


Abstract

Hyperspectral camera comprising light mixing chambers (3000) where the chambers are projected onto the imaging sensor (2100), the projection of each chamber being slightly larger than a sensor pixel. The chambers are placed as a linear array (3100) in the slit plane (2040) for a first embodiment, or as a two-dimensional matrix directly in front of the imaging sensor for a second embodiment. The mixed light from each chamber is depicted by several sensor pixels, the sensor outputting sensor information used to form an overdetermined equation set; this set is solved and optimised for the solution giving the lowest overall error or the best fit. The solution of the equation set, combined with the optimisation, gives the intensity values of the chambers (3000), which constitute imaginary pixels. These imaginary pixels form the output of an improved hyperspectral camera system, which has significantly lower optical errors like keystone and point spread function variation for different wavelengths.

Description

Hyperspectral camera and method for acquiring hyperspectral data.
Background of the invention
There has been a growth in the use of hyperspectral cameras for aerial imaging, environmental monitoring, forensic science, forestry and agriculture, as well as in military and industrial applications. Hyperspectral cameras normally also cover a wavelength range outside the visible, thus making the design of the optical system very challenging. There are different principles of hyperspectral imagers, but in this patent application a "hyperspectral camera" is defined as a hyperspectral imager of the "push broom" type.
One of the most important limiting factors in a hyperspectral imaging system is spatial misregistration as a function of wavelength. In hyperspectral data, each spatial pixel is supposed to contain the signals for different spectral bands captured from the same spatial area. This type of misregistration occurs when either the position of a depicted area (corresponding to one pixel) changes as a function of wavelength (this error is commonly referred to as "keystone"), or the borders of objects in the depicted area are blurred differently for different spectral channels, which is caused by variation in the point spread function (PSF) as a function of wavelength. In practice this means that information from different positions on the depicted scene can be intermixed, resulting in an erroneous spectrum for spatial image pixels positioned close together.
Another effect or limiting factor is a similar spectral misregistration commonly referred to as "smile". This effect will partly shift the spectrum of a depicted point.
This invention solves, or at least significantly improves, the problem with keystone and PSF variations for a first embodiment, and the problem with keystone and smile for a second embodiment. Several approaches have been tried in prior art to solve the problem, the two most common being hardware correction and resampling of the hyperspectral image data. A further description of prior art is given in the section below.
Prior Art
Most push-broom spectrometers operate as follows: they project a one-dimensional image of a very narrow scene area onto a two-dimensional pixel array. A dispersing element (a diffraction grating or a prism) disperses the light in such a way that, instead of a one-dimensional image elongated in one direction, each column of the array contains the spectrum of one small scene area, as shown in figure 2. The two-dimensional image of the scene is then obtained by scanning. The final data form a so-called "datacube", which represents a two-dimensional image with the third dimension containing the spectral information for each spatial pixel.
Due to various optical aberrations in real-world hyperspectral cameras, the spectral information is not captured perfectly. Spectral misregistration occurs when a given pixel on the sensor is expected to receive light from a certain spectral band but instead receives slightly wrong wavelengths. Spatial misregistration occurs when a pixel on the sensor receives a certain spectral band but looks at a slightly wrong area of the scene. Misregistration is discussed in more detail in the article "Spectral and spatial uniformity in pushbroom imaging spectrometers" by Pantazis Mouroulis, 1999.
Both types of misregistration may severely distort the spectral information captured by the camera. Therefore they should be corrected to a small fraction of a pixel. In hyperspectral cameras for the visible and near-infrared region (VNIR), spectral misregistration can be corrected by oversampling the spectral data and resampling it in postprocessing (since most of the sensors have many extra pixels in the spectral direction, such a camera will still have good spectral resolution). However, it is normally desirable to correct the spatial misregistration in hardware as well as possible.
Spatial misregistration in hyperspectral cameras is caused by two factors:
1). Variation in position of the PSF's centre of gravity as a function of wavelength, usually called spectral keystone. This is shown in figure 4.
2). Variation in size and shape of the PSF as a function of wavelength. This is shown in figure 5.
Since the positions of the PSF's centre of gravity for different wavelengths must not differ by more than a small fraction of the pixel size (even a deviation as small as 0.1 of a pixel may introduce noticeable errors in the measured spectrum), optical design of such systems is very challenging. Keeping the size and the shape of the PSF similar for all wavelengths is perhaps even more difficult.
In general, pixel count in hyperspectral cameras is very modest (compared to traditional imaging systems such as photo cameras), since it is not enough just to produce a reasonably sharp image; optical aberrations have to be corrected at subpixel level. Optical design becomes extremely difficult if higher spatial resolution is required.
Another serious challenge is to build the designed camera. Manufacturing and centration tolerances for hardware corrected cameras (even with relatively modest spatial resolution of 300-600 pixels) can be very tight. As a result of such strict requirements to the image quality, the new hyperspectral cameras have more or less converged to a few designs which offer somewhat acceptable correction of spatial misregistration.
An Offner relay with a convex diffraction grating has magnification -1 and is often used in hyperspectral cameras ("Optical design of a compact imaging spectrometer for planetary mineralogy" by Pantazis Mouroulis et al., Optical Engineering 46(6), 063001, June 2007; Patent US 5,880,834). It can be designed to have reasonably small spectral keystone. Variations in the PSF's size and shape can be corrected to some extent, but there are too few optical surfaces to make this kind of correction perfect. In a real system, manufacturing and centration tolerances degrade the spatial misregistration further. Even though the Offner relay is not very sensitive to decentration, the tolerances can be very tight in cameras with high spatial resolution. The minimum, i.e. the fastest, F-number in Offner cameras is limited to approx. F2.5. Polarisation dependency of the diffraction grating may be a problem.
The Dyson relay is another example of an optical system used in hyperspectral imaging ("Optical design of a coastal ocean imaging spectrometer" by Pantazis Mouroulis et al., 9 June 2008 / Vol. 16, No. 12 / OPTICS EXPRESS 9096). It has magnification -1 and uses a concave reflective grating as a dispersive element. There is also at least one refractive element. The F-number can be significantly lower (faster) than in Offner systems. The system can be quite small. However, extremely tight centration requirements make it difficult to achieve low misregistration errors even with low resolution sensors. Both the slit and the detector face the same optical surface, therefore stray light is often a problem. Also, it is a challenge even to place the detector close enough to the first optical surface (as close as is required in the Dyson design). Due to extremely tight tolerances and practical difficulties in placing the detector, the resolution of Dyson cameras is not particularly high.
Design and manufacturing of good foreoptics (i.e. foreoptics which would take full advantage of the low misregistration error in the following relay) for both Offner and especially Dyson relays is very challenging.
Some manufacturers base their hyperspectral cameras on various proprietary lens systems. Performance of such cameras is more or less similar to the performance of the Offner based cameras. In a push-broom hyperspectral camera, rays with different wavelengths are focused on different parts of the sensor. Therefore, compared to more traditional imaging systems (such as photolenses), sensor tilt can be introduced (United States Patent US 6,552,788 B1). This tilt can be used as an additional parameter when correcting keystone, smile, and PSF variations in the optical system. Of course, the tilt will not eliminate keystone etc. completely - it merely offers more flexibility in optimizing the system's performance. However, in relatively complex systems, where many such parameters are available already, introduction of this additional parameter may not lead to any significant improvements in keystone, smile, and PSF variation correction.
A known alternative to precise correction of spatial misregistration in hardware is resampling. Since the most challenging requirements are lifted in resampling cameras, the optical design becomes similar to traditional imaging optics. This gives a possibility to design optics with lower (faster) F-number, higher resolution etc. The negative side is the fact that the residual spatial misregistration after resampling with an approx. 1x factor is quite large. In order to bring it down to an acceptable level, downsampling by a factor 2x or more is usually required. Therefore the full sensor resolution is not utilised. Also, the necessity to capture two times more pixels than in the case of hardware corrected systems may slow down the framerate and processing speed.
The variable filter camera is another way to capture hyperspectral data. The principle of operation is described in US Patent 5,790,188. This camera can be quite small compared to the other hyperspectral cameras. However, it may be necessary to resample the captured data, unless the scanning motion is stable with subpixel accuracy. Also, the spectral resolution becomes very limited if a reasonably low F-number is required.
If a hyperspectral camera has to work in a very wide spectral range, it may be beneficial to split this range between two or more sensors, as described in the NASA PIDDP Final Report (May 1st 1996) «A Visible-Infrared Imaging Spectrometer for Planetary Missions». An example of such an imaging spectrometer is shown in Fig. 3 of this report. Since both sensors share foreoptics and slit, it is possible to get very good spatial coregistration in the along-the-track direction. However, in the across-the-track direction (where keystone is normally measured) it will be nearly impossible to achieve good (i.e. a few percent of a pixel) coregistration. This drawback of the multiple sensor approach is explained in the Final Report of the «Concept for Future Visible and Infrared Imager» Study, by Astrium GmbH for ESA, Doc. No FI-RP-ASG-0007 (Chapter 4.5.3, Page 4-33).
If the slit 2040 in the camera from Fig. 2 is replaced by a thin plate with pinholes arranged in a two-dimensional array, it becomes possible to capture the datacube in one exposure without scanning. This approach, together with possible enhancements, is thoroughly described in US Patent Applications 2006/0072109 A1 and 2008/0088840 A1. While being great for some applications (such as multispectral video), this approach severely limits the spatial and/or spectral resolution of cameras (if compared with the push-broom approach). Also, cameras built on this principle are just as prone to smile and keystone errors as the previously described push-broom cameras.
The necessity to correct spectral keystone to a very small fraction of a pixel is the principal driver restricting the spatial resolution of existing hyperspectral cameras, the possible design approaches, the speed (i.e. light gathering ability), and the final data quality.
Resampling hyperspectral cameras, while being free of the first three restricting factors, normally require downsampling of the captured data by a large factor, which reduces the resolution (i.e. pixel count) of the final data. In other words, even though it is possible to design and manufacture a resampling camera with very sharp optics and very high pixel count, the captured data have to be downsampled by a relatively large factor in order to match the data quality from a hardware corrected camera. This downsampled data may still have higher pixel count than the data from a hardware corrected camera, but clearly the full resolution of the sensor will not be utilized.
To sum up, this means that in state-of-the-art systems the requirement for keystone correction in the optics typically limits the maximum resolution and the light collecting capabilities of the system, not to mention the number of feasible optical design solutions.
Even when the keystone has been corrected as well as possible using all available techniques, the remaining keystone will still be significant for a vast majority of systems. In the few systems where the keystone appears to be closer to acceptable, the overall resolution or pixel count is relatively low. The cause of this is that the precision of the keystone correction is linked to the pixel count for current state-of-the-art systems. Physically, in any given system keystone is a certain fraction of the image size. When one increases the pixel count for a given image size, keystone as a fraction of the pixel size will therefore increase.
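This scaling can be illustrated with a quick calculation (the keystone fraction below is an assumed value, for illustration only):

```python
# Keystone is physically a fixed fraction of the image width, so in
# pixel units it grows linearly with the pixel count across the image.
keystone_fraction = 0.0002        # assumed: keystone = 0.02 % of image width

for pixel_count in (300, 600, 1200):
    keystone_pixels = keystone_fraction * pixel_count
    print(pixel_count, round(keystone_pixels, 3))   # 0.06, 0.12, 0.24 pixels
```

Doubling the pixel count for the same optics thus doubles the keystone expressed in pixels, which is why high-resolution hardware corrected systems are so hard to build.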
Description of the invention
The current invention introduces new "light mixing" elements, a kind of imaginary pixels, combined with a novel approach to restore image information corresponding to these imaginary pixels based on image data from a traditional image sensor. Compared to existing hyperspectral cameras the current invention has several advantages, namely it:
1) Significantly reduces spatial misregistration relative to the pixel size compared to the existing hardware corrected solutions.
2) Removes the necessity to correct spatial misregistration in the imaging optics. This opens up a possibility to use other optical designs than the traditional "hyperspectral solutions" such as Dyson or Offner relays. Therefore the spatial resolution (and the pixel count) as well as the light throughput of the imaging optics can be greatly increased. The manufacturing and centration tolerances at the same time will not have to be tighter than in current systems. On the contrary, they may be more relaxed.
3) Keeps resolution and pixel count of the recorded data nearly at the same level as the sensor pixel count.
Figure 1 shows the basic principle of the current invention. Foreoptics (2020) project the scene onto the slit plane (2040).
The slit plane (2040) contains an optical component (3100) which mixes the light in such a way that the spatial content of the signal has slightly lower resolution than the spatial resolution of the sensor. An example of such a component is an array of square chambers (3000) with reflective walls as shown in figures 1 and 3 a). The number of chambers has to be at least a few percent lower than the pixel count of the sensor (2100) in the spatial direction. Each chamber (3000) contains the radiance and spectral content of a small scene area which will be recorded as one pixel in the final captured data. The projection of this scene pixel onto the slit will hereafter be referred to as a "mixel".
A mixel or an array of mixels has feature sizes in the micron and tens of microns range for the most common wavelengths and pixel sizes, therefore the machining tolerances are stringent. One possible method to manufacture the mixels or array of mixels is to use a high-aspect-ratio micromachining technique using deep X-ray lithography. One feasible option is often called the "LIGA" process. A publication describing such processes is "High-Aspect-Ratio Micromachining Via Deep X-Ray Lithography", H. Guckel, Proceedings of the IEEE, Vol. 86, No. 8, August 1998. For systems with more relaxed tolerance requirements, laser machining or possibly other methods can be used to manufacture the array of mixing chambers. For some optical designs it might be beneficial to have varying mixel lengths (ref. figure 7 h) and/or a curved mixel array (ref. figure 7 b, c, f, g).
The slit with the mixel array can be protected from contaminants, such as dust particles and gas, by transparent windows or lenses.
These windows can be placed far enough from the slit, so that any scratches, particles, and other imperfections on the surfaces of these windows would appear very blurry in the slit plane, sensor plane, or any other corresponding intermediate plane.
The slit enclosure formed by the windows and/or lenses can be filled with a gas such as nitrogen in order to prevent oxidation of the reflective surfaces of the mixel array, i.e. introducing a protective atmosphere. This enclosure can be sealed, or it can be purged with the gas in question. Another alternative is to have vacuum in the enclosure.
The relay optics (2060) projects the image of the slit (3100) onto the sensor (2100). The image of each mixel has to be (at least a few percent) larger than one sensor pixel. Also, in order to reduce noise in the final data it is beneficial to make the projection of the mixel array at least 1 pixel shorter than the length of the sensor, so that for every mixel (including the mixels at the ends of the mixel array) most of the energy is captured by the sensor. The dispersive element (2080) in the relay optics (2060) ensures that the image of the slit is dispersed approximately perpendicular to the slit (3100).
The processing unit restores the intensity content of each mixel (for each spectral channel) based on the sensor output.
Restoring equations
After the light has been captured by the sensor, it is possible to calculate with high accuracy the total amount of light inside each mixel for each wavelength. The result of these calculations will accurately represent the spatial and spectral content of the captured scene.
The method to perform these calculations is described below. First, the general equations are presented. Then, a simple example which illustrates the principle for the case of infinitely sharp optics is provided (the chapter "Flat light distribution"). Finally, it is shown how to take into account the blur introduced by the optics (the chapter "Flat light distribution with transitions"). Again, a simple numerical example is provided.
The general equations
Let us consider the situation where we want to restore N mixels from M recorded pixels (M>N). The keystone is then equal to (M-N) pixels. This situation is shown in Figure 8, which shows the mixels and the corresponding recorded pixels for the general case. We can now set up the following set of equations:
$$S^R_m = \sum_{n=1}^{N} q_{mn}\,S_n, \qquad m = 1, \dots, M \qquad (\text{Eq. 1})$$
where:
S_n is the signal content of mixel #n.
S^R_m is the signal content recorded in pixel #m on the sensor.
q_mn is the fraction of the signal content of mixel #n that is recorded in pixel #m on the sensor.
N is the total number of mixels.
M is the total number of pixels recorded on the sensor.
Here S^R_m is known (measured) and q_mn is known (can be calculated) for all m and n when the keystone for the system and the "shape" of the light distribution from the mixel (as it is projected onto the sensor) is known.
Equation 1 can be written in matrix form:
$$\mathbf{S}^R = \mathbf{q}\,\mathbf{S} \qquad (\text{Eq. 2})$$
Since typically only a couple of mixels contribute to each recorded pixel, most of the coefficients q_mn are equal to zero. The matrix will then typically have the form:

$$\begin{bmatrix} S^R_1 \\ S^R_2 \\ \vdots \\ S^R_M \end{bmatrix} = \begin{bmatrix} q_{11} & 0 & \cdots & 0 \\ q_{21} & q_{22} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & q_{MN} \end{bmatrix} \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_N \end{bmatrix} \qquad (\text{Eq. 3})$$

where the coefficients q_mn are nonzero only along the diagonals and zero everywhere else.
The matrix q describes important properties of the camera and can be called the "camera model".
The matrix system (Eq. 2 and 3) can now be solved for the unknowns S_n. Note that the system has more equations than unknowns (M>N); in fact, each extra pixel of keystone gives one extra equation. For the ideal case when there is no noise in the system, the matrix system is compatible, i.e. it can be solved exactly. For a real system with noise, the system is overdetermined and an optimisation method such as, for instance, the least squares method must be used to obtain the solution. The process described above can be repeated for all spectral lines. This means that even though the spectral lines have different length (i.e. different keystone) they will all be converted to the same final grid with no loss of resolution with regard to mixel count. Of course, the number of mixels has to be lower than the number of sensor pixels for the method to work.
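The restoring step amounts to solving an overdetermined banded linear system in the least-squares sense. A minimal sketch in Python/NumPy (the camera-model values below are constructed for illustration, assuming four mixels imaged onto five pixels with a flat light distribution):

```python
import numpy as np

# Illustrative "camera model" q: M = 5 recorded pixels, N = 4 mixels,
# i.e. one pixel of keystone. With a flat light distribution each mixel
# is imaged onto 5/4 = 1.25 pixels, and q[m, n] is the fraction of
# mixel n that falls onto pixel m.
q = np.array([
    [0.8, 0.0, 0.0, 0.0],
    [0.2, 0.6, 0.0, 0.0],
    [0.0, 0.4, 0.4, 0.0],
    [0.0, 0.0, 0.6, 0.2],
    [0.0, 0.0, 0.0, 0.8],
])

true_mixels = np.array([10.0, 30.0, 100.0, 50.0])
recorded = q @ true_mixels                 # noise-free sensor readout

# M > N: the system is overdetermined, so solve it in the
# least-squares sense, as the text prescribes for noisy data.
restored, *_ = np.linalg.lstsq(q, recorded, rcond=None)
print(restored)                            # ≈ [10, 30, 100, 50]
```

In the noise-free case the least-squares solution reproduces the mixel values exactly; with sensor noise it returns the best fit in the sense of Eq. 2.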
Flat light distribution
Let us look at the case when the light distribution from each mixel is flat. An example with four such mixels recorded onto five sensor pixels is shown in figure 10 and, in a somewhat simplified form, in figure 9. Figure 10 shows the mixels with flat light distribution recorded onto the sensor pixels.
The corresponding matrix equations are given by:
$$\begin{bmatrix} 8 \\ 20 \\ 52 \\ 70 \\ 40 \end{bmatrix} = \begin{bmatrix} 0.8 & 0 & 0 & 0 \\ 0.2 & 0.6 & 0 & 0 \\ 0 & 0.4 & 0.4 & 0 \\ 0 & 0 & 0.6 & 0.2 \\ 0 & 0 & 0 & 0.8 \end{bmatrix} \begin{bmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \end{bmatrix} \qquad (\text{Eq. 4})$$
and give the correct mixel values [10 30 100 50] when solved. This example is noise free and the system is therefore compatible, but in real life the system will be overdetermined.

Flat light distribution with transitions
Before the mixels with flat light distribution are recorded onto the sensor, the signal passes through optics which 'smears' the signal somewhat. What happens when the signal from a mixel is smeared is that part of the signal leaks into the neighbouring mixels, see figure 11. If it is known how the signal is smeared (i.e. the 'shape' of the transition is known), then the original content (S_n) of the mixel can be restored as before.
Figure 11 shows the mixel with flat light distribution and signal content S_n (upper figure), and the corresponding mixel where the signal is smeared and has leaked into the neighbouring mixels (lower figure).
Figure 12 shows an example of a third order polynomial transition used to model the smear effect, including the details of such a transition. The transition starts at x=x_1 and ends at x=x_2. The original signal content (before smearing) of mixel #1 is equal to y_1, and the original signal content of mixel #2 is equal to y_2 (the width of the mixel is set equal to 1). As an example, we will show how the restoring is done for transitions that can be described by third order polynomials (however, the principle applies to any type of transition as long as the shape of the transition is known).
The equation that describes a third order polynomial transition is

$$y(x) = a\,(x - x_0)^3 + c\,(x - x_0) + y_0 \qquad (\text{Eq. 5})$$

where a and c are two constants that can be determined from the boundary conditions y(x_1) = y_1 (or equivalently y(x_2) = y_2) and y'(x_1) = 0 (or equivalently y'(x_2) = 0), which gives

$$a = -\frac{2(y_2 - y_1)}{(x_2 - x_1)^3}, \qquad c = \frac{3(y_2 - y_1)}{2(x_2 - x_1)} \qquad (\text{Eq. 6})$$

The transition has odd symmetry about its centre, where x_0 and y_0 are given by

$$x_0 = \frac{x_1 + x_2}{2}, \qquad y_0 = \frac{y_1 + y_2}{2} \qquad (\text{Eq. 7})$$
In order to find the signal content ΔS_n^m of the area that lies between x=x_a and x=x_b of mixel #n and that is recorded onto pixel #m on the sensor, one must calculate

$$\Delta S_n^m = \int_{x_a}^{x_b} y(x)\,dx \qquad (\text{Eq. 8})$$

where y(x) is the third order polynomial transition described above. Here a_n and c_n are the coefficients in the equation that describes the transition between mixel #(n-1) and mixel #n.
Figure 13 shows an example where four such mixels are recorded onto five (+ two) sensor pixels. The mixels with transitions are recorded onto the sensor pixels. The transition extends a fraction 0.2 into each side of the mixel. The corresponding matrix equations are given by:
(Matrix equation: the seven recorded pixel values expressed in terms of the four unknown mixel values, with coefficients q_mn obtained from the transition integrals above.)
and give the correct values (original mixel content) [10 30 100 50] when solved for the unknown mixel values. This example is noise free and the system is therefore compatible, but in real life the system will be overdetermined.
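The cubic transition and its integral can be sketched numerically. The helper names below are ours, and the coefficient formulas follow directly from the boundary conditions stated above:

```python
def transition_coeffs(x1, x2, y1, y2):
    """Coefficients of the cubic transition y(x) = a*(x-x0)**3 + c*(x-x0) + y0,
    which has zero slope at both ends and odd symmetry about its centre."""
    x0, y0 = 0.5 * (x1 + x2), 0.5 * (y1 + y2)
    a = -2.0 * (y2 - y1) / (x2 - x1) ** 3
    c = 3.0 * (y2 - y1) / (2.0 * (x2 - x1))
    return a, c, x0, y0

def transition_integral(xa, xb, a, c, x0, y0):
    """Signal content between x=xa and x=xb: the integral of y(x)."""
    F = lambda x: a * (x - x0) ** 4 / 4 + c * (x - x0) ** 2 / 2 + y0 * x
    return F(xb) - F(xa)

# Transition from mixel value y1=10 to y2=30 over the interval [0.8, 1.2].
a, c, x0, y0 = transition_coeffs(0.8, 1.2, 10.0, 30.0)
y = lambda x: a * (x - x0) ** 3 + c * (x - x0) + y0
print(y(0.8), y(1.2))     # ≈ 10.0 and ≈ 30.0 (endpoints, up to rounding)
```

The slope is zero at both endpoints, and by the odd symmetry the integral over the full transition equals y_0·(x_2 − x_1), so no signal is gained or lost by the smearing itself.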
Keystone and PSF variations in the foreoptics
It is important to remember that the data is restored (i.e. the keystone and PSF are corrected) only for the relay optics. The foreoptics still needs to be as keystone free as possible, with as similar PSF for different wavelengths as possible. The keystone part can be solved by using reflective optics, which is (of course) completely keystone free. Now about the PSF variations. By using relay optics with magnification 0.2...0.5x (or -0.2...-0.5x) it is possible to achieve F-numbers as low as F1-F1.3 in the image space and at the same time have a very modest F-number in the slit plane (~F4-F6). Therefore:
- The foreoptics can be designed and made diffraction limited.
- The size of the Airy disk becomes quite small compared to the pixel size (and therefore the disk is small compared to the mixel size too). This means that even though there is a difference in PSF size for the shorter and the longer wavelengths, this difference is quite small compared to the mixel size and therefore it will not affect the final data as much as it does in conventional systems.
One optimisation that can be done for the array of mixing chambers is to put refractive optics in front of the array to equalise the point spread function for different wavelengths. This refractive optics will defocus different wavelengths differently to compensate for the variation of the point spread function with wavelength in the foreoptics.
For a given F-number of the foreoptics, in the standard embodiment without refractive optics there is an optimal ratio between the width of the mixing chamber and its length. For this ratio the best mixing will be achieved. This ratio should be adjusted in the embodiment using refractive optics in front of the mixing chambers. The optimal ratio for the different optical solutions can be achieved using advanced optical simulation software or can be found analytically. Yi-Kai Cheng and Jyh-Long Chern have in the academic paper "Irradiance formations in hollow straight light pipes with square and circular shapes" (Vol. 23, No. 2 / February 2006 / J. Opt. Soc. Am. A) investigated light propagation in hollow straight light pipes in more detail. The refractive optics in front of the array of mixing chambers may be implemented as a lens system or as one or more flats made of an optically transparent material, where one takes advantage of the longitudinal chromatic aberration of the material. These flat(s) could be made of, but are not limited to, any standard optical material.
Calibration
The relative position of mixels and pixels in the across-the-track direction must be known precisely in order to restore the mixel values correctly.
A single mixel at one end of the array (figure 3b), which is separated from the rest of the array by a relatively large gap, would illuminate 2-3 pixels of the sensor for each spectral channel. If the PSF of the optics is known, the position of the sensor relative to the mixel array can be calculated. This single mixel may have any shape; it may also be a pinhole, or a dedicated light source. The main requirement is to have a known intensity distribution inside it.
Placing two such mixels (one on each side of the mixel array, figure 3c) will provide the possibility to monitor the position and the length of the sensor relative to the mixel array.
Instead of using a part of the main sensor for measuring the position of the reference mixels or light sources, it is possible to use small dedicated sensors beside the main sensor. Also, additional optics such as mirrors and/or prisms can be used to direct the light to these small sensors. As has been pointed out, in order to restore the mixel content correctly, the PSF of the relay optics (i.e. the shape of the transition between every two adjacent mixels) should be known. This can be found directly in the optical design software, but it will not be very precise since the PSF is strongly affected by tolerances in the optical system. The best would be to measure the PSF directly on the assembled system for several wavelengths and field points. For this purpose a secondary array of mixels (or pinholes) should be placed parallel to the primary array (figure 3d). Every two adjacent elements of the secondary array should have a sufficiently large gap in between - so that a single PSF can be clearly observed. The simplest (but again not very precise) method of measuring the PSF would be to use the signal from the 3 consecutive sensor pixels where a single mixel from the secondary array is imaged. Much more precise information about the PSF can be obtained by taking several measurements from those sensor pixels and changing the position of the mixel relative to the sensor for each measurement. This can be done by a high resolution translation stage.
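The stepping procedure above can be illustrated with a small simulation (not part of the specification; the Gaussian PSF shape, its width and the step size are invented for illustration). Each sub-pixel shift of the mixel relative to the sensor yields one sample of a pixel's response, and together the samples form a finely oversampled profile of the blur:

```python
import numpy as np
from math import erf, sqrt

SIGMA = 0.4  # assumed Gaussian PSF width, in pixel units (illustrative)

def pixel_response(shift, sigma=SIGMA):
    # Light collected by one sensor pixel when the blurred image of a
    # narrow mixel is offset from the pixel centre by `shift` pixels:
    # the integral of the Gaussian PSF over the one-pixel aperture.
    a = (shift + 0.5) / (sigma * sqrt(2.0))
    b = (shift - 0.5) / (sigma * sqrt(2.0))
    return 0.5 * (erf(a) - erf(b))

# Translation-stage sweep in 1/20-pixel steps: each step gives one
# sample, and the sweep yields an oversampled profile of the PSF.
shifts = np.linspace(-2.0, 2.0, 81)
samples = np.array([pixel_response(s) for s in shifts])
```

From such a profile the transition shape between adjacent mixels can be tabulated far more finely than the native pixel pitch allows.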
When the PSF is known for several wavelengths and field points, it is possible to use the same secondary array for measuring the keystone of the optics. While it is possible to get the information about keystone from the optical design software, measuring the keystone of the real assembled system makes it possible to relax the tolerances. It is also possible to determine the position and the length of the mixel array relative to the sensor, as well as keystone and PSF, by restoring the data from a natural scene and minimising (by varying the assumed position, length, keystone and/or PSF) the inequalities in the solved overdetermined system of linear equations.
To restore the mixel values we must solve the restoring equation, which can be written in short form as:
q · S = SR    (2)

where

q  - [M × N] matrix with M > N
S  - [N × 1] vector
SR - [M × 1] vector
The system is overdetermined and must be solved by use of an optimisation method, such as for instance the least squares method. The least squares method searches for the solution Ŝ that gives the best fit q · Ŝ to the recorded data SR, i.e., that minimizes the error vector Δ, where
Δ = SR − q · Ŝ

and

‖Δ‖ = √(Δ1² + Δ2² + ... + ΔM²)
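As an illustrative sketch (not part of the specification), the least squares restoration can be demonstrated numerically; the matrix dimensions and the 0.8/0.2 weights below are invented toy values standing in for the real camera model:

```python
import numpy as np

# Toy dimensions: M sensor pixels record N mixels (M > N).
M, N = 6, 5

# Hypothetical camera-model matrix q: each mixel is imaged onto
# slightly more than one pixel, so each column spreads over two rows.
q = np.zeros((M, N))
for j in range(N):
    q[j, j] = 0.8
    q[j + 1, j] = 0.2

S_true = np.array([3.0, 7.0, 2.0, 5.0, 4.0])   # true mixel values
SR = q @ S_true                                 # noise-free sensor readout

# Least squares solution of the overdetermined system q · S = SR
S_hat, *_ = np.linalg.lstsq(q, SR, rcond=None)
delta = SR - q @ S_hat                          # error vector Δ
print(np.allclose(S_hat, S_true))               # → True (no noise)
```

With noise-free data the fit is exact and Δ vanishes; with sensor noise, Δ grows with the noise level, which is what the calibration method below exploits.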
The more noise that is present in the sensor pixels, the larger the error vector Δ will be. This fact can be used to calibrate the system with respect to the relative position between mixels and pixels and/or the relative length of the slit and/or keystone and/or PSF, based on the information in the captured image only.
Consider an example where we want to calibrate the system with respect to the relative position between mixels and pixels. An assumed relative position that differs from the correct one is equivalent to noise in the sensor pixels, and will make it more difficult to fit a solution (for the mixel values) to the recorded data (the sensor pixels). This means that the resulting error vector Δ will be larger, and the more incorrect the assumed relative position is, the larger the vector Δ will be. The smallest value of the error vector, Δmin, is found when the assumed relative position is correct. By varying the assumed relative position and determining when the error is smallest, the correct relative position between mixels and sensor pixels can be determined. Figure 14 demonstrates the principle. The figure shows the error Δ as a function of assumed relative position. The minimum error vector Δmin is found for the correct relative position X0.
The same method can be used to calibrate for the relative length of the slit and/or keystone and/or PSF.

Two-dimensional mixel array, correction of smile
The mixel array can be made two-dimensional and placed close to the sensor plane or in any intermediate image plane after the dispersive element. The most practical location for it is the sensor plane. This gives a possibility to correct not only keystone but also smile.
The array is distorted according to the smile and keystone of the optical system. Since the size of each mixel is slightly larger than the size of the sensor pixels, the light from each mixel will be distributed between 4 or more sensor pixels. Since the number of pixels is larger than the number of mixels, and the light distribution inside each mixel is known, it is possible to restore the intensity content of each mixel based on the sensor output, similarly to what is done in the one-dimensional case. The principle is shown in figure 15. The restored intensity content of the mixels forms a smile and keystone free image as long as the geometry of the two-dimensional mixel array replicates the smile and keystone of the optical system. The misregistration error caused by differences in PSF (for the optics before the two-dimensional mixel array) for different wavelengths will in this case remain uncorrected. Therefore, the optics does not need to be corrected for smile or keystone, but must have as similar a PSF for different wavelengths as possible.

Data processing from a system calibration perspective
The PSF of the camera optics is measured for every extra mixel and/or pinhole (3150) (3170) (3190) (ref. figure 3) for one or more wavelengths. The PSF for the secondary mixel array (3200) is measured for several wavelengths. Measurements of PSF are done by moving the mixel array relative to the sensor (or vice versa) in small steps. This can be done with a commercially available or custom-made translation stage.
If we do not have a monochromatic light source or a light source with distinct spectral lines during the calibration procedure, it might be beneficial to use a shutter to close the first array of light mixing elements while the second array is being used for calibration. During normal use it will similarly be beneficial to close the secondary array using another shutter.
For the camera with the secondary mixel array (3200), when the PSF is known, the secondary mixel array (3200) can be used to characterise the keystone of the camera optics. The main mixel array (3100) (ref. figure 3 d) should be obscured using a shutter or similar, and only the light passing through the secondary mixel array should be used. The measured keystone and the PSF data should be used to form the matrix q in equation 2. The secondary mixel array should then be obscured, and the main mixel array with the end pinholes/mixels (3150) (3170) (3190) should be opened. If the camera does not have the secondary mixel array, the keystone and the PSF data can be imported from optical design software such as ZEMAX by ZEMAX Development Corporation.
The signal from the end pinholes/mixels (3150) (3170) (3190) and the PSF data for the corresponding field points are used to determine the position and/or length of the mixel array relative to the sensor. In principle, this position and this length can be predetermined by the mechanics of the camera, but since it is desirable to know them with subpixel accuracy, the easiest way is normally to monitor the signal from the end pinholes/mixels. Note that if the end pinholes/mixels are present, more or less every captured frame, i.e. every spatial line, will have the data about the current relative position and/or length of the mixel array relative to the sensor.
The information about the position and length of the mixel array relative to the sensor is used to adjust the coefficients of the matrix q in Eq. 2. Alternatively, several versions of the matrix q for different relative positions and lengths can be generated in advance in order to reduce the amount of calculations for finding mixel values.
The mixel values for different wavelengths can now be found by solving the overdetermined equation 2 and optimising the solution by minimising the inequalities in the solved system.
Any or all of the described calibration steps can be skipped. Instead, the data from a natural scene can be restored for different assumed positions and/or lengths of the mixel array relative to the sensor, and/or PSFs and/or keystone. The solution where the inequalities in equation 2 are at the minimum is the most accurate.
Also, if the image sensor contains a dead or hot pixel, the inequalities in the solved equation 2 will increase. If there is a reason to suspect that one of the sensor pixels is dead or hot (for example, by inspecting a few images), it is possible to exclude that particular pixel from equation 2. The corresponding mixel can still be calculated because it is imaged onto at least two adjacent sensor pixels in each spectral band. Then it can be checked whether the inequalities in the solved equation 2 were lowered by excluding the suspected dead or hot pixel from the calculations. If it is not known which of the sensor pixels is hot or dead, it is possible to exclude one of them at a time from equation 2, and then solve that equation. This action should be repeated for every suspected pixel of the sensor (or even for every pixel of the sensor). The solution of equation 2 with the lowest inequalities will indicate which pixel is hot/dead and therefore is to be excluded from the calculations. In case of a perfectly calibrated camera and ideal mixing, the inequalities in the solved equation 2 will occur only because of photon noise and sensor readout noise, i.e. they will depend on the brightness of the scene only. If the inequalities are larger than the brightness of the scene would suggest, this indicates that the calibration of the camera (i.e. the coefficients in the matrix q from equation 2) is not optimal. In other words, the inequalities in the solved equation 2 provide a useful diagnostic tool for the described hyperspectral camera.
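The leave-one-out test for a hot or dead pixel can be sketched with a toy model (not part of the specification; the 1.4-pixel mixel width and all numerical values are invented for illustration):

```python
import numpy as np

# Hypothetical camera model: 5 mixels, each 1.4 pixels wide, imaged
# onto 7 sensor pixels; q[i, j] is the fraction of mixel j's light
# that lands on pixel i.
N, width, M = 5, 1.4, 7
q = np.zeros((M, N))
for j in range(N):
    lo, hi = j * width, (j + 1) * width
    for i in range(M):
        overlap = max(0.0, min(hi, i + 1) - max(lo, i))
        q[i, j] = overlap / width

S_true = np.array([3.0, 7.0, 2.0, 5.0, 4.0])
SR = q @ S_true
SR[3] += 50.0                                   # pixel 3 is "hot"

# Exclude one sensor pixel at a time; the exclusion that minimises
# the inequalities of the solved equation flags the bad pixel.
residuals = []
for i in range(M):
    keep = [k for k in range(M) if k != i]
    S_hat, *_ = np.linalg.lstsq(q[keep], SR[keep], rcond=None)
    residuals.append(np.linalg.norm(SR[keep] - q[keep] @ S_hat))
print(int(np.argmin(residuals)))                # → 3
```

Because every mixel still illuminates at least two remaining pixels after any single exclusion, the mixel values stay recoverable, exactly as the text argues.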
If the information regarding the position of individual mixels relative to the pixels is incorrect, there will be errors in the restored data. Such errors will appear on 2D images as artefacts on sharp borders between objects. One way to check that the position and length of the mixels and pixels relative to each other are correct is to check that objects with known geometry (such as buildings) retain their rectangular shape. Also, in such cases it is possible to adjust the coefficients of the restoring matrix using the objects with known geometry in the image as a reference. The energy in each mixel is calculated correctly only if the light distribution at the output of the mixing chamber is independent of the light distribution at the input of the mixing chamber. In a push-broom hyperspectral camera the light in each spectral channel becomes partially coherent. The consequence of this partial coherence is ripples - small (in size) variations in light intensity across the output of the mixing chamber. These ripples will have different amplitudes in different parts of the mixing chamber, and their amplitudes will depend on the position of the light source at the input of the mixing chamber, i.e., the light distribution at the output will not be completely independent of the light distribution at the input anymore. For a realistic bandwidth of a single spectral channel this will not be very noticeable, but it is possible to further improve the accuracy of calculating the mixel values (i.e. when solving equation 2) if the mixel array is moved back and forth a few times during every exposure, perpendicular to the optical axis and parallel to the slit. The optimal amplitude of such movement is larger than the size of the ripples but smaller than the mixel width. As is obvious for a person skilled in the art, the PSF which is used for forming the matrix q in equation 2 includes both the PSF of the optics and the PSF introduced by the motion blur.
In a case when a mixel is clogged, it is possible to measure the amount of clogging and to adjust the restoring coefficients accordingly - so that the image is restored correctly. This may improve yield in slit manufacturing and/or allow use of a partially destroyed slit.
Three different scenarios are described below:
- The input of a mixel is partially clogged.
- The output of a mixel is partially clogged.
- A mixel is fully clogged.

If the input of a mixel is partially clogged, it still mixes light, i.e. it outputs even illumination, but the intensity is lower than it is supposed to be according to the scene. This will appear on the image as consistently darker pixels. Knowing the attenuation coefficient (i.e. the amount of clogging), one can fully recover the original brightness in postprocessing, and the only penalty will be an increase in noise. The amount of clogging can be measured by observing a uniform object (such as the sky, an integrating sphere, or an illuminated diffuse object placed between the foreoptics and the mixels), or by using statistics of the natural scene, where adjacent sensor pixels are assumed to capture a similar amount of energy over several exposures. If a mixel, when compared to its neighbours, is consistently darker equally in all spectral bands, then the input of this mixel is partially clogged.
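A sketch of estimating and removing the attenuation coefficient from a uniform-object recording (not part of the specification; the noise level, frame count and 60% throughput are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uniform-object recording: 50 exposures of 8 mixels,
# nominal level 100 with a little sensor noise; mixel 3 has a
# partially clogged input passing only 60% of the light.
frames = 100.0 + rng.normal(0.0, 1.0, size=(50, 8))
throughput = np.ones(8)
throughput[3] = 0.6
recorded = frames * throughput

# Estimate each mixel's attenuation against the typical (median)
# level, then divide it out; the only penalty is increased noise
# in the corrected column.
est = recorded.mean(axis=0) / np.median(recorded.mean(axis=0))
restored = recorded / est
```

The estimate `est[3]` lands close to the true 0.6 throughput, and dividing it out restores the clogged mixel to the common brightness level.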
If the output of a mixel is partially clogged, it still mixes light, i.e. it outputs even illumination in its unobscured part, and outputs no illumination in the obscured part. Figure 19 a) shows a mixel 3000 partially clogged on its output by a particle 3030. The mixel is imaged onto two sensor pixels 2100. The shape of the clogged part can be measured similarly to the PSF: by looking at a uniform object (such as an integrating sphere or an illuminated diffuse object placed between the foreoptics and the mixel array) and moving the mixels in small steps parallel to the direction of the mixel array (figure 19 a, b, c). This scanning motion may not be necessary in case of large keystone. If there is large enough keystone in the system (as large as the clogged part of the mixel or larger) and the uniform object is illuminated by a broadband source, then the shape of the clogged part can be found by comparing the illumination of the pixels where the part is imaged to the adjacent pixels in different spectral bands. In case of large enough keystone this will be equivalent to moving the slit in small steps.
If a mixel is clogged completely, the corresponding part of the scene cannot be captured, and will be recorded into the datacube as black. Alternatively, this mixel can be interpolated by using the data from the adjacent mixels.
The benefits of very low misregistration errors can also be utilised in systems where 2 or more sensors share common foreoptics and slit. If an ordinary slit is used, it is more or less impossible to get precise (i.e. a small fraction of a pixel) coregistration between different sensors, but when using mixels it is fairly straightforward: the calibration of the camera is handled independently for each sensor, but since each sensor captures the content of the same mixel array, the calculated mixel values should have as good coregistration for 2 or more sensors as they have for a single sensor. Also, with such good coregistration between 2 or more sensors it is possible to expand the use of such cameras to new applications such as high dynamic range hyperspectral imaging, where one of the two sensors captures the highlight information about the scene, and the second one captures the shadows of the same scene. And it is all done with nearly perfect spatial coregistration between highlights and shadows.

Brief summary of the different embodiments
Two main embodiments of the current invention have been described in the above text. These two embodiments differ in the implementation of the actual array of light mixing elements, mixels, but use the same concepts and principles to process, i.e. "restore", the data.
A first embodiment has a linear array of mixels (3100) placed in the slit plane (2040) or in any of the other intermediate image planes. This embodiment might be a standard one, ref. figure 3 a) and figure 7 a), or it might comprise other features as shown in figure 3 b, c, d, e or f. It could also have shapes as described in figure 7 b, c, d, e, f, g or h. A combination thereof, i.e. of features and/or shapes, will be apparent to the person skilled in the art.
A second embodiment comprises a two dimensional array of light mixing elements (3500). This two dimensional array will be placed directly in front of the image sensor (2100) or in any other intermediate image plane after the dispersive element. A variation of this second embodiment is to make the two dimensional array fit the optical errors of the system, i.e. so that it matches the smile and keystone of the optics. The processed or restored image data from this embodiment will be smile and keystone free.
The current invention is based on a novel principle of using light mixing elements constituting a kind of imaginary pixels which can be calculated based on data from image sensor pixels. The two embodiments described here cover all optical components all the way to the image sensor. At the output of the image sensor the electronics, including the data processing system, starts. In this section this will be referred to as the "data processing system". The tasks of the data processing system are to acquire and digitise data from the image sensor and do all necessary processing of and calculation on the data, so that restored image data corresponding to the mixels or imaginary pixels can be outputted and stored. It will also do all necessary "housekeeping" like controlling the image sensor, translation stages etc.
The two outer ends of different embodiments for the data processing system are described in figures 16 and 18. The standard-component-based system (figure 16) is built using a standard camera head (2000) or industrial camera combined with a PC (5000) and an interface controller (5100) used for communication with the camera. The processing will be done in the PC processor(s) or in additional processor boards (5050) which might comprise special processors suitable for the processing tasks. These processors (5050) could be CPUs, DSPs and the like. Custom software will be running on the PC controlling the system and acquiring and processing the data. A version of the camera data processing system in a self-contained embodiment is shown in figure 18. In this case the output is processed or corrected, and all this processing will be done by the electronics inside the camera so that it functions more or less like an industrial camera, however, with optics for hyperspectral use included. Figure 18 and its figure text give a more detailed description of this solution. As is obvious for a person skilled in the art, the implementation of the data processing system can be done using the two versions described in this current invention or any other solution in between a custom self-contained system and a standard-component system, or any other variations thereof.
The different processing steps can be grouped into logical blocks so that the functionality can be described even though it might be performed on different hardware implementations. The term "image processing unit" refers to a combination of software, firmware and hardware performing the required image processing tasks. The image processing unit comprises an "equation solver unit", which is the functionality described in the section "Restoring equations", and an "optimisation unit", which is the functionality described in "The general equations". As can easily be understood, the equation solver unit is solving the equation sets as given in the description and the optimisation unit is optimising the solution. The matrix q in equation 2 describes important properties of the camera and can be called the "camera model".

Description of the figures
Figure 1 shows the basic principle of the current invention. Foreoptics (2020) project the scene onto the slit plane (2040). The slit plane (2040) contains an array of light mixing elements (3100). Each element (called 'mixel') (3000) represents one spatial pixel in the final hyperspectral data.
The relay optics (2060) projects the image of the slit (3100) onto the sensor (2100). The image of each mixel (3000) is slightly (a few percent) larger than one sensor pixel. Also, to reduce noise in the final data it is beneficial to make the projection of the mixel array at least 1 pixel shorter than the length of the sensor, so that for every mixel (including the mixels at the ends of the mixel array) most of the energy is captured by the sensor. The dispersive element (2080) in the relay optics (2060) ensures that the image of the slit is dispersed approximately perpendicular to the slit. Different wavelengths λ1 and λ2 are dispersed differently.
Figure 2 shows the prior art. It illustrates the general principle of a push-broom hyperspectral camera. The foreoptics (2020) create an image of the scene in the slit plane (2040). The slit (2200) acts as a field stop and cuts out a narrow portion of the image and transmits it further into the relay system (2060). The relay system (2060) images this narrow scene region onto the sensor (2100). Due to the dispersive element (2080) in the relay, the image of the slit is dispersed on the sensor - so that the different wavelengths (λ1, λ2) are imaged onto different parts of the sensor.

Figure 3 shows a slit (3100) with mixing chambers and the additional holes/chambers for calibration purposes.
Various configurations are shown.
a) A single array of mixels (3100).
b) A single array of mixels with an additional mixel at one end of the slit (3150). This single mixel makes it possible to measure the relative position of the mixel array and the sensor or their relative length.
c) A single array of mixels with two additional mixels - one on each end of the mixel array (3150, 3170). These single mixels make it possible to measure the relative position of the mixel array and the sensor as well as their relative length.
d) The array of mixels (3100) like in a), and a secondary array of mixels (3200) with relatively large gaps between the mixels. The secondary array makes it possible to measure the PSF and keystone of the optics before taking the measurements.
e) The mixels in the array (3100) are not necessarily square.
f) The additional mixels (3190) at the ends of the slit are not necessarily square.
Of course all the features shown in figure 3 can be combined.
Figure 4 illustrates what a spectral keystone is. The centres of gravity of the PSF for different wavelengths are not located on the same vertical line, and the light coming from the same area of the scene is recorded as coming from slightly different areas depending on the wavelength.
Figure 5 shows that a keystone-like effect may occur in the final data even if the actual keystone of the optical system is zero. This phenomenon is caused by variation of the PSF for different wavelengths.
Figure 6 shows various possible shapes of mixels.
Figure 7 shows that the mixel array can be straight (a), or curved (b, c) in order to compensate for optical aberrations. Also, the mixels can have different sizes (d, e). Of course, these two approaches can be combined (f). Figure 7 (g) shows a curved mixel array, while (h) shows a mixel array with varying mixel chamber lengths.

Figure 8 shows the general example of how mixels are recorded on the sensor in the case of perfectly sharp optics.
Figure 9 shows a simple numerical example of how mixels are recorded on the sensor in the case of perfectly sharp optics.
Figure 10 illustrates that the mixel content can be perfectly restored from the recorded pixels.
Figure 11 shows how an image of a mixel may look if the relay optics is not perfectly sharp.

Figure 12 shows a possible representation of a transition between two mixels after they have been imaged onto the sensor by the relay optics.

Figure 13 shows how the mixel content can be restored after the mixel borders have been blurred in the relay optics, and the amount of blurring is known.
Figure 14 shows how the deviation between the calculated sensor pixel values (from the approximate solution for the mixel values) and the recorded sensor pixel values changes as a function of the assumed position of mixels relative to the sensor pixels.
When the assumed relative position between the mixels and the pixels is correct, the inequality is minimum. This can be used for measuring the relative position of mixels and pixels by analysing the captured data. The method can be extended for measuring more than one parameter simultaneously: for example both the relative position of mixels and pixels, and the relative length of the mixel array and the pixel array. In addition the PSF and/or keystone can be measured.
Figure 15a shows a two-dimensional mixel array (3500) on top of a two-dimensional pixel array or sensor (2100).
Figure 15b shows the same layout, but this time the mixel array is distorted (3550) to fit the smile and keystone generated by the camera optics. The restored mixel intensities will represent smile-free and keystone-free hyperspectral data.
Figure 16 shows a system set-up based on a computer (5000) with an interface controller (5100) for communicating with and controlling the camera body (2000) with sensor (2100). The camera body (2000) is connected to the spectrometer part (1000) with entrance aperture (1050). The computer (5000) will control the camera body and the spectrometer, and will also receive data acquired from the sensor (2100) and process them to restore corrected data in the standard CPU system of the computer or utilising one or more optional signal or graphics processing board(s) (5050). Temporary and processed data may be stored on a permanent storage device (5200) like a hard disk.
Figure 17 shows a simplified sketch of the data flow and the process steps in the signal processing related to the restoring of data. An analogue sensor signal (6000) is digitised giving digitised raw data (6050) which may undergo an optional pre-processing (6100) . At this level the raw data might be temporarily stored in a raw data storage (6200) for later processing or immediately go further in the chain to data restoring (6300) and/or calibration (6400) . The output of this is restored or corrected data (6500) which might be displayed right away or be stored in the final data storage means (6600) .
Figure 18 shows one possible implementation of the electronic system (7000) of a self-contained hyperspectral camera according to the present invention. The figure shows the image sensor (2100) feeding analogue signals into an amplifier system (7100) whose output is input to an analogue to digital converter (7200). The figure also shows electronics for camera control (7300) which will control a positioning stage and possibly the image sensor. Electronics for I/O control (7400), like external sync signals, is also shown. An interface controller (7500) is shown, and this might be controlling interface types like CameraLink, FireWire, USB, Ethernet, GigE etc. A memory unit (7600) is shown, and this might comprise one or more of Flash/EEPROM (7620), DRAM (7640), SRAM (7660), buffer or FIFO circuits (7680) or any other types of memory devices. All this is interconnected to the processor module (7700) comprising at least one processor (7780) which might be a general micro processor, a micro controller, a digital signal processor or an embedded processor in a field programmable gate array. The processor module (7700) might also comprise one or more additional processor units (7720). These additional processor units (7720) might be custom modules optimised for tasks or sub-tasks related to equation solving and optimisation of overdetermined equation sets.
Figure 19 shows some examples of a partially clogged light mixing element. Figure 19 a) shows a mixel (3000) partially clogged with matter (3030). The clogging matter is only present in the projection onto the rightmost pixel on the image sensor (2100). Figure 19 b) shows another scenario where the clogging matter has started to cross the border between pixels, and figure 19 c) shows a case where the clogging matter is present in the projection onto both pixels.
The table below lists the numbers used in the figures:

Number  Description
1000    Spectrometer, complete optical part
1050    Entrance aperture of spectrometer
2000    Camera body, comprising image sensor, control electronics, data acquisition system etc.
2020    Foreoptics
2040    Slit plane
2060    Relay optics
2080    Dispersive element
2100    Image sensor
2200    Slit of prior art type
3000    Mixing element, mixel
3100    Array of mixel elements, mixels, slit
3150    One additional mixel for position measurement
3170    A second additional mixel for position and length measurement
3190    Circular or pinhole shaped additional pixels
3200    Additional array of mixels for calibration purposes
3500    Two-dimensional mixel array
3550    Two-dimensional mixel array with distortion to correct for optical errors
5000    Computer
5050    Signal or graphics processing board
5100    Interface controller in computer, like CameraLink, Ethernet, USB, FireWire or similar
5200    Permanent storage device, hard disk, solid state disk and the like
6000    Analogue sensor signal
6050    Digitised raw data
6100    Pre-processing step or pre-processed data
6200    Storage means for raw data
6300    Data restoring function
6400    Calibration function
6500    Restored / corrected data
6600    Final data storage means
7000    Electronic system of self-contained camera
7100    Amplifier system
7200    A/D converter
7300    Electronics for camera control
7400    Electronics for I/O control
7500    Interface controller for CameraLink, Ethernet, USB, FireWire or similar
7600    Memory unit
7620    Flash/EEPROM
7640    DRAM
7660    SRAM
7680    Buffer or FIFO
7700    Processor module
7720    Custom or special task processor module
7780    Processor like micro processor, micro controller or DSP

Claims
1. A hyperspectral camera comprising
a) foreoptics (2020)
b) a slit (3100/2200)
c) relay optics (2060) with dispersive element (2080)
d) an image sensor (2100)
e) an image acquisition unit
and
f) an image processing unit
characterised by
g) an array (3100) of light mixing elements (3000), each mixing the incoming light so that the intensity and spectral content are mixed at the output, and having a dimension such that the projection by the relay optics of the output of each light mixing element onto the image sensor is slightly larger than an image sensor pixel, this array (3100) being in the slit plane, in the focal plane or in any other intermediate plane,
h) an image processing unit comprising a coefficient matrix with elements qm,n, the coefficient matrix describing the distribution of mixed light from the light mixing elements (3000) onto pixels on the sensor (2100), this image processing unit also comprising an equation solver unit for solving an overdetermined equation set, the image processing unit solving the equation set based on data from the image sensor and the coefficient matrix, and the image processing unit forwarding the solution to a data output unit
and
i) a data output unit outputting a corrected data cube.
2. A hyperspectral camera according to claim 1
characterised by
having a one dimensional array of light mixing elements (3100) in the slit plane (2040) or any other intermediate planes before the dispersive element (2080).
3. A hyperspectral camera according to claim 1
characterised by
having a two dimensional array (3500) (3550) of light mixing elements (3000) in the sensor plane or immediately in front of the image sensor (2100) or in any other intermediate planes after the dispersive element (2080).
4. A hyperspectral camera according to claims 1 and 2
characterised by
having means to measure the position of the one dimensional array of light mixing elements (3100) relative to the image sensor (2100) and/or measure the length of the one dimensional array of light mixing elements.
5. A hyperspectral camera according to claim 4
characterised by
- adding at least one light mixing element or any other hole (3150) (3130) or light source at one end of the one dimensional array of mixing elements for position measurement
and/or
- adding at least two light mixing elements (3150) (3170) or any other hole (3190) or light source, one in each end of the one dimensional array for length measurement, these extra elements positioned some distance from the one dimensional array itself.
6. A hyperspectral camera according to any of the claims 1, 2 or 4
characterised by
having a second array of light mixing elements (3200) with longer spacing than the primary array of mixing elements, these mixing elements in the second array being used for measurement of the point spread function and/or keystone of the relay optics.
7. A hyperspectral camera according to any of the claims 1, 2, 4, 5 or 6
characterised by
having means to move the array of light mixing elements relative to the sensor, preferably in small steps.
8. A hyperspectral camera according to any of the claims 1, 2, 4, 5, 6 or 7
characterised by
having calibration routines for measurement of the relative position of or the length of the array of light mixing elements or keystone or point spread function or any combination thereof.

9. A hyperspectral camera according to claims 1 and 2
characterised by
having an image processing unit creating an
overdetermined equation set with n unknowns based on m measurements, giving m equations, m > n, this image processing unit giving an approximate solution to the equation set.
10. A hyperspectral camera according to claim 9
characterised by
having an optimisation unit in the image processing unit optimising the solutions of the equations for the best fit.

11. A hyperspectral camera according to claim 10
characterised by
using the least squared error method for optimising the solution of the equations.

12. Method for acquiring hyperspectral data using a hyperspectral camera comprising foreoptics, a slit, relay optics with a dispersive element, an image sensor, an image acquisition unit, and an image processing unit,
characterised by
placing an array of light mixing elements in the slit plane, in the focal plane or in any other intermediate plane, each of these light mixing elements mixing the incoming light so that the intensity and spectral content are mixed at the output, and having a dimension so that the projection by the relay optics of the output of each light mixing element onto the image sensor is slightly larger than an image sensor pixel,
processing data using a coefficient matrix with elements qmn, the coefficient matrix describing the distribution of mixed light from light mixing elements onto pixels on the sensor, this processing also comprising solving an overdetermined equation set, this equation set based on data from the image sensor and the coefficient matrix, and then forwarding the solution to a data output step
and
output a data set comprising a corrected data cube.
13. Hyperspectral camera according to any of the claims 1, 2, 4, 5, 6 or 8 where means are present to move the array of light mixing elements perpendicular to the optical axis along the slit during the exposure.
14. Hyperspectral camera according to any of the claims 5, 7 or 8
characterised by
having one or two extra image sensors mounted beside the main sensor and in correspondence with the one or two extra light mixing elements for the measurement of light mixing array position or light mixing array length respectively.

15. Hyperspectral camera according to any of the claims 1, 2, 3, 4, 5, 6 or 7
characterised by
having a protective, optically transparent window or lens in front of the array of light mixing elements or on the rear side or on both sides.
16. Hyperspectral camera according to claim 15
characterised by
having a protective atmosphere comprising vacuum or containing a gas in the enclosure formed around the array of light mixing elements enclosed by the optically transparent windows or lenses.
17. Hyperspectral camera according to claim 15 or 16 characterised by
mounting the protective windows or lenses a distance away from the array of light mixing elements so that dust or scratches on the optical surfaces will be out of focus and disturb the image less.
18. Hyperspectral camera according to any of the claims 1, 2, 4, 5, 6, 7 or 8
characterised by
having a common foreoptics, one common light mixing array, and a beam splitter arrangement and two or more image sensors collecting light from a common or partially common relay optics or two or more separate relay optical systems.
19. Hyperspectral camera according to claim 18
characterised by
comprising at least two different image sensors having different spatial resolution and/or sensitivity for different wavelength bands.
20. Hyperspectral camera according to claim 18
characterised by
comprising at least two similar image sensors having similar spatial resolution.
21. Hyperspectral camera according to claim 8
characterised by
having means for performing calibration routines based on optimising assumptions of values for one or more of:
relative position of light mixing array,
relative length of light mixing array,
point spread function
and
keystone.
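The reconstruction recited in claims 1, 9 and 11 amounts to solving an overdetermined linear system: the coefficient matrix qmn maps the unknown intensities of the light mixing elements ("mixels") to the recorded sensor pixels, and with more pixels than mixels (m > n) the best-fit solution is found by least squares. The following is a minimal numerical sketch of that idea, not code from the patent; the matrix values, problem sizes and noise level are invented for illustration:

```python
# Illustrative toy reconstruction: sensor pixels b = q @ x + noise,
# where q[i, j] is the fraction of mixel j's output landing on pixel i.
import numpy as np

rng = np.random.default_rng(0)

n = 4  # unknowns: intensity of each light mixing element (mixel)
m = 6  # measurements: sensor pixels, m > n -> overdetermined set

# Stand-in coefficient matrix; in the camera this would come from
# calibration (array position/length, point spread function, keystone).
q = rng.random((m, n))
q /= q.sum(axis=0)  # each mixel's light is fully distributed over pixels

true_intensity = np.array([1.0, 2.0, 3.0, 4.0])
pixels = q @ true_intensity + 0.01 * rng.standard_normal(m)  # noisy readout

# Least-squares solution of q x = pixels (claims 9-11: approximate
# solution minimising the squared error of the overdetermined set).
x, residuals, rank, _ = np.linalg.lstsq(q, pixels, rcond=None)
print(x)
```

Because the least-squares solution minimises the residual over all candidate vectors, its residual can never exceed the noise left by the true intensities; that is the "best fit" of claim 10.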
PCT/NO2012/050132 2011-07-08 2012-07-05 Hyperspectral camera and method for acquiring hyperspectral data WO2013009189A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA2839115A CA2839115C (en) 2011-07-08 2012-07-05 Hyperspectral camera and method for acquiring hyperspectral data
EP12754107.6A EP2729774B1 (en) 2011-07-08 2012-07-05 Hyperspectral camera and method for acquiring hyperspectral data
US14/140,598 US9538098B2 (en) 2011-07-08 2013-12-26 Hyperspectral camera and method for acquiring hyperspectral data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NO20111001 2011-07-08
NO20111001A NO337687B1 (en) 2011-07-08 2011-07-08 Hyperspectral camera and method of recording hyperspectral data

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/140,598 Continuation US9538098B2 (en) 2011-07-08 2013-12-26 Hyperspectral camera and method for acquiring hyperspectral data

Publications (1)

Publication Number Publication Date
WO2013009189A1 true WO2013009189A1 (en) 2013-01-17

Family

ID=46796705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO2012/050132 WO2013009189A1 (en) 2011-07-08 2012-07-05 Hyperspectral camera and method for acquiring hyperspectral data

Country Status (5)

Country Link
US (1) US9538098B2 (en)
EP (1) EP2729774B1 (en)
CA (1) CA2839115C (en)
NO (1) NO337687B1 (en)
WO (1) WO2013009189A1 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170343413A1 (en) * 2014-09-03 2017-11-30 Ocean Optics, Inc. Patterning aperture slit device and mounting assembly for spectrometry
US10230909B2 (en) 2014-09-23 2019-03-12 Flir Systems, Inc. Modular split-processing infrared imaging system
US10182195B2 (en) * 2014-09-23 2019-01-15 Flir Systems, Inc. Protective window for an infrared sensor array
CN105571934B (en) * 2016-01-20 2018-04-10 清华大学 A kind of lens error correction method of the array high speed video system based on digital speckle
US10838190B2 (en) 2016-06-21 2020-11-17 Sri International Hyperspectral imaging methods and apparatuses
JP6765064B2 (en) 2016-06-23 2020-10-07 パナソニックIpマネジメント株式会社 Infrared detector
US10495518B2 (en) * 2016-06-23 2019-12-03 Panasonic Intellectual Property Management Co., Ltd. Infrared detection apparatus
US11092489B2 (en) 2017-02-03 2021-08-17 Gamaya Sa Wide-angle computational imaging spectroscopy method and apparatus
NO20180965A1 (en) * 2018-07-10 2020-01-13 Norsk Elektro Optikk As Hyperspectral camera
WO2020033967A1 (en) * 2018-08-10 2020-02-13 Buffalo Automation Group Inc. Training a deep learning system for maritime applications
US11781914B2 (en) 2021-03-04 2023-10-10 Sivananthan Laboratories, Inc. Computational radiation tolerance for high quality infrared focal plane arrays
CN114719996B (en) * 2022-04-12 2022-11-29 中国科学院云南天文台 High-precision spectral band radiance measuring system and method
CN115880152B (en) * 2022-12-13 2023-11-24 哈尔滨工业大学 Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network
CN118181794B (en) * 2024-05-15 2024-09-17 南方雄狮创建集团股份有限公司 Preparation method of wall material for energy conservation and heat preservation


Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4176923A (en) * 1978-10-10 1979-12-04 Collender Robert B Stereoscopic motion picture large scale scanning reproduction method and apparatus
US5245404A (en) * 1990-10-18 1993-09-14 Physical Optics Corportion Raman sensor
US6501551B1 (en) * 1991-04-29 2002-12-31 Massachusetts Institute Of Technology Fiber optic imaging endoscope interferometer with at least one faraday rotator
US6134003A (en) * 1991-04-29 2000-10-17 Massachusetts Institute Of Technology Method and apparatus for performing optical measurements using a fiber optic imaging guidewire, catheter or endoscope
GB0102529D0 (en) * 2001-01-31 2001-03-21 Thales Optronics Staines Ltd Improvements relating to thermal imaging cameras
US20020163482A1 (en) * 1998-04-20 2002-11-07 Alan Sullivan Multi-planar volumetric display system including optical elements made from liquid crystal having polymer stabilized cholesteric textures
US6687010B1 (en) * 1999-09-09 2004-02-03 Olympus Corporation Rapid depth scanning optical imaging device
US6873335B2 (en) * 2000-09-07 2005-03-29 Actuality Systems, Inc. Graphics memory system for volumeric displays
WO2002085000A1 (en) * 2001-04-13 2002-10-24 The Trustees Of Columbia University In The City Of New York Method and apparatus for recording a sequence of images using a moving optical element
CN1298175C (en) * 2001-07-06 2007-01-31 以克斯普雷有限公司 Image projecting device and method
US20070047043A1 (en) * 2002-07-08 2007-03-01 Explay Ltd. image projecting device and method
WO2004064410A1 (en) * 2003-01-08 2004-07-29 Explay Ltd. An image projecting device and method
US6998614B2 (en) * 2003-05-23 2006-02-14 Institute For Technology Development Hyperspectral imaging workstation having visible/near-infrared and ultraviolet image sensors
AU2003254152A1 (en) * 2003-07-24 2005-03-07 University Of Rochester System and method for image sensing and processing
DE602005004332T2 (en) * 2004-06-17 2009-01-08 Cadent Ltd. Method for providing data related to the oral cavity
SE0402576D0 (en) * 2004-10-25 2004-10-25 Forskarpatent I Uppsala Ab Multispectral and hyperspectral imaging
US20090201498A1 (en) * 2008-02-11 2009-08-13 Ramesh Raskar Agile Spectrum Imaging Apparatus and Method
US8416302B2 (en) * 2009-02-10 2013-04-09 Microsoft Corporation Low-light imaging augmented with non-intrusive lighting
WO2011084863A2 (en) * 2010-01-07 2011-07-14 Cheetah Omni, Llc Fiber lasers and mid-infrared light sources in methods and systems for selective biological tissue processing and spectroscopy
US8649008B2 (en) * 2010-02-04 2014-02-11 University Of Southern California Combined spectral and polarimetry imaging and diagnostics
US9057583B2 (en) * 2010-10-28 2015-06-16 Surefire, Llc Sight system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5790188A (en) 1995-09-07 1998-08-04 Flight Landata, Inc. Computer controlled, 3-CCD camera, airborne, variable interference filter imaging spectrometer system
US5880834A (en) 1996-10-16 1999-03-09 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Convex diffraction grating imaging spectrometer
US6552788B1 (en) 2001-07-09 2003-04-22 Ruda & Associates Hyperspectral imaging using linear chromatic aberration
US20080088840A1 (en) 2001-12-21 2008-04-17 Andrew Bodkin Hyperspectral imaging systems
US20060072109A1 (en) 2004-09-03 2006-04-06 Andrew Bodkin Hyperspectral imaging systems
US7773218B2 (en) * 2006-04-17 2010-08-10 Duke University Spatially-registered wavelength coding

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
"A Visible-Infrared Imaging Spectrometer for Planetary Missions.", NASA PIDDP FINAL REPORT, 1 May 1996 (1996-05-01)
"Concept for Future Visible and Infrared Imager", ASTRIUM GMBH FOR ESA, pages: 4 - 33
ARAI K ET AL: "Unmixing method for hyperspectral data based on sub-space method with learning process", ADVANCES IN SPACE RESEARCH, PERGAMON, OXFORD, GB, vol. 44, no. 4, 17 August 2009 (2009-08-17), pages 517 - 523, XP026337910, ISSN: 0273-1177, [retrieved on 20090507], DOI: 10.1016/J.ASR.2009.04.034 *
H GUCKEL: "High-Aspect-Ratio Micromachining Via Deep X-Ray Lithography", PROCEEDINGS OF THE IEEE, vol. 86, no. 8, August 1998 (1998-08-01)
MOUROULIS, P.Z.: "Spectral and spatial uniformity in pushbroom imaging spectrometers", PROC. SPIE, vol. 3753, 27 October 1999 (1999-10-27), pages 133 - 141, XP007921179, DOI: 10.1117/12.366313 *
PANTAZIS MOUROULIS ET AL.: "Optical design of a coastal ocean imaging spectrometer", OPTICS EXPRESS, vol. 16, no. 12, 9 June 2008 (2008-06-09), pages 9096
PANTAZIS MOUROULIS ET AL.: "Optical design of a compact imaging spectrometer for planetary mineralogy", OPTICAL ENGINEERING, vol. 46, no. 6, June 2007 (2007-06-01), pages 063001
PANTAZIS MOUROULIS, SPECTRAL AND SPATIAL UNIFORMITY IN PUSHBROOM IMAGING SPECTROMETERS, 1999
WAGADARIKAR A ET AL: "Single disperser design for coded aperture snapshot spectral imaging", APPLIED OPTICS, OPTICAL SOCIETY OF AMERICA, WASHINGTON, DC; US, vol. 47, no. 10, 1 April 2008 (2008-04-01), pages B44 - B51, XP001513101, ISSN: 0003-6935, DOI: 10.1364/AO.47.000B44 *
YI-KAI CHENG; JYH-LONG CHERN: "Irradiance formations in hollow straight light pipes with square and circular shapes", J. OPT. SOC. AM A, vol. 23, no. 2, February 2006 (2006-02-01), XP055041312, DOI: doi:10.1364/JOSAA.23.000427

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015133476A1 (en) * 2014-03-03 2015-09-11 EBA Japan Co., Ltd. Spectroradiometer
JP2015166682A (en) * 2014-03-03 2015-09-24 EBA Japan Co., Ltd. Spectral radiance meter
EP3626262A1 (en) 2015-06-18 2020-03-25 Vaximm AG Vegfr-2 targeting dna vaccine for combination therapy
WO2018083209A1 (en) 2016-11-04 2018-05-11 Vaximm Ag Wt1 targeting dna vaccine for combination therapy
WO2018149982A1 (en) 2017-02-17 2018-08-23 Vaximm Ag Novel vegfr-2 targeting immunotherapy approach
WO2020049036A1 (en) 2018-09-05 2020-03-12 Vaximm Ag Neoantigen targeting dna vaccine for combination therapy
WO2021144254A1 (en) 2020-01-13 2021-07-22 Vaximm Ag Salmonella-based dna vaccines in combination with an antibiotic

Also Published As

Publication number Publication date
US9538098B2 (en) 2017-01-03
CA2839115C (en) 2019-01-08
US20140293062A1 (en) 2014-10-02
CA2839115A1 (en) 2013-01-17
NO20111001A1 (en) 2013-01-09
EP2729774B1 (en) 2019-04-17
EP2729774A1 (en) 2014-05-14
NO337687B1 (en) 2016-06-06

Similar Documents

Publication Publication Date Title
EP2729774B1 (en) Hyperspectral camera and method for acquiring hyperspectral data
US9927300B2 (en) Snapshot spectral imaging based on digital cameras
CN103314571B (en) Camera device and camera system
US9459148B2 (en) Snapshot spectral imaging based on digital cameras
JP5681954B2 (en) Imaging apparatus and imaging system
US10605660B2 (en) Spectral imaging method and system
Schöberl et al. Dimensioning of optical birefringent anti-alias filters for digital cameras
Høye et al. Method for quantifying image quality in push-broom hyperspectral cameras
Henriksen et al. Real-time corrections for a low-cost hyperspectral instrument
Renhorn et al. High spatial resolution hyperspectral camera based on exponentially variable filter
JP2020508469A (en) Wide-angle computer imaging spectroscopy and equipment
Li et al. Modulation transfer function measurements using a learning approach from multiple diffractive grids for optical cameras
Fridman et al. Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for optical design and data quality
RU2735901C2 (en) Multichannel spectral device for obtaining images with fourier transformation
Bakker et al. Determining smile and keystone of lab hyperspectral line cameras
CN108700462B (en) Double-spectrum imager without moving part and drift correction method thereof
Brown et al. Characterization of Earth observing satellite instruments for response to spectrally and spatially variable scenes
KR101054017B1 (en) Calibration method of the spectrometer
Valdes et al. The NOAO High-Performance Pipeline System: The Mosaic Camera Pipeline Algorithms
Brückner et al. Advanced artificial compound-eye imaging systems
Qian et al. Effect of keystone on coded aperture spectral imaging
Høye et al. Method for calibrating the image from a mixel camera based solely on the acquired hyperspectral data
Fridman The mixel camera—keystone-free hyperspectral images
Jörsäter Methods in astronomical image processing with special applications to the reduction of CCD data
US20110037979A1 (en) Imaging spectrograph

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12754107

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2839115

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012754107

Country of ref document: EP