WO2016041079A1 - Optical sensor systems and image processing methods for remote sensing - Google Patents

Optical sensor systems and image processing methods for remote sensing

Info

Publication number
WO2016041079A1
WO2016041079A1 (PCT/CA2015/050905)
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
spectral
spatial resolution
pan
Prior art date
Application number
PCT/CA2015/050905
Other languages
French (fr)
Inventor
Yun Zhang
Original Assignee
University Of New Brunswick
Priority date
Filing date
Publication date
Application filed by University Of New Brunswick
Priority to US15/511,772 (US10225449B2)
Publication of WO2016041079A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/2823Imaging spectrometer
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/30Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36Investigating two or more bands of a spectrum by separate detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/2823Imaging spectrometer
    • G01J2003/2826Multispectral imaging, e.g. filter imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to remote sensing in general, and to optical sensor systems and image processing methods for remote sensing in particular.
  • Hyperspectral remote sensing (with more than a dozen and up to hundreds of spectral bands) is important for land cover classification, target detection, and applications in natural resources, forestry, agriculture, geology, and military.
  • (1) limited spatial resolution, and (2) enormous data volume are two of the major limitations of hyperspectral remote sensing, which significantly hinder the collection and application of hyperspectral images in a broad range of areas. For large ground coverage, the data volume will be even greater.
  • a prior art solution for airborne hyperspectral sensors is to fly at a lower altitude, but at the cost of reducing the ground coverage (i.e. the ground coverage is narrower for each flight as compared to images taken at a higher altitude).
  • the solution is to increase the focal length of the lens, which also significantly reduces the ground coverage per orbit circle.
  • the reduction of ground coverage means a reduction in the efficiency of remote sensing data collection; more flight time is needed to cover the same area.
  • pan-sharpening techniques to fuse panchromatic ("Pan") images with a few selected hyperspectral bands [03] [04] [05].
  • selected multispectral ("MS") bands to fuse with selected HS bands [06].
  • pan-sharpening techniques were used in the fusions. Although more detail can be seen in the fusion results, their quality is still poor, with obvious noise, colour distortion, and/or unnatural integration between spectral and spatial information.
  • Hyperspectral imaging typically involves relatively large volumes of data.
  • to reduce the data volume, a common solution is the use of data compression (such as the JPEG format). But if lossless compression is used, the compression rate will not be high and little data volume can be saved; if lossy compression is used, some image information will be lost, which is not acceptable for most remote sensing applications.
  • FIG. 1 is a graph of the spectral ranges of a multi-sensor assembly according to an embodiment of the present invention
  • FIG. 2 is a diagram depicting the relative spectral resolutions of Pan, MS, superspectral ("SS"), and HS sensors with the same field of view ("FOV") and the same swath;
  • FIG. 3 is a diagram depicting a method of fusion according to an embodiment of the present invention.
  • FIG. 4 is a graph depicting the spatial and spectral resolutions of the sensors of the WorldView-3 satellite.
  • Remote sensing images with more than a dozen and up to hundreds of spectral bands are broadly defined as hyperspectral ("HS") images.
  • hyperspectral images can also be further divided into superspectral (“SS”) images and HS images, where the former contain more than a dozen but less than 100 spectral bands and the latter contain more than 100 spectral bands.
  • the present invention relates to (1) a sensor system configuration to record multi-level spatial and spectral information for creating high spatial resolution, large coverage, and high spectral resolution (hyperspectral) images, and (2) a multi-level spatial and spectral resolution sharpening method to create high spatial resolution, large coverage hyperspectral images.
  • the present invention relates to a multi-sensor system for recording necessary information of high spatial resolution, large coverage, and high spectral resolution for creating high spatial resolution and large coverage hyperspectral images.
  • the present invention relates to a multi-level spatial and spectral resolution sharpening method for creating high spatial resolution and large coverage hyperspectral images from the images collected by the multi-sensor system.
  • a multi-sensor system optimally utilizes the complementary relationship between spatial resolution (pixel size) and spectral resolution (bandwidth) of optical sensors to collect desired high spatial resolution, high spectral resolution and large ground coverage information using a set of different sensors, respectively.
  • the present invention relates to a multi-level spatial and spectral resolution sharpening method for fusing high spatial resolution and high spectral resolution information from different images into one image to create a high spatial resolution and large coverage hyperspectral image.
  • the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution, fusing the image data of the second image and the third image to produce a fourth image, and fusing the image data of the first image and the fourth image to produce a fifth image, wherein the fifth image is a final sharpened image.
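The cascade in this embodiment (fuse the second and third images into a fourth image, then fuse the first and fourth images into a fifth) can be sketched with a simple ratio-based (Brovey-style) detail injection applied twice. The patent does not prescribe a particular fusion algorithm; the Brovey substitution, the array sizes, and the 1/3 resolution ratios below are illustrative assumptions only.

```python
import numpy as np

def upsample(img, factor):
    """Nearest-neighbour upsampling of a (bands, h, w) image stack."""
    return img.repeat(factor, axis=1).repeat(factor, axis=2)

def brovey_fuse(sharp, coarse, factor):
    """Sharpen each coarse band with a finer single-band image using a
    simple Brovey-style intensity ratio (illustrative only)."""
    up = upsample(coarse, factor).astype(float)     # (bands, H, W)
    intensity = up.mean(axis=0)                     # simulated intensity image
    ratio = sharp / np.maximum(intensity, 1e-6)     # spatial detail to inject
    return up * ratio                               # fused (bands, H, W)

# Illustrative sizes: 1st image (Pan) 36x36, 2nd (MS) 4 bands at 12x12,
# 3rd (HS) 16 bands at 4x4 -- a 1/3 resolution ratio at each level.
rng = np.random.default_rng(0)
pan = rng.uniform(0.2, 1.0, (36, 36))
ms = rng.uniform(0.2, 1.0, (4, 12, 12))
hs = rng.uniform(0.2, 1.0, (16, 4, 4))

# Step 1: fuse the 2nd and 3rd images -> 4th image (HS bands at MS resolution).
mr_hs = brovey_fuse(ms.mean(axis=0), hs, 3)
# Step 2: fuse the 1st and 4th images -> 5th image (HS bands at Pan resolution).
hr_hs = brovey_fuse(pan, mr_hs, 3)
print(hr_hs.shape)   # (16, 36, 36): all hyperspectral bands at Pan resolution
```

The point of the sketch is the two-level order of operations, not the fusion formula itself: any pan-sharpening method could replace `brovey_fuse` at each level.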
  • the image data for the first image may have been collected by one or more panchromatic sensors
  • the image data for the second image may have been collected by one or more multi-spectral sensors
  • the image data for the third image may have been collected by one or more sensors selected from the group consisting of superspectral and hyperspectral sensors. All the sensors may be either airborne-based sensors, satellite-based sensors, or terrestrial sensors. All of the sensors may have the same FOV and the same ground swath.
  • the first, second, and third images may have a common overlapping area
  • the image data fused may comprise image data defining all or a part of a common overlapping area.
  • the first image may be a panchromatic image type
  • the second image may be a multi-spectral image type
  • the third image may be a hyperspectral or a superspectral image type
  • the image data fused may further comprise all of the spectral bands of the second and third images defining the spectral resolution of the common overlapping area.
  • the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution, obtaining image data defining a fourth image, the fourth image having a fourth spatial resolution and a fourth spectral resolution, the fourth spatial resolution being lower than the third spatial resolution and the fourth spectral resolution being higher than the third spectral resolution, fusing the image data of the third image and the fourth image to produce a fifth image, fusing the image data of the second image and the fifth image to produce a sixth image, and fusing the image data of the first image and the sixth image to produce a seventh image, wherein the seventh image is a final sharpened image.
  • the image data for the first image may have been collected by one or more panchromatic sensors
  • the image data for the second image may have been collected by one or more multi- spectral sensors
  • the image data for the third image may have been collected by one or more superspectral sensors
  • the image data for the fourth image may have been collected by one or more hyperspectral sensors.
  • All the sensors may be either airborne-based sensors, satellite-based sensors, or terrestrial sensors. All the sensors have the same FOV and the same ground swath.
  • the first, second, third, and fourth images may have a common overlapping area.
  • the image data fused may comprise image data defining all or a part of the common overlapping area
  • the first image may be a panchromatic image type
  • the second image may be a multi-spectral image type
  • the third image may be a superspectral image type
  • the fourth image may be a hyperspectral image type
  • the image data fused may further comprise all of the spectral bands of the second, third, and fourth images defining the spectral resolution of the common overlapping area.
  • the present invention relates to a method for producing a sharpened image comprising obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan image; obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS image; obtaining image data defining a low spatial resolution, high spectral resolution image, such image being called a LR-HS image; fusing the LR-HS image and the MR-MS image to produce a sharpened MR-HS image; and fusing the sharpened MR-HS image and the HR-Pan image to produce a sharpened HR-HS image.
  • the HR-Pan image may have a spatial resolution of 1m and 1 spectral band; the MR-MS image may have a spatial resolution of 3m and 10 spectral bands; and the LR-HS image may have a spatial resolution of 9m and 200 spectral bands.
  • the LR-HS bands being fused may be located in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and in the step of fusing the band of the HR-Pan image and the fused MR-HS bands, the fused MR-HS bands may be located in the same or similar spectral range as the band of the HR-Pan image.
  • the LR-HS bands being fused may not be in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and in the step of fusing the band of the HR-Pan image and the fused MR-HS bands, the fused MR-HS bands may not be located in the same or similar spectral range as the band of the HR-Pan image.
  • the present invention relates to a method for producing a sharpened image comprising obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan (VNIR) image; obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS (VNIR) image; obtaining image data defining a low spatial resolution, medium spectral resolution image in another spectral range, such image being called a LR-MS (SWIR) image; obtaining a medium spatial resolution Pan (VNIR) image [MR-Pan (VNIR)] from the HR-Pan (VNIR) image; obtaining a low spatial resolution MS (VNIR) image [LR-MS (VNIR)] from the MR-MS (VNIR) image; fusing the LR-MS (SWIR) image and the MR-Pan (VNIR) image to produce a sharpened MR-MS (SWIR) image; and fusing the MR-MS (VNIR) image and the sharpened MR-MS (SWIR) image with the HR-Pan (VNIR) image to produce a sharpened HR-MS (VNIR) image and a sharpened HR-MS (SWIR) image.
  • At least one spectral band of the LR-MS (VNIR) image may be used as a reference to fuse the LR-MS (SWIR) image and the MR-Pan (VNIR) image, to produce a sharpened MR-MS (SWIR) image.
  • At least one spectral band of the MR-MS (VNIR) image may be used as a reference to fuse the MR-MS (VNIR) image and the MR-MS (SWIR) image with the HR-Pan (VNIR) image, to produce a sharpened HR-MS (VNIR) image and HR-MS (SWIR) image.
  • the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, and fusing the image data of the first, second and third images to produce a fourth image.
  • the first, second and third images may have substantially the same ground coverage. Prior to production of the fourth image, at least one of the first, second and third spatial resolutions may be reduced, whereby the first, second and third spatial resolutions are not equal.
  • the present invention relates to an image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising a Pan sensor, a MS sensor, and a HS sensor, wherein the Pan sensor is configured to obtain image data defining a high spatial resolution and low spectral resolution image; the MS sensor is configured to obtain image data defining a medium spatial resolution and medium spectral resolution image; and the HS sensor is configured to obtain image data defining a low spatial resolution and high spectral resolution image.
  • the image sensor configuration may further comprise a SS sensor configured to obtain image data defining a spatial resolution between those of the MS sensor and the HS sensor, and a spectral resolution between those of the MS sensor and the HS sensor.
  • the present invention relates to an image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising first, second and third image sensors, wherein the first image sensor having a first spatial resolution and a first spectral resolution, the second image sensor having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, and the third image sensor having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution.
  • the present invention relates to a combination of a multi-sensor system and a multi-level sharpening method for collecting and creating a high spatial resolution and large coverage hyperspectral image, which has up to or more than 27 times higher spatial resolution and more than 10 times larger ground coverage, compared to prior art single sensor hyperspectral technology.
  • the combination of the multi-sensor system and the multi-level sharpening method can reduce the data volume by up to or more than 83 times, compared to prior art single sensor hyperspectral technology.
  • the present invention relates to a sensor system which can collect multi-level spatial and spectral information for high spatial resolution, high spectral resolution (hyperspectral), and large ground coverage images by effectively utilizing the complementary relationship between spectral bandwidth and spatial resolution of optical sensors.
  • a set of specifically configured sensors is needed for the data collection.
  • a high spatial resolution, large coverage hyperspectral image can then be created by effectively combining the multi-level spatial and spectral information from the set of sensors through an automated multi-level spatial and spectral resolution sharpening process.
  • Fused/sharpened images generally contain more data volume than the original images before the fusion/sharpening, because the spatial resolution of the fused/sharpened images is increased.
  • the final high spatial resolution and large coverage hyperspectral images are produced "on-the-fly" as they are needed.
  • the original images before the fusion/sharpening are kept for image data transmission purposes (such as from satellite to ground, from data vendor to end user, or others) and for image data storage purposes to keep the data volume small.
  • FIG. 1 The configuration of a sensor system according Lo an embodiment of the present invention, in terms of spectral bandwidth ⁇ spectral resolution) and spatial resolution, is illustrated in Figure 1 It includes three different types of sensors: Pan (panchromatic), MS (multispectral), and HS (hyperspectral) sensors. The number of each type of sensors may vary. More particularly, in the sensor system of FIG. 1 , there are the following sensors:
  • two Pan sensors, each covering one broad spectral bandwidth (such as Pan 1 and Pan 2), to collect two high spatial resolution panchromatic (HR-Pan) (monochrome) images, at the cost of losing spectral resolution (information);
  • the number of sensors may vary depending on the sensor design and the sensor materials used.
  • the spectral range of the sensors can also go beyond 2500 nm as illustrated in FIG. 1.
  • the spatial resolution can also be adjusted to allow each sensor to collect a different spatial resolution.
  • the light energy coming to a sensor is much weaker in the short wave infrared (SWIR) range than in the visible and near infrared (VNIR) range. Therefore, the spatial resolution of the Pan, MS, or HS sensor in the short wave infrared range can be lower than that of the corresponding sensor in the visible and near infrared range.
  • the sensor configuration illustrated in FIG. 1 can also be split into two or more sets of sensors, such that the Pan, MS, and HS sensors in the visible and near infrared (VNIR) spectral range are grouped as one set of multi-level sensors, and the Pan, MS, and HS sensors in the short wave infrared (SWIR) range are grouped as another set of sensors.
  • the two sets of sensors can be used together and also separately.
  • a set of specifically configured sensors can be designed; each collects a high spatial and low spectral resolution image, a medium spatial and medium spectral resolution image, and a high spectral and low spatial resolution image, respectively.
  • a sensor system according to the present invention can also be extended to include four different types of sensors, such as Pan (panchromatic), MS (multispectral), SS (superspectral) and HS (hyperspectral) sensors, to make the spatial and spectral resolution changes between different sensors more gradual.
  • although the spatial resolutions of the sensors are different, they cover the same ground area. Because the spatial resolution of the HS sensor is lower than that of the SS sensor and much lower than those of the MS and Pan sensors, it is feasible for the HS sensor to cover the same area as do the SS, MS, and Pan sensors and still receive sufficient photons to form an image.
  • all the sensors have the same sensor FOV and the same ground swath, but different pixel sizes (spatial resolutions) and different spectral bandwidths. This allows for easier and better co-registration between all images, ensuring better fusion quality between the images of the sensors. This also maximizes the common overlapping area of all the sensors, minimizing unused image areas.
  • Other sensor assemblies with varying FOV or varying swaths may be useable in sensor systems according to the present invention, but the image processing efficiency or image use efficiency may be reduced.
  • a multi-level spatial and spectral resolution sharpening method can be used to create high spatial resolution and large coverage hyperspectral images.
  • the general principle of the method is illustrated in FIG. 3, which includes the following steps.
  • each of the MR-MS bands is fused with those LR-HS bands that are located in the same or similar spectral range as the MR-MS band, respectively (as partially shown in Figure 1), to produce MR-HS bands.
  • the HR-Pan is then fused with those MR-HS bands located in the same or similar spectral range as the Pan, to produce HR-HS bands.
  • a MR-MS band can also be fused with those LR-HS bands that are not located in the same or similar spectral range. The same applies to the fusion between HR-Pan and MR-HS image bands as well. However, certain colour distortion may occur.
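The "same or similar spectral range" pairing described above can be implemented by assigning each narrow (hyperspectral) band to the broad (multispectral) band whose wavelength interval contains, or is nearest to, its centre wavelength. The band positions below are illustrative assumptions, not values from the patent.

```python
# Group narrow bands under the broad band covering the same spectral range.
# All band positions are illustrative values for the sketch.

ms_bands = {           # broad band name -> (start_nm, end_nm)
    "blue": (450, 510),
    "green": (510, 580),
    "red": (630, 690),
}

hs_centres_nm = [460, 480, 500, 530, 560, 600, 640, 670]

def nearest_ms_band(centre):
    """Broad band containing the centre wavelength, else the closest one."""
    def distance(interval):
        start, end = interval
        if start <= centre <= end:
            return 0                     # inside the broad band
        return min(abs(centre - start), abs(centre - end))
    return min(ms_bands, key=lambda name: distance(ms_bands[name]))

groups = {}
for centre in hs_centres_nm:
    groups.setdefault(nearest_ms_band(centre), []).append(centre)

print(groups)
# {'blue': [460, 480, 500], 'green': [530, 560, 600], 'red': [640, 670]}
```

Each resulting group is then fused with its broad band; the 600 nm band falls outside every broad band and is assigned to the nearest one (green), which is exactly the "not in the same or similar spectral range" case where some colour distortion may occur.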
  • a sensor system has four different sensors, such as Pan, MS, SS and HS sensors, as described above in example 1
  • the multi-level spatial and spectral resolution sharpening method will include three steps:
  • if a sensor system collects 2 Pan bands at 1m, 10 MS bands at 3m, 50 SS bands at 9m, and 250 HS bands at 27m, as described in example 1, the spatial resolution of the HS bands will be increased from 27m to 1m; an increase of 27 times.
  • the data volume will be reduced from 1,822.5 MB to 29.68 MB; a data volume reduction of 61.4 times.
  • the 29.68 MB is the sum of the data volumes of the original HS (100x100 pixels per band, 250 bands), SS (300x300 pixels per band, 50 bands), MS (900x900 pixels per band, 10 bands), and Pan (2700x2700 pixels per band, 2 bands) images.
  • if a sensor system collects HR-Pan, MR-MS and LR-HS images like those described above in example 2, the spatial resolution of the final HR-HS image will be 9 times higher than that of the original LR-HS image, and the data volume will be reduced by 44 times compared to that of a 1m, 200 bands HR-HS image directly collected by a HR-HS sensor.
  • if the spatial resolution ratio between HR-Pan and MR-MS is 1/4 and between MR-MS and LR-HS is also 1/4, the spatial resolution of the HR-HS image will be increased by 16 times.
  • the data volume will be reduced by 83 times (assuming HR-Pan has 1 band, MR-MS 10 bands, LR-HS 200 bands, and the resolution ratio between HR-Pan, MR-MS and LR-HS is 1/4).
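The resolution and data-volume figures quoted in these examples can be checked with straightforward arithmetic, assuming one byte per pixel and a common ground area:

```python
# Verify the data-volume figures quoted above, assuming 1 byte per pixel.

# Example 1: 2 Pan bands at 1m, 10 MS at 3m, 50 SS at 9m, 250 HS at 27m,
# all covering the same ground area (2700 x 2700 pixels at 1m).
sensors = [(2, 2700), (10, 900), (50, 300), (250, 100)]   # (bands, pixels/side)
collected = sum(bands * side * side for bands, side in sensors)
full_hs = 250 * 2700 * 2700        # 250 bands collected directly at 1m

print(f"collected: {collected / 1e6:.2f} MB")    # collected: 29.68 MB
print(f"direct HS: {full_hs / 1e6:.1f} MB")      # direct HS: 1822.5 MB
print(f"reduction: {full_hs / collected:.1f}x")  # reduction: 61.4x

# Example 2: Pan 1 band, MS 10 bands at 1/3 resolution, HS 200 bands at 1/9.
r3 = 200 / (1 + 10 / 3**2 + 200 / 9**2)
print(f"1/3-ratio reduction: {round(r3)}x")      # 1/3-ratio reduction: 44x

# Same band counts with a 1/4 resolution ratio at each level.
r4 = 200 / (1 + 10 / 4**2 + 200 / 16**2)
print(f"1/4-ratio reduction: {round(r4)}x")      # 1/4-ratio reduction: 83x
```

The reduction factor depends only on the band counts and resolution ratios, not on the absolute image size, which is why the 44x and 83x figures hold for any swath width.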
  • a spatial resolution increase of 9 to 27 times and a data volume reduction of 44 to 83 times are revolutionary improvements, compared to conventional hyperspectral data collection technologies.
  • the new sensor system disclosed in this invention can also enlarge the ground coverage (swath) by up to 27 times for the same spatial resolution, compared to the current technology.
  • the concept of the aforementioned multi-level spatial and spectral resolution sharpening method can also be adjusted to fuse multi-level spatial resolution images collected by sensor systems with varying configurations.
  • the WorldView-3 satellite (to be launched in 2014) carries one high resolution Pan sensor to capture 1 band of high spatial resolution (0.31m) Pan image (HR-Pan) in the visible and near infrared (VNIR) spectral range, and multiple MS sensors to capture 8 bands of medium spatial resolution (1.24m) MS image (MR-MS) in the VNIR range and 8 bands of low spatial resolution (3.7m) MS image (LR-MS) in the short wave infrared (SWIR) range.
  • although WorldView-3 carries multiple sensors and has 3 different spatial resolutions, its sensor configuration is still a traditional configuration, because it has only one or at most two spatial resolution levels in each spectral range (i.e. VNIR or SWIR), and it has only one or at most two different types of sensors in each of the spectral ranges.
  • the WorldView-3 images in the VNIR range can be resampled to create multi-level spatial resolutions (i.e. 3 spatial resolution levels, more than the traditional 2 levels).
  • the multi-level spatial and spectral resolution sharpening can then be conducted to increase not only the spatial resolution of the MS image (1.24m) in the VNIR range, but also that of the MS image (3.7m) in the SWIR range.
  • the resampling and the multi-level sharpening can be conducted in the following steps:
  • the present invention can, in accordance with embodiments of the present invention, be further extended to fuse two or more MS images with different spatial resolutions that cover the same or different spectral ranges.
  • the following steps may be applied to fuse the MS images: (1) if the spectral range of the high spatial resolution MS image is the same as or similar to that of the low resolution MS image: a.
  • (2) if the spectral range of the high spatial resolution MS image is not the same as or similar to that of the low resolution MS image (such as the MS (VNIR) image and MS (SWIR) image as shown in FIG. 4, but the original Pan (VNIR) is not available): a. simulating a high resolution Pan (VNIR) image from the high resolution MS (VNIR) image by combining all the MS bands into one band, b. reducing the spatial resolution of the high resolution MS (VNIR) image to that of the low resolution MS (SWIR) image through pixel binning to simulate a low resolution MS (VNIR) image, and c.
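The Pan simulation (step a) and pixel binning (step b) referred to above can be sketched as follows; the equal band weights, the array sizes, and the 2x2 binning factor are illustrative assumptions, not values from the patent:

```python
import numpy as np

def bin_pixels(band, factor):
    """Simulate a lower-resolution band by averaging factor x factor blocks."""
    h, w = band.shape
    return band.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def simulate_pan(ms_bands):
    """Simulate a broad Pan band by combining all MS bands into one
    (equal band weights assumed for the sketch)."""
    return np.mean(ms_bands, axis=0)

rng = np.random.default_rng(1)
ms_vnir = rng.uniform(0.0, 1.0, (4, 8, 8))   # 4 illustrative VNIR bands

pan_sim = simulate_pan(ms_vnir)              # step a: one band at MS resolution
low_res = bin_pixels(pan_sim, 2)             # step b: binned to half resolution

print(pan_sim.shape, low_res.shape)          # (8, 8) (4, 4)
# Block averaging preserves the mean radiance of the scene:
print(np.isclose(pan_sim.mean(), low_res.mean()))   # True
```

Block averaging is used here because, like hardware pixel binning, it sums the signal over each block of detector pixels and therefore preserves overall radiometry while trading spatial resolution for signal-to-noise ratio.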
  • methods of the present invention can be used to fuse/sharpen images from the Pan, MS and HS sensors on board NASA's EO-1 satellite
  • the spatial resolution of the MS and HS sensors on board EO-1 is the same (30m), whereas the ground coverage of the EO-1 HS sensor, with a 7.5km width, is much narrower than that of the EO-1 MS sensor, with a 30km width. Even with narrower ground coverage, the HS sensor cannot collect good quality HS images.
  • the spatial resolution of an EO-1 HS image is reduced from 30m to 90m to improve the signal to noise ratio (SNR) (i.e.
  • the 30m MS image is fused with the 90m HS image to create a 30m HS image (this image has much better quality than the original 30m HS image).
  • the 10m Pan image is fused with the fused 30m HS to create a 10m HS image.
  • Methods according to embodiments of the present invention can be used to fuse/sharpen images from 3 or more sensors where the spatial resolutions of the 3 or more sensors are different.
  • the spatial resolution can be adjusted by adjusting the pixel size or the focal length.
  • the ground coverage of the images used in the methods of the present invention is preferably the same but may be different. If the ground coverage is different, overlap areas of the images may be used.

Abstract

According to one embodiment, the present invention relates to (1) a sensor system configuration to record multi-level spatial and spectral information for creating high spatial resolution, large coverage, and high spectral resolution (hyperspectral) images, and (2) a multi-level spatial and spectral resolution sharpening method to create high spatial resolution, large coverage hyperspectral images.

Description

OPTICAL SENSOR SYSTEMS AND IMAGE PROCESSING METHODS
FOR REMOTE SENSING
The present invention relates to remote sensing in general, and to optical sensor systems and image processing methods for remote sensing in particular.
BACKGROUND
Hyperspectral remote sensing (with more than a dozen and up to hundreds of spectral bands) is important for land cover classification, target detection, and applications in natural resources, forestry, agriculture, geology, and military. However, (1) limited spatial resolution, and (2) enormous data volume are two of the major limitations of hyperspectral remote sensing, which significantly hinder the collection and application of hyperspectral images in a broad range of areas. For large ground coverage, the data volume will be even greater.
To increase the spatial resolution, a prior art solution for airborne hyperspectral sensors is to fly at a lower altitude, but at the cost of reducing the ground coverage (i.e. the ground coverage is narrower for each flight as compared to images taken at a higher altitude). For satellite sensors, the solution is to increase the focal length of the lens, which also significantly reduces the ground coverage per orbit circle. The reduction of ground coverage means a reduction in the efficiency of remote sensing data collection. More flight time is needed to cover the same area.
Therefore, to overcome the limitation of spatial resolution without narrowing the ground coverage, research on pixel unmixing of hyperspectral images has been an active topic for decades, in order to interpret spectral information from objects smaller than one ground pixel. To date, prior art pixel unmixing techniques still need to be supported by some known information from other sources. Nonetheless, only very limited success has been achieved [01][02].
On the other hand, to increase the spatial resolution of hyperspectral ("HS") images, some researchers have recently begun to explore the potential of using pan-sharpening techniques to fuse panchromatic ("Pan") images with a few selected hyperspectral bands [03][04][05]. A few others have used selected multispectral ("MS") bands to fuse with selected HS bands [06]. Some existing pan-sharpening techniques were used in the fusions. Although more detail can be seen in the fusion results, their quality is still poor, with obvious noise, colour distortion, and/or unnatural integration between spectral and spatial information.
Hyperspectral imaging typically involves relatively large volumes of data. To reduce the data volume, a common solution is the use of data compression (such as the JPEG format). But if lossless compression is used, the compression rate will not be high, and not much data volume can be reduced. If lossy compression is used, some image information will be lost, which is not acceptable for most remote sensing applications.
In view of the foregoing, there is a need for an improved sensor system and method to enlarge ground coverage, but still keep the spatial resolution and maintain manageable data volume for hyperspectral images.
BRIEF DESCRIPTION OF THE DRAWINGS
Various embodiments of the present invention will now be described with reference to the drawings, in which:
FIG. 1 is a graph of the spectral ranges of a multi-sensor assembly according to an embodiment of the present invention;
FIG. 2 is a diagram depicting the relative spectral resolutions of Pan, MS, superspectral ("SS"), and HS sensors with the same field of view ("FOV") and the same swath;
FIG. 3 is a diagram depicting a method of fusion according to an embodiment of the present invention; and,
FIG. 4 is a graph depicting the spatial and spectral resolutions of the sensors of the WorldView-3 satellite.

DESCRIPTION
Remote sensing images with more than a dozen and up to hundreds of spectral bands are broadly defined as hyperspectral ("HS") images. However, more narrowly, hyperspectral images can also be further divided into superspectral ("SS") images and HS images, where the former contain more than a dozen but less than 100 spectral bands and the latter contain more than 100 spectral bands. In the present invention, if not specifically defined, the broad HS definition is used. If SS is used, the narrow HS definition is used.
According to one embodiment, the present invention relates to (1) a sensor system configuration to record multi level spatial and spectral information for creating high spatial resolution, large coverage, and high spectral resolution (hyperspectral) images, and (2) a multi-level spatial and spectral resolution sharpening method to create high spatial resolution, large coverage hyperspectral images.
According to another embodiment, the present invention relates to a multi-sensor system for recording necessary information of high spatial resolution, large coverage, and high spectral resolution for creating high spatial resolution and large coverage hyperspectral images.
According to a further embodiment, the present invention relates to a multi-level spatial and spectral resolution sharpening method for creating high spatial resolution and large coverage hyperspectral images from the images collected by the multi-sensor system.
According to a still further embodiment, a multi-sensor system according to the present invention optimally utilizes the complementary relationship between spatial resolution (pixel size) and spectral resolution (bandwidth) of optical sensors to collect desired high spatial resolution, high spectral resolution, and large ground coverage information using a set of different sensors, respectively.
According to another embodiment, the present invention relates to a multi-level spatial and spectral resolution sharpening method for fusing high spatial resolution and high spectral resolution information from different images into one image to create a high spatial resolution and large coverage hyperspectral image.
According to another embodiment, the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution, fusing the image data of the second image and the third image to produce a fourth image, and fusing the image data of the first image and the fourth image to produce a fifth image, wherein the fifth image is a final sharpened image. The image data for the first image may have been collected by one or more panchromatic sensors, the image data for the second image may have been collected by one or more multi-spectral sensors, and the image data for the third image may have been collected by one or more sensors selected from the group consisting of superspectral and hyperspectral sensors. All the sensors may be either airborne-based sensors or satellite-based sensors or terrestrial sensors. All of the sensors may have the same FOV and the same ground swath. The first, second, and third images may have a common overlapping area. The image data fused may comprise image data defining all or a part of a common overlapping area.
The first image may be a panchromatic image type, the second image may be a multi-spectral image type, and the third image may be a hyperspectral or a superspectral image type. The image data fused may further comprise all of the spectral bands of the second and third images defining the spectral resolution of the common overlapping area.
According to another embodiment, the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution, obtaining image data defining a fourth image, the fourth image having a fourth spatial resolution and a fourth spectral resolution, the fourth spatial resolution being lower than the third spatial resolution and the fourth spectral resolution being higher than the third spectral resolution, fusing the image data of the third image and the fourth image to produce a fifth image, fusing the image data of the second image and the fifth image to produce a sixth image, and fusing the image data of the first image and the sixth image to produce a seventh image, wherein the seventh image is a final sharpened image. The image data for the first image may have been collected by one or more panchromatic sensors, the image data for the second image may have been collected by one or more multi-spectral sensors, the image data for the third image may have been collected by one or more superspectral sensors, and the image data for the fourth image may have been collected by one or more hyperspectral sensors. All the sensors may be either airborne-based sensors or satellite-based sensors or terrestrial sensors.
All the sensors may have the same FOV and the same ground swath. The first, second, third, and fourth images may have a common overlapping area. The image data fused may comprise image data defining all or a part of the common overlapping area. The first image may be a panchromatic image type, the second image may be a multi-spectral image type, the third image may be a superspectral image type, and the fourth image may be a hyperspectral image type. The image data fused may further comprise all of the spectral bands of the second, third, and fourth images defining the spectral resolution of the common overlapping area.
According to another embodiment, the present invention relates to a method for producing a sharpened image comprising obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan image; obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS image; obtaining image data defining a low spatial resolution, high spectral resolution image, such image being called a LR-HS image; fusing the LR-HS image and the MR-MS image to produce a sharpened MR-HS image; and fusing the sharpened MR-HS image and the HR-Pan image to produce a sharpened HR-HS image. The HR-Pan image may have a spatial resolution of 1m and 1 spectral band; the MR-MS image may have a spatial resolution of 3m and 10 spectral bands; and the LR-HS image may have a spatial resolution of 9m and 200 spectral bands. In the step of fusing bands of the MR-MS image and bands of the LR-HS image to produce MR-HS bands, the LR-HS bands being fused may be located in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and in the step of fusing the band of the HR-Pan image and the fused MR-HS bands, the fused MR-HS bands may be located in the same or similar spectral range as the band of the HR-Pan image. Alternatively, in the step of fusing bands of the MR-MS image and bands of the LR-HS image to produce MR-HS bands, the LR-HS bands being fused may not be in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and in the step of fusing the band of the HR-Pan image and the fused MR-HS bands, the fused MR-HS bands may not be located in the same or similar spectral range as the band of the HR-Pan image.
According to another embodiment, the present invention relates to a method for producing a sharpened image comprising obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan (VNIR) image; obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS (VNIR) image; obtaining image data defining a low spatial resolution, medium spectral resolution image in another spectral range, such image being called a LR-MS (SWIR) image; obtaining a medium spatial resolution Pan (VNIR) image [MR-Pan (VNIR)] from the HR-Pan (VNIR) image; obtaining a low spatial resolution MS (VNIR) image [LR-MS (VNIR)] from the MR-MS (VNIR) image; fusing the LR-MS (SWIR) image and the MR-Pan (VNIR) image to produce a sharpened MR-MS (SWIR) image; and fusing the MR-MS (VNIR) image and the sharpened MR-MS (SWIR) image with the HR-Pan (VNIR) image to produce a sharpened HR-MS (VNIR) image and a HR-MS (SWIR) image. Alternatively, in the previous step, the MR-MS (VNIR) image may be fused with the HR-Pan (VNIR) image to produce a sharpened HR-MS (VNIR) image, or the MR-MS (SWIR) image may be fused with the HR-Pan (VNIR) image to produce a sharpened HR-MS (SWIR) image.
At least one spectral band of the LR-MS (VNIR) image may be used as a reference to fuse the LR-MS (SWIR) image and the MR-Pan (VNIR) image, to produce a sharpened MR-MS (SWIR) image. At least one spectral band of the MR-MS (VNIR) image may be used as a reference to fuse the MR-MS (VNIR) image and the MR-MS (SWIR) image with the HR-Pan (VNIR) image, to produce a sharpened HR-MS (VNIR) image and HR-MS (SWIR) image.

According to another embodiment, the present invention relates to a method for producing a sharpened image comprising the steps of obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, fusing the image data of two of the first, second and third images to produce a fourth image, and fusing the image data of the fourth image with the image data of the image not fused to produce the fourth image, to produce a fifth image, wherein the fifth image is a sharpened image. The first, second and third images may have substantially the same ground coverage. Prior to production of the fourth image, at least one of the first, second and third spatial resolutions may be reduced whereby the first, second and third spatial resolutions are not equal.

According to another embodiment, the present invention relates to an image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising a Pan sensor, a MS sensor, and a HS sensor, wherein the Pan sensor is configured to obtain image data defining a high spatial resolution and low spectral resolution image; the MS sensor is configured to obtain image data defining a medium spatial
resolution and medium spectral resolution image; and the HS sensor is configured to obtain image data defining a low spatial resolution and high spectral resolution image. The image sensor configuration may further comprise a SS sensor configured to obtain image data defining a spatial resolution between those of the MS sensor and the HS sensor, and a spectral resolution between those of the MS sensor and the HS sensor.
According to another embodiment, the present invention relates to an image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising first, second and third image sensors, wherein the first image sensor has a first spatial resolution and a first spectral resolution, the second image sensor has a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, and the third image sensor has a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution.
According to certain embodiments, the present invention relates to a combination of a multi-sensor system and a multi-level sharpening method for collecting and creating a high spatial resolution and large coverage hyperspectral image, which has up to or more than 27 times higher spatial resolution and more than 10 times larger ground coverage, compared to prior art single sensor hyperspectral technology. The combination of the multi-sensor system and the multi-level sharpening method can reduce the data volume by up to or more than 83 times, compared to prior art single sensor hyperspectral technology.
According to another embodiment, the present invention relates to a sensor system which can collect multi-level spatial and spectral information for high spatial resolution, high spectral resolution (hyperspectral), and large ground coverage images by effectively utilizing the complementary relationship between spectral bandwidth and spatial resolution of optical sensors. A set of specifically configured sensors is needed for the data collection. A high spatial resolution, large coverage hyperspectral image can then be created by effectively combining the multi-level spatial and spectral information from the set of sensors through an automated multi-level spatial and spectral resolution sharpening process.
Fused/sharpened images generally contain more data volume than the original images before the fusion/sharpening, because the spatial resolution of the fused/sharpened images is increased. According to certain embodiments of the present invention, the final high spatial resolution and large coverage hyperspectral images are produced "on-the-fly" as they are needed. The original images before the fusion/sharpening are kept for image data transmission purposes (such as from satellite to ground, from data vendor to end user, or others) and for image data storage purposes to keep the data volume small.
Sensor configuration
For multi-sensor systems according to embodiments of the present invention, different variations of sensor configurations are possible, as long as the sensor system follows the general principle of the complementary relationship between spectral bandwidth and spatial resolution of optical sensors.
The configuration of a sensor system according to an embodiment of the present invention, in terms of spectral bandwidth (spectral resolution) and spatial resolution, is illustrated in FIG. 1. It includes three different types of sensors: Pan (panchromatic), MS (multispectral), and HS (hyperspectral) sensors. The number of each type of sensor may vary. More particularly, in the sensor system of FIG. 1, there are the following sensors:
(1) two Pan sensors, each covering one broad spectral bandwidth (such as Pan 1 and Pan 2), to collect two high spatial resolution panchromatic (HR-Pan) (monochrome) images, at the cost of losing spectral resolution (information);
(2) two MS sensors with medium spectral bandwidths and a total of 16 spectral bands (such as MS1, MS2, ..., MS16) to collect medium spatial resolution multispectral (MR-MS) images, at the cost of losing some spatial resolution while gaining some spectral resolution; and (3) two HS sensors with narrow spectral bandwidths and a total of 220 spectral bands (such as HS1, HS2, ..., HS220) to collect low spatial resolution hyperspectral (LR-HS) images, at the cost of significantly losing spatial resolution while significantly gaining spectral resolution.
For each sensor type, the number of sensors may vary depending on the sensor design and the sensor materials used. The spectral range of the sensors can also go beyond 2500 nm as illustrated in FIG. 1.
For each type of sensor, the spatial resolution can also be adjusted to allow each sensor to collect a different spatial resolution. The light energy coming to a sensor is much weaker in the short wave infrared (SWIR) range than in the visible and near infrared (VNIR) range. Therefore, the spatial resolution of the Pan, MS, or HS sensor in the short wave infrared range can be lower than that of the corresponding sensor in the visible and near infrared range.
In addition, the sensor configuration illustrated in FIG. 1 can also be split into two or more sets of sensors; for example, the Pan, MS, and HS sensors in the visible and near infrared (VNIR) spectral range are grouped as one set of multi-level sensors, and the Pan, MS, and HS sensors in the short wave infrared (SWIR) range are grouped as another set of sensors. The two sets of sensors can be used together and also separately.
Because broader spectral bandwidths allow more photons to come into the sensor's pixels, the pixels can be smaller but still receive sufficient photons to form an image. On the other hand, narrower spectral bandwidths allow fewer photons to come into the sensor's pixels, so the pixels must be bigger in order to receive sufficient photons. Utilizing this complementary relationship between spatial and spectral resolutions, a set of specifically configured sensors can be designed, which collect a high spatial and low spectral resolution image, a medium spatial and medium spectral resolution image, and a high spectral and low spatial resolution image, respectively.
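This complementary relationship can be illustrated numerically. The sketch below assumes, for illustration only, that the photon count per pixel scales with spectral bandwidth times pixel ground area (all other factors equal); the 400 nm and 10 nm bandwidths are hypothetical values, not sensor specifications:

```python
def pixel_size_for_equal_photons(ref_pixel_m, ref_bandwidth_nm, bandwidth_nm):
    """Pixel size needed so that a band of the given width collects the same
    photon count as a reference band, assuming photons are proportional to
    bandwidth x pixel area (illustrative simplification only)."""
    return ref_pixel_m * (ref_bandwidth_nm / bandwidth_nm) ** 0.5

# A hypothetical 400 nm wide Pan band at 1m versus a 10 nm wide HS band:
hs_pixel = pixel_size_for_equal_photons(1.0, 400.0, 10.0)
print(round(hs_pixel, 2))  # 6.32 -- narrow bands force much larger pixels
```

The square root appears because photon count grows with pixel area, so a 40-fold bandwidth reduction requires only about a 6.3-fold increase in pixel side length.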
In other embodiments, a sensor system according to the present invention can also be extended to include four different types of sensors, such as Pan (panchromatic), MS (multispectral), SS (superspectral) and HS (hyperspectral) sensors, to make the spatial and spectral resolution changes between different sensors more gradual. For example (example 1), a set of sensors with 2 Pan bands at 1m, 10 MS bands at 3m, 50 SS bands at 9m, and 250 HS bands at 27m. Other sensor configurations are also possible.
Referring to FIG. 2 and FIG. 3, although the spatial resolutions of the sensors are different, they cover the same ground area. Because the spatial resolution of the HS sensor is lower than that of the SS sensors and much lower than those of the MS and Pan sensors, it is feasible for the HS sensor to cover the same area as do the SS, MS, and Pan sensors and still receive sufficient photons to form an image.
Referring to FIG. 2 and FIG. 3, in an assembly of the Pan, MS, SS, and HS sensors according to an embodiment of the present invention, all the sensors have the same sensor FOV and the same ground swath, but different pixel sizes (spatial resolutions) and different spectral bandwidths. This allows for easier and better co-registration between all images, ensuring better fusion quality between the images of the sensors. This also maximizes the common overlapping area of all the sensors, minimizing unused image areas. Other sensor assemblies with varying FOV or varying swaths may be useable in sensor systems according to the present invention, but the image processing efficiency or image use efficiency may be reduced.
Methods to create high spatial resolution hyperspectral images
A multi-level spatial and spectral resolution sharpening method, according to another embodiment of the present invention, can be used to create high spatial resolution and large coverage hyperspectral images. The general principle of the method is illustrated in FIG. 3, which includes the following steps.
(1) fusing a low spatial resolution but high spectral resolution (hyperspectral) image ("LR-HS") with a medium spatial resolution and medium spectral resolution (multispectral) image ("MR-MS") to create a medium spatial resolution but high spectral resolution image ("MR-HS"); and then (2) fusing the medium spatial resolution but high spectral resolution image ("MR-HS") from step (1) above, with a high spatial resolution but low spectral resolution (panchromatic) image ("HR-Pan") to create a high spatial resolution and high spectral resolution image ("HR-HS").
For example (example 2), if a sensor system according to an embodiment of the present invention collects:
• a high spatial resolution (1m) and low spectral resolution (1 band, Pan) image (1 HR-Pan band),
• a medium spatial resolution (3m) and medium spectral resolution (10 bands, MS) image (10 MR-MS bands), and
• a low spatial resolution (9m) and high spectral resolution (200 bands, HS) image (200 LR-HS bands),
the multi-level spatial and spectral resolution sharpening is conducted as follows:
(1) The 9m LR-HS image (200 bands) will be fused with the 3m MR-MS image (10 bands) to produce a sharpened 3m MR-HS image (200 bands).
(2) The sharpened 3m MR-HS image (200 bands) will then be fused with the 1m HR-Pan image (1 band) to create a sharpened 1m HR-HS image (200 bands).
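Steps (1) and (2) can be sketched in code. The `ratio_fuse` below is a toy Brovey-style ratio modulation standing in for a production fusion algorithm (such as UNB PanSharp); only the two-level structure and the 9m → 3m → 1m resolution factors of example 2 are taken from the description, with tiny flat arrays and reduced band counts for brevity:

```python
import numpy as np

def upsample(img, factor):
    """Nearest-neighbour upsampling along the last two (spatial) axes."""
    return np.repeat(np.repeat(img, factor, axis=-2), factor, axis=-1)

def ratio_fuse(sharp_band, coarse_bands, factor):
    """Toy Brovey-style fusion: modulate each upsampled coarse band by the
    detail ratio of the sharper band. A stand-in, not the claimed method."""
    up = upsample(coarse_bands, factor)
    h, w = sharp_band.shape
    smooth = upsample(  # low-pass proxy: block-average then re-expand
        sharp_band.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3)),
        factor)
    return up * (sharp_band / np.maximum(smooth, 1e-6))

hs_9m = np.ones((2, 3, 3))      # LR-HS (9m)
ms_3m = np.ones((4, 9, 9))      # MR-MS (3m)
pan_1m = np.ones((27, 27))      # HR-Pan (1m)

hs_3m = ratio_fuse(ms_3m.mean(axis=0), hs_9m, 3)   # step (1): MR-HS
hs_1m = ratio_fuse(pan_1m, hs_3m, 3)               # step (2): HR-HS
print(hs_1m.shape)  # (2, 27, 27)
```

Note that the intermediate MR-HS product keeps all the HS bands; only the spatial axes grow at each level.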
For better results, each of the MR-MS bands is fused with those LR-HS bands that are located in the same or similar spectral range as the MR-MS band, respectively (as partially shown in FIG. 1), to produce MR-HS bands. The HR-Pan is then fused with those MR-HS bands located in the same or similar spectral range as the Pan, to produce HR-HS bands.
An MR-MS band can also be fused with those LR-HS bands that are not located in the same or similar spectral range. The same can be applied to the fusion between HR-Pan and MR-HS image bands as well. However, certain colour distortion may occur.
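The spectral-range grouping described above (each HS band fused with the MS band covering the same range) can be sketched as a simple lookup. The band edges and centres below are hypothetical values for illustration; real edges would come from the sensor specification:

```python
def match_bands(hs_centres_nm, ms_ranges_nm):
    """Assign each HS band to the index of the MS band whose spectral range
    contains its centre wavelength, or None if no MS band covers it
    (the 'not in the same range' case, where colour distortion may occur)."""
    assignment = []
    for centre in hs_centres_nm:
        match = None
        for i, (lo, hi) in enumerate(ms_ranges_nm):
            if lo <= centre < hi:
                match = i
                break
        assignment.append(match)
    return assignment

ms_ranges = [(450, 520), (520, 600), (630, 690), (760, 900)]  # hypothetical, nm
hs_centres = [460, 550, 660, 800, 2100]                       # HS band centres, nm
print(match_bands(hs_centres, ms_ranges))  # [0, 1, 2, 3, None]
```

The `None` entry shows an HS band (here at 2100 nm) outside every MS band, which would have to be fused against a band from a different spectral range.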
If a sensor system according to an embodiment of the present invention has four different sensors, such as Pan, MS, SS and HS sensors, as described above in example 1 , the multi-level spatial and spectral resolution sharpening method according to an embodiment of the present invention will include three steps:
(1) fusing the 27m HS image (250 bands) with the 9m SS image (50 bands), respectively, to create a 9m HS image (250 bands);
(2) fusing the 9m HS image (250 bands) with the 3m MS image (10 bands), respectively, to create a 3m HS image (250 bands); and
(3) fusing the 3m HS image (250 bands) with the 1m Pan image (2 bands), respectively, to create a 1m HS image (250 bands).
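The two-step and three-step procedures generalize to any number of resolution levels: sort the images from coarsest to finest and fuse one level at a time. A minimal sketch, using a stub fusion function that just records the fusion order for the four levels of example 1 (names and structure are illustrative, not the claimed implementation):

```python
def multilevel_sharpen(levels, fuse):
    """Fuse a set of (resolution_m, image) pairs from coarsest to finest.
    `fuse(finer, coarser)` must return the coarser image's spectral content
    brought up to the finer image's spatial resolution."""
    ordered = sorted(levels, key=lambda lv: lv[0], reverse=True)  # coarse first
    result = ordered[0][1]
    for _, finer in ordered[1:]:
        result = fuse(finer, result)
    return result

# Stub fuse that records the fusion order for example 1 (27m/9m/3m/1m):
chain = multilevel_sharpen(
    [(1, "Pan"), (27, "HS"), (3, "MS"), (9, "SS")],
    fuse=lambda fine, coarse: f"({coarse}+{fine})",
)
print(chain)  # (((HS+SS)+MS)+Pan)
```

The printed chain matches steps (1) to (3): HS is sharpened by SS first, then by MS, then by Pan.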
Spatial resolution increase and data volume reduction
If a sensor system according to an embodiment of the present invention collects 2 Pan bands at 1m, 10 MS bands at 3m, 50 SS bands at 9m, and 250 HS bands at 27m, as described in example 1, the spatial resolution of the HS bands will be increased from 27m to 1m; an increase of 27 times. The data volume will be reduced from 1,822.5 MB to 29.68 MB; a data volume reduction of 61.4 times. The 1,822.5 MB is the data volume of the final high spatial resolution HS image, which has 2700 × 2700 pixels per band, and 250 bands (2700 × 2700 × 250 = 1,822.5 MB). The 29.68 MB is the sum of the data volumes of the original HS (100×100 pixels per band, 250 bands), SS (300×300 pixels per band, 50 bands), MS (900×900 pixels per band, 10 bands), and Pan (2700×2700 pixels per band, 2 bands) images.
If a sensor system according to an embodiment of the present invention collects HR-Pan, MR-MS and LR-HS images like those described above in example 2, the spatial resolution of the final HR-HS image will be 9 times higher than that of the original LR-HS image, and the data volume will be reduced by 44 times compared to that of a 1m, 200 band HR-HS image directly collected by a HR-HS sensor.
If the spatial resolution ratio between HR-Pan and MR-MS is 1/4 and between MR-MS and LR-HS is also 1/4, the spatial resolution of the HR-HS image will be increased by 16 times. The data volume will be reduced by 83 times (assuming HR-Pan has 1 band, MR-MS 10 bands, LR-HS 200 bands, and the resolution ratio between HR-Pan, MR-MS and LR-HS is 1/4).
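The data volume figures above can be verified with a short calculation (assuming, as the 1,822.5 MB figure implies, one byte per pixel per band; the pixel counts for the 44× and 83× cases are chosen to match the stated resolution ratios):

```python
def volume_mb(sensors, bytes_per_pixel=1):
    """Total data volume in MB for (pixels_per_side, band_count) sensors,
    assuming one byte per pixel per band as in the 1,822.5 MB figure."""
    return sum(n * n * bands for n, bands in sensors) * bytes_per_pixel / 1e6

# Example 1: Pan 2700 px / 2 bands, MS 900/10, SS 300/50, HS 100/250
original = volume_mb([(2700, 2), (900, 10), (300, 50), (100, 250)])
final = volume_mb([(2700, 250)])
print(original, final, round(final / original, 1))  # 29.68 1822.5 61.4

# Example 2 (1m/3m/9m with 1/10/200 bands) and the 1/4-ratio case:
ex2 = volume_mb([(900, 200)]) / volume_mb([(900, 1), (300, 10), (100, 200)])
quarter = volume_mb([(1600, 200)]) / volume_mb([(1600, 1), (400, 10), (100, 200)])
print(round(ex2), round(quarter))  # 44 83
```

This reproduces the 61.4×, 44×, and 83× reductions quoted in the description.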
A spatial resolution increase of 9 to 27 times and a data volume reduction of 44 to 83 times are revolutionary improvements, compared to conventional hyperspectral data collection technologies.
Besides the substantial increase of spatial resolution and massive reduction of data volume, the new sensor system disclosed in this invention can also enlarge the ground coverage (swath) by up to 27 times for the same spatial resolution, compared to the current technology.
Variation of multi-level spatial and spectral resolution sharpening
The concept of the aforementioned multi-level spatial and spectral resolution sharpening method can also be adjusted to fuse multi-level spatial resolution images collected by sensor systems with varying configurations.
For example, as illustrated in FIG. 4, the WorldView-3 satellite (to be launched in 2014) carries one high resolution Pan sensor to capture 1 band of high spatial resolution (0.31m) Pan image (HR-Pan) in the visible and near infrared (VNIR) spectral range, and multiple MS sensors to capture 8 bands of medium spatial resolution (1.24m) MS image (MR-MS) in the VNIR range and 8 bands of low spatial resolution (3.7m) MS image (LR-MS) in the short wave infrared (SWIR) range. Even though WorldView-3 carries multiple sensors and has 3 different spatial resolutions, its sensor configuration is still a traditional configuration, because it has only one or at most two spatial resolution levels in each spectral range (i.e. VNIR or SWIR). And, it has only one or at most two different types of sensors in each of the spectral ranges.
According to certain embodiments of the present invention, the WorldView-3 images in the VNIR range can be resampled to create multi-level spatial resolutions (i.e. 3 spatial resolution levels, more than the traditional 2 levels). The multi-level spatial and spectral resolution sharpening can then be conducted to increase not only the spatial resolution of the MS image (1.24m) in the VNIR range, but also that of the MS image (3.7m) in the SWIR range. The resampling and the multi-level sharpening can be conducted in the following steps:
(1) reducing the spatial resolution of the MS (VNIR) image (1.24m) to that of the MS (SWIR) image (3.7m), to create a simulated low resolution MS (VNIR) image (3.7m); i.e. creating a low spatial resolution MS (VNIR) image [LR-MS (VNIR)] from the medium spatial resolution MS (VNIR) image [MR-MS (VNIR)];
(2) reducing the spatial resolution of the Pan image (0.31m) to that of the MS (VNIR) image (1.24m), to create a simulated medium resolution Pan image (1.24m); i.e. creating a medium spatial resolution Pan (VNIR) image [MR-Pan (VNIR)] from the high spatial resolution Pan (VNIR) image [HR-Pan (VNIR)];
(3) fusing the medium resolution (1.24m) Pan image [MR-Pan (VNIR)] with the low resolution (3.7m) MS (SWIR) image [LR-MS (SWIR)], using at least one spectral band of the simulated low resolution (3.7m) MS (VNIR) image [LR-MS (VNIR)] as reference, to create a medium resolution (1.24m) MS (SWIR) image [MR-MS (SWIR)]; and
(4) fusing the high resolution (0.31m) Pan image [HR-Pan (VNIR)] with the original medium resolution (1.24m) MS (VNIR) image [MR-MS (VNIR)] and the created medium resolution (1.24m) MS (SWIR) image [MR-MS (SWIR)], to create a high resolution (0.31m) MS (VNIR) image [HR-MS (VNIR)] and a high resolution (0.31m) MS (SWIR) image [HR-MS (SWIR)].
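The resolution reductions in steps (1) and (2) can be sketched as pixel binning (block averaging). Note that 0.31m → 1.24m is an exact 4× binning, while 1.24m → 3.7m is only about 2.98×, so real WorldView-3 data would require interpolation-based resampling rather than exact binning; the exact 4× case is shown here for illustration:

```python
import numpy as np

def bin_pixels(img, factor):
    """Reduce spatial resolution by averaging factor x factor pixel blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

pan_031 = np.arange(64, dtype=float).reshape(8, 8)  # toy 0.31m Pan tile
pan_124 = bin_pixels(pan_031, 4)                    # simulated 1.24m Pan
print(pan_124.shape, pan_124[0, 0])  # (2, 2) 13.5
```

Each output pixel is the mean of a 4×4 block, so the simulated image preserves radiometry while coarsening the spatial grid.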
Referring to FIGs. 1, 2 and 3, and the variation described with respect to FIG. 4, and according to the principles described above of utilizing the complementary information in a narrow bandwidth image and a broad bandwidth image (such as MS (narrow) versus Pan (broad), or HS (narrow) versus MS (broad) images) to create a high spatial resolution and high spectral resolution image, the present invention can, in accordance with embodiments of the present invention, be further extended to fuse two or more MS images with different spatial resolutions that cover the same or different spectral ranges. The following steps may be applied to fuse the MS images: (1) if the spectral range of the high spatial resolution MS image is the same as or similar to that of the low resolution MS image: a. simulating a high resolution Pan image from the high resolution MS image by combining all the MS bands into one band, and b. fusing the simulated high resolution Pan image with the original low resolution MS image to create a high resolution MS image that has the same bandwidth as the original low resolution MS image.
(2) If the spectral range of the high spatial resolution MS image is not the same as or similar to that of the low resolution MS image (such as the MS (VNIR) image and MS (SWIR) image as shown in FIG. 4, but where the original Pan (VNIR) image is not available): a. simulating a high resolution Pan (VNIR) image from the high resolution MS (VNIR) image by combining all the MS bands into one band, b. reducing the spatial resolution of the high resolution MS (VNIR) image to that of the low resolution MS (SWIR) image through pixel binning to simulate a low resolution MS (VNIR) image, and c. fusing the simulated high resolution Pan (VNIR) image with the low resolution MS (SWIR) image, using at least one spectral band of the simulated low resolution MS (VNIR) image as reference, to create a high resolution MS (SWIR) image.
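The Pan simulation in steps (1)a and (2)a — combining all MS bands into one broad band — can be sketched as a weighted band average. Equal weights are assumed here purely for illustration; in practice the weights would reflect each band's relative spectral response:

```python
import numpy as np

def simulate_pan(ms_bands, weights=None):
    """Simulate a broad-band Pan image as a weighted average of MS bands.
    Equal weights are assumed when none are given (illustrative only;
    real weights would come from the sensor's spectral response curves)."""
    ms = np.asarray(ms_bands, dtype=float)
    if weights is None:
        weights = np.full(ms.shape[0], 1.0 / ms.shape[0])
    return np.tensordot(weights, ms, axes=1)

# Four toy 4x4 MS bands with flat values 10, 20, 30, 40:
ms = np.stack([np.full((4, 4), v) for v in (10.0, 20.0, 30.0, 40.0)])
pan = simulate_pan(ms)
print(pan.shape, pan[0, 0])  # (4, 4) 25.0
```

The result is a single-band image spanning the combined spectral range of the input MS bands, which then plays the role of the Pan image in the fusion steps.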
According to certain embodiments of the present invention, methods of the present invention can be used to fuse/sharpen images from the Pan, MS and HS sensors on board NASA's EO-1 satellite. The spatial resolutions of the MS and HS sensors on board EO-1 are the same (30m), whereas the ground coverage of the EO-1 HS sensor, with a 7.5km swath width, is much narrower than that of the EO-1 MS sensor, with a 30km swath width. Even with this narrower ground coverage, the HS sensor cannot collect good quality HS images. In applying the method according to one embodiment of the present invention to fuse/sharpen EO-1 images, the spatial resolution of an EO-1 HS image is reduced from 30m to 90m to improve the signal to noise ratio (SNR) (i.e. improve the quality of the HS image at the cost of losing spatial resolution). The 30m MS image is fused with the 90m HS image to create a 30m HS image (this image has much better quality than the original 30m HS image). Then, the 10m Pan image is fused with the fused 30m HS image to create a 10m HS image. Methods according to embodiments of the present invention can be used to fuse/sharpen images from 3 or more sensors where the spatial resolutions of the 3 or more sensors are different. The spatial resolution can be adjusted by adjusting the pixel size or the focal length. The ground coverage of the images used in the methods of the present invention is preferably the same but may be different. If the ground coverage is different, overlap areas of the images may be used.
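The SNR gain behind the 30m-to-90m reduction can be seen numerically: averaging a 3x3 block of pixels reduces the standard deviation of uncorrelated noise by roughly a factor of 3 (the square root of 9 samples). The sketch below uses a unit-variance Gaussian noise field as an illustrative assumption; the 3x3 factor matches the 30m-to-90m example.

```python
import numpy as np

def bin_for_snr(band, n):
    """Average n x n pixel blocks: spatial resolution coarsens by a factor
    of n, while the std of uncorrelated noise falls by ~n, improving SNR."""
    rows, cols = band.shape
    return band.reshape(rows // n, n, cols // n, n).mean(axis=(1, 3))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(900, 900))  # assumed unit-std sensor noise
binned = bin_for_snr(noise, 3)                 # 30 m -> 90 m analogue
# binned.std() comes out close to 1/3: a ~3x noise reduction
```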
It is understood that the methods according to embodiments of the present invention can be carried out using programmed general purpose computers. Conventional fusion methods, such as the UNB PanSharp fusion method, can be employed to carry out the image fusion steps.
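As a sketch of how a programmed implementation might chain the two fusion levels (the pipeline of claim 1), the following uses a simple intensity-modulation fusion purely as a placeholder for a conventional method such as UNB PanSharp, whose actual algorithm is not reproduced here; the array shapes and scale factors are illustrative assumptions.

```python
import numpy as np

def upsample(img, scale):
    """Nearest-neighbour upsampling of (bands, rows, cols) to a finer grid."""
    return img.repeat(scale, axis=-2).repeat(scale, axis=-1)

def fuse(sharp_band, coarse_bands, scale):
    """Placeholder ratio (intensity-modulation) fusion; a production system
    would substitute a conventional fusion method here."""
    up = upsample(coarse_bands, scale)
    intensity = np.maximum(up.mean(axis=0), 1e-6)
    return up * (sharp_band / intensity)

def two_level_sharpen(pan_hi, ms_mid, hs_lo, pan_to_ms, ms_to_hs):
    """Two-level pipeline: fuse the mid-resolution MS with the low-resolution
    HS first, then fuse the high-resolution Pan with the intermediate result."""
    ms_intensity = np.maximum(ms_mid.mean(axis=0), 1e-6)
    hs_mid = fuse(ms_intensity, hs_lo, ms_to_hs)   # step (4) of claim 1
    return fuse(pan_hi, hs_mid, pan_to_ms)         # step (5) of claim 1
```

For the EO-1 example above, the scale factors would both be 3 (10m Pan, 30m MS, 90m HS).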
It is understood that the method of reducing the spatial resolutions of the Pan and MS (VNIR) images to create 3-level spatial resolution images which have certain spectral overlap between each level is just one example. The multi-level fusion of the Pan with MS images is also one example. Other combinations of reducing the spatial resolutions and multi-level fusions exist. It is also understood that the sensors are either airborne based sensors or satellite-based sensors or terrestrial sensors. Even though the platforms used for carrying the sensors are different, the principle of image data collection and multi-level sharpening is the same.

Claims

I claim:
1. A method for producing a sharpened image comprising the steps of:
(1) obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution,
(2) obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution,
(3) obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution,
(4) fusing the image data of the second image and the third image to produce a fourth image, and
(5) fusing the image data of the first image and the fourth image to produce a fifth image, wherein the fifth image is a final sharpened image.
2. The method of claim 1, where
the image data for the first image having been collected by one or more panchromatic sensors, the image data for the second image having been collected by one or more multi-spectral sensors, and the image data for the third image having been collected by one or more sensors selected from the group consisting of superspectral and hyperspectral sensors.
3. The method of claim 2, where all the sensors are either airborne based sensors or satellite-based sensors or terrestrial sensors.
4. The method of claim 3, where
all the sensors have the same FOV and the same ground swath.
5. The method of claim 1, where the first, second, and third images have a common overlapping area.
6. The method of claim 5, where the image data fused in steps (4) and (5) comprises image data defining all or a part of the common overlapping area.
7. The method of claim 6, where the first image is a panchromatic image type, the second image is a multi-spectral image type, and the third image is a hyperspectral or a superspectral image type.
8. The method of claim 7, wherein the image data fused in step (4) further comprises all of the spectral bands of the second and third images defining the spectral resolution of the common overlapping area.
9. A method for producing a sharpened image comprising the steps of:
(1) obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution,
(2) obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution,
(3) obtaining image data defining a third image, the third image having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution,
(4) obtaining image data defining a fourth image, the fourth image having a fourth spatial resolution and a fourth spectral resolution, the fourth spatial resolution being lower than the third spatial resolution and the fourth spectral resolution being higher than the third spectral resolution,
(5) fusing the image data of the third image and the fourth image to produce a fifth image,
(6) fusing the image data of the second image and the fifth image to produce a sixth image, and
(7) fusing the image data of the first image and the sixth image to produce a seventh image, wherein the seventh image is a final sharpened image.
10. The method of claim 9, where
the image data for the first image having been collected by one or more panchromatic sensors,
the image data for the second image having been collected by one or more multi- spectral sensors,
the image data for the third image having been collected by one or more superspectral sensors, and
the image data for the fourth image having been collected by one or more hyperspectral sensors.
11. The method of claim 10, where
all the sensors are either airborne based sensors or satellite-based sensors or terrestrial sensors.
12. The method of claim 11, where
all the sensors have the same FOV and the same ground swath.
13. The method of claim 9, where the first, second, third, and fourth images have a common overlapping area.
14. The method of claim 13, where the image data fused in steps (5)-(7) comprises image data defining all or a part of the common overlapping area.
15. The method of claim 14, where the first image is a panchromatic image type, the second image is a multi-spectral image type, the third image is a superspectral image type, and the fourth image is a hyperspectral image type.
16. The method of claim 15, where the image data fused in steps (5), (6) and (7) further comprises all of the spectral bands of the second, third, and fourth images defining the spectral resolution of the common overlapping area.
17. A method for producing a sharpened image comprising:
(1) obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan image; (2) obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS image;
(3) obtaining image data defining a low spatial resolution, high spectral resolution image, such image being called a LR-HS image;
(4) fusing the LR-HS image and the MR-MS image to produce a sharpened MR- HS image; and
(5) fusing the sharpened MR-HS image and the HR-Pan image to produce a sharpened HR-HS image.
18. The method of claim 17, wherein the HR-Pan image has a spatial resolution of 1m and 1 spectral band; the MR-MS image has a spatial resolution of 3m and 10 spectral bands; and the LR-HS image has a spatial resolution of 9m and 200 spectral bands.
19. The method of claim 17, wherein step (4) comprising fusing bands of the MR-MS image and bands of the LR-HS image to produce MR-HS bands, wherein the LR-HS bands being fused are located in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and step (5) comprising fusing the band of the HR-Pan image and the fused MR-HS bands, wherein the fused MR-HS bands are located in the same or similar spectral range as the band of the HR-Pan image.
20. The method of claim 17, wherein step (4) comprising fusing bands of the MR-MS image and bands of the LR-HS image to produce MR-HS bands, wherein the LR-HS bands being fused are not in the same or similar spectral range as the bands of the MR-MS image being fused, to produce fused MR-HS bands, and step (5) comprising fusing the band of the HR-Pan image and the fused MR-HS bands, wherein the fused MR-HS bands are not located in the same or similar spectral range as the band of the HR-Pan image.
21. A method for producing a sharpened image comprising:
(1) obtaining image data defining a high spatial resolution, low spectral resolution image, such image being called a HR-Pan (VNIR) image;
(2) obtaining image data defining a medium spatial resolution, medium spectral resolution image, such image being called a MR-MS (VNIR) image;
(3) obtaining image data defining a low spatial resolution, medium spectral resolution image in another spectral range, such image being called a LR-MS (SWIR) image;
(4) obtaining a medium spatial resolution Pan (VNIR) image, such image being called a MR-Pan (VNIR) image, from the HR-Pan (VNIR) image;
(5) obtaining a low spatial resolution MS (VNIR) image, such image being called a LR-MS (VNIR) image, from the MR-MS (VNIR) image;
(6) fusing the LR-MS (SWIR) image and the MR-Pan (VNIR) image to produce a sharpened MR-MS (SWIR) image; and
(7) fusing the MR-MS (VNIR) image and the sharpened MR-MS (SWIR) image with the HR-Pan (VNIR) image to produce a sharpened HR-MS (VNIR) image and a HR-MS (SWIR) image.
22. The method of claim 21, wherein in step (7), fusing the MR-MS (VNIR) image with the HR-Pan (VNIR) image to produce a sharpened HR-MS (VNIR) image.
23. The method of claim 21, wherein in step (7), fusing the MR-MS (SWIR) image with the HR-Pan (VNIR) image to produce a sharpened HR-MS (SWIR) image.
24. The method of claim 21, wherein using at least one spectral band of the LR-MS (VNIR) image as reference in the step (6) to fuse the LR-MS (SWIR) image and the MR-Pan (VNIR) image, to produce a sharpened MR-MS (SWIR) image.
25. The method of claim 21, wherein using at least one spectral band of the MR-MS (VNIR) image as reference in the step (7) to fuse the MR-MS (VNIR) image and the MR-MS (SWIR) image with the HR-Pan (VNIR) image, to produce a sharpened HR-MS (VNIR) image and HR-MS (SWIR) image.
26. A method for producing a sharpened image comprising the steps of: obtaining image data defining a first image, the first image having a first spatial resolution and a first spectral resolution, obtaining image data defining a second image, the second image having a second spatial resolution and a second spectral resolution,
obtaining image data defining a third image, the third image having a spatial resolution and a third spectral resolution, fusing the image data of two of the first, second and third images to produce a fourth image, and
fusing the image data of the fourth image with the image data of the image not fused to produce the fourth image, to produce a fifth image, wherein the fifth image is a sharpened image.
27. The method of claim 26, wherein
the first, second and third images have substantially the same ground coverage.
28. The method of claim 26, further comprising: prior to production of the fourth image, reducing at least one of the first, second and third spatial resolutions whereby the first, second and third spatial resolutions are not equal.
29. An image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising: a Pan sensor; a MS sensor; and a HS sensor, wherein the Pan sensor is configured to obtain image data defining a high spatial resolution and low spectral resolution image; the MS sensor is configured to obtain image data defining a medium spatial resolution and medium spectral resolution image; and
the HS sensor is configured to obtain image data defining a low spatial resolution and high spectral resolution image.
30. The image sensor configuration of claim 29, further comprising a SS sensor configured to obtain image data defining a spatial resolution between those of the MS sensor and the HS sensor, and a spectral resolution between those of the MS sensor and the HS sensor.
31. An image sensor configuration for airborne or satellite-based or terrestrial imagery, the sensor configuration comprising: first, second and third image sensors, wherein
the first image sensor having a first spatial resolution and a first spectral resolution, the second image sensor having a second spatial resolution and a second spectral resolution, the second spatial resolution being lower than the first spatial resolution and the second spectral resolution being higher than the first spectral resolution, and the third image sensor having a third spatial resolution and a third spectral resolution, the third spatial resolution being lower than the second spatial resolution and the third spectral resolution being higher than the second spectral resolution.
32. A method for producing a sharpened image comprising: obtaining image data defining a high spatial resolution MS image, obtaining image data defining a low spatial resolution MS image, simulating a high resolution Pan image from the high resolution MS image by combining all the MS bands of the high resolution MS image into one band, and fusing the simulated high resolution Pan image with the low resolution MS image to generate a high resolution MS image that has the same bandwidth as the low resolution MS image.
33. The method of claim 32, wherein the spectral range of the high spatial resolution MS image is the same as or similar to that of the low resolution MS image.
34. A method for producing a sharpened image comprising: obtaining image data defining a high spatial resolution MS image, obtaining image data defining a low spatial resolution MS image, simulating a high resolution Pan image from the high resolution MS image by combining all the MS bands of the high resolution MS image into one band, reducing the spatial resolution of the high resolution MS image to that of the low resolution MS image through pixel binning to simulate a low resolution MS image, and fusing the simulated high resolution Pan image with the low resolution MS image using at least one spectral band of the simulated low resolution MS image as a reference, to generate a high resolution MS image that has the same bandwidth as the low resolution MS image.
35. The method of claim 34, wherein the spectral range of the high spatial resolution MS image is not the same as or similar to that of the low resolution MS image.