WO2020199205A1 - A hybrid hyperspectral image reconstruction method and system - Google Patents

A hybrid hyperspectral image reconstruction method and system

Info

Publication number
WO2020199205A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
hyperspectral
rgb
area
filter
Prior art date
Application number
PCT/CN2019/081550
Other languages
English (en)
French (fr)
Inventor
王星泽
赖嘉炜
Original Assignee
合刃科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 合刃科技(深圳)有限公司
Priority to CN201980005542.9A (CN111386549B)
Priority to PCT/CN2019/081550 (WO2020199205A1)
Publication of WO2020199205A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method and system for hybrid hyperspectral image reconstruction.
  • Hyperspectral image collection has played an increasingly important role in various application fields such as remote sensing, agriculture, and industrial inspection.
  • Conventionally, collecting hyperspectral images requires a filter device installed in front of a conventional image sensor to obtain narrowband optical signals at multiple different wavelengths, for example a liquid crystal tunable filter (LCTF) based on a Lyot filter stack, or an acousto-optical tunable filter (AOTF).
  • This application provides a hybrid hyperspectral image reconstruction method and system, which are used to solve the shortcomings of low portability and high cost of hyperspectral image acquisition methods.
  • In a first aspect, this application provides a hybrid hyperspectral image reconstruction method.
  • the method includes the following steps:
  • The image sensor captures the target field of view to obtain a target image to be reconstructed, where the target image is divided into a first area image and a second area image; the first area image is an RGB image and the second area image is a hyperspectral image.
  • The first area image is reconstructed into a hyperspectral image of the first area, and image fusion is performed on the hyperspectral image of the first area and the second area image to obtain a reconstructed target hyperspectral image.
  • The image sensor includes a first filter and a second filter; the first filter is used to obtain the first area image, and the second filter is used to obtain the second area image.
  • The second area image is a hyperspectral image obtained by capturing a multispectral image of the second area with the second filter and then performing a spatial resolution restoration operation on that multispectral image.
  • The method further includes: when a waveband overlapping area appears in the hyperspectral image of the first area, obtaining the target area corresponding to the waveband overlapping area in the target field of view, where the waveband overlapping area includes the areas where the edges of the light wavebands overlap; and performing image fusion on the hyperspectral images of one or more such target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
  • reconstructing the image of the first area into a hyperspectral image of the first area includes:
  • Solving the hyperspectral image reconstruction model Y_h = D_h · X according to the weight coefficient matrix X to obtain the hyperspectral image of the first region, where Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix reflecting the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h. The hyperspectral image dictionary D_h is obtained, before acquiring the image to be reconstructed, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
  • the method further includes:
  • The second filter includes a plurality of super pixels, where each super pixel is a square set of pixels formed by the arrangement and combination of multiple single-wavelength pixel pieces, and each super pixel is located at the center of one of the four quadrants of the imaging area of the image sensor.
  • the image sensor is based on semiconductor thin film technology, wherein:
  • the first filter is an RGB dye filter
  • the second filter is an F-P cavity thin film array prepared by any one of deposition, patterning, and etching methods; or,
  • the second filter is a reflective multi-channel filter formed on a CMOS sensor array by using photonic crystals.
  • the method further includes:
  • In a second aspect, this application provides a hybrid hyperspectral image reconstruction system, which includes:
  • An acquisition unit configured to photograph the target field of view with an image sensor to acquire a target image to be reconstructed, where the image to be reconstructed is divided into a first area image and a second area image; the first area image is an RGB image and the second area image is a hyperspectral image;
  • a reconstruction unit configured to reconstruct the first region image into a hyperspectral image of the first region
  • the fusion unit is configured to perform image fusion between the hyperspectral image of the first region and the image of the second region to obtain a reconstructed target hyperspectral image.
  • The image sensor includes a first filter and a second filter; the first filter is used to obtain the first area image, and the second filter is used to obtain the second area image.
  • The second area image is a hyperspectral image obtained by capturing a multispectral image of the second area with the second filter and then performing a spatial resolution restoration operation on that multispectral image.
  • The system further includes a supplementary shooting unit.
  • The supplementary shooting unit is configured to, when a waveband overlapping area appears in the hyperspectral image of the first area, obtain the target area corresponding to the waveband overlapping area in the target field of view, where the waveband overlapping area includes the areas where the edges of the light wavebands overlap;
  • The supplementary shooting unit is also used to restore the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area;
  • The fusion unit is also used to perform image fusion on the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
  • The RGB image dictionary D_RGB is obtained according to the hyperspectral image dictionary D_h and the spectral response function S of the image sensor;
  • the system further includes a training unit,
  • The training unit is used to obtain the hyperspectral image dictionary D_h before the image sensor captures the target field of view, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set;
  • the first filter is an RGB dye filter
  • the second filter is an F-P cavity thin film array prepared by any one of deposition, patterning, and etching methods; or,
  • the second filter is a reflective multi-channel filter formed on a CMOS sensor array by using photonic crystals.
  • The second filter includes a plurality of super pixels, where each super pixel is a square set of pixels formed by the arrangement and combination of multiple single-wavelength pixel pieces, and each super pixel is located at the center of one of the four quadrants of the imaging area of the image sensor.
  • the system further includes a switching unit,
  • the switching unit is configured to switch the first filter used for shooting the target area to the second filter after the reconstructed target hyperspectral image is obtained.
  • In the solutions above, the image sensor captures the target field of view to obtain the target image to be reconstructed, the first region image is then reconstructed into a hyperspectral image of the first region, and image fusion is performed on the hyperspectral image of the first region and the second region image to obtain the reconstructed target hyperspectral image.
  • the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
  • Fig. 1 is a hybrid hyperspectral image reconstruction method provided by this application
  • FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application
  • FIG. 3 is a schematic structural diagram of a super pixel formed by using an F-P cavity film array provided by the present application
  • FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral image reconstruction method provided by the present application.
  • Fig. 5 is a schematic structural diagram of a hybrid hyperspectral image reconstruction system provided by the present application.
  • Fig. 6 is a schematic structural diagram of an electronic device provided by the present application.
  • the hybrid hyperspectral image reconstruction method and system of the embodiments of the present application can be applied in many fields.
  • It can be used in various application fields such as remote sensing, agriculture, industrial inspection, and military detection.
  • In remote sensing, the system can be mounted on an aircraft to capture hyperspectral images of the target field of view.
  • In agriculture, crop growth can be evaluated from hyperspectral images of the crops.
  • In geological survey, spectral characteristics extracted from hyperspectral images of ground objects can be used to identify surface minerals; in military detection, hyperspectral images of the battlefield can be used to distinguish and identify targets, camouflaged objects, and natural objects, improving the accuracy of target strikes. This application does not specifically limit the application field.
  • Figure 1 is a hybrid hyperspectral image reconstruction method provided by the present application. The method includes the following steps:
  • the image sensor captures the target field of view to obtain an image of the target to be reconstructed.
  • the image to be reconstructed is divided into a first area image and a second area image, the first area image is an RGB image, and the second area image is a hyperspectral image.
  • the image sensor includes a first filter and a second filter, the first filter is used to obtain a first area image, and the second filter is used to obtain the second area image.
  • After the image sensor photographs the target field of view, a target image to be reconstructed is obtained.
  • Part of the target image to be reconstructed is an RGB image, which is the first area image, and part is a hyperspectral image, which is the second area image.
  • The first area image is captured through the first filter of the image sensor, and the second area image is captured through the second filter.
  • Generally, the area of the first region covered by the first filter is much larger than that of the second region covered by the second filter. This is because shooting hyperspectral images directly would require expensive additional spectroscopic and filtering devices; an image sensor with such external devices (for example, one carried by a drone to photograph the target field of view) is costly and poorly portable. The solution provided by this application therefore mainly uses the first filter, which collects RGB images, to capture the target field of view. The image sensor can thus obtain high-precision hyperspectral images without additional external devices, which greatly reduces production cost and gives good portability.
  • The second area image is a hyperspectral image obtained by first capturing a multispectral image of the second area through the second filter and then performing a spatial resolution restoration operation on that multispectral image.
  • The first area image is captured directly using the image sensor and the first filter. The second filter, however, can only capture multispectral images, while the second area image is a hyperspectral image. Therefore, the image sensor uses the second filter to photograph the target field of view to obtain a multispectral image, and then performs a spatial resolution restoration operation to obtain the second area image.
  • A multispectral image can be regarded as a special case of a hyperspectral image in which the number of imaged bands is smaller, generally only a few to a dozen. Since spectral information corresponds to color information, multispectral images or multi-band remote sensing images can capture the color information of ground objects, but their spatial resolution is low. Therefore, after the second filter obtains a multispectral image of the target field of view, a spatial resolution restoration operation can be applied to obtain a hyperspectral image. Using the second filter to collect multispectral data also effectively reduces the computational cost of reconstructing hyperspectral images from RGB and shortens the reconstruction time to a certain extent.
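  • As a rough illustration of the spatial resolution restoration step, the sketch below upsamples each band of a multispectral cube; the nearest-neighbour scheme and the array sizes are illustrative assumptions, since the application does not specify the restoration algorithm.

```python
import numpy as np

def restore_spatial_resolution(ms_cube, factor):
    """Upsample each band of a low-resolution multispectral cube (H, W, B).

    Nearest-neighbour replication (np.kron) stands in for the unspecified
    spatial-resolution restoration; a real pipeline would use an
    interpolation- or learning-based super-resolution step instead.
    """
    kernel = np.ones((factor, factor))
    return np.stack(
        [np.kron(ms_cube[:, :, b], kernel) for b in range(ms_cube.shape[2])],
        axis=2,
    )

# One 3x3 super pixel yields 9 spectral channels at low spatial resolution.
low_res = np.random.rand(4, 4, 9)
high_res = restore_spatial_resolution(low_res, 3)   # -> shape (12, 12, 9)
```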
  • The second filter includes a plurality of super pixels. Each super pixel is a square set of pixels formed by arranging and combining multiple single-wavelength pixel pieces, and each super pixel is located at the center of one of the four quadrants of the imaging area of the image sensor. Each super pixel directly obtains a multispectral image, from which a hyperspectral image is obtained after spatial resolution restoration. Although collecting multispectral data with the second filter also reduces the computational cost of RGB reconstruction, the first filter that directly captures RGB images is much cheaper than the second filter that captures multispectral images.
  • FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application.
  • the colorless part in the figure is the area covered by the first filter
  • the dark part is the area covered by the second filter
  • The second filter contains 4 × 6 super pixels in total, located at the centers of the four quadrants of the imaging area of the image sensor; the center of each quadrant contains 6 super pixels, and each super pixel is a square pixel set of 3 × 3 single-wavelength pixels.
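  • The rearrangement of a super pixel readout into spectral band planes can be sketched as follows; the row-major channel layout assumed here is hypothetical, chosen only to illustrate how a 3 × 3 mosaic yields 9 channels.

```python
import numpy as np

def demosaic_superpixels(raw, p=3):
    """Rearrange a raw mosaic of p x p single-wavelength pixels into band planes.

    raw: (H, W) sensor readout where position (i % p, j % p) inside each
    super pixel selects one of the p * p wavelength channels (a layout assumed
    here for illustration). Returns an (H // p, W // p, p * p) cube.
    """
    h, w = raw.shape
    cube = np.empty((h // p, w // p, p * p))
    for di in range(p):
        for dj in range(p):
            cube[:, :, di * p + dj] = raw[di::p, dj::p]
    return cube

raw = np.arange(36.0).reshape(6, 6)   # a 2 x 2 grid of 3 x 3 super pixels
cube = demosaic_superpixels(raw)      # -> shape (2, 2, 9)
```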
  • The first filter is an RGB dye filter;
  • the second filter is an F-P (Fabry-Perot) cavity thin film array prepared by any one of deposition, patterning, and etching methods; or
  • the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals. It should be understood that the second filter takes multispectral pictures, i.e., it collects optical signals in different wavelength bands, and the F-P cavity is a device that splits light by multi-beam interference.
  • The structure of the F-P cavity at the front end of the sensor, including but not limited to the cavity length, the refractive index of the cavity medium, and the mirror material, can be varied to change the pass band of the second filter, composing super pixels suitable for multispectral shooting.
  • For example, the structure of a super pixel formed from an F-P cavity thin film array is shown in Figure 3, which gives a side view; only 5 of the single-wavelength pixels of the super pixel are shown.
  • Each single-wavelength pixel includes a bottom-layer mirror 301, a transparent medium 302, and a top-layer mirror 303.
  • The cavity length of each single-wavelength pixel is different. Taking red light (wavelength 630 nm), green light (wavelength 550 nm), and blue light (wavelength 440 nm) as an example, the calculation of the cavity length of the transparent medium 302 is as follows. The optical cavity length of the F-P cavity film array shown in Figure 3 should satisfy the following coherent interference condition (taking normal incidence as an example):
  • 2nd = Nλ    (1)
  • where n is the refractive index of the transparent medium 302, d is the cavity length of the transparent medium 302, λ is the wavelength of the incident light, and N is a positive integer. A possible cavity length combination for the red, green, and blue channels is therefore 201 nm, 179 nm, and 143 nm. According to formula (1) and the wavelengths of the different optical wavebands, the corresponding dielectric cavity lengths can be calculated, and an F-P cavity thin film array can be fabricated to form a super pixel.
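  • Formula (1) can be checked numerically; the refractive indices used below are assumed values chosen to reproduce the quoted cavity lengths.

```python
def fp_cavity_length(wavelength_nm, n, order=1):
    """Cavity length d from the normal-incidence condition 2*n*d = N*lambda (formula (1))."""
    return order * wavelength_nm / (2.0 * n)

# The refractive indices below are assumed values chosen so that first-order
# (N = 1) cavities reproduce the 201 nm / 179 nm / 143 nm combination quoted
# above; in practice n is wavelength-dependent (dispersion).
d_red = fp_cavity_length(630, 1.567)
d_green = fp_cavity_length(550, 1.536)
d_blue = fp_cavity_length(440, 1.538)
print(round(d_red), round(d_green), round(d_blue))   # 201 179 143
```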
  • Alternatively, the super pixels of the image sensor can use photonic crystals to form a reflective multi-channel filter on the CMOS sensor array, replacing the F-P cavity film array for wavelength band selection; this application does not repeat the details.
  • S102 Reconstruct the image of the first area into a hyperspectral image of the first area.
  • The RGB image dictionary D_RGB is obtained according to the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, and the weight coefficient matrix X reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h.
  • The orthogonal matching pursuit (OMP) algorithm can be used to calculate the corresponding weight coefficient matrix X from the input RGB image Y_RGB according to formula (2):
  • Y_RGB = D_RGB · X    (2)
  • where k represents the number of rows of the matrix and l the number of columns. D_h and D_RGB have the mapping relationship shown in formula (3):
  • D_RGB = S · D_h    (3)
  • where S is the spectral response function of the image sensor.
  • The spectral response function S can be obtained from the manufacturer of the RGB image sensor or measured directly; its physical meaning is the response of the corresponding R, G, or B channel to the signal of a hyperspectral channel. Because of this mapping, the weight coefficient matrix X computed with the RGB image dictionary D_RGB can also be used with the hyperspectral dictionary D_h: through a linear combination of atoms in the hyperspectral dictionary, the hyperspectral image Y_h = D_h · X of the input RGB image is reconstructed.
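  • A minimal sketch of the OMP-based reconstruction of formulas (2) and (3), using toy dictionary and spectral response matrices; the sizes and data below are illustrative stand-ins, not values from the application.

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedily build a sparse x with y ≈ D @ x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        scores = np.abs(D.T @ residual)
        scores[support] = -1.0          # never reselect an atom
        support.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
S = rng.random((3, 31))                # spectral response: 31 bands -> R, G, B
D_h = rng.standard_normal((31, 50))    # hyperspectral dictionary of 50 atoms
D_rgb = S @ D_h                        # formula (3): D_RGB = S * D_h
y_rgb = rng.standard_normal(3)         # one observed RGB pixel
x = omp(D_rgb, y_rgb, sparsity=3)      # formula (2): solve Y_RGB ≈ D_RGB * X
y_h = D_h @ x                          # reconstructed hyperspectral pixel
```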
  • this application can also use other methods to obtain the reconstructed hyperspectral image, such as using a deep learning neural network.
  • Before the image sensor captures the target field of view to obtain the target image to be reconstructed, the method further includes: initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D_h.
  • The hyperspectral image dictionary D_h used may be a pre-existing hyperspectral image dictionary, or an over-complete dictionary obtained through a dictionary learning process, e.g., after iterative updates of the classic K-SVD algorithm.
  • In dictionary learning, the sample matrix Y is factorized as Y ≈ D·X, where D is called the dictionary, each column of D is called an atom, and X is called the coefficient matrix. k samples are randomly selected from the sample set Y as the atoms of the dictionary D, the coefficient matrix X is initialized to 0, and D and X are updated column by column with the following objective function:
  • min_{D, X} ||Y − D·X||_F², subject to a sparsity constraint on each column of X,
  • where n represents the number of rows of the matrix and k the number of columns.
  • According to the sparsity prior, the sparsest solution can also be found under an L0-norm constraint, for example with greedy algorithms such as the matching pursuit (MP) algorithm.
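  • A toy version of the K-SVD iteration described above: alternating sparse coding and rank-1 (SVD) atom updates. The sample set is random, the sizes are illustrative, and the inner sparse coder is a simple matching-pursuit stand-in.

```python
import numpy as np

def sparse_code(D, y, sparsity):
    """Greedy matching-pursuit-style coder used inside the toy K-SVD loop."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        scores = np.abs(D.T @ residual)
        scores[support] = -1.0
        support.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=5, seed=0):
    """Toy K-SVD: alternate sparse coding with SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    # Initialize the dictionary with randomly chosen, normalized samples.
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        for j in range(Y.shape[1]):                    # sparse coding stage
            X[:, j] = sparse_code(D, Y[:, j], sparsity)
        for k in range(n_atoms):                       # dictionary update stage
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            # Residual excluding atom k, then best rank-1 replacement via SVD.
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k], X[k, users] = U[:, 0], s[0] * Vt[0]
    return D, X

Y = np.random.default_rng(1).standard_normal((20, 60))   # toy sample set
D, X = ksvd(Y, n_atoms=30, sparsity=4)
```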
  • The hyperspectral image sample set used in this application may include hyperspectral images of multiple different scenes, including but not limited to urban, suburban, agricultural, animal and plant, and indoor landscapes, i.e., any area where hyperspectral imaging can be applied.
  • S103 Perform image fusion on the hyperspectral image of the first region and the image of the second region to obtain a reconstructed hyperspectral image of the target.
  • If a waveband overlapping area appears in the hyperspectral image of the first area, the method further includes: obtaining the target region corresponding to the waveband overlap in the target field of view, where the waveband overlap area includes the areas where the edges of the light wavebands overlap; taking a supplementary shot of the target region with the second filter to obtain one or more multispectral images of the target region; performing spatial resolution restoration on these multispectral images to obtain one or more hyperspectral images of the target region; and performing image fusion on the hyperspectral images of the target region, the hyperspectral image of the first region, and the second region image to obtain the reconstructed target hyperspectral image.
  • In other words, the hybrid hyperspectral reconstruction method processes the second region of the image sensor into super pixels usable for hyperspectral image acquisition, which can both directly obtain accurate hyperspectral images and directly capture the overlapping regions of light bands, improving the reconstruction accuracy of the hyperspectral image.
  • The second area may be an overlapping area between light wavebands calculated in advance, or a waveband overlap area determined after shooting with the first area and reconstructing a hyperspectral image. It should be noted that if the image is unclear because of drone vibration, lighting angle, or other external conditions during shooting, the second filter can also be used for a supplementary shot, maximizing the accuracy of the final reconstructed hyperspectral image.
  • FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral reconstruction method provided by the present application.
  • The sensor used for shooting may be a CMOS image sensor array that includes the F-P cavity thin film array shown in Figure 3. The image to be reconstructed is then divided: the first area image is the RGB image, which is reconstructed into the first area hyperspectral image as in step S102; the second area image, i.e., the multispectral image obtained through the second filter, has its spatial resolution restored to give the second area hyperspectral image.
  • Finally, the first area hyperspectral image, the one or more target area hyperspectral images, and the second area hyperspectral image are fused using an image fusion algorithm such as principal component analysis (PCA) to obtain a reconstructed high-precision hyperspectral image.
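  • One simple PCA-based fusion rule can be sketched as follows; fusing full hyperspectral cubes would apply such a rule band by band, and the weighting scheme here is only one common choice, not the application's prescribed algorithm.

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Fuse two aligned single-band images with weights taken from the leading
    principal component of their joint covariance (one common PCA fusion rule;
    a simplified stand-in for fusing full hyperspectral cubes band by band)."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc = eigvecs[:, np.argmax(eigvals)]
    w = pc / pc.sum()                 # normalise weights to sum to 1
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(2)
band_a = rng.random((8, 8))
band_b = band_a + 0.05 * rng.standard_normal((8, 8))   # correlated second view
fused = pca_fusion(band_a, band_b)
```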
  • If the RGB image taken through the first filter shows no band overlapping area at the edges of the light bands after the hyperspectral reconstruction of step S102, there is no need to use the second filter for a supplementary hyperspectral shot.
  • the hyperspectral image in the first region and the hyperspectral image in the second region can be directly used for image fusion operations, and then a reconstructed high-precision hyperspectral image can be obtained.
  • the method further includes: switching the first filter used for shooting the target area to the second filter.
  • The initial distribution of the second filter may place its super pixels at the centers of the four quadrants of the image sensor, and this super pixel distribution can also be adjusted.
  • In the image sensor used by the hybrid hyperspectral image reconstruction method provided in this application, the super pixels used to shoot multispectral images can vary the number of optical signal channels, the number of super pixels, and their specific distribution according to the needs of the actual application field.
  • the image of the target to be reconstructed is obtained by capturing the field of view of the target by the image sensor, and then the image of the first area is reconstructed into a hyperspectral image of the first area, so as to combine the hyperspectral image of the first area with the Image fusion is performed on the second area image to obtain a reconstructed hyperspectral image of the target.
  • the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
  • FIG. 5 is a schematic structural diagram of a hybrid hyperspectral image reconstruction system provided by this application.
  • The hybrid hyperspectral image reconstruction system provided by this application includes an acquisition unit 510, a reconstruction unit 520, a fusion unit 530, a training unit 540, a supplementary shooting unit 550, and a switching unit 560.
  • the acquiring unit 510 is configured to use an image sensor to photograph the target field of view to acquire the target image to be reconstructed.
  • the image to be reconstructed is divided into a first area image and a second area image, the first area image is an RGB image, and the second area image is a hyperspectral image.
  • the image sensor includes a first filter and a second filter, the first filter is used to obtain a first area image, and the second filter is used to obtain the second area image.
  • a target image to be reconstructed can be obtained.
  • Part of the target image to be reconstructed is an RGB image, which is the first region image, and part is a hyperspectral image, which is the second image.
  • Area image is an RGB image, which is the first region image
  • a hyperspectral image which is the second image.
  • the first area image is an image obtained by the first filter of the image sensor
  • the second area image is an image taken by the image sensor using the second filter.
  • the area of the first region captured by the first filter is much larger than the area of the second region captured by the second filter. This is because the image sensor to shoot hyperspectral images requires additional expensive spectroscopic devices and filtering devices. If you directly use the image sensor with an external device to directly shoot hyperspectral images, it is often used in conjunction with drones to shoot the target field of view. For spectrum acquisition image sensors, the cost is high and portability is poor. Therefore, the solution provided by this application mainly uses the first filter that collects the RGB image to capture the target field of view. The image sensor can obtain high-precision hyperspectral images without additional external devices, thereby greatly reducing production costs. And has good portability.
  • the second area image is after the image sensor uses a second filter to capture a multispectral image of the second area, and then perform a spatial resolution restoration operation on the multispectral image Obtained hyperspectral image.
  • the first area image is directly captured using the image sensor and the first filter, while the second filter can only capture multispectral images, while the second area image is a hyperspectral image. Therefore, the image The sensor can directly use the second filter to shoot the target field of view to obtain a multispectral image, and then perform a spatial resolution restoration operation to obtain the second region.
  • a multispectral image can actually be regarded as a case of a hyperspectral image, that is, the number of imaged bands is less than that of a hyperspectral image, generally only a few to a dozen. Since spectral information actually corresponds to color information, multi-spectral images or multi-band remote sensing images can obtain color information of ground objects, but the spatial resolution is low. Therefore, after using the second filter to obtain a multispectral image of the target field of view, the operation of spatial resolution restoration can be used to obtain a hyperspectral image. It is understandable that the use of the second filter to collect multispectral data can also effectively reduce the calculation cost of RGB reconstruction of hyperspectral images, and shorten the reconstruction calculation time to a certain extent.
  • the second filter includes a plurality of super pixels, wherein each super pixel of the plurality of super pixels is a set of square pixels formed by the arrangement and combination of a plurality of single-wavelength pixel pieces, so Each super pixel of the plurality of super pixels is located at the center of the four quadrants of the imaging area of the image sensor. It should be understood that each super pixel is used to directly obtain a multispectral image, and after the spatial resolution of the multispectral image is restored, a hyperspectral image can be obtained. Although the second filter to collect multispectral data can also effectively reduce the computational cost of RGB reconstruction of hyperspectral images, the cost of the first filter that directly captures RGB images is much lower than that of the second filter that captures multispectral images.
  • FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application.
  • the colorless part in the figure is the area covered by the first filter
  • the dark part is the area covered by the second filter
  • the total number of super pixels included in the second filter is 4×6, located at the centers of the four quadrants of the imaging area of the image sensor
  • the center of each quadrant contains 6 super pixels
  • each super pixel consists of 3×3 single-wavelength pixels arranged and combined into a square pixel set.
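The quadrant-centered layout just described can be sketched as a pixel mask. The patent only fixes the counts (4 quadrants × 6 super pixels of 3×3 single-wavelength pixels); the 2×3 clustering of the six super pixels around each quadrant center is an assumption made here for illustration.

```python
import numpy as np

def superpixel_mask(h, w, per_quadrant=6, sp=3):
    """Mark second-filter super pixels (1) on an h x w first-filter area (0).

    Each quadrant center receives `per_quadrant` super pixels of sp x sp
    single-wavelength pixels, laid out side by side in an assumed 2 x 3
    cluster; only the counts come from the text above.
    """
    mask = np.zeros((h, w), dtype=np.uint8)
    rows, cols = 2, per_quadrant // 2              # assumed 2 x 3 cluster
    ch, cw = rows * sp, cols * sp                  # cluster size in pixels
    centers = [(h // 4, w // 4), (h // 4, 3 * w // 4),
               (3 * h // 4, w // 4), (3 * h // 4, 3 * w // 4)]
    for cy, cx in centers:
        y0, x0 = cy - ch // 2, cx - cw // 2
        mask[y0:y0 + ch, x0:x0 + cw] = 1
    return mask

m = superpixel_mask(480, 640)
# total second-filter pixels: 4 quadrants * 6 super pixels * (3*3) pixels each
assert int(m.sum()) == 4 * 6 * 9
```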
  • the first filter is an RGB dye filter
  • the second filter is an FP cavity thin film array prepared by any one of deposition, patterning, and etching methods
  • the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals. It should be understood that the second filter is used to capture multispectral pictures, i.e. it has the property of collecting optical signals of different wavelength bands, and the FP cavity is a device that uses multi-beam interference to split light.
  • the structure of the FP cavity at the front end of the sensor can therefore be changed, including but not limited to the cavity length, the refractive index of the cavity medium, and the material of the mirrors, to change the pass frequency of the second filter and form super pixels capable of multispectral shooting.
  • For example, the structure of a super pixel formed from an FP cavity thin-film array is shown in Figure 3, which is a side view; only 5 of the single-wavelength pixels of the super pixel are shown.
  • Each single-wavelength pixel includes a bottom-layer mirror 301, a transparent medium 302, and a top-layer mirror 303.
  • the cavity length of each single-wavelength pixel is different. The following takes red light (wavelength 630 nm), green light (wavelength 550 nm), and blue light (wavelength 440 nm) as examples to illustrate the calculation of the cavity length. It should be understood that the optical cavity length of the FP cavity film array shown in FIG. 3 must satisfy the following coherent-interference condition (taking normal incidence as an example):

    2nd = Nλ, N ∈ Z            (1)

    where n is the refractive index of the transparent medium 302, d is the cavity length of the transparent medium 302, λ is the wavelength of the incident light, and N is a positive integer. Accordingly, one possible cavity-length combination for the red, green, and blue channels is 201 nm, 179 nm, and 143 nm. According to formula (1) and the wavelengths of the different optical bands, the corresponding dielectric cavity lengths can be calculated respectively, and the FP cavity thin-film array fabricated to form super pixels.
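The cavity-length calculation from condition (1) can be done in a few lines. Assuming the SiO2 index n = 1.54 stated for the transparent medium and first order N = 1, the green and blue values come out at the 179 nm and 143 nm quoted above, while red comes out near 205 nm, close to the quoted 201 nm (which presumably reflects a slightly different design wavelength or index).

```python
def fp_cavity_length(wavelength_nm, n=1.54, order=1):
    """Dielectric cavity length d satisfying 2*n*d = N*lambda (normal incidence)."""
    return order * wavelength_nm / (2 * n)

for name, lam in [("red", 630), ("green", 550), ("blue", 440)]:
    print(name, round(fp_cavity_length(lam)), "nm")
```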
  • the super pixels of the image sensor can also be formed by using photonic crystals to build a reflective multi-channel filter on the CMOS sensor array, replacing the FP cavity film array for selecting the different wavelength bands; this application will not repeat the details.
  • the reconstruction unit 520 is used to reconstruct the first region image into a hyperspectral image of the first region.
  • the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor;
  • the weight coefficient matrix X reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h
  • the orthogonal matching pursuit (OMP) algorithm can be used to calculate the corresponding weight coefficient matrix X from the input RGB image according to formula (2), where k denotes the number of rows of the matrix and l the number of columns, and D_h and D_RGB have the definite mapping relationship shown in formula (3), where S is the spectral response function of the image sensor; the spectral response function S can be obtained from the manufacturer of the RGB image sensor or measured directly, and its physical meaning is the response, in the corresponding R, G, or B channel, to the signal of one hyperspectral channel.
  • since D_h and D_RGB have a definite mapping relationship, the weight coefficient matrix X of the RGB image dictionary D_RGB can also be applied to the hyperspectral dictionary D_h.
  • the hyperspectral image Y_h of the input RGB image is then reconstructed as a linear combination of atoms of the hyperspectral dictionary. It should be understood that the above calculation process is only an example: besides using the OMP method to compute the weight coefficient matrix X from the input RGB image, other methods can be used to obtain the reconstructed hyperspectral image, for example deep-learning neural networks trained on an existing RGB image library and the corresponding hyperspectral image library to obtain a reconstruction model that takes an RGB image as input and outputs a hyperspectral image; this application does not specifically limit the reconstruction model.
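The per-pixel pipeline of formulas (2), (3), and (4) can be sketched as follows. The dictionary D_h and spectral response S below are random placeholders rather than trained or measured quantities, and with only three RGB measurements the sparse code is generally not uniquely recoverable; the snippet shows only the mechanics of D_RGB = S·D_h, OMP sparse coding, and Y_h = D_h·X.

```python
import numpy as np

def omp(D, y, k):
    """Plain orthogonal matching pursuit: approximate y as a k-sparse
    combination of columns (atoms) of the dictionary D."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D_h = rng.standard_normal((31, 50))            # placeholder 31-band dictionary
S = np.abs(rng.standard_normal((3, 31)))       # placeholder spectral response
S /= S.sum(axis=1, keepdims=True)
D_rgb = S @ D_h                                # formula (3): D_RGB = S * D_h
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -0.5]
y_rgb = D_rgb @ x_true                         # one observed RGB pixel
x = omp(D_rgb, y_rgb, k=2)                     # formula (2): Y_RGB = D_RGB * X
y_h = D_h @ x                                  # formula (4): Y_h = D_h * X
```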
  • the system further includes a training unit 540 configured to initialize the weight coefficient matrix X to 0 and train the hyperspectral image reconstruction model on a hyperspectral image sample set before the image sensor captures the target field of view and obtains the target image to be reconstructed, thereby obtaining the hyperspectral image dictionary D_h.
  • the hyperspectral image dictionary D_h used may be a pre-existing hyperspectral image dictionary, or an over-complete hyperspectral image dictionary D_h obtained by iterative updating with K-SVD, a classic dictionary-learning algorithm. Specifically, given training data Y, i.e. an existing hyperspectral image database, the Y matrix is factorized as Y ≈ D*X, where D is called the dictionary, each column of D is called an atom, and X is called the coefficient matrix.
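A toy version of the alternating dictionary-learning loop can be written as below. For brevity it uses a MOD-style least-squares dictionary update and simple hard-threshold sparse coding as a stand-in for the full K-SVD atom-by-atom SVD update; the dimensions, sparsity, and iteration count are illustrative assumptions.

```python
import numpy as np

def learn_dictionary(Y, k_atoms, sparsity, iters=10, seed=0):
    """Alternating sparse-coding / dictionary-update loop (MOD-style),
    a simplified stand-in for the K-SVD iteration described above."""
    rng = np.random.default_rng(seed)
    # initialise atoms with k randomly chosen training samples
    D = Y[:, rng.choice(Y.shape[1], k_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((k_atoms, Y.shape[1]))
    for _ in range(iters):
        # sparse coding: keep the `sparsity` largest correlations per sample
        C = D.T @ Y
        X[:] = 0.0
        top = np.argsort(-np.abs(C), axis=0)[:sparsity]
        np.put_along_axis(X, top, np.take_along_axis(C, top, axis=0), axis=0)
        # dictionary update: least-squares fit D = Y X^T (X X^T)^+
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X

Y = np.random.default_rng(1).standard_normal((31, 200))  # toy "hyperspectral" samples
D_h, X = learn_dictionary(Y, k_atoms=40, sparsity=3)
```

After training, D_h would be mapped to the RGB space via formula (3) to obtain D_RGB.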
  • in step S102, after the weight coefficient matrix X is obtained from the input first area image (the RGB image) according to formula (2), the reconstructed hyperspectral image of the first area can be obtained according to formula (4).
  • the foregoing calculation process is only used for illustration, and this application does not specifically limit it.
  • when solving the underdetermined equation Y ≈ D*X, the sparsest solution can also be found by applying an L0-norm constraint under the sparsity prior, for example with the greedy algorithm, the MP algorithm, or the OMP algorithm; other calculation methods are not repeated in this application.
  • the hyperspectral image sample set used in this application may include hyperspectral images of multiple different scenes, including but not limited to urban, suburban, agricultural, plant and animal, and indoor landscapes, and other areas where hyperspectral imaging can be applied.
  • the fusion unit 530 is configured to perform image fusion between the hyperspectral image of the first region and the image of the second region to obtain a reconstructed hyperspectral image of the target.
  • the system further includes a supplementary photographing unit 550, which is used, in the case where a spectral band overlap area appears in the hyperspectral image of the first area, to obtain the target area corresponding to the band overlap area in the target field of view, where the band overlap area includes the areas where the edges of the individual light bands overlap; the supplementary photographing unit 550 is configured to use the second filter to re-photograph the target area of the first area and obtain one or more multispectral images of the target area; the supplementary photographing unit 550 is further configured to restore the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area; the fusion unit is also used to fuse the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
  • for the method of reconstructing an RGB image into a hyperspectral image with an image dictionary, analysis of the loss function shows that the reconstruction quality degrades to a certain extent in the edge areas of the RGB bands, because the overlap between the RGB bands causes mapping errors. In other words, the band overlap area refers to the area of overlap between the individual bands.
  • the hybrid hyperspectral reconstruction method can therefore process the second area of the image sensor into super pixels usable for hyperspectral image acquisition, which directly obtain accurate hyperspectral images, and the hyperspectral image of a light-band overlap area can be captured directly to improve the reconstruction accuracy of the hyperspectral image.
  • the second area may be an overlap area between light bands calculated in advance, or a band overlap area determined after shooting with the first area and reconstructing the hyperspectral image. It should be noted that if parts of the picture are unclear during shooting because of drone vibration, lighting angle, or other external conditions, the second filter can also be used for supplementary shots, so as to ensure, to the greatest extent, the reconstruction accuracy of the final hyperspectral image.
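One way to pre-compute band overlap areas is from the spectral response S itself: flag the wavelengths where two or more of the R, G, B channel responses are simultaneously significant. The Gaussian response curves and the 0.3 threshold below are assumptions made for illustration; real curves would come from the sensor manufacturer or from direct measurement of S.

```python
import numpy as np

# Assumed Gaussian RGB response curves over the visible range.
wl = np.arange(400, 701, 10)

def g(mu, sig=40):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

S = np.stack([g(610), g(540), g(460)])   # R, G, B rows of the response

thr = 0.3                                # illustrative significance threshold
overlap = (S > thr).sum(axis=0) >= 2     # wavelengths where >= 2 channels respond
print("band-overlap wavelengths:", wl[overlap])
```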
  • FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral image reconstruction method provided by the present application.
  • first, the image sensor photographs the target field-of-view area to obtain the image to be reconstructed, where the sensor used for shooting may be a CMOS image sensor array including an FP cavity thin-film array as shown in Figure 3; then, the first area image of the image to be reconstructed, i.e. the RGB image, undergoes the RGB-to-hyperspectral reconstruction of step S102; for the imaging areas at the edges of the RGB bands, the system will prompt that the second filter of the image sensor is needed to re-photograph the target area and obtain multispectral images of it, compensating for the low reconstruction accuracy at the RGB band edges, and after spatial-resolution restoration of the re-photographed multispectral images, one or more hyperspectral images of the target area are obtained.
  • next, the second area image, i.e. the multispectral image obtained with the second filter, is restored in spatial resolution to obtain the second area hyperspectral image;
  • finally, the first area hyperspectral image, the one or more hyperspectral images of the target area, and the second area hyperspectral image are fused using an image fusion algorithm such as principal component analysis (PCA) to obtain the reconstructed high-precision hyperspectral image.
  • if, after the RGB image taken with the first filter undergoes the hyperspectral reconstruction of step S102, there is no band overlap area at the edges of the light bands, there is no need to use the second filter for a supplementary hyperspectral shot.
  • in that case, the hyperspectral image of the first area and the hyperspectral image of the second area can be fused directly to obtain the reconstructed high-precision hyperspectral image.
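The PCA-based fusion step can be sketched as a component-substitution scheme: project both cubes onto the principal axes of the reconstructed cube and substitute the first component inside the re-photographed region. This is only one simple reading of "PCA image fusion" under stated assumptions; registration, intensity matching, and multi-image handling are omitted.

```python
import numpy as np

def pca_substitution_fuse(a, b, mask):
    """Fuse hyperspectral cubes a and b (h, w, bands) by replacing the first
    principal component of `a` with that of `b` wherever mask is True.
    A minimal sketch in the spirit of PCA-based fusion."""
    h, w, c = a.shape
    A = a.reshape(-1, c)
    mu = A.mean(axis=0)
    # principal axes computed from the reconstructed cube `a`
    _, _, Vt = np.linalg.svd(A - mu, full_matrices=False)
    Pa = (A - mu) @ Vt.T                          # scores of a
    Pb = (b.reshape(-1, c) - mu) @ Vt.T           # scores of b in the same basis
    m = mask.reshape(-1)
    Pa[m, 0] = Pb[m, 0]                           # substitute the 1st component
    return (Pa @ Vt + mu).reshape(h, w, c)

rng = np.random.default_rng(2)
a = rng.random((8, 8, 31))                        # reconstructed cube
b = rng.random((8, 8, 31))                        # re-photographed cube
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                             # assumed re-photographed region
fused = pca_substitution_fuse(a, b, mask)
```

Outside the masked region the fused cube reproduces `a` exactly, since the projection basis is complete.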
  • the system further includes a switching unit 560, which is configured to, after the reconstructed target hyperspectral image is obtained, switch the first filter used for shooting the target area to the second filter. That is to say, the initial distribution of the second filter may be at the centers of the four quadrants of the image sensor, and when spectral band overlap areas repeatedly appear in the hyperspectral image of the first area, the super pixel distribution of the second filter can also be adjusted.
  • in other words, for the image sensor used in the hybrid hyperspectral image reconstruction method provided in this application, the super pixels used to shoot multispectral images can change the number of optical signal channels as well as the number and specific distribution of the super pixels according to the needs of the actual application field.
  • the image of the target to be reconstructed is obtained by photographing the target field of view with the image sensor, the first area image is then reconstructed into the hyperspectral image of the first area, and the hyperspectral image of the first area is fused with the second area image to obtain the reconstructed target hyperspectral image.
  • the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
  • FIG. 6 is a schematic block diagram of the structure of an electronic device provided by an embodiment of the present application.
  • the electronic device in this embodiment may include: one or more processors 601; one or more input devices 602, one or more output devices 603, and a memory 604.
  • the aforementioned processor 601, input device 602, output device 603, and memory 604 are connected through a bus 605.
  • the memory 604 is used to store a computer program including program instructions, and the processor 601 is used to execute the program instructions stored in the memory 604.
  • the so-called processor 601 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a DSP, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the input device 602 may include a touch panel, a fingerprint sensor (used to collect user fingerprint information and fingerprint orientation information), a microphone, etc.
  • the output device 603 may include a display (LCD, etc.), a speaker, etc.
  • the memory 604 may include volatile memory, such as RAM; the memory may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the foregoing types of memory.
  • the memory 604 may adopt centralized storage or distributed storage, which is not specifically limited here. It can be understood that the memory 604 is used to store computer programs, such as computer program instructions. In the embodiment of the present application, the memory 604 may provide instructions and data to the processor 601.
  • the processor 601, the input device 602, the output device 603, the memory 604, and the bus 605 described in the embodiments of the present application can execute the implementations described in any embodiment of the hybrid hyperspectral image reconstruction method provided in the present application, which will not be repeated here.
  • another embodiment of the present application provides a computer-readable storage medium that stores a computer program.
  • the computer program includes program instructions which, when executed by a processor, implement the implementations described in any embodiment of the hybrid hyperspectral image reconstruction method provided in the present application, which will not be repeated here.
  • the computer-readable storage medium may be the internal storage unit of the terminal described in any of the foregoing embodiments, such as the hard disk or memory of the terminal.
  • the computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk equipped on the terminal, a Smart Media Card (SMC), or a Secure Digital (SD) card , Flash Card, etc.
  • the computer-readable storage medium may also include both an internal storage unit of the terminal and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the terminal.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Spectrometry And Color Measurement (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

This application provides a hybrid hyperspectral image reconstruction method and system. The method includes: an image sensor photographs a target field of view to obtain a target image to be reconstructed, where the target image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image; reconstructing the first area image into a hyperspectral image of the first area; and fusing the hyperspectral image of the first area with the second area image to obtain a reconstructed target hyperspectral image.

Description

A hybrid hyperspectral image reconstruction method and system
Technical Field
This application relates to the field of artificial intelligence, and in particular to a hybrid hyperspectral image reconstruction method and system.
Background
In recent years, hyperspectral image acquisition has played an increasingly important role in remote sensing, agriculture, industrial inspection, and other application fields. In general, hyperspectral image acquisition requires mounting a filtering device in front of a conventional image sensor to obtain narrow-band optical signals in multiple different bands, for example a liquid crystal tunable filter (LCTF) based on a Lyot filter set; but since its principle is to collect images of multiple single bands in time sequence, it is only suitable for shooting static scenes. Alternatively, a light-splitting device such as an acousto-optical tunable filter (AOTF) based on the photoelastic effect can be added, with different areas of the image sensor receiving light of different bands, enabling instantaneous hyperspectral shooting. Both of the above hyperspectral acquisition methods require external devices, which reduces the portability of the overall system and raises its cost; and to obtain higher spectral resolution, both sacrifice temporal or spatial resolution to some extent.
Summary
This application provides a hybrid hyperspectral image reconstruction method and system, intended to overcome the poor portability and high cost of existing hyperspectral image acquisition methods.
In a first aspect, this application provides a hybrid hyperspectral image reconstruction method, comprising the following steps:
an image sensor photographs a target field of view to obtain a target image to be reconstructed, where the target image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image;
reconstructing the first area image into a hyperspectral image of the first area;
fusing the hyperspectral image of the first area with the second area image to obtain a reconstructed target hyperspectral image.
Optionally, the image sensor includes a first filter and a second filter, the first filter being used to obtain the first area image and the second filter being used to obtain the second area image.
Optionally, the second area image is a hyperspectral image obtained by the image sensor photographing with the second filter to obtain a multispectral image of the second area and then performing a spatial-resolution restoration operation on the multispectral image.
Optionally, in the case where a band overlap area appears in the hyperspectral image of the first area, the method further includes:
obtaining the target area corresponding to the band overlap area in the target field of view, where the band overlap area includes the areas where the edges of the individual light bands overlap;
re-photographing the target area with the second filter to obtain one or more multispectral images of the target area;
restoring the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area;
fusing the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
Optionally, reconstructing the first area image into a hyperspectral image of the first area includes:
inputting the first area image into the RGB reconstruction model Y_RGB = D_RGB·X and obtaining the weight coefficient matrix X from the RGB image dictionary D_RGB, where Y_RGB is the output RGB image, D_RGB is the RGB image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output RGB image Y_RGB and the RGB image dictionary D_RGB; the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor;
inputting the weight coefficient matrix X into the hyperspectral image reconstruction model Y_h = D_h·X and obtaining the hyperspectral image of the first area from the hyperspectral image dictionary D_h, where Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h; the hyperspectral image dictionary D_h is obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
Optionally, before the image sensor photographs the target field of view to obtain the target image to be reconstructed, the method further includes:
initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D_h;
obtaining the RGB image dictionary D_RGB from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, where D_RGB = S·D_h.
Optionally, the second filter includes a plurality of super pixels, where each of the plurality of super pixels is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel pieces, and each of the plurality of super pixels is located at the center of one of the four quadrants of the imaging area of the image sensor.
Optionally, the image sensor is based on semiconductor thin-film technology, where:
the first filter is an RGB dye filter;
the second filter is an F-P cavity thin-film array prepared by any one of deposition, patterning, and etching methods; or,
the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals.
Optionally, after obtaining the reconstructed target hyperspectral image, the method further includes:
switching the first filter used for shooting the target area to the second filter.
In a second aspect, a hybrid hyperspectral image reconstruction system is provided, the system comprising:
an acquisition unit for photographing a target field of view with an image sensor to obtain a target image to be reconstructed, where the image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image;
a reconstruction unit for reconstructing the first area image into a hyperspectral image of the first area;
a fusion unit for fusing the hyperspectral image of the first area with the second area image to obtain a reconstructed target hyperspectral image.
Optionally, the image sensor includes a first filter and a second filter, the first filter being used to obtain the first area image and the second filter being used to obtain the second area image.
Optionally, the second area image is a hyperspectral image obtained by the image sensor photographing with the second filter to obtain a multispectral image of the second area and then performing a spatial-resolution restoration operation on the multispectral image.
Optionally, the system further includes a supplementary photographing unit,
the supplementary photographing unit being used, in the case where a band overlap area appears in the hyperspectral image of the first area, to obtain the target area corresponding to the band overlap area in the target field of view, where the band overlap area includes the areas where the edges of the individual light bands overlap,
and to re-photograph the target area with the second filter to obtain one or more multispectral images of the target area;
the supplementary photographing unit is further used to restore the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area;
the fusion unit is further used to fuse the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
Optionally, the reconstruction unit is specifically used to input the first area image into the RGB reconstruction model Y_RGB = D_RGB·X and obtain the weight coefficient matrix X from the RGB image dictionary D_RGB, where Y_RGB is the output RGB image, D_RGB is the RGB image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output RGB image Y_RGB and the RGB image dictionary D_RGB; the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor;
the reconstruction unit is specifically used to input the weight coefficient matrix X into the hyperspectral image reconstruction model Y_h = D_h·X and obtain the hyperspectral image of the first area from the hyperspectral image dictionary D_h, where Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h; the hyperspectral image dictionary D_h is obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
Optionally, the system further includes a training unit,
the training unit being used to initialize the weight coefficient matrix X to 0 and train the hyperspectral image reconstruction model with a hyperspectral image sample set before the image sensor photographs the target field of view to obtain the target image to be reconstructed, thereby obtaining the hyperspectral image dictionary D_h;
the training unit is further used to obtain the RGB image dictionary D_RGB from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, where D_RGB = S·D_h.
Optionally, the first filter is an RGB dye filter;
the second filter is an F-P cavity thin-film array prepared by any one of deposition, patterning, and etching methods; or,
the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals.
Optionally, the second filter includes a plurality of super pixels, where each of the plurality of super pixels is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel pieces, and each of the plurality of super pixels is located at the center of one of the four quadrants of the imaging area of the image sensor.
Optionally, the system further includes a switching unit,
the switching unit being used, after the reconstructed target hyperspectral image is obtained, to switch the first filter used for shooting the target area to the second filter.
With the hybrid hyperspectral image reconstruction method and system provided by this application, the target image to be reconstructed is obtained by photographing the target field of view with an image sensor, the first area image is then reconstructed into a hyperspectral image of the first area, and the hyperspectral image of the first area is fused with the second area image to obtain the reconstructed target hyperspectral image, so that the image sensor can capture high-precision hyperspectral images without adding extra light-splitting or filtering devices, offering good portability and low cost.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Figure 1 is a hybrid hyperspectral image reconstruction method provided by this application;
Figure 2 is a schematic diagram of the super pixel distribution in the imaging area of an image sensor provided by this application;
Figure 3 is a schematic structural diagram of a super pixel formed with an F-P cavity thin-film array provided by this application;
Figure 4 is a schematic diagram of the shooting flow of a hybrid hyperspectral image reconstruction method provided by this application;
Figure 5 is a schematic structural diagram of a hybrid hyperspectral image reconstruction system provided by this application;
Figure 6 is a schematic structural diagram of an electronic device provided by this application.
Detailed Description
The hybrid hyperspectral image reconstruction method and system of the embodiments of this application can be applied in many fields, such as remote sensing, agriculture, industrial inspection, and military applications, and can be mounted on aircraft to shoot hyperspectral images of a target field of view. For example, in agriculture, crop growth assessment, disaster early warning, and production management can be carried out from captured hyperspectral images of crops; in geological exploration and mining, spectral features can be obtained from hyperspectral images of ground objects to identify surface minerals; in military reconnaissance, hyperspectral images of a battlefield can be used to distinguish targets, camouflage, and natural objects, improving the accuracy of target strikes; this application imposes no specific limitation.
Figure 1 shows a hybrid hyperspectral image reconstruction method provided by this application; the method includes the following steps:
S101: the image sensor photographs the target field of view to obtain the target image to be reconstructed.
In one embodiment, the image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image. The image sensor includes a first filter and a second filter; the first filter is used to obtain the first area image and the second filter is used to obtain the second area image. That is, after the image sensor photographs the target field of view, a target image to be reconstructed is obtained; part of it is an RGB image, i.e. the first area image, obtained through the first filter of the image sensor, and part of it is a hyperspectral image, i.e. the second area image, obtained through the second filter. It should be understood that, for the image sensor provided by this application, the area of the first region captured by the first filter is far larger than that of the second region captured by the second filter. This is because shooting hyperspectral images requires adding expensive light-splitting and filtering devices; if an image sensor with such external devices were used to shoot hyperspectral images directly, the cost would be high and the portability poor for hyperspectral acquisition sensors that are often paired with drones for target field-of-view shooting. The solution provided by this application therefore shoots the target field of view mainly with the first filter, which collects RGB images, so that the image sensor can obtain high-precision hyperspectral images without extra external devices, greatly reducing manufacturing cost while offering good portability.
In one embodiment, the second area image is a hyperspectral image obtained by using the second filter of the image sensor to capture a multispectral image of the second area and then performing a spatial-resolution restoration operation on that multispectral image. In other words, the first area image is captured directly with the image sensor and the first filter, whereas the second filter can only capture multispectral images while the second area image is a hyperspectral image; therefore, the image sensor first uses the second filter to photograph the target field of view and obtain a multispectral image, and then performs spatial-resolution restoration to obtain the second area. It should be understood that a multispectral image can be regarded as a special case of a hyperspectral image in which the number of imaged bands is smaller, generally only a few to a dozen or so. Since spectral information corresponds to color information, multispectral images, i.e. multi-band remote sensing images, can capture the color information of ground objects, but at a lower spatial resolution; after the multispectral image of the target field of view is obtained with the second filter, a spatial-resolution restoration operation can therefore be applied to obtain a hyperspectral image. It will be appreciated that collecting multispectral data with the second filter also effectively reduces the computational cost of RGB-to-hyperspectral reconstruction and shortens the reconstruction time to a certain extent.
In one embodiment, the second filter includes a plurality of super pixels, where each super pixel is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel pieces, and each super pixel is located at the center of one of the four quadrants of the imaging area of the image sensor. It should be understood that each super pixel is used to obtain a multispectral image directly, from which a hyperspectral image is obtained after spatial-resolution restoration. Although collecting multispectral data with the second filter also reduces the computational cost of RGB-to-hyperspectral reconstruction, the cost of the first filter, which captures RGB images directly, is far lower than that of the second filter, which captures multispectral images; the number of super pixels in the second filter is therefore limited and cannot be too large. Preferably, the number of super pixels per quadrant of the imaging area is 4 to 6, and the number of kinds of single-wavelength pixels composing a super pixel is 2×2 or 3×3. For example, Figure 2 is a schematic diagram of the super pixel distribution in the imaging area of an image sensor provided by this application, in which the colorless part is the area covered by the first filter and the dark part is the area covered by the second filter; the second filter contains a total of 4×6 super pixels, located at the centers of the four quadrants of the imaging area of the image sensor, the center of each quadrant contains 6 super pixels, and each super pixel is a square pixel set of 3×3 single-wavelength pixels. It should be understood that the above example is for illustration only and does not constitute a specific limitation.
In one embodiment, the first filter is an RGB dye filter; the second filter is an F-P cavity thin-film array prepared by any one of deposition, patterning, and etching; or the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals. It should be understood that the second filter is used to capture multispectral pictures, i.e. it has the property of collecting optical signals in different wavelength bands, and the F-P cavity is a device that uses multi-beam interference to split light; the structure of the F-P cavity at the front end of the sensor, including but not limited to the cavity length, the refractive index of the cavity medium, and the material of the mirrors, can therefore be changed to alter the pass frequency of the second filter and form super pixels capable of multispectral shooting. For example, the structure of a super pixel formed with an F-P cavity thin-film array is shown in Figure 3, which is a side view; only 5 of the single-wavelength pixels of the super pixel are shown, each comprising a bottom mirror 301, a transparent medium 302, and a top mirror 303. The material of the transparent medium 302 is silicon dioxide (SiO2), with a refractive index of about n = 1.54, and the bottom mirror 301 uses silver (Ag) as the reflective material, with a reflectivity above about 90% in the visible band; as Figure 3 shows, the cavity length of each single-wavelength pixel is different. Taking red light (wavelength 630 nm), green light (wavelength 550 nm), and blue light (wavelength 440 nm) as examples, the calculation of the dielectric cavity length of the transparent medium 302 is illustrated below. It should be understood that the optical cavity length of the F-P cavity thin-film array shown in Figure 3 must satisfy the following coherent-interference condition (taking normal incidence as an example):

2nd = Nλ, N ∈ Z            (1)

where n is the refractive index of the transparent medium 302, d is the dielectric cavity length of the transparent medium 302, λ is the wavelength of the incident light, and N is a positive integer. One possible cavity-length combination for the red, green, and blue channels is therefore 201 nm, 179 nm, and 143 nm. According to formula (1) and the wavelengths of the different optical bands, the corresponding dielectric cavity lengths can be calculated respectively, and the F-P cavity thin-film array fabricated to form super pixels. It should be understood that the above example is for illustration only and does not constitute a specific limitation; the super pixels of the image sensor may also use photonic crystals to form a reflective multi-channel filter on the CMOS sensor array, instead of the F-P cavity thin-film array, to select the different bands, which this application will not repeat.
S102: reconstruct the first area image into a hyperspectral image of the first area.
In one embodiment, reconstructing the first area image into a hyperspectral image of the first area includes: inputting the first area image into the RGB reconstruction model Y_RGB = D_RGB·X and obtaining the weight coefficient matrix X from the RGB image dictionary D_RGB, where Y_RGB is the output RGB image, D_RGB is the RGB image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output RGB image Y_RGB and the RGB image dictionary D_RGB; the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor. The weight coefficient matrix X is then input into the hyperspectral image reconstruction model Y_h = D_h·X, and the hyperspectral image of the first area is obtained from the hyperspectral image dictionary D_h, where Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix, which reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h; the hyperspectral image dictionary D_h is obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
Preferably, the orthogonal matching pursuit (OMP) algorithm can be used to calculate the corresponding weight coefficient matrix X from the input RGB image according to formula (2):

Y_RGB = D_RGB·X, X ∈ R^(k×l)            (2)

where k denotes the number of rows of the matrix and l the number of columns, and D_h and D_RGB have the definite mapping relationship shown in formula (3):

D_RGB = S·D_h = {c_1, c_2, …, c_k}, S ∈ R^(3×n), c_i = (R_i, G_i, B_i)^T            (3)

where S is the spectral response function of the image sensor; S can be obtained from the manufacturer of the RGB image sensor or measured directly, and its physical meaning is the response, in the corresponding R, G, or B channel, to the signal of one hyperspectral channel. Hence, given the definite mapping between D_h and D_RGB, the weight coefficient matrix X of the RGB image dictionary D_RGB can likewise be applied to the hyperspectral dictionary D_h, and the hyperspectral image Y_h of the input RGB image is reconstructed as a linear combination of atoms of the hyperspectral dictionary:

Y_h = D_h·X            (4)

It should be understood that the above calculation process is only an example: besides computing the weight coefficient matrix X from the input RGB image by the OMP method, other methods can be used to obtain the reconstructed hyperspectral image, for example deep-learning neural networks trained on an existing RGB image library and the corresponding hyperspectral image library to obtain a reconstruction model that takes an RGB image as input and outputs a hyperspectral image; this application imposes no specific limitation.
In one embodiment, before the image sensor photographs the target field of view to obtain the target image to be reconstructed, the method further includes: initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D_h; and obtaining the RGB image dictionary D_RGB from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, where D_RGB = S·D_h.
Preferably, the hyperspectral image dictionary D_h used in the reconstruction of step S102 from the input first area image may be a pre-existing hyperspectral image dictionary, or an over-complete hyperspectral image dictionary D_h obtained by iterative updating with K-SVD, a classic dictionary-learning algorithm. Specifically, given training data Y, i.e. an existing hyperspectral image database, the Y matrix is factorized as Y ≈ D*X, where D is called the dictionary, each column of D is called an atom, and X is called the coefficient matrix. k samples are randomly selected from the sample set Y as the atoms of the dictionary D, the coefficient matrix X is initialized to 0, and D and X are updated column by column with the following objective function:

D, X = argmin_{D,X} ‖X‖_0, s.t. ‖Y − D·X‖_2 ≤ ε            (5)

where ε is the maximum allowed reconstruction error. After repeated iterative updates, the hyperspectral image dictionary D_h is obtained:

D_h = {h_1, h_2, …, h_k}, h_i ∈ R^(n×1), D_h ∈ R^(n×k)            (6)

where n denotes the number of rows of the matrix and k the number of columns. After the hyperspectral image dictionary D_h is obtained, it is mapped down to the RGB space through formula (3) and the spectral response function S to obtain the corresponding RGB dictionary D_RGB. Step S102 can then obtain the weight coefficient matrix X from the input first area image, i.e. the RGB image, according to formula (2), and the reconstructed hyperspectral image of the first area according to formula (4). It should be understood that the above calculation process is for illustration only and is not specifically limited by this application. When solving the underdetermined equation Y ≈ D*X, besides the OMP algorithm used here, the sparsest solution can also be found by applying an L0-norm constraint under the sparsity prior, for example with the greedy algorithm or the MP algorithm; other calculation methods are not repeated. To improve the accuracy of hyperspectral image reconstruction, the hyperspectral image sample set used in this application may include hyperspectral images of multiple different scenes, including but not limited to urban, suburban, agricultural, plant and animal, and indoor landscapes, and other areas where hyperspectral imaging can be applied.
S103: fuse the hyperspectral image of the first area with the second area image to obtain the reconstructed target hyperspectral image.
In one embodiment, in the case where a band overlap area appears in the hyperspectral image of the first area, the method further includes: obtaining the target area corresponding to the band overlap area in the target field of view, where the band overlap area includes the areas where the edges of the individual light bands overlap; re-photographing the target area with the second filter to obtain one or more multispectral images of the target area; restoring the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area; and fusing the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image. It should be understood that, for the method of reconstructing an RGB image into a hyperspectral image with an image dictionary, analysis of the loss function shows that the reconstruction quality degrades to a certain extent in the edge areas of the RGB bands, because the overlap between the RGB bands causes mapping errors; in other words, the band overlap area is the area of overlap between the individual bands. The hybrid hyperspectral reconstruction method provided by this application can therefore process the second area of the image sensor into super pixels usable for hyperspectral image acquisition, so that accurate hyperspectral images are obtained directly and the band overlap areas are photographed hyperspectrally, improving the reconstruction accuracy of the hyperspectral image. The second area may be an overlap area between light bands computed in advance, or a band overlap area determined after shooting with the first area and reconstructing the hyperspectral image. It should be noted that if parts of the picture are unclear during shooting because of drone vibration, lighting angle, or other external conditions, the second filter can also be used for supplementary shots, so as to ensure, to the greatest extent, the reconstruction accuracy of the final hyperspectral image.
For example, Figure 4 is a schematic diagram of the shooting flow of a hybrid hyperspectral reconstruction method provided by this application. First, the target field of view is photographed with the image sensor to obtain the image to be reconstructed, where the sensor may be a CMOS image sensor array containing an F-P cavity thin-film array as shown in Figure 3. Next, the first area image of the image to be reconstructed, i.e. the RGB image, undergoes the RGB-to-hyperspectral reconstruction of step S102; for the imaging areas at the edges of the RGB bands, the system will prompt that the second filter of the image sensor is needed to re-photograph the target area and obtain multispectral images of it, compensating for the low reconstruction accuracy at the RGB band edges, and after spatial-resolution restoration of the re-photographed multispectral images, one or more hyperspectral images of the target area are obtained. The second area image, i.e. the multispectral image obtained with the second filter, is then restored in spatial resolution to obtain the hyperspectral image of the second area. Finally, the hyperspectral image of the first area, the one or more hyperspectral images of the target area, and the hyperspectral image of the second area are fused with an image fusion algorithm such as principal component analysis (PCA) to obtain the reconstructed high-precision hyperspectral image. If, after the hyperspectral reconstruction of step S102, the RGB image taken with the first filter has no band overlap area at the light-band edges, there is no need for a supplementary hyperspectral shot with the second filter; the hyperspectral images of the first and second areas can be fused directly to obtain the reconstructed high-precision hyperspectral image. As Figure 4 shows, the hybrid hyperspectral image reconstruction method provided by this application reconstructs hyperspectral images from RGB images, greatly reducing hardware cost; by adding super pixels capable of multispectral shooting it also greatly improves the accuracy of hyperspectral reconstruction, and the multispectral data collected by the super pixels further reduces the computational cost of RGB-to-hyperspectral reconstruction, shortening the computation time.
In one embodiment, after the reconstructed target hyperspectral image is obtained, the method further includes: switching the first filter used for shooting the target area to the second filter. That is, the initial distribution of the second filter may be at the centers of the four quadrants of the image sensor, and when spectral overlap repeatedly appears in the hyperspectral image of the first area, the super pixel distribution of the second filter can be adjusted. In other words, in the image sensor used by the hybrid hyperspectral image reconstruction method of this application, the super pixels used for multispectral shooting can change the number of optical signal channels and the number and specific distribution of super pixels according to the needs of the actual application field.
In the above method, the target image to be reconstructed is obtained by photographing the target field of view with the image sensor, the first area image is reconstructed into the hyperspectral image of the first area, and the hyperspectral image of the first area is fused with the second area image to obtain the reconstructed target hyperspectral image. In this way the image sensor can shoot high-precision hyperspectral images without additional light-splitting or filtering devices, offering good portability and low cost.
图5是本申请提供的一种混合型高光谱图像重构系统的结构示意图,如图5所示,本申请提供的混合型高光谱图像重构系统包括获取单元510,重构单元520、融合单元530、训练单元540、补拍单元550以及切换单元560。
所述获取单元510用于使用图像传感器拍摄目标视野获取目标待重构图像。
在一实施例中,所述待重构图像分为第一区域图像和第二区域图像,所述第一区域图像为RGB图像,所述第二区域图像为高光谱图像。所述图像传感器包括第一滤光片和第二滤光片,所述第一滤光片用于获得第一区域图像,所述第二滤光片用于获得所述第二区域图像。也就是说,图像传感器拍摄目标视野后,可以得到一张目标待重构图像,这张目标待重构图像一部分是RGB图像,也就是第一区域图像,一部分是高光谱图像,也就是第二区域图像。其中,第一区域图像是图像传感器第一滤光片得到的图像,第二区域图像是图像传感器使用第二滤光片拍摄得到的图像。应理解,对于本申请提供的图像传感器来说,第一滤光片拍摄的第一区域的面积,远远大于第二滤光片拍摄的第二区域的面积。这是因为图像传感器拍摄高光谱图像需要额外添加昂贵的分光装置以及滤波装置,如果直接使用添加外置装置的图像传感器直接拍摄高光谱图像,对于经常与无人机搭配使用进行目标视野拍摄的高光谱采集图像传感器来说,成本高,并且便携性差。因而本申请提供的方案,主要是由采集RGB图像的第一滤光片进行目标视野的拍摄的,图像传感器无需额外添加外置装置即可获得高精度的高光谱图像,从而大大降低制作成本,且具有良好的便携性。
在一实施例中,所述第二区域图像是所述图像传感器使用第二滤光片拍摄获得所述第二区域的多光谱图像后,对所述多光谱图像进行空间分辨率的恢复 操作后获得的高光谱图像。也就是说,第一区域图像是直接使用图像传感器及第一滤光片拍摄获得的,而第二滤光片只能拍摄出多光谱图像,而第二区域图像是高光谱图像,因此,图像传感器可以直接使用第二滤光片拍摄目标视野获得多光谱图像后,再进行空间分辨率的恢复操作得到第二区域。应理解,多光谱图像其实可以看做是高光谱图像的一种情况,即成像的波段数量比高光谱图像少,一般只有几个到十几个。由于光谱信息其实也就对应了色彩信息,所以多光谱图像或者说多波段遥感图像可以得到地物的色彩信息,但是空间分辨率较低。因此使用第二滤光片获得目标视野的多光谱图像后,可以使用空间分辨率恢复的操作,获得高光谱图像。可以理解的是,使用第二滤波片采集多光谱数据还可以有效降低RGB重构高光谱图像的运算成本,在一定程度上缩短重构计算的时间。
在一实施例中,所述第二滤光片包括多个超级像素,其中,所述多个超级像素中的每个超级像素是多个单波长像素片排列组合形成的正方形像素的集合,所述多个超级像素中的每个超级像素位于所述图像传感器成像区域的四个象限的中心位置。应理解,每个超级像素用于直接获得多光谱图像,对多光谱图像进行空间分辨率的恢复后,可以获得高光谱图像。虽然第二滤光片采集多光谱数据还可以有效降低RGB重构高光谱图像的运算成本,但是,直接拍摄RGB图像的第一滤光片的成本远远低于拍摄多光谱的第二滤光片的成本。因此,第二滤光片中的超级像素个数是有限的,不能太多。优选地,图像传感器成像区域每个象限的超级像素的数目为4-6个,组成超级像素的单波长像素种类数目为2×2或者3×3。例如,图2是本申请提供的一种图像传感器成像区域中的超级像素分布示意图。其中,图中的无色部分为第一滤光片覆盖的区域,深色部分为第二滤光片覆盖的区域,第二滤光片包含的超级像素的总个数为4×6个,分别位于图像传感器的成像区域中的四个象限中心位置,每个象限的中心包含6个超级像素,每个超级像素由3×3个单波长像素排列组合成正方形像素集合。应理解,上述举例仅用于说明,并不能构成具体限定。
在一实施例中,所述第一滤光片是RGB染料滤光片;所述第二滤光片是通过沉积、图形化、刻蚀方法中的任意一种制备的F-P腔薄膜阵列;或者,所述第二滤光片是利用光子晶体在CMOS传感器阵列上构成的反射式多通道滤波片。应理解,第二滤光片用于拍摄多光谱图片,即第二滤光片具有采集不同波段的光信号的特性,而F-P腔是一种利用多光束干涉现象来进行分光操作的装置,因此可以通过改变传感器前端F-P腔的结构,包括但不限于腔长、腔体介质的折射率、反射镜的材质等等,从而改变第二滤光片的光通过频率,组成可以进行多光谱拍摄的超级像素。例如,一种使用F-P腔薄膜阵列形成的超级像素的结构如图3所示,其中,图3显示的是F-P腔薄膜阵列形成的超级像素的侧视图,图3仅显示了超级像素中的5个单波长像素,每个单波长像素包括底层反射镜301,透明介质302以及顶层反射镜303。其中,透明介质302的材料为二氧化硅Si0 2,其折射率约为n=1.54,底层反射镜301采用银Ag作为反射材料,其在可见光波段反射率约为90%以上,由图3可知,每个单波长像素的腔长是不同的。下面以红光(波长630nm)绿光(波长550nm)蓝光(波长440nm) 为例,对腔长的计算进行举例说明。应理解,图3所示的F-P腔薄膜阵列的光学腔长应满足公式(1)所示的相干干涉条件(以正入射为例),其中,n为透明介质302的折射率,d为透明介质302的介质腔腔长,λ为入射光波长,N为一个正整数。因此,上述红、绿、蓝三个通道一种可能的腔长组合为:201nm、179nm以及143nm。因此,根据公式(1)以及不同的光波段的波长,可以分别计算出对应的介质腔腔长,从而制作出F-P腔薄膜阵列形成超级像素。应理解,上述举例仅用于说明,并不能构成具体限定,图像传感器的超级像素还可以是利用光子晶体在COMS传感器阵列上构成反射式多通道滤波片,从而代替F-P腔薄膜阵列对不同波段进行选择,本申请不再赘述。
所述重构单元520用于将所述第一区域图像重构为第一区域的高光谱图像。
在一实施例中,所述重构单元520具体用于将所述第一区域图像输入RGB重构模型Y RGB=D RGBX,根据RGB图像字典D RGB,获得所述权重系数矩阵X,其中,Y RGB为输出的RGB图像,D RGB为RGB图像字典,X为权重系数矩阵,所述权重系数矩阵X反映了所述输出的RGB图像Y RGB和RGB图像字典D RGB之间的映射关系,所述RGB图像字典D RGB根据所述高光谱图像字典D h以及图像传感器的光谱响应函数S获得;所述重构单元520具体用于根据所述权重系数矩阵X输入高光谱图像重构模型Y h=D hX,根据高光谱图像字典D h,获得所述第一区域的高光谱图像,其中,Y h为输出的高光谱图像,D h为高光谱图像字典,X为权重系数矩阵,所述权重系数矩阵X反映了所述输出的高光谱图像Y h和高光谱图像字典D h之间的映射关系,所述高光谱图像字典D h是在所述获取待重构图像之前,通过将所述权重系数矩阵X初始化为0,并使用高光谱图像样本集对高光谱图像重构模型进行训练后获得的。
在一具体的实施例中,可以通过正交匹配追踪算法(Orthogonal Matching Pursuit,OMP),根据输入的RGB图像以及公式(2)计算对应的权重系数矩阵X,其中,k表示矩阵的行数,l表述矩阵的列数,并且D h和D RGB有如公式(3)所示的确定的映射关系,其中,S为图像传感器的光谱响应函数,光谱响应函数S可以通过RGB图像传感器的生产厂家获取或直接测定,其物理意义为对一个高光谱通道的信号在对应的R或G或B通道的响应。因此,D h和D RGB有确定映射关系的情况下,RGB图像字典D RGB的权重系数矩阵X同样可以用于高光谱字典D h。通过对高光谱字典中原子的线性组合,重构出输入RGB图像的高光谱图像Y h,应理解,上述计算过程为举例说明,本申请除了通过OMP方法根据输入的RGB图像计算出权重系数矩阵X,还可以使用其他方法获得重构的高光谱图像,例如利用深度学习神经网络的方法,使用已有的RGB图像库和对应的高光谱图像库对神经网络进行训练,从而获得输入RGB图像,输出高光谱图像的重构模型,本申请不作具体限定。
在一实施例中,所述系统还包括训练单元540,所述训练单元540用于在所述图像传感器拍摄目标视野获取目标待重构图像之前,将所述权重系数矩阵X初始化为0,并使用高光谱图像样本集对高光谱图像重构模型进行训练,获得高光谱图像字典D h;所述训练单元540还用于根据所述高光谱图像字典D h以及图像传感器的光谱响应函数S,获得RGB图像字典D RGB,其中,D RGB=SD h
优选地,步骤S102根据输入的第一区域图像进行高光谱图像的重构过程中,使用的高光谱图像字典D h可以是预先利用已有的高光谱图像字典,也可以是通过字典学习中的一种经典算法K-SVD算法迭代更新后获得的过完备的高光谱图像字典D h。具体地,给定训练数据Y,即已有的高光谱图像数据库,把Y矩阵分为Y≈D*X,其中D称为字典,D的每一列称为原子,X称为系数矩阵。从样本集Y随机挑选k个样品,作为字典D的原子,并且初始化系数矩阵X为0,以公式(5)为目标函数对D和X进行逐列更新,其中ε是重构误差中允许的最大值,经过反复迭代更新,获得公式(6)所示的高光谱图像字典D h。其中n表示矩阵的行数,k表示矩阵的列数。获得高光谱图像字典D h后,再通过公式(3)以及光谱响应函数S,将获得的高光谱图像字典D h降维映射至RGB空间,从而获得对应的RGB字典D RGB。使得步骤S102可以根据输入的第一区域图像,也就是RGB图像,根据公式(2)获得权重系数矩阵X后,再根据公式(4)即可获得重构的第一区域的高光谱图像。应理解,上述计算过程仅用于举例说明,本申请不作具体限定。其中,在计算Y≈D*X这一欠定方程时,除了使用本申请提供的OMP算法,还可以根据稀疏性的先验条件,通过使用L0范数约束找到最稀疏的解,比如贪婪(greedy)算法、MP算法以及OMP算法,对于其他计算方法,本申请不再作赘述。其中,为了提高高光谱图像重构的准确率,本申请使用的高光谱图像样本集可以包括多个不同场景下的高光谱图像,包括但不限于城市、郊区、农业、动植物景观、室内景观等高光谱可以应用的区域。
所述融合单元530用于将所述第一区域的高光谱图像与所述第二区域图像进行图像融合,获得重构后的目标高光谱图像。
在一实施例中,所述系统还包括补拍单元550,所述补拍单元用于在所述第一区域的高光谱图像在所述第一区域的目标区域出现光谱波段重叠区域的情况下,获得所述波段重叠区域在所述目标视野中对应的目标区域,其中,所述波段重叠区域包括各个光波段边缘重叠的区域;所述补拍单元550用于使用第二滤光片对所述第一区域的目标区域进行补拍,获得一张或者多张所述目标区域的多光谱图像;所述补拍单元550还用于对所述一张或者多张目标区域的多光谱图像进行空间分辨率恢复,获得一张或者多张目标区域的高光谱图像;所述融合单元还用于将所述一个或者多个目标区域的高光谱图像、所述第一区域的高光谱图像以及所述第二区域图像进行图像融合,获得所述重构后的目标高光谱图像。应理解,利用图像字典进行RGB图像重构为高光谱图像的方法,经过对损失函数的分析可以发现,在RGB各个波段边缘区域,重构质量有一定程度的下降,原因是RGB各个波段之间的交叠会引起映射误差。换句话说,换句话说,波段重叠区域指的就是各个波段之间的交叠区域。因此,本申请提供的混合高光谱重构的方法,可以将图像传感器的第二区域加工成为可以进行高光谱图像采集的超级像素,用于直接获得准确的高光谱图像,对光波段交叠区域直接进行高光谱图像的拍摄,从而提高高光谱图像的重构精度。其中第二区域可以是预先通过计算得出的各个光波段之间的交叠区域,还可以是在使用第一区域拍摄并重构高光谱图像后确定的波段重叠区域。需要说明的,如果拍摄时由于无人机振动、光线角度或者其他外界情况造成图片不清晰的区域,也可以使 用第二滤光片进行补拍,从而最大程度地确保最后获得的高光谱重构图像的重构精度。
For example, Fig. 4 is a schematic shooting flowchart of the hybrid hyperspectral image reconstruction method provided by this application. First, the image sensor shoots the target field of view to obtain the image to be reconstructed; the sensor may be a CMOS image sensor array using the F-P (Fabry-Pérot) cavity thin-film array shown in Fig. 3. Next, the first-area image of the image to be reconstructed, i.e., the RGB image, undergoes the RGB-to-hyperspectral reconstruction of step S102. For imaging regions at the edges of the R, G, and B bands, the system prompts a re-shoot of the target region with the second filter of the image sensor to obtain a multispectral image of the target region, compensating for the low reconstruction accuracy at the band edges; after spatial-resolution restoration of the re-shot multispectral images, one or more hyperspectral images of the target region are obtained. The second-area image, i.e., the multispectral image shot with the second filter, likewise undergoes spatial-resolution restoration to yield the second-area hyperspectral image. Finally, the first-area hyperspectral image, the hyperspectral image(s) of the one or more target regions, and the second-area hyperspectral image are fused with an image-fusion algorithm such as Principal Component Analysis (PCA), giving the reconstructed high-accuracy hyperspectral image. If, after the hyperspectral reconstruction of step S102, the RGB image shot with the first filter contains no band-overlap region at the band edges, no hyperspectral re-shoot with the second filter is needed; the first-area and second-area hyperspectral images can be fused directly to obtain the reconstructed high-accuracy hyperspectral image. As Fig. 4 shows, the hybrid hyperspectral image reconstruction method of this application reconstructs hyperspectral images from RGB images, which greatly reduces hardware cost; the added super-pixels with multispectral capture capability also greatly improve reconstruction accuracy, and the multispectral data they collect can effectively reduce the computational cost of RGB-to-hyperspectral reconstruction, shortening the time needed to reconstruct the hyperspectral image.
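The final composition step can be sketched as a mask-based overwrite: start from the RGB-reconstructed cube and let directly captured hyperspectral data take priority wherever it exists. This is a deliberate simplification for illustration; the application cites PCA-style image-fusion algorithms for the general case, and the shapes and masks below are toy assumptions:

```python
import numpy as np

def fuse_regions(reconstructed, patches):
    """Compose the final cube: start from the RGB-reconstructed hyperspectral
    cube and overwrite each re-shot / second-area region with its directly
    captured cube. `patches` is a list of (mask_hw, cube_hwb) pairs."""
    out = reconstructed.copy()
    for mask, cube in patches:
        out[mask] = cube[mask]        # directly captured pixels take priority
    return out

h, w, bands = 4, 4, 31
recon = np.zeros((h, w, bands))       # toy RGB-reconstructed cube
direct = np.ones((h, w, bands))       # toy directly captured cube
mask = np.zeros((h, w), dtype=bool)
mask[:2, :2] = True                   # pretend this is the band-overlap region
fused = fuse_regions(recon, [(mask, direct)])
print(fused[0, 0, 0], fused[3, 3, 0])  # → 1.0 0.0
```

If no band-overlap region was detected, `patches` would contain only the second-area mask and cube, matching the no-re-shoot branch described above.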
In one embodiment, the system further includes a switching unit 560, which, after the reconstructed target hyperspectral image is obtained, switches the first filter used to shoot the target region to the second filter. That is, the second filter may initially be distributed at the centers of the four quadrants of the image sensor, and when band-overlap regions appear repeatedly in the first-area hyperspectral image, the super-pixel distribution of the second filter can be adjusted. In other words, for the image sensor used by the hybrid hyperspectral image reconstruction method of this application, the super-pixels used for multispectral capture can change the number of optical-signal channels and the number and specific distribution of super-pixels according to the needs of the actual application field.
In the above system, the image sensor shoots the target field of view to acquire the target image to be reconstructed, the first-area image is reconstructed into the first-area hyperspectral image, and the first-area hyperspectral image is fused with the second-area image to obtain the reconstructed target hyperspectral image. With this system, the image sensor can capture high-accuracy hyperspectral images without any additional beam-splitting or filtering device, offering good portability and low cost.
Fig. 6 is a schematic structural block diagram of an electronic device provided by an embodiment of this application. As shown in Fig. 6, the electronic device in this embodiment may include: one or more processors 601; one or more input devices 602, one or more output devices 603, and a memory 604. The processor 601, input device 602, output device 603, and memory 604 are connected by a bus 605. The memory 604 stores a computer program comprising program instructions, and the processor 601 executes the program instructions stored in the memory 604.
In the embodiments of this application, the processor 601 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a DSP, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
The input device 602 may include a touchpad, a fingerprint sensor (for collecting the user's fingerprint information and fingerprint orientation information), a microphone, and so on; the output device 603 may include a display (an LCD, etc.), a speaker, and so on.
The memory 604 may include volatile memory, such as RAM; it may also include non-volatile memory, such as Read-Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); it may also include a combination of the above kinds of memory. The memory 604 may use centralized or distributed storage, which is not specifically limited here. It can be understood that the memory 604 stores a computer program, for example computer program instructions; in the embodiments of this application, the memory 604 provides instructions and data to the processor 601.
In a specific implementation, the processor 601, input device 602, output device 603, memory 604, and bus 605 described in the embodiments of this application can carry out the implementations described in any embodiment of the hybrid hyperspectral image reconstruction method provided by this application, which are not repeated here.
Another embodiment of this application provides a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, implement the implementations described in any embodiment of the hybrid hyperspectral image reconstruction method provided by this application; these are not repeated here.
The computer-readable storage medium may be an internal storage unit of the terminal described in any of the preceding embodiments, for example the terminal's hard disk or memory. The computer-readable storage medium may also be an external storage device of the terminal, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal. Further, the computer-readable storage medium may include both the internal storage unit of the terminal and an external storage device. The computer-readable storage medium stores the computer program and the other programs and data required by the terminal, and may also temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The above are only specific embodiments of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed by this application, and these modifications or substitutions shall all fall within the protection scope of this application.

Claims (10)

  1. A hybrid hyperspectral image reconstruction method, characterized in that the method comprises:
    an image sensor shooting a target field of view to acquire a target image to be reconstructed, wherein the target image to be reconstructed is divided into a first-area image and a second-area image, the first-area image being an RGB image and the second-area image being a hyperspectral image;
    reconstructing the first-area image into a first-area hyperspectral image;
    fusing the first-area hyperspectral image with the second-area image to obtain a reconstructed target hyperspectral image.
  2. The method according to claim 1, characterized in that the image sensor comprises a first filter and a second filter, the first filter being used to obtain the first-area image and the second filter being used to obtain the second-area image.
  3. The method according to claim 2, characterized in that the second-area image is a hyperspectral image obtained after the image sensor shoots a multispectral image of the second area with the second filter and performs a spatial-resolution restoration operation on the multispectral image.
  4. The method according to claim 1, characterized in that, when a band-overlap region appears in the first-area hyperspectral image, the method further comprises:
    obtaining the target region corresponding to the band-overlap region in the target field of view, wherein the band-overlap region comprises the regions where the edges of the individual optical bands overlap;
    re-shooting the target region with the second filter to obtain one or more multispectral images of the target region;
    performing spatial-resolution restoration on the one or more multispectral images of the target region to obtain one or more hyperspectral images of the target region;
    fusing the one or more hyperspectral images of the target region, the first-area hyperspectral image, and the second-area image to obtain the reconstructed target hyperspectral image.
  5. The method according to claim 1, characterized in that reconstructing the first-area image into the first-area hyperspectral image comprises:
    inputting the first-area image into an RGB reconstruction model Y_RGB = D_RGB·X and obtaining the weight coefficient matrix X from the RGB image dictionary D_RGB, wherein Y_RGB is the output RGB image, D_RGB is the RGB image dictionary, and X is the weight coefficient matrix; the weight coefficient matrix X reflects the mapping relationship between the output RGB image Y_RGB and the RGB image dictionary D_RGB, and the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor;
    inputting the weight coefficient matrix X into a hyperspectral image reconstruction model Y_h = D_h·X and obtaining the first-area hyperspectral image from the hyperspectral image dictionary D_h, wherein Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix; the weight coefficient matrix X reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h, and the hyperspectral image dictionary D_h is obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
  6. The method according to claim 5, characterized in that, before the image sensor shoots the target field of view to acquire the target image to be reconstructed, the method further comprises:
    initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D_h;
    obtaining the RGB image dictionary D_RGB from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, wherein D_RGB = S·D_h.
  7. The method according to claim 3, characterized in that
    the second filter comprises a plurality of super-pixels, wherein each of the plurality of super-pixels is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel tiles, and each of the plurality of super-pixels is located at the center of one of the four quadrants of the imaging area of the image sensor.
  8. The method according to claim 7, characterized in that the image sensor is based on semiconductor thin-film technology, wherein
    the first filter is an RGB dye filter;
    the second filter is an F-P cavity thin-film array prepared by any one of deposition, patterning, and etching; or,
    the second filter is a reflective multi-channel filter formed from photonic crystals on a CMOS sensor array.
  9. The method according to claim 4, characterized in that, after the reconstructed target hyperspectral image is obtained, the method further comprises:
    switching the first filter used to shoot the target region to the second filter.
  10. A hybrid hyperspectral image reconstruction system, characterized in that the system comprises:
    an acquisition unit, configured to shoot a target field of view with an image sensor to acquire a target image to be reconstructed, wherein the image to be reconstructed is divided into a first-area image and a second-area image, the first-area image being an RGB image and the second-area image being a hyperspectral image;
    a reconstruction unit, configured to reconstruct the first-area image into a first-area hyperspectral image;
    a fusion unit, configured to fuse the first-area hyperspectral image with the second-area image to obtain the reconstructed target hyperspectral image.
PCT/CN2019/081550 2019-04-04 2019-04-04 Hybrid hyperspectral image reconstruction method and system WO2020199205A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980005542.9A CN111386549B (zh) 2019-04-04 2019-04-04 Hybrid hyperspectral image reconstruction method and system
PCT/CN2019/081550 WO2020199205A1 (zh) 2019-04-04 2019-04-04 Hybrid hyperspectral image reconstruction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/081550 WO2020199205A1 (zh) 2019-04-04 2019-04-04 Hybrid hyperspectral image reconstruction method and system

Publications (1)

Publication Number Publication Date
WO2020199205A1 true WO2020199205A1 (zh) 2020-10-08

Family

ID=71219149

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/081550 WO2020199205A1 (zh) 2019-04-04 2019-04-04 Hybrid hyperspectral image reconstruction method and system

Country Status (2)

Country Link
CN (1) CN111386549B (zh)
WO (1) WO2020199205A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561883A (zh) * 2020-12-17 2021-03-26 成都亚讯星科科技股份有限公司 Method for reconstructing a hyperspectral image from a crop RGB image
CN112766102A (zh) * 2021-01-07 2021-05-07 武汉大学 Unsupervised hyperspectral video target tracking method based on spatial-spectral feature fusion
CN113554578A (zh) * 2021-07-23 2021-10-26 奥比中光科技集团股份有限公司 Spectral image determination method, apparatus, terminal, and storage medium
CN114332607A (zh) * 2021-12-17 2022-04-12 清华大学 Incremental learning method and system for spectral-dictionary construction from multi-frame images
EP4181509A4 (en) * 2020-07-27 2023-08-09 Huawei Technologies Co., Ltd. FILTER MATRIX, MOBILE TERMINAL, AND DEVICE

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765871A (zh) * 2019-09-19 2020-02-07 北京航空航天大学 Dictionary-representation-based hyperspectral image band quality analysis method
CN114004960B (zh) * 2021-11-17 2024-06-18 湖南大学 Hyperspectral dual-mode imaging system and method for pharmaceutical inspection
CN116939383A (zh) * 2022-04-08 2023-10-24 华为技术有限公司 Image sensor, imaging module, image acquisition device, and image processing method
CN116071237B (zh) * 2023-03-01 2023-06-20 湖南大学 Video hyperspectral imaging method, system, and medium based on filter-sampling fusion

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282506A (zh) * 2015-10-16 2016-01-27 浙江工业大学 IoT-based panchromatic and multispectral image fusion video surveillance method and surveillance device
WO2018047171A1 (en) * 2016-09-06 2018-03-15 B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University Recovery of hyperspectral data from image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247034B (zh) * 2013-05-08 2016-01-20 中国科学院光电研究院 Compressed-sensing hyperspectral image reconstruction method based on a sparse spectral dictionary
CN106170052B (zh) * 2015-05-22 2020-11-06 微软技术许可有限责任公司 Dual-sensor hyperspectral motion imaging system
CN105227867A (zh) * 2015-09-14 2016-01-06 联想(北京)有限公司 Image processing method and electronic device
CN107707831A (zh) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282506A (zh) * 2015-10-16 2016-01-27 浙江工业大学 IoT-based panchromatic and multispectral image fusion video surveillance method and surveillance device
WO2018047171A1 (en) * 2016-09-06 2018-03-15 B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University Recovery of hyperspectral data from image


Also Published As

Publication number Publication date
CN111386549B (zh) 2023-10-13
CN111386549A (zh) 2020-07-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19923693

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19923693

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.03.2022)
