WO2020199205A1 - Hybrid hyperspectral image reconstruction method and system - Google Patents
- Publication number: WO2020199205A1 (application PCT/CN2019/081550)
- Authority: WIPO (PCT)
- Prior art keywords: image, hyperspectral, RGB, area, filter
Classifications
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076: Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10032: Satellite or aerial image; Remote sensing
- G06T2207/10036: Multispectral image; Hyperspectral image
- G06T2207/20221: Image fusion; Image merging
- Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Definitions
- This application relates to the field of artificial intelligence, and in particular to a method and system for hybrid hyperspectral image reconstruction.
- Hyperspectral image collection plays an increasingly important role in application fields such as remote sensing, agriculture, and industrial inspection.
- Conventionally, collecting hyperspectral images requires a filter device installed in front of the image sensor to obtain narrowband optical signals at multiple different wavelengths.
- Existing filter devices include the liquid crystal tunable filter (LCTF), which is based on a Lyot filter set, and the acousto-optic tunable filter (AOTF).
- This application provides a hybrid hyperspectral image reconstruction method and system to address the poor portability and high cost of existing hyperspectral image acquisition methods.
- In a first aspect, this application provides a hybrid hyperspectral image reconstruction method. The method includes the following steps:
- The image sensor captures the target field of view to obtain the target image to be reconstructed, where the target image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image a hyperspectral image; the first area image is reconstructed into a hyperspectral image of the first area; and image fusion is performed on the hyperspectral image of the first area and the second area image to obtain a reconstructed target hyperspectral image.
- In a possible embodiment, the image sensor includes a first filter and a second filter; the first filter is used to obtain the first area image, and the second filter is used to obtain the second area image.
- In a possible embodiment, the second area image is a hyperspectral image obtained by capturing a multispectral image of the second area through the second filter and then performing a spatial resolution restoration operation on the multispectral image.
- In a possible embodiment, the method further includes: when a waveband overlapping area appears in the hyperspectral image of the first area, obtaining the target area corresponding to the waveband overlapping area in the target field of view, where the waveband overlapping area includes areas where the edges of the light wavebands overlap; using the second filter to take a supplementary shot of the target area to obtain one or more multispectral images of the target area; restoring the spatial resolution of the one or more multispectral images to obtain one or more hyperspectral images of the target area; and performing image fusion on the one or more hyperspectral images of the target area, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
- In a possible embodiment, reconstructing the image of the first area into a hyperspectral image of the first area includes: solving the hyperspectral image reconstruction model Y_h = D_h X for the weight coefficient matrix X, and obtaining the hyperspectral image of the first area from the hyperspectral image dictionary D_h, where Y_h is the output hyperspectral image, D_h is the hyperspectral image dictionary, and X is the weight coefficient matrix reflecting the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h. The dictionary D_h is obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model on a hyperspectral image sample set.
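The key property of this model is that a weight matrix X fitted against an RGB dictionary can be reused with the hyperspectral dictionary, because the RGB dictionary is derived from the hyperspectral one through the spectral response function. This can be sketched numerically; the dimensions below (31 bands, 50 atoms, 8 pixels) and the random matrices are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, atoms, pixels = 31, 50, 8      # illustrative sizes, not values from the patent

D_h = rng.random((bands, atoms))      # hyperspectral image dictionary (one atom per column)
S = rng.random((3, bands))            # spectral response function: bands -> R, G, B
X = rng.random((atoms, pixels))       # weight coefficient matrix shared by both dictionaries

Y_h = D_h @ X                         # reconstruction model Y_h = D_h X
D_rgb = S @ D_h                       # RGB dictionary induced by the spectral response
Y_rgb = D_rgb @ X                     # RGB observation implied by the same weights

# The same X explains both the RGB image and the hyperspectral image:
assert np.allclose(S @ Y_h, Y_rgb)
```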
- In a possible embodiment, the second filter includes a plurality of super pixels, where each super pixel is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel sheets, and the super pixels are located at the centers of the four quadrants of the imaging area of the image sensor.
- the image sensor is based on semiconductor thin film technology, wherein:
- the first filter is an RGB dye filter
- the second filter is an F-P cavity thin film array prepared by any one of deposition, patterning, and etching methods; or,
- the second filter is a reflective multi-channel filter formed on a CMOS sensor array by using photonic crystals.
- In a possible embodiment, the method further includes: after the reconstructed target hyperspectral image is obtained, switching the first filter used for shooting the target area to the second filter.
- In a second aspect, this application provides a hybrid hyperspectral image reconstruction system, which includes:
- An acquisition unit, configured to use an image sensor to photograph the target field of view to acquire the target image to be reconstructed, where the image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image a hyperspectral image;
- A reconstruction unit, configured to reconstruct the first area image into a hyperspectral image of the first area;
- A fusion unit, configured to perform image fusion on the hyperspectral image of the first area and the second area image to obtain a reconstructed target hyperspectral image.
- In a possible embodiment, the image sensor includes a first filter and a second filter; the first filter is used to obtain the first area image, and the second filter is used to obtain the second area image.
- In a possible embodiment, the second area image is a hyperspectral image obtained by capturing a multispectral image of the second area through the second filter and then performing a spatial resolution restoration operation on the multispectral image.
- In a possible embodiment, the system further includes a supplementary shooting unit. The supplementary shooting unit is configured to, when a waveband overlapping area appears in the hyperspectral image of the first area, obtain the target area corresponding to the waveband overlapping area in the target field of view, where the waveband overlapping area includes areas where the edges of the light wavebands overlap; use the second filter to take a supplementary shot of the target area to obtain one or more multispectral images of the target area; and restore the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area.
- The fusion unit is further configured to perform image fusion on the one or more hyperspectral images of the target area, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
- In a possible embodiment, the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor.
- In a possible embodiment, the system further includes a training unit. The training unit is configured to, before the image sensor captures the target field of view to obtain the target image to be reconstructed, train the hyperspectral image reconstruction model by initializing the weight coefficient matrix X to 0 and using a hyperspectral image sample set, so as to obtain the hyperspectral image dictionary D_h.
- the first filter is an RGB dye filter
- the second filter is an F-P cavity thin film array prepared by any one of deposition, patterning, and etching methods; or,
- the second filter is a reflective multi-channel filter formed on a CMOS sensor array by using photonic crystals.
- In a possible embodiment, the second filter includes a plurality of super pixels, where each super pixel is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel sheets, and the super pixels are located at the centers of the four quadrants of the imaging area of the image sensor.
- the system further includes a switching unit,
- the switching unit is configured to switch the first filter used for shooting the target area to the second filter after the reconstructed target hyperspectral image is obtained.
- In the above solution, the target field of view is captured by the image sensor to obtain the target image to be reconstructed; the first area image is then reconstructed into a hyperspectral image of the first area, and image fusion is performed on the hyperspectral image of the first area and the second area image to obtain the reconstructed target hyperspectral image.
- the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
- Fig. 1 shows a hybrid hyperspectral image reconstruction method provided by this application;
- FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application
- FIG. 3 is a schematic structural diagram of a super pixel formed by using an F-P cavity film array provided by the present application
- FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral image reconstruction method provided by the present application.
- Fig. 5 is a schematic structural diagram of a hybrid hyperspectral image reconstruction system provided by the present application.
- Fig. 6 is a schematic structural diagram of an electronic device provided by the present application.
- the hybrid hyperspectral image reconstruction method and system of the embodiments of the present application can be applied in many fields.
- various application fields such as remote sensing, agriculture, industrial inspection, military, etc.
- In the field of remote sensing, the image sensor can be installed on an aircraft to capture hyperspectral images of the target field of view.
- In agriculture, crop growth can be evaluated from hyperspectral images of the crops.
- In mineral exploration, spectral characteristics obtained from hyperspectral images of ground objects can be used to identify surface minerals; in military detection, hyperspectral images of the battlefield can be used to distinguish and identify targets, camouflaged objects, and natural objects, improving the accuracy of target strikes. The application field is not specifically limited in this application.
- Figure 1 shows a hybrid hyperspectral image reconstruction method provided by the present application. The method includes the following steps:
- S101: The image sensor captures the target field of view to obtain an image of the target to be reconstructed.
- the image to be reconstructed is divided into a first area image and a second area image, the first area image is an RGB image, and the second area image is a hyperspectral image.
- the image sensor includes a first filter and a second filter, the first filter is used to obtain a first area image, and the second filter is used to obtain the second area image.
- a target image to be reconstructed can be obtained.
- Part of the target image to be reconstructed is an RGB image, which is the first area image, and part is a hyperspectral image, which is the second area image.
- the first area image is an image obtained by the first filter of the image sensor
- the second area image is an image taken by the image sensor using the second filter.
- It should be noted that the area of the first region captured through the first filter is much larger than that of the second region captured through the second filter. Conventionally, shooting hyperspectral images requires expensive external spectroscopic and filtering devices; image sensors with such external devices, often carried by drones to shoot the target field of view, are costly and poorly portable. The solution provided by this application therefore mainly uses the first filter, which collects RGB images, to capture the target field of view, so that the image sensor can obtain high-precision hyperspectral images without additional external devices, greatly reducing production cost while retaining good portability.
- In a possible embodiment, the second area image is a hyperspectral image obtained by capturing a multispectral image of the second area through the second filter and then performing a spatial resolution restoration operation on the multispectral image.
- It should be understood that the first area image is captured directly with the image sensor through the first filter, whereas the second filter can only capture multispectral images while the second area image is a hyperspectral image. The image sensor therefore first uses the second filter to shoot the target field of view to obtain a multispectral image, and then performs a spatial resolution restoration operation to obtain the hyperspectral image of the second area.
- A multispectral image can be regarded as a special case of a hyperspectral image with fewer imaged bands, generally only a few to a dozen. Since spectral information corresponds to color information, multispectral or multi-band remote sensing images can capture the color information of ground objects, but with lower spatial resolution. Therefore, after a multispectral image of the target field of view is obtained with the second filter, a spatial resolution restoration operation can be used to obtain a hyperspectral image. It is understandable that using the second filter to collect multispectral data also effectively reduces the computational cost of reconstructing hyperspectral images from RGB and shortens the reconstruction time to a certain extent.
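The restoration operation itself is not specified in detail here. As a hedged illustration only, one simple reading, interpolating a few-band multispectral cube onto a denser hyperspectral wavelength grid, can be sketched as follows; the band counts, wavelength range, and linear interpolation are all assumptions, and a real system would likely use a learned or regularized restoration:

```python
import numpy as np

def restore_resolution(ms_cube, ms_wavelengths, hs_wavelengths):
    """Hypothetical restoration sketch: linearly interpolate a multispectral
    cube of shape (H, W, few bands) onto a denser hyperspectral band grid."""
    h, w, _ = ms_cube.shape
    out = np.empty((h, w, len(hs_wavelengths)))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.interp(hs_wavelengths, ms_wavelengths, ms_cube[i, j])
    return out

ms = np.random.default_rng(1).random((4, 4, 9))   # 9-band multispectral patch (assumed)
ms_wl = np.linspace(400, 700, 9)                  # assumed multispectral band centers, nm
hs_wl = np.linspace(400, 700, 31)                 # assumed 31-band hyperspectral grid, nm
hs = restore_resolution(ms, ms_wl, hs_wl)
```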
- In a possible embodiment, the second filter includes a plurality of super pixels; each super pixel is a set of square pixels formed by arranging and combining a plurality of single-wavelength pixel pieces, and the super pixels are located at the centers of the four quadrants of the imaging area of the image sensor. It should be understood that each super pixel directly obtains a multispectral image, and a hyperspectral image is obtained after the spatial resolution of that multispectral image is restored. Although collecting multispectral data with the second filter also reduces the computational cost of reconstructing hyperspectral images from RGB, the first filter, which directly captures RGB images, costs much less than the second filter, which captures multispectral images.
- FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application.
- the colorless part in the figure is the area covered by the first filter
- the dark part is the area covered by the second filter
- In this example, the second filter contains 4 × 6 = 24 super pixels in total, located at the centers of the four quadrants of the imaging area of the image sensor, with 6 super pixels at the center of each quadrant.
- Each super pixel is a square pixel set formed by a 3 × 3 arrangement of single-wavelength pixels.
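For illustration, the per-band images carried by such a 3 × 3 super-pixel mosaic can be separated by strided slicing. The layout below, with band (i, j) occupying position (i, j) inside every super pixel, is an assumed arrangement for the sketch, not the patent's actual one:

```python
import numpy as np

def demosaic_superpixels(raw, n=3):
    """Split a raw frame captured through n x n single-wavelength super pixels
    into n*n per-band images (illustrative layout assumption)."""
    h, w = raw.shape
    assert h % n == 0 and w % n == 0
    # band (i, j) sits at position (i, j) inside every n x n super pixel
    return [raw[i::n, j::n] for i in range(n) for j in range(n)]

raw = np.arange(36).reshape(6, 6).astype(float)  # toy 6x6 frame = 2x2 super pixels
planes = demosaic_superpixels(raw)
```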
- the first filter is an RGB dye filter
- the second filter is an FP cavity thin film array prepared by any one of deposition, patterning, and etching methods
- Alternatively, the second filter is a reflective multi-channel filter formed on the CMOS sensor array by using photonic crystals. It should be understood that the second filter is used for multispectral shooting; that is, it collects optical signals in different wavebands. The F-P (Fabry-Perot) cavity is a device that uses multi-beam interference to split light.
- By changing the structure of the F-P cavity at the front end of the sensor (including but not limited to the cavity length, the refractive index of the cavity medium, and the mirror material), the pass band of the second filter can be changed, composing super pixels capable of multispectral shooting.
- The structure of a super pixel formed from an F-P cavity thin film array is shown in Figure 3, which gives a side view and shows only 5 of the super pixels.
- Each single-wavelength pixel includes a bottom mirror 301, a transparent medium 302, and a top mirror 303.
- The cavity length of each single-wavelength pixel is different. Taking red light (wavelength 630 nm), green light (wavelength 550 nm), and blue light (wavelength 440 nm) as examples, the calculation of the cavity length of the transparent medium 302 is described below. The optical cavity length of the F-P cavity thin film array shown in Figure 3 should satisfy the following coherent-interference condition, formula (1), taking normal incidence as an example: 2nd = Nλ, where
- n is the refractive index of the transparent medium 302
- d is the cavity length of the transparent medium 302
- λ is the wavelength of the incident light;
- N is a positive integer. A possible cavity length combination for the red, green, and blue channels is therefore 201 nm, 179 nm, and 143 nm. According to formula (1) and the wavelengths of the different optical wavebands, the corresponding dielectric cavity lengths can be calculated, and the F-P cavity thin film array can be fabricated accordingly to form a super pixel.
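As a sanity check on these numbers, formula (1) can be rearranged to d = Nλ/(2n). The refractive index n = 1.55 below is an assumed value for the transparent medium 302 (not stated in this text); it is chosen because it reproduces cavity lengths close to the 201 nm, 179 nm, and 143 nm cited:

```python
# Coherent-interference condition at normal incidence (formula (1)): 2*n*d = N*lambda,
# rearranged to d = N * lambda / (2 * n).
# n = 1.55 is an illustrative assumption for the transparent medium 302.
def cavity_length_nm(wavelength_nm, n=1.55, N=1):
    return N * wavelength_nm / (2 * n)

for color, wl in (("red", 630), ("green", 550), ("blue", 440)):
    print(f"{color}: {cavity_length_nm(wl):.0f} nm")
```

The computed lengths land within a few nanometers of the patent's values, which suggests a medium index near 1.55.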
- Alternatively, the super pixels of the image sensor can use photonic crystals to form a reflective multi-channel filter on the CMOS sensor array, replacing the F-P cavity thin film array for waveband selection; this application will not elaborate further.
- S102 Reconstruct the image of the first area into a hyperspectral image of the first area.
- Specifically, the RGB image dictionary D_RGB is obtained from the hyperspectral image dictionary D_h and the spectral response function S of the image sensor, and the weight coefficient matrix X reflects the mapping relationship between the output hyperspectral image Y_h and the hyperspectral image dictionary D_h.
- The orthogonal matching pursuit (OMP) algorithm can be used to calculate the corresponding weight coefficient matrix X from the input RGB image according to formula (2): Y_RGB = D_RGB X, where
- k represents the number of rows of the matrix
- l represents the number of columns of the matrix
- D_h and D_RGB have a mapping relationship as shown in formula (3): D_RGB = S D_h, where S is the spectral response function of the image sensor.
- The spectral response function S can be obtained from the manufacturer of the RGB image sensor or measured directly; its physical meaning is the response of the corresponding R, G, or B channel to the signal of a given hyperspectral channel. Because D_h and D_RGB share this mapping, the weight coefficient matrix X computed with the RGB image dictionary D_RGB can also be applied to the hyperspectral dictionary D_h: a linear combination of atoms in the hyperspectral dictionary reconstructs the hyperspectral image Y_h of the input RGB image, where Y_h = D_h X.
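A minimal sketch of this pipeline, with a toy orthogonal matching pursuit and random stand-ins for D_h and S (all sizes and data are illustrative assumptions, not the patent's), might look like:

```python
import numpy as np

def omp(D, y, k):
    """Toy orthogonal matching pursuit: greedily pick up to k atoms of D,
    refitting a least-squares solution on the selected support each step."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
bands, atoms = 31, 60                        # illustrative sizes
D_h = rng.standard_normal((bands, atoms))    # hyperspectral dictionary stand-in
S = np.abs(rng.standard_normal((3, bands)))  # spectral response stand-in
D_rgb = S @ D_h                              # formula (3): D_RGB = S D_h

y_rgb = rng.standard_normal(3)               # one observed RGB pixel
x = omp(D_rgb, y_rgb, k=2)                   # weight coefficients from the RGB side
y_h = D_h @ x                                # hyperspectral pixel via the shared weights
```

With only three RGB measurements per pixel the sparse code is not uniquely determined, which is why the dictionaries are trained and the sparsity level kept small.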
- this application can also use other methods to obtain the reconstructed hyperspectral image, such as using a deep learning neural network.
- In a possible embodiment, before the image sensor captures the target field of view to obtain the target image to be reconstructed, the method further includes: initializing the weight coefficient matrix X to 0 and using a hyperspectral image sample set to train the hyperspectral image reconstruction model.
- The hyperspectral image dictionary D_h used may be a pre-existing hyperspectral image dictionary, or an over-complete dictionary obtained through a dictionary learning process with iterative updates of the classic K-SVD algorithm.
- In the dictionary learning process, the sample matrix Y is factorized as Y ≈ D X, where D is called the dictionary, each column of D is called an atom, and X is called the coefficient matrix. K samples are randomly selected from the sample set Y as the atoms of the dictionary D, the coefficient matrix X is initialized to 0, and D and X are updated column by column with the following objective function: minimize ||Y − D X||_F² subject to a sparsity constraint on the columns of X, where
- n represents the number of rows of the matrix
- k represents the number of columns of the matrix.
- The sparsest solution can also be found using the L0-norm constraint, based on the sparsity prior, for example with greedy algorithms such as matching pursuit (MP).
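A single K-SVD-style atom update can be sketched as below. This is a simplified illustration only: dense least-squares codes stand in for the sparse-coding stage, and all data are randomly generated. The property it demonstrates is that each rank-1 atom update never increases the reconstruction error:

```python
import numpy as np

def ksvd_pass(Y, D, X):
    """One simplified K-SVD pass: for each atom, form the residual of the
    samples that use it (excluding that atom's own contribution) and replace
    the atom and its coefficient row with the residual's best rank-1 fit."""
    for j in range(D.shape[1]):
        users = np.nonzero(X[j])[0]
        if users.size == 0:
            continue
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]
        X[j, users] = s[0] * Vt[0]
    return D, X

rng = np.random.default_rng(3)
Y = rng.standard_normal((20, 100))                   # sample set: 100 signals of dim 20
D = Y[:, rng.choice(100, 15, replace=False)].copy()  # init atoms from random samples
D /= np.linalg.norm(D, axis=0)
X, *_ = np.linalg.lstsq(D, Y, rcond=None)            # dense codes (sparse-coding stand-in)
err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_pass(Y, D, X)
err_after = np.linalg.norm(Y - D @ X)                # never larger than err_before
```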
- The hyperspectral image sample set used in this application may include hyperspectral images of multiple different scenes, including but not limited to urban, suburban, agricultural, animal and plant, and indoor landscapes, so as to cover the areas to which the reconstruction will be applied.
- S103 Perform image fusion on the hyperspectral image of the first region and the image of the second region to obtain a reconstructed hyperspectral image of the target.
- In a possible embodiment, if a waveband overlapping area appears in the hyperspectral image of the first area, the method further includes: obtaining the target area corresponding to the waveband overlapping area in the target field of view, where the waveband overlapping area includes areas where the edges of the light wavebands overlap; using the second filter to take a supplementary shot of the target area to obtain one or more multispectral images of the target area; performing spatial resolution restoration on the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area; and performing image fusion on the one or more hyperspectral images of the target area, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
- In other words, the hybrid hyperspectral reconstruction method processes the second region of the image sensor into super pixels for hyperspectral image acquisition, which can directly obtain accurate hyperspectral images and, for regions where light wavebands overlap, shoot the hyperspectral image directly to improve the reconstruction accuracy of the hyperspectral image.
- The second area may be an overlapping area between light wavebands calculated in advance, or a waveband overlapping area determined after shooting with the first area and reconstructing the hyperspectral image. It should be noted that if the image is unclear due to drone vibration, lighting angle, or other external conditions during shooting, the second filter can also be used for a supplementary shot, so as to maximize the accuracy of the final reconstructed hyperspectral image.
- FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral reconstruction method provided by the present application.
- First, the sensor used for shooting may be a CMOS image sensor array including the F-P cavity thin film array shown in Figure 3; the image to be reconstructed is then captured, in which the first area image is the RGB image.
- Next, the second area image, that is, the multispectral image obtained through the second filter, is captured, and its spatial resolution is restored to obtain the hyperspectral image of the second area.
- Finally, the hyperspectral image of the first area, the one or more hyperspectral images of the target area, and the hyperspectral image of the second area are fused using an image fusion algorithm such as principal component analysis (PCA) to obtain a reconstructed high-precision hyperspectral image.
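One classical form of PCA fusion substitutes a high-detail image for the first principal component of a cube and transforms back. The sketch below is only an illustrative stand-in for the fusion step (shapes, data, and the substitution scheme are assumptions), not the patent's exact algorithm:

```python
import numpy as np

def pca_fusion(cube, detail):
    """Illustrative PCA substitution fusion: project the cube's pixels onto
    principal components, swap the first component for a high-detail image
    (matched in mean and variance), and invert the projection."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    mean = flat.mean(axis=0)
    centered = flat - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    pcs = centered @ Vt.T                       # principal component scores
    d = detail.reshape(-1).astype(float)
    d = (d - d.mean()) / (d.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = d                               # substitute the first component
    return (pcs @ Vt + mean).reshape(h, w, b)

rng = np.random.default_rng(4)
cube = rng.random((8, 8, 31))    # reconstructed hyperspectral region (toy data)
detail = rng.random((8, 8))      # high-detail band from a directly captured region
fused = pca_fusion(cube, detail)
```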
- If, after the hyperspectral reconstruction of step S102, the RGB image taken through the first filter shows no waveband overlapping area at the edges of the light wavebands, there is no need to use the second filter for a supplementary hyperspectral shot; the hyperspectral image of the first area and the hyperspectral image of the second area can be fused directly to obtain the reconstructed high-precision hyperspectral image.
- the method further includes: switching the first filter used for shooting the target area to the second filter.
- The initial distribution of the second filter may be at the centers of the four quadrants of the image sensor, and the super-pixel distribution of the second filter can also be adjusted.
- In other words, in the image sensor used by the hybrid hyperspectral image reconstruction method provided in this application, the number of optical signal channels of the super pixels used for multispectral shooting, as well as the number and specific distribution of the super pixels, can be changed according to the needs of the actual application field.
- In summary, the image of the target to be reconstructed is obtained by capturing the target field of view with the image sensor; the image of the first area is then reconstructed into a hyperspectral image of the first area, and image fusion is performed on the hyperspectral image of the first area and the second area image to obtain the reconstructed target hyperspectral image.
- the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
- FIG. 5 is a schematic structural diagram of a hybrid hyperspectral image reconstruction system provided by this application.
- The hybrid hyperspectral image reconstruction system provided by this application includes an acquisition unit 510, a reconstruction unit 520, a fusion unit 530, a training unit 540, a supplementary shooting unit 550, and a switching unit 560.
- the acquiring unit 510 is configured to use an image sensor to photograph the target field of view to acquire the target image to be reconstructed.
- the image to be reconstructed is divided into a first area image and a second area image, the first area image is an RGB image, and the second area image is a hyperspectral image.
- the image sensor includes a first filter and a second filter, the first filter is used to obtain a first area image, and the second filter is used to obtain the second area image.
- a target image to be reconstructed can be obtained.
- part of the target image to be reconstructed is an RGB image, which is the first area image, and part is a hyperspectral image, which is the second area image.
- the first area image is an image obtained by the first filter of the image sensor
- the second area image is an image taken by the image sensor using the second filter.
- the area of the first region captured by the first filter is much larger than the area of the second region captured by the second filter. This is because capturing hyperspectral images with an image sensor normally requires additional, expensive spectroscopic and filtering devices. Directly shooting the target field of view with such an externally equipped spectrum-acquisition image sensor, which is often mounted on a drone, is costly and poorly portable. Therefore, the solution provided by this application mainly uses the first filter, which collects the RGB image, to capture the target field of view. The image sensor can thus obtain high-precision hyperspectral images without additional external devices, which greatly reduces production cost and provides good portability.
- the second area image is a hyperspectral image obtained by the image sensor capturing a multispectral image of the second area through the second filter, and then performing a spatial resolution restoration operation on that multispectral image.
- the first area image is captured directly using the image sensor and the first filter. The second filter, however, can only capture multispectral images, while the second area image is a hyperspectral image. Therefore, the image sensor can shoot the target field of view through the second filter to obtain a multispectral image, and then perform a spatial resolution restoration operation to obtain the second area image.
- a multispectral image can actually be regarded as a special case of a hyperspectral image in which the number of imaged bands is smaller, generally only a few to a dozen. Since spectral information corresponds to color information, multispectral images or multi-band remote sensing images can capture the color information of ground objects, but the spatial resolution is low. Therefore, after the second filter obtains a multispectral image of the target field of view, a spatial resolution restoration operation can be used to obtain a hyperspectral image. It is understandable that using the second filter to collect multispectral data also effectively reduces the computational cost of reconstructing hyperspectral images from RGB, and shortens the reconstruction time to a certain extent.
- the second filter includes a plurality of super pixels, where each super pixel of the plurality of super pixels is a set of square pixels formed by the arrangement and combination of a plurality of single-wavelength pixel pieces, and each super pixel is located at the center of one of the four quadrants of the imaging area of the image sensor. It should be understood that each super pixel is used to directly obtain a multispectral image, and after the spatial resolution of the multispectral image is restored, a hyperspectral image can be obtained. Although using the second filter to collect multispectral data also effectively reduces the computational cost of reconstructing hyperspectral images from RGB, the cost of the first filter, which directly captures RGB images, is much lower than that of the second filter, which captures multispectral images.
- FIG. 2 is a schematic diagram of super pixel distribution in an imaging area of an image sensor provided by the present application.
- the colorless part in the figure is the area covered by the first filter
- the dark part is the area covered by the second filter
- the total number of super pixels included in the second filter is 4 × 6 = 24. They are respectively located at the centers of the four quadrants in the imaging area of the image sensor.
- the center of each quadrant contains 6 super pixels
- each super pixel consists of 3 × 3 single-wavelength pixels arranged and combined into a square pixel set.
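As a concrete illustration of how such a super pixel yields multispectral data, the sketch below demultiplexes a raw frame captured behind a tiling of 3 × 3 super pixels into 9 single-wavelength channel images, and then applies a naive nearest-neighbour upsampling as a stand-in for the spatial resolution restoration step. The channel layout (row-major within each super pixel) and the restoration method are assumptions for illustration; the application does not fix either.

```python
import numpy as np

def demultiplex_superpixels(raw, p=3):
    """Split a raw mosaic frame into p*p single-wavelength channel images.

    raw: (H, W) frame captured behind a tiling of p x p super pixels,
    where pixel (i, j) is assumed to belong to spectral channel
    (i % p) * p + (j % p).  Returns an (H//p, W//p, p*p) multispectral cube.
    """
    H, W = raw.shape
    h, w = H // p, W // p
    cube = np.empty((h, w, p * p), dtype=raw.dtype)
    for di in range(p):
        for dj in range(p):
            cube[:, :, di * p + dj] = raw[di:h * p:p, dj:w * p:p]
    return cube

def restore_spatial_resolution(cube, p=3):
    """Naive spatial resolution restoration: nearest-neighbour upsampling
    of each channel back to the raw frame size (a stand-in for more
    sophisticated restoration methods)."""
    return np.repeat(np.repeat(cube, p, axis=0), p, axis=1)

# Example: a 9x9 raw frame -> 3x3x9 multispectral cube -> 9x9x9 restored cube
raw = np.arange(81, dtype=float).reshape(9, 9)
cube = demultiplex_superpixels(raw)
restored = restore_spatial_resolution(cube)
```

In practice the restoration step would use a stronger spatial prior than plain upsampling, but the data layout above is the part the super pixel geometry dictates.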
- the first filter is an RGB dye filter
- the second filter is an FP cavity thin film array prepared by any one of deposition, patterning, and etching methods
- the second filter is a reflective multi-channel filter formed on the CMOS sensor array using photonic crystals. It should be understood that the second filter is used to take multispectral pictures, that is, it has the characteristic of collecting optical signals of different wavelength bands; the FP cavity is a device that uses the phenomenon of multi-beam interference to perform light splitting.
- the structure of the FP cavity at the front end of the sensor can be changed (including but not limited to the cavity length, the refractive index of the cavity medium, and the mirror material) to change the passband of the second filter, forming super pixels usable for multispectral shooting.
- for example, the structure of a super pixel formed from an FP cavity thin film array is shown in Figure 3, which is a side view; only 5 of the single-wavelength pixels of the super pixel are shown.
- Each single-wavelength pixel includes a bottom-layer mirror 301, a transparent medium 302, and a top-layer mirror 303.
- the cavity length of each single-wavelength pixel is different. The following takes red light (wavelength 630 nm), green light (wavelength 550 nm) and blue light (wavelength 440 nm) as an example to illustrate the calculation of the cavity length. It should be understood that the optical cavity length of the FP cavity film array shown in FIG. 3 satisfies the resonance condition of formula (1), 2nd = Nλ, where n is the refractive index of the transparent medium 302, d is the cavity length of the transparent medium 302, λ is the wavelength of the incident light, and N is a positive integer. A possible cavity length combination for the red, green, and blue channels is therefore 201 nm, 179 nm and 143 nm. According to formula (1) and the wavelengths of the different optical bands, the corresponding dielectric cavity lengths can be calculated respectively, and FP cavity thin film arrays can be fabricated to form super pixels.
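Under the resonance condition of formula (1), 2nd = Nλ, the cavity length follows as d = Nλ/(2n). The sketch below computes first-order cavity lengths; the per-wavelength refractive indices are hypothetical values chosen so that the results round to the 201 nm / 179 nm / 143 nm combination given above.

```python
def fp_cavity_length(wavelength_nm, n, order=1):
    """Cavity length d satisfying the FP resonance condition 2*n*d = N*lambda."""
    return order * wavelength_nm / (2.0 * n)

# Assumed refractive indices of the transparent medium at each wavelength
# (illustrative only; with these values the first-order cavity lengths
# round to the 201 nm / 179 nm / 143 nm combination from the text).
channels = {"red": (630, 1.567), "green": (550, 1.536), "blue": (440, 1.538)}
lengths = {name: round(fp_cavity_length(lam, n))
           for name, (lam, n) in channels.items()}
```

Higher resonance orders (N = 2, 3, ...) give proportionally longer cavities that pass the same wavelength, which is one degree of freedom when fabricating the thin film array.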
- the super pixels of the image sensor can also use photonic crystals to form a reflective multi-channel filter on the CMOS sensor array, replacing the FP cavity film array for wavelength-band selection; this application will not repeat the details.
- the reconstruction unit 520 is used to reconstruct the first region image into a hyperspectral image of the first region.
- the RGB image dictionary D RGB is obtained according to the hyperspectral image dictionary D h and the spectral response function S of the image sensor;
- the weight coefficient matrix X reflects the mapping relationship between the output hyperspectral image Y h and the hyperspectral image dictionary D h.
- the orthogonal matching pursuit (OMP) algorithm can be used to calculate the corresponding weight coefficient matrix X according to the input RGB image and formula (2), where k represents the number of rows of the matrix and l the number of columns. D h and D RGB have the definite mapping relationship shown in formula (3), D RGB = SD h, where S is the spectral response function of the image sensor. The spectral response function S can be obtained from the manufacturer of the RGB image sensor or by direct measurement; its physical meaning is the response of the corresponding R, G, or B channel to the signal of a hyperspectral channel.
- the weight coefficient matrix X obtained with the RGB image dictionary D RGB can also be used with the hyperspectral dictionary D h.
- the hyperspectral image Y h of the input RGB image is reconstructed by a linear combination of atoms in the hyperspectral dictionary. It should be understood that the above calculation process is an example: this application uses the OMP method to calculate the weight coefficient matrix X from the input RGB image, but other methods can also be used to obtain reconstructed hyperspectral images, such as deep learning neural network methods that use existing RGB image libraries and corresponding hyperspectral image libraries to train a neural network mapping input RGB images to output hyperspectral images.
- the reconstruction model of the output hyperspectral image is not specifically limited in this application.
- the system further includes a training unit 540 configured to, before the image sensor captures the target field of view to obtain the target image to be reconstructed, initialize the weight coefficient matrix X to 0 and train the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D h.
- the hyperspectral image dictionary D h used may be an existing, pre-built hyperspectral image dictionary, or an over-complete hyperspectral image dictionary D h obtained after iterative updates of the classic K-SVD dictionary learning algorithm. Specifically, given training data Y, that is, an existing hyperspectral image database, the Y matrix is factorized as Y ≈ D*X, where D is called the dictionary, each column of D is called an atom, and X is called the coefficient matrix.
- in step S102, after obtaining the weight coefficient matrix X according to formula (2) from the input first area image, that is, the RGB image, the reconstructed hyperspectral image of the first area can be obtained according to formula (4), Y h = D h X.
- the foregoing calculation process is only used for illustration, and this application does not specifically limit it.
- the sparsest solution can also be found using the L0 norm constraint under the sparsity prior, for example with greedy algorithms such as the MP and OMP algorithms; other calculation methods will not be repeated in this application.
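A minimal per-pixel sketch of the dictionary pipeline of formulas (2) to (4) is given below, using a random synthetic dictionary and response function (both assumptions; a real system would use a K-SVD-trained dictionary and a measured S). With only three RGB measurements per pixel, exact recovery of the weights is not guaranteed, which is why practical methods rely on large learned dictionaries and sparsity priors; the example only verifies that OMP fits the RGB observation.

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms and
    refit all selected coefficients by least squares at every step."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if j not in support:
            support.append(j)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
bands, atoms = 31, 100                      # illustrative sizes
D_h = rng.standard_normal((bands, atoms))   # hyperspectral dictionary (formula (4))
S = np.abs(rng.standard_normal((3, bands))) # assumed spectral response function
D_rgb = S @ D_h                             # formula (3): D_RGB = S * D_h

x_true = np.zeros(atoms)
x_true[[5, 42, 77]] = [1.0, -0.5, 2.0]      # sparse ground-truth weights
y_rgb = D_rgb @ x_true                      # observed RGB pixel (formula (2))

x = omp(D_rgb, y_rgb, sparsity=3)           # weight coefficients X
y_h = D_h @ x                               # reconstructed hyperspectral pixel
```

The same weights x are applied to both dictionaries, which is exactly the mapping that formula (3) preserves between the RGB and hyperspectral domains.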
- the hyperspectral image sample set used in this application may include hyperspectral images of multiple different scenes, including but not limited to urban, suburban, agricultural, animal and plant landscapes, and indoor landscapes.
- the fusion unit 530 is configured to perform image fusion between the hyperspectral image of the first region and the image of the second region to obtain a reconstructed hyperspectral image of the target.
- the system further includes a supplementary photographing unit 550, which is used, in the case where the hyperspectral image of the first area has a spectral band overlapping area in the target area of the first area, to obtain the target area corresponding to the band overlapping area in the target field of view, where the band overlapping area includes the areas where the edges of the individual light bands overlap. The supplementary photographing unit 550 is configured to use the second filter to re-photograph the target area of the first area to obtain one or more multispectral images of the target area, and to restore the spatial resolution of the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area.
- the fusion unit is also used to perform image fusion on the hyperspectral images of the one or more target areas, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
- when analyzing the loss function of the method of reconstructing an RGB image into a hyperspectral image using an image dictionary, it can be found that the reconstruction quality degrades to a certain degree in the edge area of each RGB band. The reason is that the overlap between RGB bands causes mapping errors. In other words, the band overlapping area refers to the overlapping area between the individual bands.
- the hybrid hyperspectral reconstruction method can process the second region of the image sensor into super pixels usable for hyperspectral image acquisition, which can directly obtain accurate hyperspectral images; for the determined overlapping regions of light bands, the hyperspectral image is taken directly to improve the reconstruction accuracy of the hyperspectral image.
- the second area may be an overlapping area between the light wavebands calculated in advance, or may be a band overlapping area determined after using the first area to shoot and reconstruct a hyperspectral image. It should be noted that if the image is not clear due to drone vibration, lighting angle, or other external conditions during shooting, the second filter can also be used to take a supplementary shot, so as to maximize the accuracy of the final hyperspectral image reconstruction.
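One simple way to locate band overlapping areas in advance is to inspect the spectral response function S: a spectral channel lies in an overlap region when more than one of the R/G/B response curves is significant there. The sketch below uses hypothetical Gaussian response curves and an assumed threshold; the application does not prescribe this particular criterion.

```python
import numpy as np

def band_overlap_mask(S, threshold=0.1):
    """Flag spectral channels where at least two of the R/G/B response
    curves are simultaneously above `threshold` (relative to each curve's
    peak) -- a simple proxy for the band overlapping areas."""
    rel = S / S.max(axis=1, keepdims=True)  # normalize each curve to peak 1
    return (rel > threshold).sum(axis=0) >= 2

# Hypothetical Gaussian R/G/B response curves over 31 spectral channels
wl = np.linspace(400, 700, 31)
centers, width = [460, 540, 620], 40.0
S = np.exp(-((wl[None, :] - np.array(centers)[:, None]) / width) ** 2)
overlap = band_overlap_mask(S, threshold=0.3)
```

Pixels whose spectra fall mainly inside the flagged channels are candidates for supplementary shooting with the second filter rather than dictionary-based reconstruction.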
- FIG. 4 is a schematic diagram of the shooting process of a hybrid hyperspectral image reconstruction method provided by the present application.
- the sensor used for shooting may be a CMOS image sensor array including an FP cavity thin film array as shown in Figure 3. The image to be reconstructed is then captured: the first area image is the RGB image, and the second area image, that is, the multispectral image obtained with the second filter, undergoes spatial resolution restoration to obtain the second area hyperspectral image.
- the first area hyperspectral image, the one or more hyperspectral images of the target area, and the second area hyperspectral image are fused using image fusion algorithms such as principal component analysis (PCA) to obtain a reconstructed high-precision hyperspectral image.
- if the RGB image taken by the first filter, after the hyperspectral reconstruction of step S102, has no band overlapping area at the edges of the light bands, there is no need to use the second filter for a supplementary hyperspectral shot.
- the hyperspectral image in the first region and the hyperspectral image in the second region can be directly used for image fusion operations, and then a reconstructed high-precision hyperspectral image can be obtained.
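As a minimal sketch of the fusion step (not the application's exact PCA-based method), the reconstructed first-region cube and the directly measured second-region cube can be combined by mask-based mosaicking, preferring directly measured hyperspectral pixels wherever the second filter covered the scene. The mask layout and the values used are illustrative.

```python
import numpy as np

def fuse_regions(hs_first, hs_second, second_mask):
    """Combine the hyperspectral cube reconstructed from the RGB (first)
    region with the cube measured through the second filter, preferring
    the directly measured pixels wherever `second_mask` is True."""
    fused = hs_first.copy()
    fused[second_mask] = hs_second[second_mask]
    return fused

H, W, B = 8, 8, 31
hs_first = np.ones((H, W, B))           # cube reconstructed from the RGB image
hs_second = 2.0 * np.ones((H, W, B))    # cube measured via the second filter
mask = np.zeros((H, W), dtype=bool)
mask[3:5, 3:5] = True                   # illustrative super pixel footprint
fused = fuse_regions(hs_first, hs_second, mask)
```

A PCA-style fusion would additionally blend spectral detail across the region boundary instead of switching hard at the mask edge, but the bookkeeping of which pixels come from which filter is the same.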
- the system further includes a switching unit 560, which is configured to, after obtaining the reconstructed hyperspectral image of the target, switch the first filter used for shooting the target area to the second filter.
- that is to say, the initial distribution of the second filter can be at the centers of the four quadrants of the image sensor.
- the super pixel distribution of the second filter can also be adjusted.
- in the image sensor used by the hybrid hyperspectral image reconstruction method provided in this application, the super pixels used to capture multispectral images can vary in the number of optical signal channels, the number of super pixels, and their specific distribution according to the needs of the actual application field.
- the image of the target to be reconstructed is obtained by capturing the target field of view with the image sensor; the first area image is then reconstructed into the hyperspectral image of the first area, and the hyperspectral image of the first area is fused with the second area image to obtain the reconstructed hyperspectral image of the target.
- the image sensor can shoot high-precision hyperspectral images without adding additional spectroscopic devices or filtering devices, and has the advantages of good portability and low cost.
- FIG. 6 is a schematic block diagram of the structure of an electronic device provided by an embodiment of the present application.
- the electronic device in this embodiment may include: one or more processors 601; one or more input devices 602, one or more output devices 603, and a memory 604.
- the aforementioned processor 601, input device 602, output device 603, and memory 604 are connected through a bus 605.
- the memory 604 is used to store a computer program including program instructions, and the processor 601 is configured to execute the program instructions stored in the memory 604.
- the processor 601 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the input device 602 may include a touch panel, a fingerprint sensor (used to collect user fingerprint information and fingerprint orientation information), a microphone, etc.
- the output device 603 may include a display (LCD, etc.), a speaker, etc.
- the memory 604 may include volatile memory, such as random access memory (RAM); the memory may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the foregoing types of memory.
- the memory 604 may adopt centralized storage or distributed storage, which is not specifically limited here. It can be understood that the memory 604 is used to store computer programs, such as computer program instructions. In the embodiment of the present application, the memory 604 may provide instructions and data to the processor 601.
- the processor 601, input device 602, output device 603, memory 604, and bus 605 described in the embodiments of the present application can execute the implementation described in any embodiment of the hybrid hyperspectral image reconstruction method provided in the present application, which will not be repeated here.
- a computer-readable storage medium stores a computer program.
- the computer program includes program instructions which, when executed by a processor, implement the method described in any embodiment of the hybrid hyperspectral image reconstruction method provided in this application; this will not be repeated here.
- the computer-readable storage medium may be the internal storage unit of the terminal described in any of the foregoing embodiments, such as the hard disk or memory of the terminal.
- the computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk equipped on the terminal, a Smart Media Card (SMC), or a Secure Digital (SD) card , Flash Card, etc.
- the computer-readable storage medium may also include both an internal storage unit of the terminal and an external storage device.
- the computer-readable storage medium is used to store the computer program and other programs and data required by the terminal.
- the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Spectrometry And Color Measurement (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
Description
Claims (10)
- A hybrid hyperspectral image reconstruction method, characterized in that the method comprises: an image sensor photographing a target field of view to obtain a target image to be reconstructed, wherein the target image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image; reconstructing the first area image into a hyperspectral image of the first area; and performing image fusion on the hyperspectral image of the first area and the second area image to obtain a reconstructed target hyperspectral image.
- The method according to claim 1, characterized in that the image sensor comprises a first filter and a second filter, the first filter being used to obtain the first area image and the second filter being used to obtain the second area image.
- The method according to claim 2, characterized in that the second area image is a hyperspectral image obtained by performing a spatial resolution restoration operation on a multispectral image of the second area after the image sensor captures the multispectral image using the second filter.
- The method according to claim 1, characterized in that, when a band overlapping area appears in the hyperspectral image of the first area, the method further comprises: obtaining a target area corresponding to the band overlapping area in the target field of view, wherein the band overlapping area includes areas where the edges of the individual light bands overlap; using the second filter to take a supplementary shot of the target area to obtain one or more multispectral images of the target area; performing spatial resolution restoration on the one or more multispectral images of the target area to obtain one or more hyperspectral images of the target area; and performing image fusion on the one or more hyperspectral images of the target area, the hyperspectral image of the first area, and the second area image to obtain the reconstructed target hyperspectral image.
- The method according to claim 1, characterized in that reconstructing the first area image into a hyperspectral image of the first area comprises: inputting the first area image into an RGB reconstruction model Y RGB = D RGB X and obtaining the weight coefficient matrix X according to the RGB image dictionary D RGB, where Y RGB is the output RGB image, D RGB is the RGB image dictionary, and X is the weight coefficient matrix, the weight coefficient matrix X reflecting the mapping relationship between the output RGB image Y RGB and the RGB image dictionary D RGB, and the RGB image dictionary D RGB being obtained from the hyperspectral image dictionary D h and the spectral response function S of the image sensor; and inputting the weight coefficient matrix X into a hyperspectral image reconstruction model Y h = D h X and obtaining the hyperspectral image of the first area according to the hyperspectral image dictionary D h, where Y h is the output hyperspectral image, D h is the hyperspectral image dictionary, and X is the weight coefficient matrix, the weight coefficient matrix X reflecting the mapping relationship between the output hyperspectral image Y h and the hyperspectral image dictionary D h, and the hyperspectral image dictionary D h being obtained, before the image to be reconstructed is acquired, by initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set.
- The method according to claim 5, characterized in that, before the image sensor photographs the target field of view to obtain the target image to be reconstructed, the method further comprises: initializing the weight coefficient matrix X to 0 and training the hyperspectral image reconstruction model with a hyperspectral image sample set to obtain the hyperspectral image dictionary D h; and obtaining the RGB image dictionary D RGB from the hyperspectral image dictionary D h and the spectral response function S of the image sensor, where D RGB = SD h.
- The method according to claim 3, characterized in that the second filter comprises a plurality of super pixels, wherein each super pixel of the plurality of super pixels is a set of square pixels formed by the arrangement and combination of a plurality of single-wavelength pixel pieces, and each super pixel of the plurality of super pixels is located at the center of one of the four quadrants of the imaging area of the image sensor.
- The method according to claim 7, characterized in that the image sensor is based on semiconductor thin film technology, wherein the first filter is an RGB dye filter, and the second filter is an F-P cavity thin film array prepared by any one of deposition, patterning, and etching methods; or the second filter is a reflective multi-channel filter formed on a CMOS sensor array using photonic crystals.
- The method according to claim 4, characterized in that, after obtaining the reconstructed target hyperspectral image, the method further comprises: switching the first filter used for photographing the target area to the second filter.
- A hybrid hyperspectral image reconstruction system, characterized in that the system comprises: an acquisition unit configured to use an image sensor to photograph a target field of view to obtain a target image to be reconstructed, wherein the image to be reconstructed is divided into a first area image and a second area image, the first area image being an RGB image and the second area image being a hyperspectral image; a reconstruction unit configured to reconstruct the first area image into a hyperspectral image of the first area; and a fusion unit configured to perform image fusion on the hyperspectral image of the first area and the second area image to obtain a reconstructed target hyperspectral image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980005542.9A CN111386549B (zh) | 2019-04-04 | 2019-04-04 | 一种混合型高光谱图像重构的方法及系统 |
PCT/CN2019/081550 WO2020199205A1 (zh) | 2019-04-04 | 2019-04-04 | 一种混合型高光谱图像重构的方法及系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/081550 WO2020199205A1 (zh) | 2019-04-04 | 2019-04-04 | 一种混合型高光谱图像重构的方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020199205A1 true WO2020199205A1 (zh) | 2020-10-08 |
Family
ID=71219149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/081550 WO2020199205A1 (zh) | 2019-04-04 | 2019-04-04 | 一种混合型高光谱图像重构的方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111386549B (zh) |
WO (1) | WO2020199205A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112561883A (zh) * | 2020-12-17 | 2021-03-26 | 成都亚讯星科科技股份有限公司 | 农作物rgb图像重建高光谱图像的方法 |
CN112766102A (zh) * | 2021-01-07 | 2021-05-07 | 武汉大学 | 一种基于空谱特征融合的无监督高光谱视频目标跟踪方法 |
CN113554578A (zh) * | 2021-07-23 | 2021-10-26 | 奥比中光科技集团股份有限公司 | 一种光谱图像的确定方法、装置、终端和存储介质 |
CN114332607A (zh) * | 2021-12-17 | 2022-04-12 | 清华大学 | 针对多帧图像光谱字典构建的增量学习方法和系统 |
EP4181509A4 (en) * | 2020-07-27 | 2023-08-09 | Huawei Technologies Co., Ltd. | FILTER MATRIX, MOBILE TERMINAL, AND DEVICE |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765871A (zh) * | 2019-09-19 | 2020-02-07 | 北京航空航天大学 | 一种基于字典表示的高光谱图像波段质量分析方法 |
CN114004960B (zh) * | 2021-11-17 | 2024-06-18 | 湖南大学 | 一种医药检测的高光谱双模成像系统及方法 |
CN116939383A (zh) * | 2022-04-08 | 2023-10-24 | 华为技术有限公司 | 图像传感器、成像模组、图像采集设备和图像处理方法 |
CN116071237B (zh) * | 2023-03-01 | 2023-06-20 | 湖南大学 | 基于滤光片采样融合的视频高光谱成像方法、系统及介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105282506A (zh) * | 2015-10-16 | 2016-01-27 | 浙江工业大学 | 基于物联网的全色-多光谱图像融合视频监控方法及其监控装置 |
WO2018047171A1 (en) * | 2016-09-06 | 2018-03-15 | B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University | Recovery of hyperspectral data from image |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103247034B (zh) * | 2013-05-08 | 2016-01-20 | 中国科学院光电研究院 | 一种基于稀疏光谱字典的压缩感知高光谱图像重构方法 |
CN106170052B (zh) * | 2015-05-22 | 2020-11-06 | 微软技术许可有限责任公司 | 双传感器超光谱运动成像系统 |
CN105227867A (zh) * | 2015-09-14 | 2016-01-06 | 联想(北京)有限公司 | 一种图像处理方法及电子设备 |
CN107707831A (zh) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | 图像处理方法和装置、电子装置和计算机可读存储介质 |
-
2019
- 2019-04-04 WO PCT/CN2019/081550 patent/WO2020199205A1/zh active Application Filing
- 2019-04-04 CN CN201980005542.9A patent/CN111386549B/zh active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105282506A (zh) * | 2015-10-16 | 2016-01-27 | 浙江工业大学 | 基于物联网的全色-多光谱图像融合视频监控方法及其监控装置 |
WO2018047171A1 (en) * | 2016-09-06 | 2018-03-15 | B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University | Recovery of hyperspectral data from image |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4181509A4 (en) * | 2020-07-27 | 2023-08-09 | Huawei Technologies Co., Ltd. | FILTER MATRIX, MOBILE TERMINAL, AND DEVICE |
CN112561883A (zh) * | 2020-12-17 | 2021-03-26 | 成都亚讯星科科技股份有限公司 | 农作物rgb图像重建高光谱图像的方法 |
CN112766102A (zh) * | 2021-01-07 | 2021-05-07 | 武汉大学 | 一种基于空谱特征融合的无监督高光谱视频目标跟踪方法 |
CN112766102B (zh) * | 2021-01-07 | 2024-04-26 | 武汉大学 | 一种基于空谱特征融合的无监督高光谱视频目标跟踪方法 |
CN113554578A (zh) * | 2021-07-23 | 2021-10-26 | 奥比中光科技集团股份有限公司 | 一种光谱图像的确定方法、装置、终端和存储介质 |
CN113554578B (zh) * | 2021-07-23 | 2024-05-31 | 奥比中光科技集团股份有限公司 | 一种光谱图像的确定方法、装置、终端和存储介质 |
CN114332607A (zh) * | 2021-12-17 | 2022-04-12 | 清华大学 | 针对多帧图像光谱字典构建的增量学习方法和系统 |
CN114332607B (zh) * | 2021-12-17 | 2024-06-11 | 清华大学 | 针对多帧图像光谱字典构建的增量学习方法和系统 |
Also Published As
Publication number | Publication date |
---|---|
CN111386549B (zh) | 2023-10-13 |
CN111386549A (zh) | 2020-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020199205A1 (zh) | 一种混合型高光谱图像重构的方法及系统 | |
US10989595B2 (en) | Hybrid spectral imager | |
Nie et al. | Deeply learned filter response functions for hyperspectral reconstruction | |
Xiong et al. | Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections | |
US10274420B2 (en) | Compact multifunctional system for imaging spectroscopy | |
CN106896069B (zh) | 一种基于彩色数码相机单幅rgb图像的光谱重建方法 | |
Kelcey et al. | Sensor correction and radiometric calibration of a 6-band multispectral imaging sensor for UAV remote sensing | |
Rabatel et al. | Getting NDVI spectral bands from a single standard RGB digital camera: a methodological approach | |
KR102139858B1 (ko) | 프리즘을 이용한 초분광 영상 재구성 방법 및 시스템 | |
JP2008518229A (ja) | マルチスペクトル及びハイパースペクトル撮像を行うシステム | |
CN104457708A (zh) | 一种紧凑型多光谱相机 | |
US20090180115A1 (en) | Single-lens computed tomography imaging spectrometer and method of capturing spatial and spectral information | |
de Oliveira et al. | Geometric calibration of a hyperspectral frame camera | |
Collings et al. | Empirical models for radiometric calibration of digital aerial frame mosaics | |
US11092489B2 (en) | Wide-angle computational imaging spectroscopy method and apparatus | |
Wang et al. | A novel low rank smooth flat-field correction algorithm for hyperspectral microscopy imaging | |
CN107170013B (zh) | 一种rgb相机光谱响应曲线的标定方法 | |
KR20180137842A (ko) | 농작물 작황분석을 위한 분광 카메라 시스템 제작 방법 | |
CN110647781B (zh) | 一种基于谱图融合的农作物生长信息获取方法及装置 | |
CN109827658B (zh) | 面向绿色植被检测的凝视型光谱芯片结构及其制备方法 | |
JP7355008B2 (ja) | 分光計測装置、および分光計測方法 | |
CN108051087A (zh) | 一种针对快速成像的八通道多光谱相机设计方法 | |
KR102362278B1 (ko) | 다중 공진 모드를 가지는 가변 분광 필터를 포함하는 분광 장치, 그리고 이의 분광 정보 획득 방법 | |
KR20180137795A (ko) | 하이퍼스펙트럼 이미지 장치 | |
Soszyńska et al. | Feasibility study of hyperspectral line-scanning camera imagery for remote sensing purposes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19923693 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19923693 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.03.2022) |
|