WO2024027095A1 - Hyperspectral imaging method and system based on double rgb image fusion, and medium - Google Patents

Hyperspectral imaging method and system based on double RGB image fusion, and medium

Info

Publication number
WO2024027095A1
Authority
WO
WIPO (PCT)
Prior art keywords
hyperspectral
image
spectral
rgb
hyperspectral image
Prior art date
Application number
PCT/CN2022/143123
Other languages
French (fr)
Chinese (zh)
Inventor
李树涛
佃仁伟
刘海波
郭安静
段普宏
Original Assignee
湖南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖南大学
Publication of WO2024027095A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 - Investigating the spectrum
    • G01J3/2823 - Imaging spectrometer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 - Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Definitions

  • the invention relates to synthetic imaging technology of hyperspectral fusion images, and specifically relates to a hyperspectral imaging method, system and medium based on dual RGB image fusion.
  • Hyperspectral images have dozens or hundreds of spectral bands, covering everything from visible light bands to short-wave infrared bands. They are rich in spectral information and play a significant role in face recognition, medical diagnosis, military detection, etc.
  • there are three main types of hyperspectral imagers on the market: spectral-scanning, swing-scanning, and push-broom; due to the limitations of optical imaging hardware, their scanning speed is slow, and it is difficult to directly obtain high-resolution hyperspectral images.
  • hyperspectral image acquisition equipment is relatively expensive, which greatly limits the application of hyperspectral images. Existing imaging equipment can quickly obtain high spatial resolution RGB images and the cost of RGB cameras is low.
  • Obtaining high-resolution hyperspectral images through dual RGB hyperspectral fusion imaging is a feasible method.
  • This technology utilizes complementary sampling of characteristic spectra, breaks through the limitations of a single imaging sensor, significantly improves the application value of hyperspectral images, and has huge application potential.
  • There are currently two popular methods for acquiring hyperspectral images: one is the fusion imaging method, and the other is the RGB image super-resolution method.
  • Fusion imaging methods mainly fuse low-spatial-resolution hyperspectral images with high-spatial-resolution multispectral images.
  • in practice, however, low-spatial-resolution hyperspectral images are also difficult to obtain, which limits the practical applicability of this method.
  • the RGB image super-resolution method obtains hyperspectral images directly from RGB images, but the quality of the hyperspectral images obtained by this method is not as good as that of the fusion imaging method.
  • a hyperspectral imaging method, system and medium based on dual RGB image fusion are provided.
  • the present invention can fuse high-spatial-resolution RGB images obtained from different sensors into a hyperspectral image with high spatial resolution, and has the advantages of high imaging accuracy, high resolution, fast fusion imaging speed, and low cost.
  • the technical solution adopted by the present invention is:
  • a hyperspectral imaging method based on dual RGB image fusion, including: for the fused RGB images R_1 and R_2 from different physical cameras, extracting shallow features through spectral-channel upsampling respectively, stacking the shallow features in the channel dimension, and then downsampling to remove redundant information, obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H:
  • H_0 = Conv_{1×1}(CAT[Conv_{3×3} R_1, Conv_{3×3} R_2]),
  • where Conv_{1×1} denotes downsampling through a two-dimensional convolution with a 1×1 kernel, Conv_{3×3} denotes upsampling the spectral channels through a two-dimensional convolution with a 3×3 kernel to extract shallow features, and CAT denotes channel-dimension stacking;
  • then iteratively solving the hyperspectral image H based on the spatial-spectral shallow feature H_0, with the iterative update:
  • H_{k+1} = H_k - α_k S_1^T(S_1 H_k - R_1) - β_k S_2^T(S_2 H_k - R_2),
  • where H_{k+1} is the hyperspectral image obtained at the (k+1)-th iteration, H_k is the hyperspectral image obtained at the k-th iteration, α_k and β_k are the penalty factors updated at the k-th iteration and are both learnable parameters, S_1 is the spectral response function of the camera that collects the RGB image R_1, and S_2 is the spectral response function of the camera that collects the RGB image R_2.
  • before step S2, the method further includes a step of predetermining the functional expression for iteratively solving the hyperspectral image H:
  • A1: establish the basic mapping relationship between a hyperspectral image and an RGB image, R = SH + N, where R represents the RGB image, S represents the spectral response function of the camera that collects the RGB image R, H represents the hyperspectral image, and N represents the noise of the RGB image;
  • A2: according to this basic mapping relationship, obtain the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras:
  • R_1 = S_1 H + N_1,
  • R_2 = S_2 H + N_2,
  • where S_1 is the spectral response function of the camera that collects RGB image R_1, S_2 is the spectral response function of the camera that collects RGB image R_2, N_1 is the noise of RGB image R_1, and N_2 is the noise of RGB image R_2;
  • A3: based on this mapping relationship, establish the basic model of the hyperspectral image H, in which λ is the weight value and φ(H) is the regularization term of the hyperspectral image H;
  • A4: perform gradient-descent optimization and updating based on the basic model of the hyperspectral image H to obtain the functional expression for iteratively solving the hyperspectral image H.
  • the iterative solution of the hyperspectral image H in step S2 is completed through a deep convolutional neural network composed of cascaded spectral reconstruction modules, and any k-th level spectral reconstruction module is used to perform the k-th iterative solution in step S2.
  • the arbitrary k-th level spectral reconstruction module is composed of a spectral-segment attention module SAM and a spectral response curve correction module SCM connected to each other.
  • the spectral-segment attention module SAM is used to deeply mine the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H; the spectral response curve correction module SCM takes the spatial-spectral features mined by SAM as input and executes the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration.
  • the spectral-segment attention module SAM is a three-layer network module composed of a feature extraction unit, a channel attention mechanism unit, and a downsampling unit connected in sequence.
  • the feature extraction unit is formed by connecting a parallel-convolution unit and a parametric rectified linear unit, where the parallel-convolution unit consists of three parallel convolutions with kernel sizes of 3×3, 1×3, and 3×1, respectively; the channel attention mechanism unit includes, connected in sequence, a 1×1 convolution layer, a nonlinear normalization layer, a cross-product layer, an activation layer, a 1×1 convolution layer, an activation layer, and a dot-product layer, and the other input of the cross-product layer and the dot-product layer is the output of the feature extraction unit;
  • the downsampling unit consists of a 3×3 convolution layer and is used for downsampling in the spectral dimension.
  • iteratively solving the hyperspectral image H through the deep convolutional neural network includes:
  • B1: initialize the network parameters of the deep convolutional neural network, the iteration count k, and the penalty factors α_k and β_k;
  • B2: deeply mine, through the spectral-segment attention module SAM in the k-th level spectral reconstruction module of the deep convolutional neural network, the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the initial feature H_0, and then execute, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration;
  • step B3: determine whether the number of iterations k equals the preset total number of iterations K; if so, take the finally obtained hyperspectral image H_{k+1} as the final hyperspectral image H; otherwise, increase k by 1 and jump to step B2.
  • the spectral response curve correction module SCM executing the functional expression for iteratively solving the hyperspectral image H to obtain the hyperspectral image of the (k+1)-th iteration means that the basic model of the hyperspectral image H is regarded as a strongly convex problem with an analytical solution, and the proximal gradient descent algorithm is used to obtain the analytical solution step by step.
  • the present invention also provides a hyperspectral imaging system based on dual RGB image fusion, including a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
  • the present invention also provides a computer-readable storage medium in which a computer program is stored, and the computer program is used to be programmed or configured by a microprocessor to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
  • the hyperspectral imaging method based on dual RGB image fusion of the present invention includes: extracting shallow features from the dual RGB images respectively through spectral-channel upsampling, stacking them in the channel dimension and then downsampling to obtain the spatial-spectral shallow feature H_0 of the hyperspectral image H; and iteratively solving the hyperspectral image H based on the spatial-spectral shallow feature H_0.
  • the present invention can fuse high-spatial-resolution RGB images obtained from different sensors into a hyperspectral image with high spatial resolution, which has the advantages of high imaging accuracy, high resolution, fast fusion imaging speed, and low cost.
  • Figure 1 is a basic flow diagram of the method according to the embodiment of the present invention.
  • Figure 2 is a schematic diagram of the network structure of a deep convolutional neural network in an embodiment of the present invention.
  • Figure 3 is a schematic network structure diagram of the spectral segment attention module SAM in the embodiment of the present invention.
  • Figure 4 is a comparison of imaging results between the method of the embodiment of the present invention and the existing method.
  • the hyperspectral imaging method based on dual RGB image fusion in this embodiment includes:
  • S2: iteratively solving the hyperspectral image H based on the spatial-spectral shallow feature H_0, with the iterative update:
  • H_{k+1} = H_k - α_k S_1^T(S_1 H_k - R_1) - β_k S_2^T(S_2 H_k - R_2),
  • where H_{k+1} is the hyperspectral image obtained at the (k+1)-th iteration, H_k is the hyperspectral image obtained at the k-th iteration, α_k and β_k are the penalty factors updated at the k-th iteration and are both learnable parameters, S_1 is the spectral response function of the camera that collects the RGB image R_1, and S_2 is the spectral response function of the camera that collects the RGB image R_2.
  • the functional expression for obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H in step S1 is:
  • H_0 = Conv_{1×1}(CAT[Conv_{3×3} R_1, Conv_{3×3} R_2]),
  • where Conv_{1×1} denotes downsampling through a two-dimensional convolution with a 1×1 kernel, Conv_{3×3} denotes upsampling the spectral channels through a two-dimensional convolution with a 3×3 kernel to extract shallow features, and CAT denotes channel-dimension stacking.
  • before step S2, the method further includes a step of predetermining the functional expression for iteratively solving the hyperspectral image H:
  • A1: establish the basic mapping relationship between a hyperspectral image and an RGB image, R = SH + N, where R represents the RGB image, S represents the spectral response function of the camera that collects the RGB image R, H represents the hyperspectral image, and N represents the noise of the RGB image;
  • A2: according to this basic mapping relationship, obtain the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras:
  • R_1 = S_1 H + N_1,
  • R_2 = S_2 H + N_2,
  • where S_1 is the spectral response function of the camera that collects RGB image R_1, S_2 is the spectral response function of the camera that collects RGB image R_2, N_1 is the noise of RGB image R_1, and N_2 is the noise of RGB image R_2;
  • A3: based on this mapping relationship, establish the basic model of the hyperspectral image H, in which λ is the weight value and φ(H) is the regularization term of the hyperspectral image H;
  • A4: perform gradient-descent optimization and updating based on the basic model of the hyperspectral image H to obtain the functional expression for iteratively solving the hyperspectral image H.
  • the iterative solution of the hyperspectral image H in step S2 in this embodiment is completed through a deep convolutional neural network composed of cascaded spectral reconstruction modules, and any k-th level spectral reconstruction module is used to perform the k-th iterative solution in step S2.
  • any k-th level spectral reconstruction module in this embodiment is composed of a spectral-segment attention module SAM and a spectral response curve correction module SCM connected to each other.
  • the spectral-segment attention module SAM is used to deeply mine the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H; the spectral response curve correction module SCM takes the spatial-spectral features mined by SAM as input and executes the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration.
  • through the spectral-segment attention module SAM, the spatial-spectral features of hyperspectral images can be learned better.
  • the spectral-segment attention module SAM in this embodiment is a three-layer network module composed of a feature extraction unit, a channel attention mechanism unit, and a downsampling unit connected in sequence.
  • the feature extraction unit is formed by connecting a parallel-convolution unit and a parametric rectified linear unit, where the parallel-convolution unit consists of three parallel convolutions with kernel sizes of 3×3, 1×3, and 3×1, respectively;
  • the channel attention mechanism unit includes, connected in sequence, a 1×1 convolution layer, a nonlinear normalization layer, a cross-product layer, an activation layer, a 1×1 convolution layer, an activation layer, and a dot-product layer, and the other input of the cross-product layer and the dot-product layer is the output of the feature extraction unit;
  • the downsampling unit consists of a 3×3 convolution layer and is used for spectral-dimension downsampling.
  • iteratively solving the hyperspectral image H through the deep convolutional neural network includes:
  • B1: initialize the network parameters of the deep convolutional neural network, the iteration count k, and the penalty factors α_k and β_k;
  • B2: deeply mine, through the spectral-segment attention module SAM in the k-th level spectral reconstruction module of the deep convolutional neural network, the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H, and then execute, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration;
  • step B3: determine whether the number of iterations k equals the preset total number of iterations K; if so, take the finally obtained hyperspectral image H_{k+1} as the final hyperspectral image H; otherwise, increase k by 1 and jump to step B2.
  • in step B2, executing, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain the hyperspectral image of the (k+1)-th iteration means that the basic model of the hyperspectral image H is regarded as a strongly convex problem with an analytical solution; this strongly convex problem can be regarded as an optimization estimation problem, the basic model is differentiated to obtain the functional expression for iteratively solving H, and the proximal gradient descent algorithm among optimization estimation algorithms is selected to obtain the analytical solution step by step.
  • the hyperspectral imaging method based on dual RGB image fusion of this embodiment converts the dual RGB hyperspectral fusion imaging problem into an optimization estimation problem by establishing the mapping relationship between the dual RGB images and the hyperspectral image, and uses the proximal gradient descent algorithm to transform the optimization estimation problem into a deep feature-mining problem based on the spectral-segment attention mechanism and a spectral-response-curve feature-correction problem; this improves the reconstruction accuracy and speed at the same time, thereby effectively realizing fast dual RGB hyperspectral fusion imaging and reducing the cost of hyperspectral image acquisition. It should be noted that, on the basis of the obtained functional expression for iteratively solving the hyperspectral image H, using the proximal gradient descent algorithm among optimization estimation algorithms to obtain the analytical solution step by step is an existing method.
  • a verification experiment is conducted in this embodiment using the 32 pairs of data in the public CAVE data set, in which the hyperspectral images have 31 bands and a spatial size of 512×512.
  • the hyperspectral images in this data set are treated as high-resolution hyperspectral images, and the spectral response functions of different sensors are used to spectrally downsample them into two sets of RGB images used as inputs.
  • 20 pairs of data in the CAVE data set were used as the training set, 2 pairs of data were used as the verification set, and 10 pairs of data were used as the test set, and four typical single RGB hyperspectral imaging methods were compared.
  • the evaluation indicators for the fused images are the spectral angle (SAM), root mean square error (RMSE), universal image quality index (UIQI), and structural similarity (SSIM).
  • Figure 4 shows a comparison of the hyperspectral image imaging results of three typical imaging methods HSCNN-R, AWAN+, HSRnet and the method proposed in this embodiment (TRFS) in the CAVE data set.
  • (A) in Figure 4 is the 25th-band image of the hyperspectral image recovered by the HSCNN-R method, and (a) is the corresponding error map.
  • (B) in Figure 4 is the 25th-band image of the hyperspectral image recovered by the AWAN+ method, and (b) is the corresponding error map.
  • (C) in Figure 4 is the 25th-band image of the hyperspectral image recovered by the HSRnet method, and (c) is the corresponding error map.
  • (D) in Figure 4 is the 25th-band image of the hyperspectral image recovered by the method proposed in this embodiment (TRFS), and (d) is the corresponding error map.
  • (E) in Figure 4 is the original hyperspectral image used as a reference.
  • Table 1 shows the objective evaluation indicators of four typical imaging methods (Arad, HSCNN-R, AWAN+, HSRnet) and the method proposed in this embodiment (TRFS) on the CAVE data set; the best numerical results are marked in black.
  • Table 1 Objective performance indicators of the method in this embodiment and four typical hyperspectral imaging methods on the CAVE data set.
  • as shown in Table 1, the method of this embodiment (TRFS) achieves the best results on all objective evaluation indicators, because TRFS turns the dual RGB hyperspectral fusion imaging problem into an optimization estimation problem and uses the spectral response functions to correct the extracted features; more importantly, the adopted spectral-segment attention mechanism can better learn the spatial-spectral features of hyperspectral images and preserve the spatial and spectral details of the image.
  • the dual RGB hyperspectral fusion imaging method of this embodiment utilizes the powerful learning ability of the deep convolutional neural network, supplemented by an optimization estimation algorithm, which can improve imaging accuracy and efficiency at the same time.
  • first, the two RGB images are spectrally upsampled respectively and stacked in the channel dimension for dimensionality reduction, which is called the shallow feature extraction module in this embodiment.
  • a spectral-segment attention module is designed to extract the spatial-spectral features of hyperspectral images.
  • the proximal gradient descent algorithm is used to correct the extracted features with the help of the spectral response functions, making full use of the intrinsic characteristics of hyperspectral images.
  • This embodiment estimates a high-resolution hyperspectral image from the dual RGB images based on an optimization estimation algorithm, uses a trained convolutional neural network to learn the spatial-spectral characteristics of the hyperspectral image, and keeps iterating the estimate of the entire hyperspectral image with the proximal gradient descent algorithm, eventually obtaining a high-resolution hyperspectral image.
  • the advantage of this embodiment is that it does not require additional hyperspectral data for training. It only needs to be trained on the more easily obtained RGB image data set, and is suitable for different types of hyperspectral data.
  • the hyperspectral image obtained by the dual RGB hyperspectral fusion imaging method of this embodiment has better quality and stronger resistance to noise interference;
  • for different types of dual RGB fusion imaging in different scenes or with different shooting equipment, there is no need to change the structure of the network, only a few parameters need to be changed, so the method has strong universality and robustness.
  • this embodiment also provides a hyperspectral imaging system based on dual RGB image fusion, including a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
  • this embodiment also provides a computer-readable storage medium in which a computer program is stored, and the computer program is used to be programmed or configured by a microprocessor to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
  • embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • the present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application.
  • each process and/or block in the flowchart illustrations and/or block diagrams, and combinations of processes and/or blocks in the flowchart illustrations and/or block diagrams can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operating steps to be performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a hyperspectral imaging method and system based on double RGB image fusion, and a medium. The hyperspectral imaging method based on double RGB image fusion comprises: for a double RGB image, extracting shallow features using spectral channel upsampling, and, following channel dimension stacking, performing downsampling to obtain spatial-spectrum shallow features H0 of a hyperspectral image H; based on the spatial-spectrum shallow features H0 of the hyperspectral image H, iteratively solving the hyperspectral image H, the iterative solving being completed by a deep convolutional neural network formed by cascaded spectral reconstruction modules, and each spectral reconstruction module being composed of a spectral segment attention module SAM and a spectral response curve correction module SCM. The present invention allows for fusing high-spatial-resolution RGB images obtained from various sensors so as to obtain a high-spatial-resolution hyperspectral image, and the method has the advantages of high imaging precision, high resolution, high fusion imaging speed and low cost.

Description

Hyperspectral imaging method, system and medium based on dual RGB image fusion
Cross-reference to related applications
This application is based on the Chinese patent application with the filing date "2022.08.03", the application number "202210925152.7", and the invention title "Hyperspectral imaging method, system and medium based on dual RGB image fusion", and claims priority thereto; the full text of that Chinese patent application is hereby incorporated into this application as a part of this application.
[Technical Field]
The invention relates to synthetic imaging technology for fused hyperspectral images, and specifically to a hyperspectral imaging method, system and medium based on dual RGB image fusion.
[Background Art]
Hyperspectral images have dozens to hundreds of spectral bands, covering the range from the visible bands to the short-wave infrared bands; they are rich in spectral information and play a significant role in face recognition, medical diagnosis, military detection and other fields. At present, there are three main types of hyperspectral imagers on the market: spectral-scanning, swing-scanning, and push-broom. Due to the limitations of optical imaging hardware, their scanning speed is slow, and it is difficult to directly obtain high-resolution hyperspectral images. On the other hand, hyperspectral image acquisition equipment is relatively expensive, which greatly limits the application of hyperspectral images. Existing imaging equipment can quickly obtain high-spatial-resolution RGB images, and RGB cameras are inexpensive, so obtaining high-resolution hyperspectral images through dual RGB hyperspectral fusion imaging is a feasible approach. This technology utilizes complementary sampling of characteristic spectra, breaks through the limitations of a single imaging sensor, significantly improves the application value of hyperspectral images, and has huge application potential. There are currently two popular methods for acquiring hyperspectral images: one is the fusion imaging method, and the other is the RGB image super-resolution method. Fusion imaging methods mainly fuse low-spatial-resolution hyperspectral images with high-spatial-resolution multispectral images. In practice, however, low-spatial-resolution hyperspectral images are also difficult to obtain, which limits the practical applicability of this method. The RGB image super-resolution method obtains hyperspectral images directly from RGB images, but the quality of the hyperspectral images obtained by this method is not as good as that of the fusion imaging method.
[Summary of the Invention]
Technical problem to be solved by the present invention: in view of the above problems in the prior art, a hyperspectral imaging method, system and medium based on dual RGB image fusion are provided. The present invention can fuse high-spatial-resolution RGB images acquired from different sensors into a hyperspectral image with high spatial resolution, and has the advantages of high imaging accuracy, high resolution, fast fusion imaging speed, and low cost.
In order to solve the above technical problems, the technical solution adopted by the present invention is:
A hyperspectral imaging method based on dual RGB image fusion, including:
S1: for the fused RGB image R_1 and RGB image R_2 from different physical cameras, extract shallow features through spectral-channel upsampling respectively, stack the shallow features in the channel dimension, and then downsample them to remove redundant information, obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H, where the functional expression for obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H is:
H_0 = Conv_{1×1}(CAT[Conv_{3×3} R_1, Conv_{3×3} R_2]),
where Conv_{1×1} denotes downsampling through a two-dimensional convolution with a 1×1 kernel, Conv_{3×3} denotes upsampling the spectral channels through a two-dimensional convolution with a 3×3 kernel to extract shallow features, and CAT denotes channel-dimension stacking;
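As an illustration of step S1, a minimal PyTorch sketch of the shallow feature extraction is given below; the band count of 31, the use of PyTorch, and the class name are assumptions made for illustration rather than details specified by the patent.

```python
# Hypothetical sketch of the shallow feature extraction H0 = Conv1x1(CAT[Conv3x3(R1), Conv3x3(R2)]).
# The number of spectral bands (31) is an assumption taken from the CAVE experiment, not a fixed choice.
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    def __init__(self, bands: int = 31):
        super().__init__()
        # 3x3 convolutions lift each 3-channel RGB image to the band count ("spectral-channel upsampling").
        self.up1 = nn.Conv2d(3, bands, kernel_size=3, padding=1)
        self.up2 = nn.Conv2d(3, bands, kernel_size=3, padding=1)
        # 1x1 convolution fuses the stacked 2*bands channels back to bands (removes redundant information).
        self.fuse = nn.Conv2d(2 * bands, bands, kernel_size=1)

    def forward(self, r1: torch.Tensor, r2: torch.Tensor) -> torch.Tensor:
        h0 = self.fuse(torch.cat([self.up1(r1), self.up2(r2)], dim=1))
        return h0  # H0 with shape (batch, bands, height, width)

# Example: two 512x512 RGB inputs produce a 31-band initial estimate H0.
r1 = torch.rand(1, 3, 512, 512)
r2 = torch.rand(1, 3, 512, 512)
h0 = ShallowFeatureExtractor()(r1, r2)
```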
S2: iteratively solve the hyperspectral image H based on the spatial-spectral shallow feature H_0 of the hyperspectral image H, where the functional expression for iteratively solving the hyperspectral image H is:
H_{k+1} = H_k - α_k S_1^T(S_1 H_k - R_1) - β_k S_2^T(S_2 H_k - R_2),
where H_{k+1} is the hyperspectral image obtained at the (k+1)-th iteration, H_k is the hyperspectral image obtained at the k-th iteration, α_k and β_k are the penalty factors updated at the k-th iteration and are both learnable parameters, S_1 is the spectral response function of the camera that collects the RGB image R_1, and S_2 is the spectral response function of the camera that collects the RGB image R_2.
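A minimal sketch of this update is shown below, assuming the spectral response functions S_1 and S_2 can be represented as 3×C matrices applied per pixel; that matrix representation and the einsum-based implementation are illustrative assumptions.

```python
# Hypothetical gradient-style update H_{k+1} = H_k - a_k*S1^T(S1*H_k - R1) - b_k*S2^T(S2*H_k - R2).
# S1 and S2 are modeled here as 3xC spectral-response matrices (an assumption for illustration).
import torch

def apply_srf(s: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Project a C-band image (B, C, H, W) to RGB (B, 3, H, W) with a 3xC response matrix."""
    return torch.einsum('rc,bchw->brhw', s, h)

def data_consistency_step(h_k, r1, r2, s1, s2, alpha, beta):
    res1 = apply_srf(s1, h_k) - r1                    # S1*H_k - R1
    res2 = apply_srf(s2, h_k) - r2                    # S2*H_k - R2
    back1 = torch.einsum('rc,brhw->bchw', s1, res1)   # S1^T applied to the residual
    back2 = torch.einsum('rc,brhw->bchw', s2, res2)   # S2^T applied to the residual
    return h_k - alpha * back1 - beta * back2

# Example with random data: 31 bands, two simulated cameras, step sizes 0.0005 as in the embodiment.
s1, s2 = torch.rand(3, 31), torch.rand(3, 31)
h_k = torch.rand(1, 31, 64, 64)
r1, r2 = apply_srf(s1, h_k), apply_srf(s2, h_k)
h_next = data_consistency_step(h_k, r1, r2, s1, s2, alpha=5e-4, beta=5e-4)
```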
Optionally, before step S2, the method further includes a step of predetermining the functional expression for iteratively solving the hyperspectral image H:
A1: establish the basic mapping relationship between a hyperspectral image and an RGB image as shown in the following formula:
R = SH + N,
where R represents the RGB image, S represents the spectral response function of the camera that collects the RGB image R, H represents the hyperspectral image, and N represents the noise of the RGB image;
A2: according to the basic mapping relationship between hyperspectral images and RGB images, obtain the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras:
R_1 = S_1 H + N_1,
R_2 = S_2 H + N_2,
where S_1 is the spectral response function of the camera that collects RGB image R_1, S_2 is the spectral response function of the camera that collects RGB image R_2, N_1 is the noise of RGB image R_1, and N_2 is the noise of RGB image R_2;
A3: according to the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras, establish the basic model of the hyperspectral image H:
min_H ||R_1 - S_1 H||_F^2 + ||R_2 - S_2 H||_F^2 + λφ(H),
where λ is the weight value and φ(H) is the regularization term of the hyperspectral image H;
A4: perform gradient-descent optimization and updating based on the basic model of the hyperspectral image H to obtain the functional expression for iteratively solving the hyperspectral image H.
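The gradient step of A4 follows directly from the data-fidelity terms of the basic model in A3; a short sketch of the derivation, with the constant factor 2 assumed to be absorbed into the learnable step sizes, is:

```latex
% Gradient of the data-fidelity part of the basic model of A3:
\nabla_H \left( \|R_1 - S_1 H\|_F^2 + \|R_2 - S_2 H\|_F^2 \right)
  = 2\,S_1^{T}\!\left(S_1 H - R_1\right) + 2\,S_2^{T}\!\left(S_2 H - R_2\right)
% A gradient-descent update with separate learnable step sizes \alpha_k, \beta_k
% (absorbing the factor 2) gives the iteration used in step S2:
H_{k+1} = H_k - \alpha_k S_1^{T}\!\left(S_1 H_k - R_1\right) - \beta_k S_2^{T}\!\left(S_2 H_k - R_2\right)
```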
Optionally, the iterative solution of the hyperspectral image H in step S2 is completed through a deep convolutional neural network composed of cascaded spectral reconstruction modules, and any k-th level spectral reconstruction module is used to perform the k-th iterative solution in step S2.
Optionally, any k-th level spectral reconstruction module is composed of a spectral-segment attention module SAM and a spectral response curve correction module SCM connected to each other. The spectral-segment attention module SAM is used to deeply mine the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H; the spectral response curve correction module SCM takes the spatial-spectral features mined by SAM as input and executes the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration.
Optionally, the spectral-segment attention module SAM is a three-layer network module composed of a feature extraction unit, a channel attention mechanism unit, and a downsampling unit connected in sequence. The feature extraction unit is formed by connecting a parallel-convolution unit and a parametric rectified linear unit, where the parallel-convolution unit consists of three parallel convolutions with kernel sizes of 3×3, 1×3, and 3×1, respectively; the channel attention mechanism unit includes, connected in sequence, a 1×1 convolution layer, a nonlinear normalization layer, a cross-product layer, an activation layer, a 1×1 convolution layer, an activation layer, and a dot-product layer, and the other input of the cross-product layer and the dot-product layer is the output of the feature extraction unit; the downsampling unit consists of a 3×3 convolution layer and is used for downsampling in the spectral dimension.
Optionally, iteratively solving the hyperspectral image H through the deep convolutional neural network includes:
B1: initialize the network parameters of the deep convolutional neural network, the iteration count k, and the penalty factors α_k and β_k;
B2: deeply mine, through the spectral-segment attention module SAM in the k-th level spectral reconstruction module of the deep convolutional neural network, the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the initial hyperspectral feature H_0, and then execute, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration;
B3: determine whether the number of iterations k equals the preset total number of iterations K; if so, take the finally obtained hyperspectral image H_{k+1} as the final hyperspectral image H; otherwise, increase k by 1 and jump to step B2.
Optionally, in step B2, executing, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain the hyperspectral image of the (k+1)-th iteration means that the basic model of the hyperspectral image H is regarded as a strongly convex problem with an analytical solution, and the proximal gradient descent algorithm is used to obtain the analytical solution step by step.
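One way to read the SAM/SCM split in proximal-gradient terms is sketched below; the interpretation that SAM plays the role of a learned proximal/prior operator while SCM carries out the data-consistency gradient step is an assumption consistent with the description above, not a statement taken verbatim from the patent.

```latex
% Proximal-gradient view of one unfolded stage for
%   min_H  \|R_1 - S_1 H\|_F^2 + \|R_2 - S_2 H\|_F^2 + \lambda\,\varphi(H).
% Learned prior step (assumed to be realized by the spectral-segment attention module SAM):
\tilde{H}_k \approx \operatorname{prox}_{\lambda\varphi}\!\left(H_k\right)
% Data-consistency / spectral-response-correction step (the SCM update):
H_{k+1} = \tilde{H}_k - \alpha_k S_1^{T}\!\left(S_1 \tilde{H}_k - R_1\right)
                      - \beta_k S_2^{T}\!\left(S_2 \tilde{H}_k - R_2\right)
```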
In addition, the present invention also provides a hyperspectral imaging system based on dual RGB image fusion, including a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
In addition, the present invention also provides a computer-readable storage medium in which a computer program is stored, and the computer program is used to be programmed or configured by a microprocessor to perform the steps of the hyperspectral imaging method based on dual RGB image fusion.
Compared with the prior art, the present invention mainly has the following advantages. The hyperspectral imaging method based on dual RGB image fusion of the present invention includes: extracting shallow features from the dual RGB images respectively through spectral-channel upsampling, stacking them in the channel dimension and then downsampling to obtain the spatial-spectral shallow feature H_0 of the hyperspectral image H; and iteratively solving the hyperspectral image H based on the spatial-spectral shallow feature H_0. The present invention can fuse high-spatial-resolution RGB images obtained from different sensors into a hyperspectral image with high spatial resolution, and has the advantages of high imaging accuracy, high resolution, fast fusion imaging speed, and low cost.
[Brief Description of the Drawings]
Figure 1 is a basic flow diagram of the method according to an embodiment of the present invention.
Figure 2 is a schematic diagram of the network structure of the deep convolutional neural network in an embodiment of the present invention.
Figure 3 is a schematic diagram of the network structure of the spectral-segment attention module SAM in an embodiment of the present invention.
Figure 4 is a comparison of the imaging results of the method of the embodiment of the present invention and existing methods.
[Detailed Description of the Embodiments]
As shown in Figure 1, the hyperspectral imaging method based on dual RGB image fusion in this embodiment includes:
S1: for the fused RGB image R_1 and RGB image R_2 from different physical cameras, extract shallow features through spectral-channel upsampling respectively, stack the shallow features in the channel dimension, and then downsample them to remove redundant information, obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H;
S2: iteratively solve the hyperspectral image H based on the spatial-spectral shallow feature H_0 of the hyperspectral image H, where the functional expression for iteratively solving the hyperspectral image H is:
H_{k+1} = H_k - α_k S_1^T(S_1 H_k - R_1) - β_k S_2^T(S_2 H_k - R_2),
where H_{k+1} is the hyperspectral image obtained at the (k+1)-th iteration, H_k is the hyperspectral image obtained at the k-th iteration, α_k and β_k are the penalty factors updated at the k-th iteration and are both learnable parameters, S_1 is the spectral response function of the camera that collects the RGB image R_1, and S_2 is the spectral response function of the camera that collects the RGB image R_2.
In this embodiment, the functional expression for obtaining the spatial-spectral shallow feature H_0 of the hyperspectral image H in step S1 is:
H_0 = Conv_{1×1}(CAT[Conv_{3×3} R_1, Conv_{3×3} R_2]),
where Conv_{1×1} denotes downsampling through a two-dimensional convolution with a 1×1 kernel, Conv_{3×3} denotes upsampling the spectral channels through a two-dimensional convolution with a 3×3 kernel to extract shallow features, and CAT denotes channel-dimension stacking.
In this embodiment, before step S2, the method further includes a step of predetermining the functional expression for iteratively solving the hyperspectral image H:
A1: establish the basic mapping relationship between a hyperspectral image and an RGB image as shown in the following formula:
R = SH + N,
where R represents the RGB image, S represents the spectral response function of the camera that collects the RGB image R, H represents the hyperspectral image, and N represents the noise of the RGB image;
A2: according to the basic mapping relationship between hyperspectral images and RGB images, obtain the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras:
R_1 = S_1 H + N_1,
R_2 = S_2 H + N_2,
where S_1 is the spectral response function of the camera that collects RGB image R_1, S_2 is the spectral response function of the camera that collects RGB image R_2, N_1 is the noise of RGB image R_1, and N_2 is the noise of RGB image R_2;
A3: according to the mapping relationship between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras, establish the basic model of the hyperspectral image H:
min_H ||R_1 - S_1 H||_F^2 + ||R_2 - S_2 H||_F^2 + λφ(H),
where λ is the weight value and φ(H) is the regularization term of the hyperspectral image H;
A4: perform gradient-descent optimization and updating based on the basic model of the hyperspectral image H to obtain the functional expression for iteratively solving the hyperspectral image H.
As shown in Figure 2, the iterative solution of the hyperspectral image H in step S2 of this embodiment is completed through a deep convolutional neural network composed of cascaded spectral reconstruction modules, and any k-th level spectral reconstruction module is used to perform the k-th iterative solution in step S2.
As shown in Figure 2, any k-th level spectral reconstruction module in this embodiment is composed of a spectral-segment attention module SAM and a spectral response curve correction module SCM connected to each other. The spectral-segment attention module SAM is used to deeply mine the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H; the spectral response curve correction module SCM takes the spatial-spectral features mined by SAM as input and executes the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration. Through the spectral-segment attention module SAM, the spatial-spectral features of hyperspectral images can be learned better.
As shown in Figure 3, the spectral-segment attention module SAM in this embodiment is a three-layer network module composed of a feature extraction unit, a channel attention mechanism unit, and a downsampling unit connected in sequence. The feature extraction unit is formed by connecting a parallel-convolution unit and a parametric rectified linear unit, where the parallel-convolution unit consists of three parallel convolutions with kernel sizes of 3×3, 1×3, and 3×1, respectively; the channel attention mechanism unit includes, connected in sequence, a 1×1 convolution layer, a nonlinear normalization layer, a cross-product layer, an activation layer, a 1×1 convolution layer, an activation layer, and a dot-product layer, and the other input of the cross-product layer and the dot-product layer is the output of the feature extraction unit; the downsampling unit consists of a 3×3 convolution layer and is used for downsampling in the spectral dimension. With the spectral-segment attention module SAM of the above structure, on the one hand the spatial-spectral features of hyperspectral images can be learned accurately, and on the other hand the network has few parameters and better transferability. A sketch of such a module is given below.
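The following PyTorch sketch illustrates one way such a spectral-segment attention module could be assembled; the exact arrangement of the normalization, cross-product and dot-product operations, the channel width, and the choice of softmax/sigmoid activations are assumptions made for illustration, not details taken from the patent.

```python
# Hypothetical sketch of the spectral-segment attention module (SAM): a parallel-convolution
# feature extractor with PReLU, a channel attention branch, and a 3x3 spectral downsampling conv.
import torch
import torch.nn as nn

class SpectralSegmentAttention(nn.Module):
    def __init__(self, bands: int = 31, width: int = 64):
        super().__init__()
        # Feature extraction unit: three parallel convolutions (3x3, 1x3, 3x1) followed by PReLU.
        self.conv3x3 = nn.Conv2d(bands, width, 3, padding=1)
        self.conv1x3 = nn.Conv2d(bands, width, (1, 3), padding=(0, 1))
        self.conv3x1 = nn.Conv2d(bands, width, (3, 1), padding=(1, 0))
        self.prelu = nn.PReLU()
        # Channel attention unit: 1x1 conv, softmax normalization, cross product with the features,
        # activation, 1x1 conv, activation, and a final dot (element-wise) product with the features.
        self.q = nn.Conv2d(width, width, 1)
        self.v = nn.Conv2d(width, width, 1)
        self.act = nn.Sigmoid()
        # Downsampling unit: a single 3x3 convolution back to the spectral band count.
        self.down = nn.Conv2d(width, bands, 3, padding=1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        feat = self.prelu(self.conv3x3(h) + self.conv1x3(h) + self.conv3x1(h))
        flat = feat.flatten(2)                                    # (B, width, H*W)
        weights = torch.softmax(self.q(feat).flatten(2), dim=-1)  # nonlinear normalization
        context = torch.matmul(weights, flat.transpose(1, 2))     # cross product: (B, width, width)
        gate = self.act(self.v(self.act(context).unsqueeze(-1)))  # 1x1 conv between two activations
        gate = gate.squeeze(-1).mean(dim=2, keepdim=True)         # per-channel weight (B, width, 1)
        out = feat * gate.unsqueeze(-1)                           # dot product with the features
        return self.down(out)                                     # spectral-dimension downsampling

# Example: refine a 31-band estimate.
sam = SpectralSegmentAttention()
refined = sam(torch.rand(1, 31, 64, 64))
```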
Most of the parameters in the deep convolutional neural network of this embodiment are obtained through network training. Therefore, for fast dual RGB fusion hyperspectral imaging of different types, in different scenes or with different shooting equipment, there is no need to change the structure of the network; only a few parameters need to be changed, so the method has strong universality and robustness.
In this embodiment, iteratively solving the hyperspectral image H through the deep convolutional neural network includes:
B1: initialize the network parameters of the deep convolutional neural network, the iteration count k, and the penalty factors α_k and β_k; for example, in this embodiment, the initial value of the iteration count k is 0, and the initial values of the penalty factors α_k and β_k are set to 0.0005;
B2: deeply mine, through the spectral-segment attention module SAM in the k-th level spectral reconstruction module of the deep convolutional neural network, the spatial-spectral features of the hyperspectral image output by the previous spectral reconstruction module at the k-th iteration, or of the spatial-spectral shallow feature H_0 of the hyperspectral image H, and then execute, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain, as output, the hyperspectral image of the (k+1)-th iteration;
B3: determine whether the number of iterations k equals the preset total number of iterations K; if so, take the finally obtained hyperspectral image H_{k+1} as the final hyperspectral image H; otherwise, increase k by 1 and jump to step B2. The iteration loop is sketched below.
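Putting the pieces together, the K-stage cascade of steps B1-B3 could be organized as in the following sketch, which reuses the hypothetical ShallowFeatureExtractor, SpectralSegmentAttention, and data_consistency_step pieces sketched earlier; the number of stages K = 5 is an arbitrary illustrative choice.

```python
# Hypothetical unfolded network: K cascaded stages, each applying SAM followed by the
# spectral-response-correction (data-consistency) update with learnable step sizes.
import torch
import torch.nn as nn

class DualRGBFusionNet(nn.Module):
    def __init__(self, bands: int = 31, stages: int = 5):
        super().__init__()
        self.shallow = ShallowFeatureExtractor(bands)   # defined in the earlier sketch
        self.sams = nn.ModuleList([SpectralSegmentAttention(bands) for _ in range(stages)])
        # B1: learnable penalty factors, initialized to 0.0005 as in the embodiment.
        self.alphas = nn.Parameter(torch.full((stages,), 5e-4))
        self.betas = nn.Parameter(torch.full((stages,), 5e-4))

    def forward(self, r1, r2, s1, s2):
        h = self.shallow(r1, r2)                        # H_0
        for k, sam in enumerate(self.sams):             # B2 ... B3 loop over the K stages
            h = sam(h)                                  # mine spatial-spectral features (SAM)
            h = data_consistency_step(h, r1, r2, s1, s2,  # spectral response correction (SCM)
                                      self.alphas[k], self.betas[k])
        return h                                        # final hyperspectral estimate H
```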
In this embodiment, in step B2, executing, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H to obtain the hyperspectral image of the (k+1)-th iteration means that the basic model of the hyperspectral image H is regarded as a strongly convex problem with an analytical solution. This strongly convex problem can be regarded as an optimization estimation problem; by differentiating the basic model of the hyperspectral image H, the functional expression for iteratively solving the hyperspectral image H is obtained, and the proximal gradient descent algorithm among optimization estimation algorithms is selected to obtain the analytical solution step by step. The hyperspectral imaging method based on dual RGB image fusion of this embodiment converts the dual RGB hyperspectral fusion imaging problem into an optimization estimation problem by establishing the mapping relationship between the dual RGB images and the hyperspectral image, and uses the proximal gradient descent algorithm to transform the optimization estimation problem into a deep feature-mining problem based on the spectral-segment attention mechanism and a spectral-response-curve feature-correction problem. This improves the reconstruction accuracy and speed at the same time, thereby effectively realizing fast dual RGB hyperspectral fusion imaging and reducing the cost of hyperspectral image acquisition. It should be noted that, on the basis of the obtained functional expression for iteratively solving the hyperspectral image H, using the proximal gradient descent algorithm among optimization estimation algorithms to obtain the analytical solution step by step is an existing method; see Beck A, Teboulle M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems [J]. SIAM Journal on Imaging Sciences, 2009, 2(1): 183-202. The method of this embodiment merely applies that algorithm and does not involve improving it, so its detailed implementation is not described here.
To verify the dual-RGB hyperspectral fusion imaging method of this embodiment, validation experiments were conducted on the 32 image pairs of the public CAVE dataset, in which each hyperspectral image has 31 bands and a spatial size of 512×512. In the experiments, the hyperspectral images of the dataset were treated as high-resolution hyperspectral images, and two RGB images were generated as inputs by downsampling with the spectral response functions of different sensors. Specifically, 20 pairs from the CAVE dataset were used as the training set, 2 pairs as the validation set, and 10 pairs as the test set, and four typical single-RGB hyperspectral imaging methods were compared. Four evaluation metrics were used for the fused images: spectral angle (SAM), root mean square error (RMSE), universal image quality index (UIQI), and structural similarity (SSIM). Larger UIQI and SSIM values indicate better high-resolution image quality, whereas larger SAM and RMSE values indicate worse quality.
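For reference, the two error-type metrics used above can be computed from a reference cube and a reconstructed cube as in the generic sketch below. It follows the standard definitions of the spectral angle (mean angle between reference and estimated spectra, in degrees) and of the root mean square error; it is not the evaluation code of the embodiment, and it assumes both cubes share the shape (bands, height, width) and the same intensity scale.

import numpy as np

def spectral_angle(ref, est, eps=1e-8):
    """Mean spectral angle in degrees between two cubes of shape (L, H, W)."""
    ref_v = ref.reshape(ref.shape[0], -1)
    est_v = est.reshape(est.shape[0], -1)
    cos = (ref_v * est_v).sum(axis=0) / (
        np.linalg.norm(ref_v, axis=0) * np.linalg.norm(est_v, axis=0) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def rmse(ref, est):
    """Root mean square error over all bands and pixels."""
    diff = ref.astype(np.float64) - est.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))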
Figure 4 compares the hyperspectral imaging results of three typical imaging methods (HSCNN-R, AWAN+, HSRnet) and of the method proposed in this embodiment (TRFS) on the CAVE dataset. In Figure 4, (A) is the 25th band of the hyperspectral image recovered by HSCNN-R and (a) is the corresponding error map; (B) is the 25th band recovered by AWAN+ and (b) is its error map; (C) is the 25th band recovered by HSRnet and (c) is its error map; (D) is the 25th band recovered by the proposed method (TRFS) and (d) is its error map; (E) is the original hyperspectral image used as the reference.
Table 1 lists the objective evaluation metrics of four typical imaging methods (Arad, HSCNN-R, AWAN+, HSRnet) and of the method proposed in this embodiment (TRFS) on the CAVE dataset, with the best value of each metric shown in bold.
Table 1: Objective performance metrics of the method of this embodiment and of four typical hyperspectral imaging methods on the CAVE dataset.
Method       SAM        RMSE       UIQI      SSIM
Arad         20.5261    15.2645    0.6287    0.8365
HSCNN-R      11.8252    6.6628     0.7578    0.9472
HSRnet       11.5133    6.3238     0.7742    0.9582
AWAN+        8.0661     5.8542     0.8703    0.9799
TRFS         5.1191     3.1420     0.9134    0.9891
As can be seen from Table 1, the method of this embodiment (TRFS) outperforms the other methods on every objective metric. This is because TRFS turns the dual-RGB hyperspectral fusion imaging problem into an optimal-estimation problem and corrects the extracted features with the help of the spectral response functions; more importantly, the spectral-segment attention mechanism it adopts learns the spatial-spectral features of hyperspectral images better and preserves the spatial and spectral details of the image.
In summary, the dual-RGB hyperspectral fusion imaging method of this embodiment exploits the strong learning ability of deep convolutional neural networks, supplemented by an optimal-estimation algorithm, and can improve imaging accuracy and efficiency at the same time. First, each RGB image is spectrally upsampled, and the results are stacked along the channel dimension and reduced in dimension; in this embodiment this is referred to as the shallow feature extraction module. Because hyperspectral images contain rich spatial and spectral information, a spectral-segment attention module is designed to extract their spatial-spectral features. The proximal gradient descent algorithm then corrects the extracted features with the help of the spectral response functions, making full use of the intrinsic characteristics of hyperspectral data. This embodiment estimates a high-resolution hyperspectral image from two RGB images on the basis of the optimal-estimation algorithm and uses a trained convolutional neural network to learn the spatial-spectral features of the hyperspectral image; the estimation of the whole hyperspectral image is iterated with the proximal gradient descent algorithm until a high-resolution hyperspectral image is obtained. The advantages of this embodiment are that no additional hyperspectral data are required for training, since training only needs the more easily obtained RGB image datasets; that it is suitable for different types of hyperspectral data; and that it resists noise interference well. Compared with other high-performance single-RGB hyperspectral imaging methods, the hyperspectral images produced by the dual-RGB fusion imaging method of this embodiment have better quality and stronger resistance to noise; moreover, for dual-RGB fusion imaging in different scenes or with different capture devices, the network structure does not need to change and only a few parameters need adjusting, so the method has strong universality and robustness.
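To make the overall flow concrete, the following PyTorch sketch strings together the shallow feature extraction (3×3 spectral-channel upsampling of each RGB image, channel-dimension stacking, 1×1 fusion), a simplified spectral-segment attention block loosely following the structure described for the SAM (parallel 3×3, 1×3 and 3×1 convolutions with a PReLU, a softmax-normalized channel attention, and a 3×3 downsampling convolution), and the SCM gradient correction repeated over K unrolled stages. It is a minimal illustrative sketch rather than the trained architecture of the embodiment: the layer widths, the exact attention arithmetic (including the choice of softmax as the nonlinear normalization and the collapse of the band-correlation map to one descriptor per band), and the handling of the spectral response matrices S1 and S2 as fixed buffers are assumptions.

import torch
import torch.nn as nn

class ShallowFeatures(nn.Module):
    """Step S1: H0 = Conv1x1(CAT[Conv3x3 R1, Conv3x3 R2])."""
    def __init__(self, bands=31):
        super().__init__()
        self.up1 = nn.Conv2d(3, bands, 3, padding=1)   # spectral-channel upsampling of R1
        self.up2 = nn.Conv2d(3, bands, 3, padding=1)   # spectral-channel upsampling of R2
        self.fuse = nn.Conv2d(2 * bands, bands, 1)     # 1x1 conv removes redundancy after stacking

    def forward(self, r1, r2):
        return self.fuse(torch.cat([self.up1(r1), self.up2(r2)], dim=1))

class SpectralSegmentAttention(nn.Module):
    """Simplified SAM: parallel convolutions, channel attention, spectral downsampling."""
    def __init__(self, bands=31):
        super().__init__()
        self.conv3x3 = nn.Conv2d(bands, bands, 3, padding=1)
        self.conv1x3 = nn.Conv2d(bands, bands, (1, 3), padding=(0, 1))
        self.conv3x1 = nn.Conv2d(bands, bands, (3, 1), padding=(1, 0))
        self.prelu = nn.PReLU()
        self.query = nn.Conv2d(bands, bands, 1)        # 1x1 conv before the normalization
        self.proj = nn.Conv2d(bands, bands, 1)         # 1x1 conv producing per-band gains
        self.down = nn.Conv2d(bands, bands, 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        f = self.prelu(self.conv3x3(x) + self.conv1x3(x) + self.conv3x1(x))
        weights = torch.softmax(self.query(f).flatten(2), dim=-1)     # spatial weights per band
        corr = torch.bmm(weights, f.flatten(2).transpose(1, 2))       # band-by-band correlation (b, c, c)
        desc = torch.relu(corr).mean(dim=2).view(b, c, 1, 1)          # one descriptor per band (assumption)
        gain = torch.sigmoid(self.proj(desc))                         # per-band gain in [0, 1]
        return self.down(f * gain)                                    # reweight bands, then 3x3 conv

class TRFS(nn.Module):
    """K unrolled stages: band attention (SAM) followed by the SCM gradient correction."""
    def __init__(self, S1, S2, bands=31, K=4):
        super().__init__()
        self.shallow = ShallowFeatures(bands)
        self.sams = nn.ModuleList([SpectralSegmentAttention(bands) for _ in range(K)])
        self.alpha = nn.Parameter(torch.full((K,), 0.1))   # learnable penalty factors
        self.beta = nn.Parameter(torch.full((K,), 0.1))
        self.register_buffer("S1", S1)                     # (3, bands) spectral response matrices
        self.register_buffer("S2", S2)

    def forward(self, r1, r2):
        H = self.shallow(r1, r2)
        for k, sam in enumerate(self.sams):
            H = sam(H)
            # SCM step: H <- H - a_k * S1^T (S1 H - R1) - b_k * S2^T (S2 H - R2), applied per pixel
            res1 = torch.einsum('cl,blhw->bchw', self.S1, H) - r1
            res2 = torch.einsum('cl,blhw->bchw', self.S2, H) - r2
            H = H - self.alpha[k] * torch.einsum('cl,bchw->blhw', self.S1, res1) \
                  - self.beta[k] * torch.einsum('cl,bchw->blhw', self.S2, res2)
        return H

Under these assumptions, calling TRFS(S1, S2)(r1, r2) with S1 and S2 of shape (3, 31) and r1, r2 of shape (N, 3, 512, 512) would return an (N, 31, 512, 512) estimate; training would then compare this estimate with reference hyperspectral cubes, for example with an L1 or L2 loss.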
In addition, this embodiment also provides a hyperspectral imaging system based on dual RGB image fusion, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the aforesaid hyperspectral imaging method based on dual RGB image fusion.
In addition, this embodiment also provides a computer-readable storage medium storing a computer program, wherein the computer program is used to program or configure a microprocessor to execute the steps of the aforesaid hyperspectral imaging method based on dual RGB image fusion.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-readable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code. The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to its embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in that memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are only preferred embodiments of the present invention; the protection scope of the present invention is not limited to the above embodiments, and all technical solutions falling within the concept of the present invention belong to its protection scope. It should be pointed out that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A hyperspectral imaging method based on dual RGB image fusion, characterized by comprising:
  S1, for an RGB image R_1 and an RGB image R_2 to be fused, acquired by different physical cameras, extracting shallow features by spectral-channel upsampling respectively, stacking the shallow features along the channel dimension and then downsampling to remove redundant information, so as to obtain the shallow spatial-spectral feature H_0 of a hyperspectral image H, wherein the functional expression for obtaining the shallow spatial-spectral feature H_0 of the hyperspectral image H is:
  H_0 = Conv_{1×1}(CAT[Conv_{3×3}R_1, Conv_{3×3}R_2]),
  in which Conv_{1×1} denotes downsampling through a two-dimensional convolution with a 1×1 kernel, Conv_{3×3} denotes spectral-channel upsampling through a two-dimensional convolution with a 3×3 kernel to extract shallow features, and CAT denotes stacking along the channel dimension;
  S2, iteratively solving for the hyperspectral image H based on its shallow spatial-spectral feature H_0, wherein the functional expression for iteratively solving the hyperspectral image H is:
  H_{k+1} = H_k - α_k S_1^T(S_1H_k - R_1) - β_k S_2^T(S_2H_k - R_2),
  in which H_{k+1} is the hyperspectral image obtained in the (k+1)-th iteration, H_k is the hyperspectral image obtained in the k-th iteration, α_k and β_k are the penalty factors updated in the k-th iteration and are both learnable parameters, S_1 is the spectral response function of the camera acquiring the RGB image R_1, and S_2 is the spectral response function of the camera acquiring the RGB image R_2; wherein the iterative solution of the hyperspectral image H is performed by a deep convolutional neural network formed by cascading spectral reconstruction modules, and any k-th spectral reconstruction module performs the k-th iterative solution of step S2.
2. The hyperspectral imaging method based on dual RGB image fusion according to claim 1, characterized in that, before step S2, the method further comprises the step of predetermining the functional expression for iteratively solving the hyperspectral image H:
  A1, establishing the basic mapping relationship between a hyperspectral image and an RGB image as shown in the following formula:
  R = SH + N,
  in which R denotes the RGB image, S denotes the spectral response function of the camera acquiring the RGB image R, H denotes the hyperspectral image, and N denotes the noise of the RGB image;
  A2, obtaining, from the basic mapping relationship between hyperspectral and RGB images, the mapping relationships between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras:
  R_1 = S_1H + N_1,
  R_2 = S_2H + N_2,
  in which S_1 is the spectral response function of the camera acquiring the RGB image R_1, S_2 is the spectral response function of the camera acquiring the RGB image R_2, N_1 is the noise of the RGB image R_1, and N_2 is the noise of the RGB image R_2;
  A3, establishing, according to the mapping relationships between the hyperspectral image H and the fused RGB images R_1 and R_2 from different physical cameras, the basic model of the hyperspectral image H:
  H = argmin_H ||S_1H - R_1||_F^2 + ||S_2H - R_2||_F^2 + λφ(H),
  in which λ is the weight value and φ(H) is the regularization term of the hyperspectral image H;
  A4, performing gradient-descent optimization and updating on the basis of the basic model of the hyperspectral image H to obtain the functional expression for iteratively solving the hyperspectral image H.
3. The hyperspectral imaging method based on dual RGB image fusion according to claim 2, characterized in that any k-th spectral reconstruction module is formed by a spectral-segment attention module SAM and a spectral response curve correction module SCM connected to each other; the spectral-segment attention module SAM is used to deeply mine the spatial-spectral features of the hyperspectral image obtained in the k-th iteration and output by the previous spectral reconstruction module, or of the shallow spatial-spectral feature H_0 of the hyperspectral image H; and the spectral response curve correction module SCM takes the spatial-spectral features mined by the spectral-segment attention module SAM as input and executes the functional expression for iteratively solving the hyperspectral image H, so as to output the hyperspectral image obtained in the (k+1)-th iteration.
4. The hyperspectral imaging method based on dual RGB image fusion according to claim 3, characterized in that the spectral-segment attention module SAM is a three-layer network module composed of a feature extraction unit, a channel attention unit, and a downsampling unit connected in sequence; the feature extraction unit is formed by a parallel-convolution unit connected to a parametric rectified linear unit, the parallel-convolution unit consisting of three parallel convolutions with kernel sizes of 3×3, 1×3, and 3×1; the channel attention unit comprises, connected in sequence, a 1×1 convolution layer, a nonlinear normalization layer, a cross-product layer, an activation layer, a 1×1 convolution layer, an activation layer, and a dot-product layer, the other input of both the cross-product layer and the dot-product layer being the output of the feature extraction unit; and the downsampling unit consists of one 3×3 convolution layer used for downsampling in the spectral dimension.
5. The hyperspectral imaging method based on dual RGB image fusion according to claim 3, characterized in that iteratively solving the hyperspectral image H through the deep convolutional neural network comprises:
  B1, initializing the network parameters of the deep convolutional neural network, the iteration count k, and the penalty factors α_k and β_k;
  B2, deeply mining, through the spectral-segment attention module SAM in the k-th spectral reconstruction module of the deep convolutional neural network, the spatial-spectral features of the hyperspectral image obtained in the k-th iteration and output by the previous spectral reconstruction module, or of the shallow spatial-spectral feature H_0 of the hyperspectral image H, and then executing, through the spectral response curve correction module SCM, the functional expression for iteratively solving the hyperspectral image H so as to output the hyperspectral image obtained in the (k+1)-th iteration;
  B3, determining whether the iteration count k equals the preset total number of iterations K; if so, taking the hyperspectral image H_{k+1} obtained in the last iteration as the final hyperspectral image H; otherwise, increasing the iteration count k by 1 and returning to step B2.
6. The hyperspectral imaging method based on dual RGB image fusion according to claim 5, characterized in that, in step B2, executing the functional expression for iteratively solving the hyperspectral image H through the spectral response curve correction module SCM so as to output the hyperspectral image obtained in the (k+1)-th iteration means treating the basic model of the hyperspectral image H as a strongly convex problem having an analytical solution and approaching that analytical solution step by step with the proximal gradient descent algorithm.
7. A hyperspectral imaging system based on dual RGB image fusion, comprising a microprocessor and a memory connected to each other, characterized in that the microprocessor is programmed or configured to execute the steps of the hyperspectral imaging method based on dual RGB image fusion according to any one of claims 1 to 6.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program is used to program or configure a microprocessor to execute the steps of the hyperspectral imaging method based on dual RGB image fusion according to any one of claims 1 to 6.
PCT/CN2022/143123 2022-08-03 2022-12-29 Hyperspectral imaging method and system based on double rgb image fusion, and medium WO2024027095A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210925152.7 2022-08-03
CN202210925152.7A CN114998109B (en) 2022-08-03 2022-08-03 Hyperspectral imaging method, system and medium based on dual RGB image fusion

Publications (1)

Publication Number Publication Date
WO2024027095A1 true WO2024027095A1 (en) 2024-02-08

Family

ID=83021108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/143123 WO2024027095A1 (en) 2022-08-03 2022-12-29 Hyperspectral imaging method and system based on double rgb image fusion, and medium

Country Status (2)

Country Link
CN (1) CN114998109B (en)
WO (1) WO2024027095A1 (en)

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN117809162A (en) * 2024-02-29 2024-04-02 深圳市润联塑胶模具有限公司 Method and device for correcting imaging non-uniformity of lens and extracting lens parameters

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN114998109B (en) * 2022-08-03 2022-10-25 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN116071237B (en) * 2023-03-01 2023-06-20 湖南大学 Video hyperspectral imaging method, system and medium based on filter sampling fusion
CN116433551B (en) * 2023-06-13 2023-08-22 湖南大学 High-resolution hyperspectral imaging method and device based on double-light-path RGB fusion
CN117252875B (en) * 2023-11-17 2024-02-09 山东大学 Medical image processing method, system, medium and equipment based on hyperspectral image
CN117314757B (en) * 2023-11-30 2024-02-09 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US20070116365A1 (en) * 2005-11-23 2007-05-24 Leica Geosytems Geiospatial Imaging, Llc Feature extraction using pixel-level and object-level analysis
US20200302249A1 (en) * 2019-03-19 2020-09-24 Mitsubishi Electric Research Laboratories, Inc. Systems and Methods for Multi-Spectral Image Fusion Using Unrolled Projected Gradient Descent and Convolutinoal Neural Network
CN112801881A (en) * 2021-04-13 2021-05-14 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium
CN113327218A (en) * 2021-06-10 2021-08-31 东华大学 Hyperspectral and full-color image fusion method based on cascade network
CN114266957A (en) * 2021-11-12 2022-04-01 北京工业大学 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
EP1173029B1 (en) * 2000-07-14 2005-11-02 Matsushita Electric Industrial Co., Ltd. Color image pickup device
KR102482438B1 (en) * 2016-09-06 2022-12-29 비.지. 네게브 테크놀로지즈 앤드 애플리케이션스 리미티드, 엣 벤-구리온 유니버시티 Reconstruction of hyperspectral data from images
WO2019051591A1 (en) * 2017-09-15 2019-03-21 Kent Imaging Hybrid visible and near infrared imaging with an rgb color filter array sensor
US11257213B2 (en) * 2018-10-25 2022-02-22 Koninklijke Philips N.V. Tumor boundary reconstruction using hyperspectral imaging
US11019364B2 (en) * 2019-03-23 2021-05-25 Uatc, Llc Compression of images having overlapping fields of view using machine-learned models
CN111191736B (en) * 2020-01-05 2022-03-04 西安电子科技大学 Hyperspectral image classification method based on depth feature cross fusion
CN111579506B (en) * 2020-04-20 2021-04-09 湖南大学 Multi-camera hyperspectral imaging method, system and medium based on deep learning
US20210372938A1 (en) * 2020-05-29 2021-12-02 The Board Of Trustee Of The University Of Alabama Deep learning-based crack segmentation through heterogeneous image fusion
CN112116065A (en) * 2020-08-14 2020-12-22 西安电子科技大学 RGB image spectrum reconstruction method, system, storage medium and application
US11615603B2 (en) * 2020-10-30 2023-03-28 Tata Consultancy Services Limited Method and system for learning spectral features of hyperspectral data using DCNN
CN112767243B (en) * 2020-12-24 2023-05-26 深圳大学 Method and system for realizing super-resolution of hyperspectral image
CN113793261A (en) * 2021-08-05 2021-12-14 西安理工大学 Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20070116365A1 (en) * 2005-11-23 2007-05-24 Leica Geosytems Geiospatial Imaging, Llc Feature extraction using pixel-level and object-level analysis
US20200302249A1 (en) * 2019-03-19 2020-09-24 Mitsubishi Electric Research Laboratories, Inc. Systems and Methods for Multi-Spectral Image Fusion Using Unrolled Projected Gradient Descent and Convolutinoal Neural Network
CN112801881A (en) * 2021-04-13 2021-05-14 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium
CN113327218A (en) * 2021-06-10 2021-08-31 东华大学 Hyperspectral and full-color image fusion method based on cascade network
CN114266957A (en) * 2021-11-12 2022-04-01 北京工业大学 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN117809162A (en) * 2024-02-29 2024-04-02 深圳市润联塑胶模具有限公司 Method and device for correcting imaging non-uniformity of lens and extracting lens parameters
CN117809162B (en) * 2024-02-29 2024-05-07 深圳市润联塑胶模具有限公司 Method and device for correcting imaging non-uniformity of lens and extracting lens parameters

Also Published As

Publication number Publication date
CN114998109A (en) 2022-09-02
CN114998109B (en) 2022-10-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22953883

Country of ref document: EP

Kind code of ref document: A1