CN113284067A - Hyperspectral panchromatic sharpening method based on depth detail injection network - Google Patents

Hyperspectral panchromatic sharpening method based on depth detail injection network

Info

Publication number
CN113284067A
Authority
CN
China
Prior art keywords
image
hyperspectral
formula
residual error
shallow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110602214.6A
Other languages
Chinese (zh)
Other versions
CN113284067B (en)
Inventor
赵明华
李停停
胡静
宁家伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Technology
Original Assignee
Xi'an University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Technology
Priority to CN202110602214.6A
Publication of CN113284067A
Application granted
Publication of CN113284067B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral panchromatic sharpening method based on a depth detail injection network. First, two hyperspectral image data sets covering an indoor and an outdoor scene are selected, namely the Cave data set and the Pavia Center data set. The low-resolution hyperspectral image is up-sampled and combined with the panchromatic image, and the combined image is input into a convolutional layer to extract shallow features. The extracted shallow features are sent through a convolutional layer a second time for further extraction, and the twice-extracted shallow features are then input into a network of residual dense blocks. Global feature fusion over all residual dense blocks then yields the hierarchical features of the combined image. A residual operation is performed on the shallow features and the hierarchical features, and a final convolution produces the fusion result of the hyperspectral panchromatic sharpening method based on the depth detail injection network. The invention solves the problem in the prior art of limited fusion quality caused by insufficient detail extraction during hyperspectral fusion.

Description

Hyperspectral panchromatic sharpening method based on depth detail injection network
Technical Field
The invention belongs to the technical field of remote sensing images, and particularly relates to a hyperspectral panchromatic sharpening method based on a depth detail injection network.
Background
A hyperspectral image is a three-dimensional data cube containing both spatial and spectral information. In the spatial domain, each band can be viewed as a description of the scene at the corresponding wavelength. In the spectral domain, every pixel forms a high-resolution spectral curve over the responses of hundreds of wavelengths, which can be used to distinguish different material properties, i.e., spectral identification. This unique spectral identification capability gives hyperspectral remote sensing images wide application in fields such as military rescue and environmental monitoring. However, owing to hardware limitations, it is difficult for a spectral imager to acquire a remote sensing image with both high spatial resolution and high spectral resolution. In practice, a remote sensing platform instead acquires data from several different payloads: the hyperspectral image obtained by an imaging spectrometer contains abundant spectral information but has low spatial resolution, while the remote sensing image obtained by a panchromatic camera has high spatial resolution but few bands. Hyperspectral panchromatic sharpening fuses a hyperspectral image with a panchromatic image to obtain a hyperspectral image with both high spatial resolution and high spectral resolution.
Existing hyperspectral panchromatic fusion methods can be divided into component substitution, multiresolution analysis, matrix-decomposition-based and deep-learning-based methods. Component substitution replaces one component of the transformed hyperspectral image with the panchromatic image before applying the inverse transform. Multiresolution analysis injects the spatial detail information of the panchromatic image into the up-sampled hyperspectral image. Matrix-decomposition-based methods model the generation of the panchromatic image and the input hyperspectral image by decomposing the target hyperspectral image into spectral bases and corresponding subspace coefficients. Solving this model requires prior knowledge of the image, and such priors often cannot fully describe the image characteristics, so the details of the fused image become distorted.
Deep-learning-based hyperspectral panchromatic sharpening feeds the hyperspectral image and the panchromatic image into a network simultaneously and learns the mapping relation between this input and the ideal hyperspectral image. The learned relation is then generalized to the remaining images to obtain high-spatial-resolution hyperspectral images.
Disclosure of Invention
The invention aims to provide a hyperspectral panchromatic sharpening method based on a depth detail injection network, and solves the problem that in the prior art, the fusion effect is limited due to insufficient detail extraction in a hyperspectral fusion process.
The technical scheme adopted by the invention is that the hyperspectral panchromatic sharpening method based on the depth detail injection network is implemented according to the following steps:
step 1, selecting two hyperspectral image data sets which respectively cover an indoor scene and an outdoor scene, the indoor scene being represented by the Cave data set and the outdoor scene by the Pavia Center data set;
step 2, up-sampling the low-resolution hyperspectral image in the data set of step 1, combining it with the panchromatic image, and inputting the combined image into a convolutional layer to extract the shallow features of the combined image;
step 3, sending the shallow features extracted in step 2 through a convolutional layer again for further extraction; then inputting the twice-extracted shallow features into a network of residual dense blocks; finally, performing one global feature fusion over all residual dense blocks to obtain the hierarchical features of the combined image;
step 4, performing a residual operation on the shallow features obtained in step 2 and the hierarchical features obtained in step 3; and finally performing one convolution operation to obtain the fusion result of the hyperspectral panchromatic sharpening method based on the depth detail injection network.
The present invention is also characterized in that,
the step 1 is as follows:
step 1.1, adopting a Cave data set to represent an indoor scene, and adopting a Pavia Center data set to represent an outdoor scene; the original hyperspectral image in the data set is used as a reference image, the simulated low-resolution hyperspectral image is obtained by down-sampling the reference image, and the simulated high-resolution panchromatic image is obtained by averaging the third dimension of the reference image;
step 1.2, respectively dividing a training set, a verification set and a test set for the two data sets, namely a Cave data set and a Pavia Center data set, wherein the number of images in each training set is 80% of the whole image data set, the number of images in each test set is 10% of the whole image data set, and the number of images in each verification set is 10% of the whole image data set;
step 1.3, after dividing the data sets, performing data preprocessing and uniformly resizing the images to 64 × 64.
The step 2 is as follows:
step 2.1, performing up-sampling on the low-resolution hyperspectral image obtained in step 1.1, as shown in formula (1):

$\widetilde{HS}_b = f_{up}(HS_b), \quad b = 1, 2, 3, \ldots, n \qquad (1)$

in the formula: $HS_b$ represents the $b$-th band of the low-resolution hyperspectral image, and $n$ is the number of bands of the hyperspectral image; $f_{up}$ is the bicubic interpolation function that up-samples the low-spatial-resolution hyperspectral image by the corresponding factor; $\widetilde{HS}$ represents the up-sampled hyperspectral image;
step 2.2, performing a combination operation on the high-resolution panchromatic image obtained in step 1.1 and the up-sampled hyperspectral image obtained in step 2.1 to obtain a combined image with $n+1$ bands, the combination process being shown in formula (2):

$HS_{in} = \left[\, PAN, \widetilde{HS} \,\right], \quad B = 1, 2, 3, \ldots, n, n+1 \qquad (2)$

in the formula: $B$ indexes the bands of the combined image; $[\cdot\,,\cdot]$ denotes the cascade (concatenation) operation of the panchromatic image and $\widetilde{HS}$; $PAN$ represents the panchromatic image; $HS_{in}$ represents the combined image;
step 2.3, extracting the shallow features of the combined image with a 3 × 3 convolutional layer in the preprocessing step, as shown in formula (3):

$F_{-1} = f_{CONV}(HS_{in}) \qquad (3)$

in the formula: $f_{CONV}$ represents a convolution operation; $F_{-1}$ represents the shallow features of the combined image, which serve both as the input for hierarchical feature extraction and for global residual learning.
Step 3, the shallow features of the combined image extracted in step 2 are sent through a convolutional layer again for re-extraction; the twice-extracted shallow features are then input into a network of residual dense blocks; finally, one global feature fusion is performed over all residual dense blocks, specifically as follows:
step 3.1, performing a 3 × 3 convolution operation on the shallow features extracted in step 2, as shown in formula (4):

$F_0 = f_{CONV}(F_{-1}) \qquad (4)$

in the formula: $F_0$ represents the shallow features extracted by the second convolution operation on the combined image;
step 3.2, assuming the network contains $D$ residual dense blocks (RDBs), the output of the $d$-th RDB is given by formula (5); within one RDB, the state of the previous RDB is first passed to every convolutional layer and rectified linear unit of the current RDB, as shown in formula (6):

$F_d = H_{RDB}^{d}(F_{d-1}) = H_{RDB}^{d}\!\left(H_{RDB}^{d-1}\!\left(\cdots\left(H_{RDB}^{1}(F_0)\right)\cdots\right)\right) \qquad (5)$

$F_{d,i} = \mathrm{ReLU}\!\left(W_{d,i}\left[F_{d-1}, F_{d,1}, \ldots, F_{d,i-1}\right]\right) \qquad (6)$

in the formula: $H_{RDB}^{d}$ represents the composite function of the $d$-th RDB; $W_{d,i}$ represents the weights of the $i$-th layer in the RDB; $F_{d,i}$ represents the output of the $i$-th convolutional layer of the $d$-th RDB; $i = 1, 2, 3, \ldots, I$; ReLU represents the activation function;
step 3.3, secondly, the state of the previous RDB and the states of all convolutional layers in the current RDB are concatenated and adaptively fused together through a 1 × 1 convolution, as shown in formula (7):

$F_{d,LF} = H_{CONV,1\times1}^{d}\!\left(\left[F_{d-1}, F_{d,1}, \ldots, F_{d,I}\right]\right) \qquad (7)$

in the formula: $H_{CONV,1\times1}^{d}$ represents the 1 × 1 convolutional layer function in the $d$-th RDB; $F_{d,LF}$ represents the local features of the $d$-th RDB;
step 3.4, finally, the output of the previous RDB and the local features of the $d$-th RDB obtained in step 3.3 are summed, giving formula (8):

$F_d = F_{d-1} + F_{d,LF} \qquad (8)$

in the formula: $F_{d-1}$ represents the output of the $(d-1)$-th RDB; $F_d$ represents the output of the $d$-th RDB;
step 3.5, performing global feature fusion on the outputs of all RDBs obtained in step 3.4, i.e., adaptively fusing the features extracted by all RDBs together to obtain the global features, the process being defined as formula (9):

$F_{GFF} = H_{GFF}\!\left(\left[F_1, \ldots, F_D\right]\right) \qquad (9)$

in the formula: $[F_1, \ldots, F_D]$ is the concatenation of the outputs of the 1st through $D$-th RDBs; $H_{GFF}$ is the global feature fusion function; $F_{GFF}$ represents the hierarchical features of the combined image.
Step 4, carrying out a residual operation on the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3, and finally carrying out one convolution operation to obtain the high-resolution hyperspectral image, specifically as follows:
step 4.1, the summation of the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3 is shown in formula (10):

$F_{Res} = F_{-1} + F_{GFF} \qquad (10)$

in the formula: $F_{-1}$ and $F_{GFF}$ represent the shallow and hierarchical features of the combined image, respectively; $F_{Res}$ represents the dense features of the combined image;
step 4.2, performing a 3 × 3 convolution operation on the dense features of the combined image obtained in step 4.1 to obtain a high-resolution hyperspectral image with $n$ bands, as shown in formula (11):

$HS_{fus} = f_{CONV}(F_{Res}) \qquad (11)$

in the formula: $F_{Res}$ represents the dense features of the combined image; $HS_{fus}$ represents the high-resolution hyperspectral image.
The hyperspectral panchromatic sharpening method based on the depth detail injection network has the advantage that a residual dense network makes full use of all the hierarchical features of the input image: residual dense blocks extract the local features, and global residual learning then combines the shallow and deep features to obtain a hyperspectral image with both high spatial resolution and high spectral resolution.
Drawings
FIG. 1 is a flow chart of a hyperspectral panchromatic sharpening method based on a depth detail injection network according to the invention;
FIG. 2 shows the visualization of the 30th band of the fused images obtained on the Cave test set by different fusion algorithms with a sampling factor of 8;
FIG. 3 shows the visualization of the 102nd band of the fused images obtained on the Pavia Center test set by different fusion algorithms with a sampling factor of 2.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The hyperspectral panchromatic sharpening method based on a depth detail injection network according to the invention is implemented in detail by the following steps, with reference to the flow chart of FIG. 1 and to FIGS. 2-3:
step 1, selecting two hyperspectral image data sets which respectively cover an indoor scene and an outdoor scene, the indoor scene being represented by the Cave data set and the outdoor scene by the Pavia Center data set;
the step 1 is as follows:
step 1.1, constructing a data set of a hyperspectral image, and adopting a Cave data set to represent an indoor scene and a Pavia Center data set to represent an outdoor scene; according to the Wald protocol, an original hyperspectral image in a data set is used as a reference image, a simulated low-resolution hyperspectral image is obtained by down-sampling the reference image, and a simulated high-resolution panchromatic image is obtained by averaging the third dimension of the reference image;
step 1.2, respectively dividing a training set, a verification set and a test set for the two data sets, namely a Cave data set and a Pavia Center data set, wherein the number of images in each training set is 80% of the whole image data set, the number of images in each test set is 10% of the whole image data set, and the number of images in each verification set is 10% of the whole image data set;
step 1.3, after the data sets are divided, data preprocessing is carried out, and, to guarantee that the code runs feasibly, the images are uniformly resized to 64 × 64.
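To make the data simulation of step 1 concrete, the sketch below illustrates how a training pair could be generated in the spirit of the Wald protocol described in step 1.1; the function name, the NumPy/SciPy tooling, and the cubic-spline zoom standing in for the down-sampling operator are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_pair(ref_hsi, factor=4, patch=64):
    """Simulate a (LR-HSI, PAN) pair from a reference cube (assumed sketch).

    ref_hsi -- (H, W, n) reference hyperspectral image
    factor  -- spatial down-sampling factor (the experiments use 2, 4 and 8)
    """
    # Step 1.3: uniformly bring every sample to 64 x 64 pixels.
    ref = ref_hsi[:patch, :patch, :].astype(np.float32)
    # Simulated low-resolution HSI: spatially down-sample the reference.
    lr_hsi = zoom(ref, (1.0 / factor, 1.0 / factor, 1.0), order=3)
    # Simulated high-resolution PAN: average the reference over its third
    # (spectral) dimension, as described in step 1.1.
    pan = ref.mean(axis=2)
    return lr_hsi, pan
```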
Step 2, up-sampling the low-resolution hyperspectral image in the data set of step 1, combining it with the panchromatic image, and inputting the combined image into a convolutional layer to extract the shallow features $F_{-1}$ of the combined image.
Step 2, as shown in FIG. 1, first up-samples the low-resolution hyperspectral image in the data of step 1, then combines it with the panchromatic image and inputs the result into the convolutional layer, specifically as follows:
step 2.1, performing up-sampling on the low-resolution hyperspectral image obtained in step 1.1, as shown in formula (1):

$\widetilde{HS}_b = f_{up}(HS_b), \quad b = 1, 2, 3, \ldots, n \qquad (1)$

in the formula: $HS_b$ represents the $b$-th band of the low-resolution hyperspectral image, and $n$ is the number of bands of the hyperspectral image; $f_{up}$ is the bicubic interpolation function that up-samples the low-spatial-resolution hyperspectral image by the corresponding factor; $\widetilde{HS}$ represents the up-sampled hyperspectral image;
step 2.2, performing a combination operation on the high-resolution panchromatic image obtained in step 1.1 and the up-sampled hyperspectral image obtained in step 2.1 to obtain a combined image with $n+1$ bands, the combination process being shown in formula (2):

$HS_{in} = \left[\, PAN, \widetilde{HS} \,\right], \quad B = 1, 2, 3, \ldots, n, n+1 \qquad (2)$

in the formula: $B$ indexes the bands of the combined image; $[\cdot\,,\cdot]$ denotes the cascade (concatenation) operation of the panchromatic image and $\widetilde{HS}$; $PAN$ represents the panchromatic image; $HS_{in}$ represents the combined image;
step 2.3, extracting the shallow features of the combined image with a 3 × 3 convolutional layer in the preprocessing step, as shown in formula (3):

$F_{-1} = f_{CONV}(HS_{in}) \qquad (3)$

in the formula: $f_{CONV}$ represents a convolution operation; $F_{-1}$ represents the shallow features of the combined image, which serve both as the input for hierarchical feature extraction and for global residual learning.
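As an illustration of steps 2.1-2.3 (formulas (1)-(3)), a minimal PyTorch sketch follows; the (batch, channels, height, width) tensor layout, the 64-channel width, and the variable names are assumptions made for the example.

```python
import torch
import torch.nn.functional as F
from torch import nn

n_bands = 31  # e.g. the Cave images have 31 spectral bands (assumption)
conv_shallow = nn.Conv2d(n_bands + 1, 64, kernel_size=3, padding=1)

def shallow_features(lr_hsi, pan, factor=4):
    """lr_hsi: (B, n, h, w) LR hyperspectral image; pan: (B, 1, H, W) PAN image."""
    # Formula (1): bicubic up-sampling of the low-resolution HSI.
    hsi_up = F.interpolate(lr_hsi, scale_factor=factor,
                           mode='bicubic', align_corners=False)
    # Formula (2): cascade PAN and the up-sampled HSI into an (n+1)-band image.
    hs_in = torch.cat([pan, hsi_up], dim=1)
    # Formula (3): one 3 x 3 convolution yields the shallow features F_-1.
    return conv_shallow(hs_in)
```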
Step 3, sending the shallow features of the combined image extracted in step 2 through a convolutional layer again to extract shallow features a second time; then inputting the twice-extracted shallow features into a network of residual dense blocks; finally, performing one global feature fusion over all residual dense blocks, as shown in FIG. 1, specifically as follows:
step 3.1, performing a 3 × 3 convolution operation on the shallow features extracted in step 2, as shown in formula (4):

$F_0 = f_{CONV}(F_{-1}) \qquad (4)$

in the formula: $F_0$ represents the shallow features extracted by the second convolution operation on the combined image;
step 3.2, assuming the network contains $D$ residual dense blocks (RDBs), the output of the $d$-th RDB is given by formula (5); within one RDB, the state of the previous RDB is first passed to every convolutional layer and rectified linear unit of the current RDB, as shown in formula (6):

$F_d = H_{RDB}^{d}(F_{d-1}) = H_{RDB}^{d}\!\left(H_{RDB}^{d-1}\!\left(\cdots\left(H_{RDB}^{1}(F_0)\right)\cdots\right)\right) \qquad (5)$

$F_{d,i} = \mathrm{ReLU}\!\left(W_{d,i}\left[F_{d-1}, F_{d,1}, \ldots, F_{d,i-1}\right]\right) \qquad (6)$

in the formula: $H_{RDB}^{d}$ represents the composite function of the $d$-th RDB; $W_{d,i}$ represents the weights of the $i$-th layer in the RDB; $F_{d,i}$ represents the output of the $i$-th convolutional layer of the $d$-th RDB; $i = 1, 2, 3, \ldots, I$; ReLU represents the activation function;
step 3.3, secondly, the state of the previous RDB and the states of all convolutional layers in the current RDB are concatenated and adaptively fused together through a 1 × 1 convolution, as shown in formula (7):

$F_{d,LF} = H_{CONV,1\times1}^{d}\!\left(\left[F_{d-1}, F_{d,1}, \ldots, F_{d,I}\right]\right) \qquad (7)$

in the formula: $H_{CONV,1\times1}^{d}$ represents the 1 × 1 convolutional layer function in the $d$-th RDB; $F_{d,LF}$ represents the local features of the $d$-th RDB;
step 3.4, finally, the output of the previous RDB and the local features of the $d$-th RDB obtained in step 3.3 are summed, giving formula (8):

$F_d = F_{d-1} + F_{d,LF} \qquad (8)$

in the formula: $F_{d-1}$ represents the output of the $(d-1)$-th RDB; $F_d$ represents the output of the $d$-th RDB;
step 3.5, performing global feature fusion on the outputs of all RDBs obtained in step 3.4, i.e., adaptively fusing the features extracted by all RDBs together to obtain the global features; the global feature fusion comprises a 1 × 1 and a 3 × 3 convolution operation, the 1 × 1 convolution fusing the series of features and the 3 × 3 convolution further extracting features in preparation for the subsequent global residual learning. The process is defined as formula (9):

$F_{GFF} = H_{GFF}\!\left(\left[F_1, \ldots, F_D\right]\right) \qquad (9)$

in the formula: $[F_1, \ldots, F_D]$ is the concatenation of the outputs of the 1st through $D$-th RDBs; $H_{GFF}$ is the global feature fusion function; $F_{GFF}$ represents the hierarchical features of the combined image.
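A hedged PyTorch sketch of one residual dense block and the global feature fusion of formulas (5)-(9) is given below; the number of layers per block, the growth rate and the channel width are illustrative choices, since the patent does not fix them here.

```python
import torch
from torch import nn

class RDB(nn.Module):
    """Residual dense block: formulas (6)-(8) (illustrative hyperparameters)."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Formula (6): each layer sees the block input plus all
            # earlier layer outputs, followed by a ReLU.
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # Formula (7): 1 x 1 convolution for local feature fusion.
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        local = self.fuse(torch.cat(feats, dim=1))
        return x + local  # formula (8): local residual learning

class GlobalFeatureFusion(nn.Module):
    """Formula (9): fuse all D RDB outputs with 1 x 1 then 3 x 3 convolutions."""
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.conv1 = nn.Conv2d(num_blocks * channels, channels, 1)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, rdb_outputs):
        # rdb_outputs: list [F_1, ..., F_D] of (B, channels, H, W) tensors.
        return self.conv3(self.conv1(torch.cat(rdb_outputs, dim=1)))
```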
Step 4, performing a residual operation on the shallow features obtained in step 2 and the hierarchical features obtained in step 3, and finally performing one convolution operation to obtain the fusion result of the hyperspectral panchromatic sharpening method based on the depth detail injection network.
Step 4, as shown in FIG. 1, performs the residual operation on the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3, and finally one convolution operation yields the high-resolution hyperspectral image, specifically as follows:
step 4.1, the summation of the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3 is shown in formula (10):

$F_{Res} = F_{-1} + F_{GFF} \qquad (10)$

in the formula: $F_{-1}$ and $F_{GFF}$ represent the shallow and hierarchical features of the combined image, respectively; $F_{Res}$ represents the dense features of the combined image;
step 4.2, performing a 3 × 3 convolution operation on the dense features of the combined image obtained in step 4.1 to obtain a high-resolution hyperspectral image with $n$ bands, as shown in formula (11):

$HS_{fus} = f_{CONV}(F_{Res}) \qquad (11)$

in the formula: $F_{Res}$ represents the dense features of the combined image; $HS_{fus}$ represents the high-resolution hyperspectral image.
In addition, both subjective and objective evaluation are adopted to comprehensively assess the fused images. The objective evaluation indices include Cross-Correlation (CC), Spectral Angle Mapper (SAM), Root-Mean-Squared Error (RMSE), the relative dimensionless global error in synthesis (Erreur Relative Globale Adimensionnelle de Synthèse, ERGAS), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM).
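For reference, minimal NumPy implementations of two of these indices, PSNR and SAM (the pixel-averaged spectral angle, reported here in degrees), are sketched below; these are the standard textbook definitions rather than code taken from the patent.

```python
import numpy as np

def psnr(ref, fused, data_range=1.0):
    """Peak signal-to-noise ratio between reference and fused cubes (H, W, n)."""
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle mapper in degrees; spectra along the last axis."""
    dot = np.sum(ref * fused, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return np.degrees(angles.mean())
```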
The method of the invention is compared with the following six algorithms: bicubic interpolation (Bicubic), the adaptive Gram-Schmidt method (GSA), the generalized Laplacian pyramid method based on the modulation transfer function (MTF-GLP), guided-filter principal component analysis (GFPCA), coupled non-negative matrix factorization (CNMF), and the convolutional-neural-network-based panchromatic sharpening method (PNN).
For subjective evaluation, the Flowers image in the Cave test set is selected with a sampling factor of 8, and the 30th band of the fused image is displayed. Fig. 2(a) is the reference image of Flowers; fig. 2(b) is the result of bicubic interpolation (Bicubic) on Flowers; fig. 2(c) is the result of the adaptive Gram-Schmidt method (GSA); fig. 2(d) is the result of the generalized Laplacian pyramid method based on the modulation transfer function (MTF-GLP); fig. 2(e) is the result of guided-filter principal component analysis (GFPCA); fig. 2(f) is the result of coupled non-negative matrix factorization (CNMF); fig. 2(g) is the result of the convolutional-neural-network-based panchromatic sharpening method (PNN); and fig. 2(h) is the result of the hyperspectral panchromatic sharpening method based on the depth detail injection network. As can be seen from fig. 2, the Bicubic hyperspectral image shows considerable blur; the GSA-fused image exhibits some spectral distortion; the MTF-GLP-fused image loses some spatial detail; the GFPCA-fused image is blurry in some detail regions; the CNMF- and PNN-fused images preserve the spectral and spatial information fairly well; and the fused image of the proposed algorithm effectively preserves the spectral information while enhancing the spatial information. Table 1 lists the mean values over the Cave data set for the various fusion algorithms with sampling factors of 2, 4 and 8, the optimal values shown in bold. The experimental results show that, regardless of the sampling factor, the proposed algorithm attains the largest CC, PSNR and SSIM values and the smallest SAM, RMSE and ERGAS values.
TABLE 1 mean values of various fusion algorithms for Cave datasets using different sampling factors
(Table 1 is available only as an image in the original publication.)
Fig. 3 shows the visualization of the 102nd band of the fused images obtained on the Pavia Center test set by different fusion algorithms with a sampling factor of 2. Fig. 3(a) is the reference image of the test set; fig. 3(b) is the result of bicubic interpolation (Bicubic); fig. 3(c) is the result of the adaptive Gram-Schmidt method (GSA); fig. 3(d) is the result of the generalized Laplacian pyramid method based on the modulation transfer function (MTF-GLP); fig. 3(e) is the result of guided-filter principal component analysis (GFPCA); fig. 3(f) is the result of coupled non-negative matrix factorization (CNMF); fig. 3(g) is the result of the convolutional-neural-network-based panchromatic sharpening method (PNN); and fig. 3(h) is the result of the hyperspectral panchromatic sharpening method based on the depth detail injection network. The Bicubic result shows patchy blur in the details; the GSA, CNMF and PNN fusion results lose some detail; the MTF-GLP result exhibits slight spectral distortion and loses part of the edge information; and the GFPCA result has unclear detail information, the image being dark overall. Compared with the above algorithms, the proposed algorithm is enhanced in both the spectral and the spatial aspect. Table 2 gives the objective evaluation indices of the various fusion algorithms on the Pavia Center data set under different sampling factors. As can be seen from the table, every evaluation index of the proposed algorithm is better than the best value of the traditional methods and the value of PNN.
Table 2 mean values of various fusion algorithms using different sampling factors for the Pavia Center dataset
(Table 2 is available only as an image in the original publication.)

Claims (5)

1. The hyperspectral panchromatic sharpening method based on the depth detail injection network is characterized by being implemented according to the following steps:
step 1, selecting two hyperspectral image data sets which respectively cover an indoor scene and an outdoor scene, the indoor scene being represented by the Cave data set and the outdoor scene by the Pavia Center data set;
step 2, up-sampling the low-resolution hyperspectral image in the data set of step 1, combining it with the panchromatic image, and inputting the combined image into a convolutional layer to extract the shallow features of the combined image;
step 3, sending the shallow features extracted in step 2 through a convolutional layer again for further extraction; then inputting the twice-extracted shallow features into a network of residual dense blocks; finally, performing one global feature fusion over all residual dense blocks to obtain the hierarchical features of the combined image;
step 4, performing a residual operation on the shallow features obtained in step 2 and the hierarchical features obtained in step 3; and finally performing one convolution operation to obtain the fusion result of the hyperspectral panchromatic sharpening method based on the depth detail injection network.
2. The hyperspectral panchromatic sharpening method based on the depth detail injection network according to claim 1, wherein the step 1 is as follows:
step 1.1, adopting a Cave data set to represent an indoor scene, and adopting a Pavia Center data set to represent an outdoor scene; the original hyperspectral image in the data set is used as a reference image, the simulated low-resolution hyperspectral image is obtained by down-sampling the reference image, and the simulated high-resolution panchromatic image is obtained by averaging the third dimension of the reference image;
step 1.2, respectively dividing a training set, a verification set and a test set for the two data sets, namely a Cave data set and a Pavia Center data set, wherein the number of images in each training set is 80% of the whole image data set, the number of images in each test set is 10% of the whole image data set, and the number of images in each verification set is 10% of the whole image data set;
step 1.3, after dividing the data sets, performing data preprocessing and uniformly resizing the images to 64 × 64.
3. The hyperspectral panchromatic sharpening method based on the depth detail injection network according to claim 2, wherein the step 2 is as follows:
step 2.1, performing up-sampling on the low-resolution hyperspectral image obtained in step 1.1, as shown in formula (1):

$\widetilde{HS}_b = f_{up}(HS_b), \quad b = 1, 2, 3, \ldots, n \qquad (1)$

in the formula: $HS_b$ represents the $b$-th band of the low-resolution hyperspectral image, and $n$ is the number of bands of the hyperspectral image; $f_{up}$ is the bicubic interpolation function that up-samples the low-spatial-resolution hyperspectral image by the corresponding factor; $\widetilde{HS}$ represents the up-sampled hyperspectral image;
step 2.2, performing a combination operation on the high-resolution panchromatic image obtained in step 1.1 and the up-sampled hyperspectral image obtained in step 2.1 to obtain a combined image with $n+1$ bands, the combination process being shown in formula (2):

$HS_{in} = \left[\, PAN, \widetilde{HS} \,\right], \quad B = 1, 2, 3, \ldots, n, n+1 \qquad (2)$

in the formula: $B$ indexes the bands of the combined image; $[\cdot\,,\cdot]$ denotes the cascade (concatenation) operation of the panchromatic image and $\widetilde{HS}$; $PAN$ represents the panchromatic image; $HS_{in}$ represents the combined image;
step 2.3, extracting the shallow features of the combined image with a 3 × 3 convolutional layer in the preprocessing step, as shown in formula (3):

$F_{-1} = f_{CONV}(HS_{in}) \qquad (3)$

in the formula: $f_{CONV}$ represents a convolution operation; $F_{-1}$ represents the shallow features of the combined image, which serve both as the input for hierarchical feature extraction and for global residual learning.
4. The hyperspectral panchromatic sharpening method based on the depth detail injection network according to claim 3, wherein in step 3 the shallow features of the combined image extracted in step 2 are sent through a convolutional layer again to extract shallow features a second time; the twice-extracted shallow features are then input into a network of residual dense blocks; finally, one global feature fusion is performed over all residual dense blocks, specifically as follows:
step 3.1, performing a 3 × 3 convolution operation on the shallow features extracted in step 2, as shown in formula (4):

$F_0 = f_{CONV}(F_{-1}) \qquad (4)$

in the formula: $F_0$ represents the shallow features extracted by the second convolution operation on the combined image;
step 3.2, assuming the network contains $D$ residual dense blocks (RDBs), the output of the $d$-th RDB is given by formula (5); within one RDB, the state of the previous RDB is first passed to every convolutional layer and rectified linear unit of the current RDB, as shown in formula (6):

$F_d = H_{RDB}^{d}(F_{d-1}) = H_{RDB}^{d}\!\left(H_{RDB}^{d-1}\!\left(\cdots\left(H_{RDB}^{1}(F_0)\right)\cdots\right)\right) \qquad (5)$

$F_{d,i} = \mathrm{ReLU}\!\left(W_{d,i}\left[F_{d-1}, F_{d,1}, \ldots, F_{d,i-1}\right]\right) \qquad (6)$

in the formula: $H_{RDB}^{d}$ represents the composite function of the $d$-th RDB; $W_{d,i}$ represents the weights of the $i$-th layer in the RDB; $F_{d,i}$ represents the output of the $i$-th convolutional layer of the $d$-th RDB; $i = 1, 2, 3, \ldots, I$; ReLU represents the activation function;
step 3.3, secondly, the state of the previous RDB and the states of all convolutional layers in the current RDB are concatenated and adaptively fused together through a 1 × 1 convolution, as shown in formula (7):

$F_{d,LF} = H_{CONV,1\times1}^{d}\!\left(\left[F_{d-1}, F_{d,1}, \ldots, F_{d,I}\right]\right) \qquad (7)$

in the formula: $H_{CONV,1\times1}^{d}$ represents the 1 × 1 convolutional layer function in the $d$-th RDB; $F_{d,LF}$ represents the local features of the $d$-th RDB;
step 3.4, finally, the output of the previous RDB and the local features of the $d$-th RDB obtained in step 3.3 are summed, giving formula (8):

$F_d = F_{d-1} + F_{d,LF} \qquad (8)$

in the formula: $F_{d-1}$ represents the output of the $(d-1)$-th RDB; $F_d$ represents the output of the $d$-th RDB;
step 3.5, performing global feature fusion on the outputs of all RDBs obtained in step 3.4, i.e., adaptively fusing the features extracted by all RDBs together to obtain the global features, the process being defined as formula (9):

$F_{GFF} = H_{GFF}\!\left(\left[F_1, \ldots, F_D\right]\right) \qquad (9)$

in the formula: $[F_1, \ldots, F_D]$ is the concatenation of the outputs of the 1st through $D$-th RDBs; $H_{GFF}$ is the global feature fusion function; $F_{GFF}$ represents the hierarchical features of the combined image.
5. The hyperspectral panchromatic sharpening method based on the depth detail injection network according to claim 4, wherein in step 4 a residual operation is performed on the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3, and finally one convolution operation is carried out to obtain the high-resolution hyperspectral image, specifically as follows:
step 4.1, the summation of the shallow features obtained in step 2.3 and the hierarchical features obtained in step 3 is shown in formula (10):

$F_{Res} = F_{-1} + F_{GFF} \qquad (10)$

in the formula: $F_{-1}$ and $F_{GFF}$ represent the shallow and hierarchical features of the combined image, respectively; $F_{Res}$ represents the dense features of the combined image;
step 4.2, performing a 3 × 3 convolution operation on the dense features of the combined image obtained in step 4.1 to obtain a high-resolution hyperspectral image with $n$ bands, as shown in formula (11):

$HS_{fus} = f_{CONV}(F_{Res}) \qquad (11)$

in the formula: $F_{Res}$ represents the dense features of the combined image; $HS_{fus}$ represents the high-resolution hyperspectral image.
CN202110602214.6A 2021-05-31 2021-05-31 Hyperspectral panchromatic sharpening method based on depth detail injection network Active CN113284067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602214.6A CN113284067B (en) 2021-05-31 2021-05-31 Hyperspectral panchromatic sharpening method based on depth detail injection network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110602214.6A CN113284067B (en) 2021-05-31 2021-05-31 Hyperspectral panchromatic sharpening method based on depth detail injection network

Publications (2)

Publication Number Publication Date
CN113284067A true CN113284067A (en) 2021-08-20
CN113284067B CN113284067B (en) 2024-02-09

Family

ID=77282855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602214.6A Active CN113284067B (en) 2021-05-31 2021-05-31 Hyperspectral panchromatic sharpening method based on depth detail injection network

Country Status (1)

Country Link
CN (1) CN113284067B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897714A (en) * 2022-04-15 2022-08-12 华南理工大学 Hyperspectral image sharpening method based on dual-scale fusion network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140301659A1 (en) * 2013-04-07 2014-10-09 Bo Li Panchromatic Sharpening Method of Spectral Image Based on Fusion of Overall Structural Information and Spatial Detail Information
CN109727207A (en) * 2018-12-06 2019-05-07 华南理工大学 High spectrum image sharpening method based on Forecast of Spectra residual error convolutional neural networks
CN109903255A (en) * 2019-03-04 2019-06-18 北京工业大学 A kind of high spectrum image Super-Resolution method based on 3D convolutional neural networks
CN111127374A (en) * 2019-11-22 2020-05-08 西北大学 Pan-sharing method based on multi-scale dense network
AU2020100200A4 (en) * 2020-02-08 2020-06-11 Huang, Shuying DR Content-guide Residual Network for Image Super-Resolution

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140301659A1 (en) * 2013-04-07 2014-10-09 Bo Li Panchromatic Sharpening Method of Spectral Image Based on Fusion of Overall Structural Information and Spatial Detail Information
CN109727207A (en) * 2018-12-06 2019-05-07 华南理工大学 High spectrum image sharpening method based on Forecast of Spectra residual error convolutional neural networks
CN109903255A (en) * 2019-03-04 2019-06-18 北京工业大学 A kind of high spectrum image Super-Resolution method based on 3D convolutional neural networks
CN111127374A (en) * 2019-11-22 2020-05-08 西北大学 Pan-sharing method based on multi-scale dense network
AU2020100200A4 (en) * 2020-02-08 2020-06-11 Huang, Shuying DR Content-guide Residual Network for Image Super-Resolution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Shaolei; FU Guangyuan; WANG Hongqiao; ZHAO Yuqing: "Super-resolution of hyperspectral images based on vector total variation constrained local spectral unmixing", Optics and Precision Engineering, no. 12.
LEI Pengcheng; LIU Cong; TANG Jiangang; PENG Dunlu: "Hierarchical feature fusion attention network for image super-resolution reconstruction", Journal of Image and Graphics, no. 09.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897714A (en) * 2022-04-15 2022-08-12 华南理工大学 Hyperspectral image sharpening method based on dual-scale fusion network

Also Published As

Publication number Publication date
CN113284067B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
Shao et al. Remote sensing image fusion with deep convolutional neural network
CN110533620B (en) Hyperspectral and full-color image fusion method based on AAE extraction spatial features
Zhou et al. Pyramid fully convolutional network for hyperspectral and multispectral image fusion
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
Acharya et al. Image processing: principles and applications
González-Audícana et al. A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors
CN109272010B (en) Multi-scale remote sensing image fusion method based on convolutional neural network
CN106920214B (en) Super-resolution reconstruction method for space target image
Qu et al. A dual-branch detail extraction network for hyperspectral pansharpening
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
CN112507997A (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN116152120B (en) Low-light image enhancement method and device integrating high-low frequency characteristic information
CN108090872B (en) Single-frame multispectral image super-resolution reconstruction method and system based on gradient extraction
Sdraka et al. Deep learning for downscaling remote sensing images: Fusion and super-resolution
CN113191325B (en) Image fusion method, system and application thereof
CN113793289A (en) Multi-spectral image and panchromatic image fuzzy fusion method based on CNN and NSCT
Pan et al. Structure–color preserving network for hyperspectral image super-resolution
Nie et al. Unsupervised hyperspectral pansharpening by ratio estimation and residual attention network
Licciardi et al. Fusion of hyperspectral and panchromatic images: A hybrid use of indusion and nonlinear PCA
CN113284067B (en) Hyperspectral panchromatic sharpening method based on depth detail injection network
Lu et al. Pan-sharpening by multilevel interband structure modeling
CN114511470B (en) Attention mechanism-based double-branch panchromatic sharpening method
CN114638761B (en) Full-color sharpening method, equipment and medium for hyperspectral image
Jain et al. Multimodal image fusion employing discrete cosine transform
CN115861749A (en) Remote sensing image fusion method based on window cross attention

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant