CN115880152B - Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network

Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network

Info

Publication number: CN115880152B
Application number: CN202211603912.9A
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN115880152A
Other languages: Chinese (zh)
Prior art keywords: layer, sensor, multispectral, feature, network
Inventors: Gu Yanfeng (谷延锋), Li Tianshuai (李天帅)
Assignee (current and original): Harbin Institute of Technology


Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network, and relates to hyperspectral remote sensing image generation. The invention aims to solve the problem that acquiring hyperspectral images with rich spectral bands is costly under existing imaging technology. The process is as follows: 1. construct a corresponding ideal multispectral image using the multispectral sensor spectral response function and the hyperspectral image; select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the network parameters of the sensor ideal mapping network with this training data set to obtain a trained sensor ideal mapping network; 2. construct and train an ideal multi-sensor spectrum reconstruction network; 3. obtain the hyperspectral image corresponding to the ideal multispectral image; 4. obtain a corrected hyperspectral image. The invention belongs to the technical field of satellite remote sensing.

Description

Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network
Technical Field
The invention belongs to the technical field of satellite remote sensing, relates to a deep neural network and a spectrum super-resolution technology, and particularly relates to a hyperspectral image generation method.
Background
Hyperspectral images typically have many spectral bands ranging from the infrared to the ultraviolet. Their rich spectral information makes it easier to separate objects that look similar in a few local bands, and such spectral features have been widely used in various tasks.
However, acquiring hyperspectral images with rich spectral bands is a relatively costly task because of limitations in imaging technology. Multispectral images typically have fewer spectral bands than hyperspectral images (usually fewer than 20), but come with low acquisition cost, rich spatial information and continuous temporal coverage, which makes them more convenient for distinguishing image detail. The number of multispectral satellites, far greater than that of hyperspectral satellites, also provides a large number of usable multispectral images.
Disclosure of Invention
The invention aims to solve the problem that acquiring hyperspectral images with rich spectral bands is costly under existing imaging technology, and provides a hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network.
A hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network comprises the following specific processes:
Step one, construct a corresponding ideal multispectral image using the multispectral sensor spectral response function and the hyperspectral image;
select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the network parameters of the sensor ideal mapping network with this training data set to obtain a trained sensor ideal mapping network;
Step two, construct corresponding multispectral images using several different multispectral sensor spectral response functions and the hyperspectral image, build multispectral-hyperspectral data pairs for the multispectral sensors, and construct and train an ideal multi-sensor spectrum reconstruction network with these data pairs as training samples;
Step three, acquire a multispectral image of the region to be tested with a sensor, input it into the trained sensor ideal mapping network, and obtain the ideal multispectral image corresponding to the multispectral image of the region to be tested;
input the ideal multispectral image into the trained ideal multi-sensor spectrum reconstruction network to obtain the hyperspectral image corresponding to the ideal multispectral image;
Step four, correct the hyperspectral image obtained in step three to obtain a corrected hyperspectral image.
The beneficial effects of the invention are as follows:
Multispectral images acquired by different sensors can provide more spectral information because they sense different band ranges. It is therefore attractive to learn the mapping from multi-sensor multispectral images to hyperspectral images and then generate hyperspectral images from multispectral images as a computational alternative. The generated hyperspectral image simultaneously has high spatial resolution, high spectral resolution and high temporal resolution, which makes fine, small-scale and high-accuracy interpretation of a region possible.
Through a hyperspectral image generation technique based on multi-sensor multispectral images, a large number of multispectral images from different sensors can be turned into corresponding hyperspectral images, that is, a large number of new hyperspectral images are generated, so that the existing multi-sensor multispectral data are processed into a hyperspectral image sequence with high spatio-temporal resolution and rich spectral bands; the region can then be analysed comprehensively in space and time by means of a trained hyperspectral classification model.
With the trained multi-sensor spectrum reconstruction network, the method generates a large number of hyperspectral remote sensing images from the rich multispectral image resources of different sensors, providing abundant, readily available hyperspectral remote sensing image resources with high spatial and temporal resolution for hyperspectral data applications.
Description of the drawings:
FIG. 1 is an ideal mapping network diagram of a sensor, wherein MSI0-MSIn is a real multispectral image, and an ideal multispectral image HSI is obtained after projection;
FIG. 2 is a diagram of an ideal mapping network for a sensor;
FIG. 3 is a diagram of a full connection Res structure;
FIG. 4 is a diagram of an ideal multi-sensor spectral reconstruction network;
FIG. 5 is a diagram of a 2D spatial feature extraction network;
FIG. 6 is a diagram of a compression and excitation residual block structure;
FIG. 7 is a diagram of a 3D spatial spectrum feature building network;
FIG. 8 is a block diagram of a multi-sensor information fusion network;
fig. 9 is a diagram of a multi-information fusion block structure.
Detailed description of the embodiments:
Embodiment one: the hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network (Multi-Sensor Spectral Reconstruction Network, hereinafter referred to as MSSRN) comprises the following specific processes:
Step one, construct a corresponding ideal multispectral image using the multispectral sensor spectral response function and the hyperspectral image;
select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the network parameters of the sensor ideal mapping network with this training data set to obtain a trained sensor ideal mapping network, as shown in FIG. 1;
the spectral curves produced by different sensors tend to agree after being mapped; through training, the sensor ideal mapping network establishes the mapping relation between real multispectral images and ideal multispectral images;
Step two, construct corresponding multispectral images using several different multispectral sensor spectral response functions and the hyperspectral image, build multispectral-hyperspectral data pairs for the multispectral sensors, and construct and train an ideal multi-sensor spectrum reconstruction network with these data pairs as training samples;
Step three, acquire a multispectral image of the region to be tested with a sensor, input it into the trained sensor ideal mapping network, and obtain the ideal multispectral image corresponding to the multispectral image of the region to be tested;
input the ideal multispectral image into the trained ideal multi-sensor spectrum reconstruction network to obtain the hyperspectral image corresponding to the ideal multispectral image;
Step four, correct the hyperspectral image obtained in step three to obtain a corrected hyperspectral image.
Embodiment two: this embodiment differs from embodiment one in the following. In step one, the corresponding ideal multispectral image is constructed using the multispectral sensor spectral response function and the hyperspectral image; the specific process is as follows:
1) Query the spectral response function of the multispectral sensor to be used and the bands of the hyperspectral image to be generated. Let L and L_M denote, respectively, the continuous spectral curve (the complete spectrum incident on the sensor) and the multispectral curve, and let R denote the spectral response function of the multispectral sensor; the relationship between the i-th band of L_M and L and R can then be expressed as
L_Mi = ∫ L(λ) R_i(λ) dλ
where L_Mi is the i-th band of the multispectral curve, L(λ) is the continuous spectral curve, R_i(λ) is the spectral response function corresponding to the i-th band of the multispectral curve, and λ is the wavelength;
2) Adjust the spectral response function of the multispectral sensor, based on linear interpolation, into a hyperspectral-to-multispectral normalized spectral response function R_H; the spectral response function R_H represents the mapping from the hyperspectral image to the multispectral image,
where L_H denotes the hyperspectral curve, h is the number of hyperspectral bands, λ_n is the wavelength of the n-th band of the hyperspectral sensor, and R_Hi(λ_n) is the value of the normalized spectral response function R_H for the i-th band of the corresponding multispectral curve at wavelength λ_n;
3) Multiply the hyperspectral image by the hyperspectral-to-multispectral normalized spectral response function to obtain the corresponding ideal multispectral image, L_M = L_H R_H, where the bold L_M, L_H and R_H are the matrix forms of L_M, L_H and R_H (italics denote per-pixel computation, bold denotes whole-image matrices).
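A minimal Python sketch of steps 1)-3) is given below; the function name, the (x, y, bands) array layout and the sum-to-one normalization of R_H are assumptions made for illustration rather than details taken from the patent:

```python
import numpy as np

def build_ideal_msi(hsi, hsi_wavelengths, srf_wavelengths, srf):
    """Construct an ideal multispectral image L_M = L_H R_H from a hyperspectral cube.

    hsi             : hyperspectral cube, shape (x, y, h)
    hsi_wavelengths : wavelengths of the h hyperspectral bands, shape (h,)
    srf_wavelengths : wavelengths at which the sensor response is tabulated, shape (w,)
    srf             : multispectral sensor spectral response function, shape (w, m)
    returns         : ideal multispectral image, shape (x, y, m)
    """
    h = len(hsi_wavelengths)
    m = srf.shape[1]
    # Resample the response function to the hyperspectral wavelengths by linear interpolation (R_H).
    r_h = np.stack([np.interp(hsi_wavelengths, srf_wavelengths, srf[:, i]) for i in range(m)], axis=1)
    # Normalize each multispectral band (assumed sum-to-one normalization over the h bands).
    r_h = r_h / np.clip(r_h.sum(axis=0, keepdims=True), 1e-12, None)
    # Per-pixel weighted sum of the hyperspectral bands: L_M = L_H R_H.
    return (hsi.reshape(-1, h) @ r_h).reshape(hsi.shape[0], hsi.shape[1], m)
```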
Other steps and parameters are the same as in the first embodiment.
Embodiment three: this embodiment differs from embodiment one or two in the following. The sensor ideal mapping network in step one is constructed on the basis of pixel-level information fusion; this is the basic principle of the deep-learning-based hyperspectral remote sensing image generation method in this embodiment.
The specific network structure is shown in fig. 2, and the ideal mapping network of the sensor comprises a feature extraction layer and a feature fusion layer;
The sensor ideal mapping network (overall system response) can be expressed as
I_Map = H_Fus(H_QS(I_M-S) × H_QR(I_M-R))
where H_QS, H_QR and H_Fus denote the system responses of the feature extraction layer for the reference data (reference data part), the feature extraction layer for the real data (real data part) and the feature fusion layer, respectively; I_M-S and I_M-R are the input reference multispectral data and real multispectral data, respectively; and I_Map is the ideal projected multispectral data output by the sensor ideal mapping network;
in the training process, the reference data is ideal multispectral data of the main sensor, and in the subsequent testing process, real multispectral data of the main sensor are adopted;
the feature extraction layer (reference data portion) for the reference data is identical in structure to the feature extraction layer (real data portion) for the real data;
the feature extraction layer consists of 1 fully connected layer followed by 3 fully connected Res layers and performs feature extraction on the reference data and the real data; it can be expressed as:
I_QS = H_QS-3(H_QS-2(H_QS-1(H_QS-0(I_M-S))))
I_QR = H_QR-3(H_QR-2(H_QR-1(H_QR-0(I_M-R))))
where H_QS-0 and H_QR-0 are the 1st fully connected layer of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; H_QS-1, H_QS-2, H_QS-3, H_QR-1, H_QR-2 and H_QR-3 are the 1st to 3rd fully connected Res layers of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; I_QS are the features extracted from the reference data and I_QR the features extracted from the real data;
wherein each fully connected Res layer sequentially comprises a fully connected layer, an active layer, a fully connected layer and an active layer, as shown in figure 3;
the feature fusion layer comprises two element-wise products and four fully connected layers and fuses the information obtained by the feature extraction layers to produce the projected multispectral curve; the feature fusion layer (system response) can be expressed as:
I_Map = H_Fus1(I_QS × I_QR) × H_Fus2(I_QR)
where H_Fus1 and H_Fus2 denote the system responses of the fused-feature processing and the real-feature processing, respectively, and I_Map is the output of the feature fusion layer;
the fused-feature processing consists of 2 fully connected layers in sequence;
the real-feature processing consists of 2 fully connected layers in sequence.
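A minimal PyTorch sketch of this structure follows; the layer width and the skip connection inside the fully connected Res layer are assumptions made for illustration, while the two feature extraction branches and the element-wise-product fusion follow the formulas above:

```python
import torch
import torch.nn as nn

class FCRes(nn.Module):
    """Fully connected Res layer (FIG. 3): FC, activation, FC, activation, with an assumed skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.PReLU(),
                                  nn.Linear(dim, dim), nn.PReLU())
    def forward(self, x):
        return x + self.body(x)

class SensorIdealMappingNet(nn.Module):
    """Per-pixel mapping from (reference, real) multispectral vectors to ideal multispectral vectors."""
    def __init__(self, ref_bands, real_bands, out_bands, width=64):
        super().__init__()
        self.extract_ref = nn.Sequential(nn.Linear(ref_bands, width),               # H_QS-0
                                         FCRes(width), FCRes(width), FCRes(width))  # H_QS-1..3
        self.extract_real = nn.Sequential(nn.Linear(real_bands, width),             # H_QR-0
                                          FCRes(width), FCRes(width), FCRes(width)) # H_QR-1..3
        self.fuse = nn.Sequential(nn.Linear(width, width), nn.Linear(width, out_bands))       # H_Fus1
        self.real_proc = nn.Sequential(nn.Linear(width, width), nn.Linear(width, out_bands))  # H_Fus2

    def forward(self, x_ref, x_real):
        q_s = self.extract_ref(x_ref)      # I_QS
        q_r = self.extract_real(x_real)    # I_QR
        return self.fuse(q_s * q_r) * self.real_proc(q_r)   # I_Map

# Example: map 4-band real pixels to 8-band ideal pixels using 8-band reference pixels.
net = SensorIdealMappingNet(ref_bands=8, real_bands=4, out_bands=8)
ideal = net(torch.rand(1024, 8), torch.rand(1024, 4))        # shape (1024, 8)
```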
Other steps and parameters are the same as in the first or second embodiment.
Embodiment four: this embodiment differs from embodiments one to three in the following. The ideal multi-sensor spectrum reconstruction network in step two is constructed on the basis of multi-sensor feature fusion; this is the basic principle of the deep-learning-based hyperspectral remote sensing image generation method in this embodiment.
The ideal multi-sensor spectrum reconstruction network comprises a feature extraction layer and a feature fusion layer;
the feature extraction layer comprises a 2D spatial feature extraction network and a 3D spatial spectrum feature extraction network;
the characteristic fusion layer comprises a multi-sensor information fusion network and a spectrum characteristic processing module;
the specific network structure is shown in FIG. 4; the ideal multi-sensor spectrum reconstruction network (overall system response) can be expressed as:
I_MSSR = H_SE(H_S3D(I_M) + H_MSR(H_S2D(I_M), H_S2D1(I_M1), H_S2D2(I_M2)))
where H_S2D denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to the main sensor (ideal multispectral data during training and testing, real multispectral data in practical application);
H_S3D denotes the 3D spatial spectrum feature extraction network of the feature extraction layer corresponding to the main sensor;
H_S2D1 denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to sensor 1 (ideal multispectral data during training; during testing and practical application, the mapped ideal multispectral data obtained by passing the real multispectral data through the sensor ideal mapping network);
H_S2D2 denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to sensor 2 (ideal multispectral data during training; during testing and practical application, the mapped ideal multispectral data obtained by passing the real multispectral data through the sensor ideal mapping network);
H_MSR and H_SE denote, respectively, the multi-sensor information fusion network and the spectral feature processing module in the feature fusion layer;
I_M denotes the ideal multispectral data input for the main sensor, I_M1 the ideal multispectral data input for sensor 1, I_M2 the ideal multispectral data input for sensor 2, and I_MSSR the output of the ideal multi-sensor spectrum reconstruction network.
In other words, the network is fed ideal multispectral data during training and testing, and real multispectral data in practical application.
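The following PyTorch-style sketch only shows how the components compose according to the formula above; the sub-networks are passed in as stand-ins for the modules detailed in the later embodiments, and all names are illustrative:

```python
import torch.nn as nn

class IdealMultiSensorSpectralReconstructionNet(nn.Module):
    """Composition of the MSSRN:
    I_MSSR = H_SE(H_S3D(I_M) + H_MSR(H_S2D(I_M), H_S2D1(I_M1), H_S2D2(I_M2)))."""
    def __init__(self, h_s2d, h_s2d1, h_s2d2, h_s3d, h_msr, h_se):
        super().__init__()
        self.h_s2d, self.h_s2d1, self.h_s2d2 = h_s2d, h_s2d1, h_s2d2   # 2D spatial feature extraction networks
        self.h_s3d = h_s3d   # 3D spatial spectrum feature extraction network (main sensor)
        self.h_msr = h_msr   # multi-sensor information fusion network
        self.h_se = h_se     # spectral feature processing module

    def forward(self, i_m, i_m1, i_m2):
        fused = self.h_msr(self.h_s2d(i_m), self.h_s2d1(i_m1), self.h_s2d2(i_m2))
        return self.h_se(self.h_s3d(i_m) + fused)
```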
Other steps and parameters are the same as in embodiments one to three.
Embodiment five: this embodiment differs from embodiments one to four in that the 2D spatial feature extraction network includes: a dimension-raising layer, a feature extraction layer, a feature stacking layer and a feature compression layer, as shown in FIG. 5;
the dimension-raising layer processes the original multispectral image into a feature image of a specified high dimension, typically 256 in practice. The dimension-raising layer consists of a single 3×3 convolution layer followed by a single activation layer (PReLU);
I_2D-DA = H_2D-DA(I_M)
where I_M is the input data, I_2D-DA is the output of the dimension-raising layer, and H_2D-DA denotes the dimension-raising layer processing;
for input data of size n×m×x×y, the output of the dimension-raising layer is of size n×d×x×y, where n is the number of multispectral images processed in each batch, m is the number of bands of the multispectral image, x and y are the height and width of the input multispectral image, and d is the specified high dimension;
the characteristic extraction layer is used for extracting characteristics of different depths of the multispectral image; the feature extraction layer consists of 5 compression and excitation residual blocks;
the compression and excitation residual block is shown in fig. 6;
each compression and excitation residual block is composed of a single residual block and a single compression and excitation block in sequence;
each residual block comprises, in turn, a 3 x 3 convolutional layer, an active layer (PRelu), a 1 x 1 convolutional layer, and an active layer (PRelu);
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation between each compression and excitation residual block is as follows:
the input features are passed sequentially through a 3×3 convolution layer, an activation layer (PReLU), a 1×1 convolution layer and an activation layer (PReLU) to obtain the residual block output features; the sum of the residual block output features and the input features is passed sequentially through a global pooling layer, a fully connected layer, a Relu activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression and excitation block output features; the residual block output features are then multiplied by the compression and excitation block output features to give the compression and excitation residual block output;
the outputs of the compression and excitation residual blocks for layers 1 through N can be expressed as:
I_2D-1 = H_2D-1(I_2D-DA)
I_2D-N = H_2D-N(...(H_2D-1(I_2D-DA)))
where H denotes the system response of a compression and excitation residual block, H_2D-1 the system response of the layer-1 compression and excitation residual block, H_2D-N the system response of the layer-N compression and excitation residual block, and I_2D-N the features output by the layer-N compression and excitation residual block; a compression and excitation residual block does not change the dimensions of the features;
the feature stacking layer stacks the extracted multispectral features of different depths together along the spectral dimension (the channel dimension of the 2D network); it can be expressed as:
I_2D-C = [I_2D-DA, I_2D-1, I_2D-2, ..., I_2D-N]
where [·] denotes stacking along the spectral dimension and I_2D-C denotes the final output features; the spectral dimension of the features increases after stacking (here i denotes the number of compression and excitation residual blocks);
the feature compression layer compresses the stacked features to obtain the hyperspectral features. It consists of a single 3×3 convolution layer followed by a single activation layer (PReLU), and is expressed as:
I_2D = H_2D-D(I_2D-C)
where H_2D-D denotes the feature compression layer processing;
for the input data, the output of the feature compression layer is of size n×h×x×y, where h is the number of bands of the hyperspectral image to be generated.
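A condensed PyTorch sketch of the 2D spatial feature extraction network is given below. The 3×3/1×1 convolutions, PReLU activations, five compression and excitation residual blocks, spectral-dimension stacking and final compression follow the description above; the squeeze reduction ratio and the example band counts are assumptions:

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Compression and excitation residual block (FIG. 6)."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.res = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
                                 nn.Conv2d(ch, ch, 1), nn.PReLU())
        self.se = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(ch, ch // reduction), nn.ReLU(),
                                nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        r = self.res(x)                                  # residual block output features
        w = self.se(r + x).unsqueeze(-1).unsqueeze(-1)   # excitation weights from (residual + input)
        return r * w                                     # compression and excitation residual block output

class SpatialFeatureNet2D(nn.Module):
    """Dimension raising, 5 SE residual blocks, feature stacking and feature compression."""
    def __init__(self, in_bands, out_bands, width=256, n_blocks=5):
        super().__init__()
        self.raise_dim = nn.Sequential(nn.Conv2d(in_bands, width, 3, padding=1), nn.PReLU())
        self.blocks = nn.ModuleList(SEResBlock(width) for _ in range(n_blocks))
        self.compress = nn.Sequential(
            nn.Conv2d(width * (n_blocks + 1), out_bands, 3, padding=1), nn.PReLU())

    def forward(self, x):
        feats = [self.raise_dim(x)]                      # I_2D-DA
        for blk in self.blocks:
            feats.append(blk(feats[-1]))                 # I_2D-1 ... I_2D-N
        return self.compress(torch.cat(feats, dim=1))    # stack along the spectral dimension, then compress

msi = torch.rand(2, 8, 16, 16)                           # n=2 images, m=8 bands, 16x16 patches
hsi_features = SpatialFeatureNet2D(in_bands=8, out_bands=77)(msi)   # shape (2, 77, 16, 16)
```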
Other steps and parameters are the same as in embodiments one to four.
Embodiment six: this embodiment differs from embodiments one to five in that the 3D spatial spectrum feature extraction network includes 4 groups of 3D feature processing modules and one 3D feature compression module (the output of the first group of 3D feature processing modules is fed to the second group, the output of the second group to the third group, the output of the third group to the fourth group, and the output of the fourth group to the 3D feature compression module), as shown in FIG. 7;
each 3D feature processing module has a specified base spectral dimension, and the spectral dimension is stepped up module by module (typically 10-20-40-80-160); each stacked 3D feature processing module includes: a dimension-raising layer, a feature extraction layer, a feature expansion layer, a 3D feature stacking layer, a summation module and a 3D spectral feature amplification layer;
the dimension-raising layer processes the original multispectral image into a feature image of a specified high dimension; the dimension depends on the dimension specified for the module. The dimension-raising layer consists of a single 3×3 convolution layer followed by a single activation layer (PReLU), and is expressed as:
I_3D-DA = H_3D-DA(I_M)
where I_M is the input data; the output of the dimension-raising layer of the first 3D feature processing module has spectral dimension s, and the output of the dimension-raising layer of the k-th 3D feature processing module has spectral dimension s·2^(k-1), where s is the configured initial 3D dimension, typically 10, and 2^(k-1) is the magnification produced by the preceding k-1 3D feature processing modules; H_3D-DA denotes the dimension-raising layer processing and I_3D-DA its output;
the characteristic extraction layer is used for extracting characteristics of different depths of the multispectral image; the feature extraction layer consists of 5 compression and excitation residual blocks; compression and excitation residual blocks as shown in fig. 6, each compression and excitation residual block is composed of a single residual block and a single compression and excitation block in turn;
each residual block comprises, in turn, a 3 x 3 convolutional layer, an active layer (PRelu), a 1 x 1 convolutional layer, and an active layer (PRelu);
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation between each compression and excitation residual block is as follows:
the input features are passed sequentially through a 3×3 convolution layer, an activation layer (PReLU), a 1×1 convolution layer and an activation layer (PReLU) to obtain the residual block output features; the sum of the residual block output features and the input features is passed sequentially through a global pooling layer, a fully connected layer, a Relu activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression and excitation block output features; the residual block output features are then multiplied by the compression and excitation block output features to give the compression and excitation residual block output;
the outputs of the compression and excitation residual blocks for layers 1 through N can be expressed as
I_3D-1 = H_3D-1(I_3D-DA)
I_3D-N = H_3D-N(...(H_3D-1(I_3D-DA)))
where H denotes the system response of a compression and excitation residual block, H_3D-1 the system response of the layer-1 compression and excitation residual block, H_3D-N the system response of the layer-N compression and excitation residual block, and I_3D-N the features output by the layer-N compression and excitation residual block; a compression and excitation residual block does not change the dimensions of the features;
the feature expansion layer expands the 2D depth features into 3D depth features by adding a feature dimension (the channel dimension of the 3D network) to the original 4-dimensional data, turning it into 5-dimensional data; I_3D-N-USQ denotes the features output by the layer-N compression and excitation residual block after feature expansion, and the dimension-raising layer output I_3D-DA becomes I_3D-DA-USQ after feature expansion;
the 3D feature stacking layer stacks the 3D depth features of different depths together along the feature dimension; it can be expressed as:
I_3D-C = [I_3D-DA-USQ, I_3D-1-USQ, I_3D-2-USQ, ..., I_3D-N-USQ]
where [·] denotes stacking along the feature dimension and I_3D-C denotes the final output features; after stacking, the feature dimension of the features increases while the spectral dimension is unchanged (here i denotes the number of compression and excitation residual blocks);
the summation module combines the current features with the depth features of the previous 3D feature processing module;
I_3D-S = I_3D-C + I_3D-L0
where I_3D-L0 are the features obtained from the previous 3D feature processing module and I_3D-S are the summed features;
the 3D spectral feature amplification layer enlarges the spectral dimension of the extracted 3D depth features; it consists of 1 3D deconvolution layer and raises the spectral dimension of the data;
I_3D-L = H_3D-L(I_3D-S)
where H_3D-L denotes the 3D spectral feature amplification layer processing and I_3D-L is I_3D-S after spectral feature amplification;
after amplification, the spectral dimension of the data is doubled;
after passing through the 4 groups of stacked 3D feature processing modules, the data are input into the 3D feature compression module;
the 3D feature compression module comprises 1 or 2 3D convolution modules (each 3D convolution module consists of a convolution layer followed by an activation function), the number depending on the input multispectral dimension and the output hyperspectral dimension, and compresses the spectral dimension of the 3D features to the spectral dimension of the hyperspectral features;
I_3D = H_3D-D(I_3D-S)
where I_3D-S are the summed features, H_3D-D denotes the 3D feature compression module processing, and I_3D are the output features of the 3D feature compression module;
for the input data, the output of the 3D feature compression module, after the redundant feature dimension is removed, is a feature map with the spectral dimension of the hyperspectral features; here t denotes the number of stacked 3D feature processing modules.
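The following hedged PyTorch sketch shows a single 3D feature processing module; it reuses the SEResBlock class from the sketch after embodiment five, and the use of a (2,1,1) transposed 3D convolution for the spectral amplification layer, as well as the example dimensions, are assumptions:

```python
import torch
import torch.nn as nn

# SEResBlock: the compression and excitation residual block from the sketch after embodiment five,
# assumed to be in scope here.

class FeatureProcess3D(nn.Module):
    """One 3D feature processing module (FIG. 7): dimension raising, feature extraction with
    compression and excitation residual blocks, expansion to 5D, 3D stacking, summation with
    the previous module's features, and spectral amplification by a 3D deconvolution."""
    def __init__(self, in_bands, spec_dim, n_blocks=5):
        super().__init__()
        self.raise_dim = nn.Sequential(nn.Conv2d(in_bands, spec_dim, 3, padding=1), nn.PReLU())
        self.blocks = nn.ModuleList(SEResBlock(spec_dim) for _ in range(n_blocks))
        # Transposed 3D convolution doubling the spectral (depth) dimension,
        # keeping the feature (channel) dimension n_blocks + 1.
        self.amplify = nn.ConvTranspose3d(n_blocks + 1, n_blocks + 1,
                                          kernel_size=(2, 1, 1), stride=(2, 1, 1))

    def forward(self, i_m, prev=None):
        feats = [self.raise_dim(i_m)]                    # I_3D-DA, shape (n, spec_dim, x, y)
        for blk in self.blocks:
            feats.append(blk(feats[-1]))                 # I_3D-1 ... I_3D-N
        stacked = torch.stack(feats, dim=1)              # I_3D-C, shape (n, n_blocks+1, spec_dim, x, y)
        summed = stacked if prev is None else stacked + prev   # I_3D-S
        return self.amplify(summed)                      # I_3D-L, spectral dimension doubled

# First module of the typical 10-20-40-80-160 ladder: 8-band input, spectral dimension 10 -> 20.
module = FeatureProcess3D(in_bands=8, spec_dim=10)
out = module(torch.rand(2, 8, 16, 16))                   # shape (2, 6, 20, 16, 16)
```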
Other steps and parameters are the same as in one of the first to fifth embodiments.
Embodiment seven: this embodiment differs from embodiments one to six in that the spectral feature processing module consists of a single compression and excitation block and adjusts the sum of the outputs of the 2D network and the 3D network at the spectral level;
the single compression and excitation block comprises, in order, a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer, and a Sigmoid activation layer.
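A small, self-contained PyTorch sketch of this module follows; the reduction ratio of the excitation branch and the example band count are assumptions:

```python
import torch
import torch.nn as nn

class SpectralSEModule(nn.Module):
    """Spectral feature processing module: one compression and excitation block that re-weights
    the summed 2D-network and 3D-network outputs band by band."""
    def __init__(self, bands, reduction=4):
        super().__init__()
        self.se = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(bands, bands // reduction), nn.ReLU(),
                                nn.Linear(bands // reduction, bands), nn.Sigmoid())

    def forward(self, x):                                # x = I_2D + I_3D, shape (n, h, x, y)
        w = self.se(x).unsqueeze(-1).unsqueeze(-1)
        return x * w

adjusted = SpectralSEModule(bands=77)(torch.rand(2, 77, 16, 16))   # shape (2, 77, 16, 16)
```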
Other steps and parameters are the same as in one of the first to sixth embodiments.
Embodiment eight: this embodiment differs from embodiments one to seven in that, as shown in FIG. 8, the multi-sensor information fusion network consists, in order, of 1 multi-sensor information preprocessing layer, 3 multi-information fusion blocks and 1 information post-processing layer;
the multi-sensor information preprocessing layer comprises three 2D convolution modules that process the multi-sensor feature blocks obtained by the 2D spatial feature extraction networks into features with similar semantics; its inputs are the intermediate features of the 2D spatial feature extraction networks (obtained at the feature stacking layer), denoted I_2D-C, I_2D-C1 and I_2D-C2 for the several sensors:
I_MT-P = H_MT-P(I_2D-C)
I_MT-P1 = H_MT-P1(I_2D-C1)
I_MT-P2 = H_MT-P2(I_2D-C2)
where I_2D-C are the stacked features obtained at the feature stacking layer by passing the multispectral data of the main sensor through its 2D spatial feature extraction network, H_MT-P is the system response of the 2D convolution module corresponding to the main sensor (a 2D convolution module consists of a convolution layer followed by an activation function), and I_MT-P are the output features for the main sensor; I_2D-C1 are the stacked features obtained at the feature stacking layer by passing the multispectral data of sensor 1 through its 2D spatial feature extraction network, H_MT-P1 is the system response of the 2D convolution module corresponding to sensor 1, and I_MT-P1 are the output features for sensor 1; I_2D-C2 are the stacked features obtained at the feature stacking layer by passing the multispectral data of sensor 2 through its 2D spatial feature extraction network, H_MT-P2 is the system response of the 2D convolution module corresponding to sensor 2, and I_MT-P2 are the output features for sensor 2;
the three outputs all have the same dimensions. The multi-information fusion blocks perform a preliminary fusion of the three output features:
I_MT-PF = H_MT-F(I_MT-P + I_MT-P1 + I_MT-P2)
I_MTI-PC-End = H_MT-F(H_MT-F(I_MT-PF))
where I_MT-PF is the output after a single multi-information fusion block, I_MTI-PC-End is the output after the 3 multi-information fusion blocks, and H_MT-F is the response function of a multi-information fusion block;
the information post-processing layer comprises an activation layer and post-processes the combination block output by the last-stage multi-information fusion block:
I_MT = H_MT-A(I_MTI-PC-End)
where I_MTI-PC-End is the combination block output by the last-stage multi-information fusion block and H_MT-A is the response of the activation function.
The multi-information fusion block comprises a feature processing module, a feature fusion module and a feature output module;
the multi-information fusion block comprises five inputs and five outputs, as shown in fig. 9, the five inputs are respectively an information block 1, an information block 2, an information block 3, a fusion block and a combination block, and the five outputs are respectively an information block 1, an information block 2, an information block 3, a fusion block and a combination block;
the three information blocks of the first multi-information fusion block are I_MT-P, I_MT-P1 and I_MT-P2 respectively, its fusion block is I_MT-PF, and its combination block is a data block of the same dimensions as I_MT-PF filled with 0;
the five inputs of other multi-information fusion blocks are respectively five outputs of the upper stage;
the characteristic processing module comprises 3 compression and excitation residual blocks, and respectively performs characteristic processing on the information block 1, the information block 2 and the information block 3:
I_MTI-P1 = H_MTI-P1(I_MTI-Input1)
I_MTI-P2 = H_MTI-P2(I_MTI-Input2)
I_MTI-P3 = H_MTI-P3(I_MTI-Input3)
where I_MTI-Input1 is information block 1 and I_MTI-P1 its output, I_MTI-Input2 is information block 2 and I_MTI-P2 its output, and I_MTI-Input3 is information block 3 and I_MTI-P3 its output; H_MTI-P1, H_MTI-P2 and H_MTI-P3 are each a compression and excitation block;
the compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the feature fusion module fuses the multi-sensor features:
I_MTI-PF = H_MTI-PF(H_MTI-P1(I_MTI-Input1) + H_MTI-P2(I_MTI-Input2) + H_MTI-P3(I_MTI-Input3) + H_MTI-PFF(I_MTI-InputF))
where H_MTI-PFF, H_MTI-P1, H_MTI-P2 and H_MTI-P3 are the system responses of the blocks processing the fusion block, information block 1, information block 2 and information block 3, respectively; H_MTI-PF is the system response of the activation function applied after summing the outputs of these 4 blocks; I_MTI-PF is the fusion result and I_MTI-InputF is the fusion block;
the feature output module comprises a single convolution block (a convolution block consists of a convolution layer followed by an activation function) and an addition layer, which further fuse the fusion feature with the feature of the previous stage:
I_MTI-PC = I_MTI-InputC + H_MTI-PC(I_MTI-PF)
where I_MTI-PC is the output of the feature output module, I_MTI-InputC is the input combination block, and H_MTI-PC is the single convolution block.
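A hedged PyTorch sketch of one multi-information fusion block follows; it reuses the SEResBlock class from the sketch after embodiment five for the feature processing module, and it treats the fusion-block processing H_MTI-PFF and the output convolution H_MTI-PC as plain convolution blocks, which is an assumption consistent with the text:

```python
import torch
import torch.nn as nn

# SEResBlock: the compression and excitation residual block from the sketch after embodiment five.

class ConvBlock(nn.Module):
    """Convolution layer followed by an activation function."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
    def forward(self, x):
        return self.body(x)

class MultiInfoFusionBlock(nn.Module):
    """Multi-information fusion block (FIG. 9) with five inputs and five outputs:
    information blocks 1-3, the fusion block and the combination block."""
    def __init__(self, ch):
        super().__init__()
        self.proc1, self.proc2, self.proc3 = (SEResBlock(ch) for _ in range(3))  # H_MTI-P1..P3
        self.proc_f = ConvBlock(ch)    # H_MTI-PFF: processes the incoming fusion block
        self.act = nn.PReLU()          # H_MTI-PF: activation applied after summation
        self.out_conv = ConvBlock(ch)  # H_MTI-PC: feature output module convolution

    def forward(self, info1, info2, info3, fused, combined):
        p1, p2, p3 = self.proc1(info1), self.proc2(info2), self.proc3(info3)
        new_fused = self.act(p1 + p2 + p3 + self.proc_f(fused))    # I_MTI-PF
        new_combined = combined + self.out_conv(new_fused)         # I_MTI-PC
        return p1, p2, p3, new_fused, new_combined

# First-stage inputs: the preprocessed sensor features, a fusion block, and a zero-filled
# combination block of matching dimensions, as described in the text.
i1, i2, i3 = (torch.rand(1, 64, 16, 16) for _ in range(3))
first_fused = torch.rand(1, 64, 16, 16)
outs = MultiInfoFusionBlock(64)(i1, i2, i3, first_fused, torch.zeros(1, 64, 16, 16))
```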
Other steps and parameters are the same as those of one of the first to seventh embodiments.
Embodiment nine: this embodiment differs from embodiments one to eight in the training of the sensor ideal mapping network in step one; the specific process is as follows:
the network parameters are optimized by minimizing the loss shown in the following equation
Loss = L(f_S(M_R1, M_S0; θ_S), M_S1)
where L is the loss function used; M_R1, M_S0 and M_S1 are, respectively, the real multispectral data of sensor 1 to be projected, the ideal multispectral data of the reference sensor 0, and the ideal multispectral data of sensor 1; θ_S are the network parameters and f_S is the network response;
the training process of the multi-sensor spectrum reconstruction network in step two is as follows:
the network parameters are optimized by minimizing the loss shown in the following equation
Loss_s = L(f_M(M, M_1, M_2; θ_M), H) + λ L(f_M(M, M_1, M_2; θ_M) R, M)
where L is the loss function used; M, M_1 and M_2 are the ideal multispectral data of the main sensor, sensor 1 and sensor 2, respectively, and H is the hyperspectral data; R is the matrix corresponding to the spectral response function; f_M is the network response, θ_M are the network parameters, and λ is a scaling factor, typically set to 0.25.
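The two losses can be written, for example, as in the following PyTorch sketch; the choice of the L1 norm for the loss function L, the (batch, bands, height, width) tensor layout and the function names are assumptions, while the argument order follows the formulas above:

```python
import torch
import torch.nn.functional as F

def mapping_loss(f_s, m_r1, m_s0, m_s1):
    """Loss for the sensor ideal mapping network: L(f_S(M_R1, M_S0; θ_S), M_S1)."""
    return F.l1_loss(f_s(m_r1, m_s0), m_s1)

def reconstruction_loss(f_m, m, m1, m2, h, r, lam=0.25):
    """Loss for the multi-sensor spectrum reconstruction network:
    L(f_M(M, M1, M2), H) + λ L(f_M(M, M1, M2) R, M)."""
    pred_hsi = f_m(m, m1, m2)                  # reconstructed hyperspectral data, (n, h_bands, x, y)
    n, c, x, y = pred_hsi.shape
    # Project the reconstructed hyperspectral bands back to the multispectral bands with R (h_bands x m_bands).
    proj = (pred_hsi.permute(0, 2, 3, 1).reshape(-1, c) @ r).reshape(n, x, y, -1).permute(0, 3, 1, 2)
    return F.l1_loss(pred_hsi, h) + lam * F.l1_loss(proj, m)
```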
Other steps and parameters are the same as in embodiments one to eight.
Embodiment ten: this embodiment differs from embodiments one to nine in that, in step four, the hyperspectral image obtained in step three is corrected to obtain the corrected hyperspectral image; the specific process is as follows:
the parallel component corresponding to the n-th multispectral sensor is
replaced by the corresponding parallel component, and
the final output result is
where M is the multispectral data corresponding to the 1st to n_0-th multispectral sensors; n_0 is the number of multispectral sensors and of multispectral data sets, n = 1, 2, …, n_0; f(M; θ) is the reconstructed hyperspectral data output by the network; R_n is the spectral response function of the n-th multispectral sensor and R_n^T its transpose; PC_n is the parallel component obtained by decomposing the hyperspectral image using the spectral response function of the n-th multispectral sensor; APC_n is the replacement parallel component (computed from the ideal multispectral image and the spectral response function of the multispectral sensor); M_n is the ideal multispectral image; H_RC is the corrected hyperspectral image; and θ are the network parameters.
Other steps and parameters are the same as in one of the first to ninth embodiments.
The following examples are used to verify the benefits of the present invention:
Three hyperspectral remote sensing images (0501, 0628, 0724) acquired by the ZY1-02D hyperspectral sensor, together with three groups of 12 multispectral remote sensing images acquired by the GF-1 wide-field sensor, the GF-6 wide-field sensor, the Sentinel-2 sensor and the ZY1-02D multispectral sensor, are taken as the data set. The ZY1-02D hyperspectral data have a spatial resolution of 30 meters. The spectral resolution in the visible near-infrared range (0.39-1.04 μm) is about 10 nm, for a total of 76 bands, and the spectral resolution in the shortwave infrared range (1-2.5 μm) is about 20 nm, for a total of 80 bands. 77 noise-free bands covering 390-1040 nm are selected as the hyperspectral bands to be generated. Two groups of images were selected for training and the remaining group was used for testing. During training, the hyperspectral images, the downsampled multispectral images and the real multispectral images are fed into the network as image patches of spatial size 16×16; training uses the ADAM optimizer with an initial learning rate of 0.0005, the learning rate decays by 10% per epoch, and the minimum learning rate is 0.00002. During testing, the data are first input into the sensor ideal mapping network and then into the ideal multi-sensor spectrum reconstruction network. The similarity between the generated hyperspectral image and the real hyperspectral image is measured by the root mean square error (RMSE), the mean relative absolute error (MRAE) and the spectral angle mapper (SAM); the smaller RMSE, MRAE and SAM are, the higher the reconstruction accuracy.
Table 1 similarity measure on four sets of data sets
Referring to Table 1, the hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network in this example yields smaller RMSE, MRAE and SAM than single-sensor spectral reconstruction, indicating that, after combining multi-sensor information, the hyperspectral image generated by the method has higher similarity to the real hyperspectral image and that the method can effectively reproduce the spectral characteristics of hyperspectral images.
In summary, in order to obtain hyperspectral remote sensing images conveniently and at low cost, this embodiment provides a hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network, which can effectively generate the corresponding hyperspectral remote sensing images from the multispectral images of different sensors.

Claims (10)

1. A hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network, characterized by comprising the following specific processes:
step one, constructing a corresponding ideal multispectral image using the multispectral sensor spectral response function and the hyperspectral image;
selecting one sensor as the main sensor, taking the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimizing the network parameters of the sensor ideal mapping network with this training data set to obtain a trained sensor ideal mapping network;
step two, constructing corresponding multispectral images using several different multispectral sensor spectral response functions and the hyperspectral image, building multispectral-hyperspectral data pairs for the multispectral sensors, and constructing and training an ideal multi-sensor spectrum reconstruction network with these data pairs as training samples;
step three, acquiring a multispectral image of the region to be tested with a sensor, inputting it into the trained sensor ideal mapping network, and obtaining the ideal multispectral image corresponding to the multispectral image of the region to be tested;
inputting the ideal multispectral image into the trained ideal multi-sensor spectrum reconstruction network to obtain the hyperspectral image corresponding to the ideal multispectral image;
and step four, correcting the hyperspectral image obtained in step three to obtain a corrected hyperspectral image.
2. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network according to claim 1, characterized in that: in step one, the corresponding ideal multispectral image is constructed using the multispectral sensor spectral response function and the hyperspectral image; the specific process is as follows:
1) With L and L_M as the continuous spectral curve and the multispectral curve, respectively, and R as the spectral response function of the multispectral sensor, the relationship between the i-th band of L_M and L and R can be expressed as
L_Mi = ∫ L(λ) R_i(λ) dλ
where L_Mi is the i-th band of the multispectral curve, L(λ) is the continuous spectral curve, R_i(λ) is the spectral response function corresponding to the i-th band of the multispectral curve, and λ is the wavelength;
2) Adjusting the spectral response function of the multispectral sensor, based on linear interpolation, into a hyperspectral-to-multispectral normalized spectral response function R_H; the spectral response function R_H represents the mapping from the hyperspectral image to the multispectral image,
where L_H denotes the hyperspectral curve, h is the number of hyperspectral bands, λ_n is the wavelength of the n-th band of the hyperspectral sensor, and R_Hi(λ_n) is the value of the normalized spectral response function R_H for the i-th band of the corresponding multispectral curve at wavelength λ_n;
3) Multiplying the hyperspectral image by the hyperspectral-to-multispectral normalized spectral response function to obtain the corresponding ideal multispectral image, L_M = L_H R_H, where the bold L_M, L_H and R_H are the matrix forms of L_M, L_H and R_H.
3. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network according to claim 2, characterized in that: the sensor ideal mapping network in step one comprises a feature extraction layer and a feature fusion layer;
the sensor ideal mapping network can be expressed as
I_Map = H_Fus(H_QS(I_M-S) × H_QR(I_M-R))
where H_QS, H_QR and H_Fus denote the system responses of the feature extraction layer for the reference data, the feature extraction layer for the real data and the feature fusion layer, respectively; I_M-S and I_M-R are the input reference multispectral data and real multispectral data, respectively; and I_Map is the ideal projected multispectral data output by the sensor ideal mapping network;
the feature extraction layer for the reference data is identical in structure to the feature extraction layer for the real data;
the feature extraction layer consists of 1 fully connected layer followed by 3 fully connected Res layers and performs feature extraction on the reference data and the real data; it can be expressed as:
I_QS = H_QS-3(H_QS-2(H_QS-1(H_QS-0(I_M-S))))
I_QR = H_QR-3(H_QR-2(H_QR-1(H_QR-0(I_M-R))))
where H_QS-0 and H_QR-0 are the 1st fully connected layer of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; H_QS-1, H_QS-2, H_QS-3, H_QR-1, H_QR-2 and H_QR-3 are the 1st to 3rd fully connected Res layers of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; I_QS are the features extracted from the reference data and I_QR the features extracted from the real data;
each fully connected Res layer comprises, in order, a fully connected layer, an activation layer, a fully connected layer and an activation layer;
the feature fusion layer can be expressed as:
I_Map = H_Fus1(I_QS × I_QR) × H_Fus2(I_QR)
where H_Fus1 and H_Fus2 denote the system responses of the fused-feature processing and the real-feature processing, respectively, and I_Map is the output of the feature fusion layer;
the fused-feature processing consists of 2 fully connected layers in sequence;
the real-feature processing consists of 2 fully connected layers in sequence.
4. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network according to claim 3, characterized in that: the ideal multi-sensor spectrum reconstruction network in step two comprises a feature extraction layer and a feature fusion layer;
the feature extraction layer comprises a 2D spatial feature extraction network and a 3D spatial spectrum feature extraction network;
the feature fusion layer comprises a multi-sensor information fusion network and a spectral feature processing module;
the ideal multi-sensor spectrum reconstruction network can be expressed as:
I_MSSR = H_SE(H_S3D(I_M) + H_MSR(H_S2D(I_M), H_S2D1(I_M1), H_S2D2(I_M2)))
where H_S2D denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to the main sensor;
H_S3D denotes the 3D spatial spectrum feature extraction network of the feature extraction layer corresponding to the main sensor;
H_S2D1 denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to sensor 1;
H_S2D2 denotes the 2D spatial feature extraction network of the feature extraction layer corresponding to sensor 2;
H_MSR and H_SE denote, respectively, the multi-sensor information fusion network and the spectral feature processing module in the feature fusion layer;
I_M denotes the ideal multispectral data input for the main sensor, I_M1 the ideal multispectral data input for sensor 1, I_M2 the ideal multispectral data input for sensor 2, and I_MSSR the output of the ideal multi-sensor spectrum reconstruction network.
5. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network according to claim 4, characterized in that: the 2D spatial feature extraction network includes: a dimension-raising layer, a feature extraction layer, a feature stacking layer and a feature compression layer;
the dimension-raising layer consists of a single 3×3 convolution layer followed by a single activation layer;
I_2D-DA = H_2D-DA(I_M)
where I_M is the input data, I_2D-DA is the output of the dimension-raising layer, and H_2D-DA denotes the dimension-raising layer processing;
the feature extraction layer consists of 5 compression and excitation residual blocks;
each compression and excitation residual block is composed of a single residual block and a single compression and excitation block in sequence;
each residual block comprises a 3×3 convolution layer, an active layer, a 1×1 convolution layer and an active layer in sequence;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation between each compression and excitation residual block is as follows:
the input features are passed sequentially through a 3×3 convolution layer, an activation layer, a 1×1 convolution layer and an activation layer to obtain the residual block output features; the sum of the residual block output features and the input features is passed sequentially through a global pooling layer, a fully connected layer, a Relu activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression and excitation block output features; the residual block output features are then multiplied by the compression and excitation block output features to give the compression and excitation residual block output;
the outputs of the compression and excitation residual blocks for layers 1 through N can be expressed as:
I_2D-1 = H_2D-1(I_2D-DA)
I_2D-N = H_2D-N(...(H_2D-1(I_2D-DA)))
where H denotes the system response of a compression and excitation residual block, H_2D-1 the system response of the layer-1 compression and excitation residual block, H_2D-N the system response of the layer-N compression and excitation residual block, and I_2D-N the features output by the layer-N compression and excitation residual block;
the feature stacking layer can be expressed as:
I_2D-C = [I_2D-DA, I_2D-1, I_2D-2, ..., I_2D-N]
where [·] denotes stacking along the spectral dimension and I_2D-C denotes the final output features;
the feature compression layer consists of a single 3×3 convolution layer followed by a single activation layer, and is expressed as:
I_2D = H_2D-D(I_2D-C)
where H_2D-D denotes the feature compression layer processing.
6. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network according to claim 5, characterized in that: the 3D spatial spectrum feature extraction network comprises 4 groups of 3D feature processing modules and a 3D feature compression module;
each stacked 3D feature processing module includes: a dimension-raising layer, a feature extraction layer, a feature expansion layer, a 3D feature stacking layer, a summation module and a 3D spectral feature amplification layer;
the dimension-raising layer consists of a single 3×3 convolution layer followed by a single activation layer, and is expressed as:
I_3D-DA = H_3D-DA(I_M)
where I_M is the input data, H_3D-DA denotes the dimension-raising layer processing, and I_3D-DA is the output of the dimension-raising layer;
the feature extraction layer consists of 5 compression and excitation residual blocks; each compression and excitation residual block is composed of a single residual block and a single compression and excitation block in sequence;
each residual block comprises a 3×3 convolution layer, an active layer, a 1×1 convolution layer and an active layer in sequence;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation between each compression and excitation residual block is as follows:
the input features are passed sequentially through a 3×3 convolution layer, an activation layer, a 1×1 convolution layer and an activation layer to obtain the residual block output features; the sum of the residual block output features and the input features is passed sequentially through a global pooling layer, a fully connected layer, a Relu activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression and excitation block output features; the residual block output features are then multiplied by the compression and excitation block output features to give the compression and excitation residual block output;
the output of the compressed and excited residual blocks for layers 1 through N can be expressed as
I 3D-1 =H 3D-1 (I 3D-DA )
I 3D-N =H 3D-N (...(H 3D-1 (I 3D-DA )))
Where H represents the system response of the compressed and excited residual block, H 3D-1 Representing the system response of layer 1 compression and excitation residual block, H 3D-N Representing the system response of the layer N compression and excitation residual block, I 3D-N Features representing the output of the N-th layer compression and excitation residual block;
a feature expansion layer for expanding the 2D depth feature into a 3D depth feature, and adding a feature dimension into the original 4-dimensional data to change the feature dimension into 5-dimensional data, I 3D-N-USQ Features representing the output of the Nth layer laminated and excited residual block after feature expansion, and I of the dimension raising layer 3D-DA Change to I by feature expansion 3D-DA-USQ
the 3D feature stacking layer can be expressed as:
I_{3D-C} = [I_{3D-DA-USQ}, I_{3D-1-USQ}, I_{3D-2-USQ}, ..., I_{3D-N-USQ}]
where [·] denotes stacking along the feature dimension and I_{3D-C} denotes the resulting output feature;
the summation module combines the depth features from the preceding 3D feature processing module:
I_{3D-S} = I_{3D-C} + I_{3D-L0}
where I_{3D-L0} is the feature obtained from the preceding 3D feature processing module and I_{3D-S} denotes the summed feature;
the 3D spectral feature amplification layer consists of a single 3D deconvolution layer and is used to increase the spectral dimension of the data:
I_{3D-L} = H_{3D-L}(I_{3D-S})
where H_{3D-L} denotes the 3D spectral feature amplification layer operation and I_{3D-L} denotes I_{3D-S} after spectral feature amplification;
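A minimal sketch of the 3D spectral feature amplification layer, assuming a PyTorch ConvTranspose3d operating on 5-dimensional data laid out as (batch, feature, spectral, height, width); the kernel size, stride and channel counts are illustrative assumptions:

import torch
import torch.nn as nn

# stride 2 along the spectral axis doubles the spectral dimension; spatial size is preserved
amplify = nn.ConvTranspose3d(
    in_channels=16, out_channels=16,
    kernel_size=(4, 3, 3),
    stride=(2, 1, 1),
    padding=(1, 1, 1),
)

x = torch.randn(1, 16, 8, 64, 64)   # I_{3D-S} with 8 spectral positions
y = amplify(x)                      # I_{3D-L} = H_{3D-L}(I_{3D-S})
print(y.shape)                      # torch.Size([1, 16, 16, 64, 64])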
after passing through the 4 groups of stacked 3D feature processing modules, the data are fed into the 3D feature compression module;
the 3D feature compression module comprises 1 or 2 3D convolution modules, which compress the spectral dimension of the 3D features to the spectral dimension of the hyperspectral features:
I_{3D} = H_{3D-D}(I_{3D-S})
where I_{3D-S} denotes the summed feature, H_{3D-D} denotes the 3D feature compression module operation, and I_{3D} denotes the output feature of the 3D feature compression module.
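A minimal sketch of the feature expansion, 3D feature stacking and 3D feature compression steps, assuming PyTorch and a 5-dimensional layout of (batch, stacked feature, spectral, height, width); the number of stacked features, band counts and kernel parameters are illustrative assumptions:

import torch
import torch.nn as nn

# 2D depth features (4-dimensional tensors): (batch, spectral bands, height, width)
feats_2d = [torch.randn(1, 64, 32, 32) for _ in range(6)]

# feature expansion layer: add a feature dimension, turning 4D data into 5D data
feats_3d = [f.unsqueeze(1) for f in feats_2d]     # each: (1, 1, 64, 32, 32)

# 3D feature stacking layer: stack along the feature dimension
stacked = torch.cat(feats_3d, dim=1)              # I_{3D-C}: (1, 6, 64, 32, 32)

# 3D feature compression module: a 3D convolution that merges the stacked features
# and compresses the spectral dimension (here 64 -> 32) toward the hyperspectral band count
compress = nn.Conv3d(6, 1, kernel_size=(4, 3, 3), stride=(2, 1, 1), padding=(1, 1, 1))
out = compress(stacked).squeeze(1)                # I_{3D}: (1, 32, 32, 32)
print(out.shape)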
7. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network as claimed in claim 6, characterized in that: the spectral feature processing module consists of a single compression and excitation block;
the single compression and excitation block comprises, in sequence, a global pooling layer, a fully connected layer, a ReLU activation layer, a fully connected layer and a Sigmoid activation layer.
8. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network as claimed in claim 7, characterized in that: the multi-sensor information fusion network comprises, in sequence, 1 multi-sensor information preprocessing layer, 3 multi-information fusion blocks and 1 information post-processing layer;
the multi-sensor information preprocessing layer operates on the intermediate features of the 2D spatial feature extraction network, denoted I_{2D-C}, I_{2D-C1} and I_{2D-C2} for the respective sensors:
I_{MT-P} = H_{MT-P}(I_{2D-C})
I_{MT-P1} = H_{MT-P1}(I_{2D-C1})
I_{MT-P2} = H_{MT-P2}(I_{2D-C2})
where I_{2D-C} is the stacked feature obtained at the feature stacking layer of the 2D spatial feature extraction network from the multispectral data of the main sensor, H_{MT-P} is the system response of the 2D convolution module corresponding to the main sensor, and I_{MT-P} is the output feature for the main sensor; I_{2D-C1} is the stacked feature obtained at the feature stacking layer of the 2D spatial feature extraction network from the multispectral data of sensor 1, H_{MT-P1} is the system response of the 2D convolution module corresponding to sensor 1, and I_{MT-P1} is the output feature for sensor 1; I_{2D-C2} is the stacked feature obtained at the feature stacking layer of the 2D spatial feature extraction network from the multispectral data of sensor 2, H_{MT-P2} is the system response of the 2D convolution module corresponding to sensor 2, and I_{MT-P2} is the output feature for sensor 2;
the multi-information fusion blocks perform preliminary fusion of the three output features:
I_{MT-PF} = H_{MT-F}(I_{MT-P} + I_{MT-P1} + I_{MT-P2})
I_{MTI-PC-End} = H_{MT-F}(H_{MT-F}(I_{MT-PF}))
where I_{MT-PF} is the output after a single multi-information fusion block, I_{MTI-PC-End} is the output after all 3 multi-information fusion blocks, and H_{MT-F} is the response function of a multi-information fusion block;
the information post-processing layer comprises an activation layer and post-processes the output of the last multi-information fusion block:
I_{MT} = H_{MT-A}(I_{MTI-PC-End})
where I_{MTI-PC-End} is the output of the last multi-information fusion block and H_{MT-A} is the response of the activation function.
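A minimal sketch of the multi-sensor information fusion network: one 2D convolution module per sensor as the preprocessing layer, 3 multi-information fusion blocks applied to the sum of the preprocessed features, and an activation as the post-processing layer, assuming PyTorch. The internal structure of a fusion block (a single 3×3 convolution here), the channel width and the ReLU activation are illustrative assumptions:

import torch
import torch.nn as nn

class MultiSensorFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        def conv():
            return nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # multi-sensor information preprocessing layer: one 2D convolution module per sensor
        self.pre_main = conv()   # H_{MT-P}  (main sensor)
        self.pre_s1 = conv()     # H_{MT-P1} (sensor 1)
        self.pre_s2 = conv()     # H_{MT-P2} (sensor 2)
        # 3 multi-information fusion blocks (written with a single response H_{MT-F} in the claim)
        self.fusion = nn.ModuleList([conv() for _ in range(3)])
        # information post-processing layer
        self.post = nn.ReLU(inplace=True)   # H_{MT-A}

    def forward(self, i_2d_c, i_2d_c1, i_2d_c2):
        fused = self.pre_main(i_2d_c) + self.pre_s1(i_2d_c1) + self.pre_s2(i_2d_c2)
        for block in self.fusion:
            fused = block(fused)            # I_{MT-PF} ... I_{MTI-PC-End}
        return self.post(fused)             # I_{MT}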
9. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network as claimed in claim 8, characterized in that: the ideal sensor mapping network in step one is trained as follows:
the network parameters are optimized by minimizing the loss shown in the following equation:
Loss = L(f_S(M_{R1}, M_{S0}; θ_S), M_{S1})
where L is the loss function used; M_{R1}, M_{S0} and M_{S1} are, respectively, the real multispectral data of sensor 1 to be projected, the reference ideal multispectral data of sensor 0, and the ideal multispectral data of sensor 1; θ_S is the network parameter; and f_S is the network response;
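A minimal sketch of this training objective, Loss = L(f_S(M_R1, M_S0; θ_S), M_S1), assuming PyTorch and an L1 loss; mapping_net stands in for the ideal sensor mapping network f_S and is an assumed module, not part of the patent text:

import torch.nn as nn

criterion = nn.L1Loss()   # L: the loss function used (the specific loss is an assumption)

def mapping_loss(mapping_net, m_r1, m_s0, m_s1):
    # m_r1: real multispectral data of sensor 1 to be projected
    # m_s0: reference ideal multispectral data of sensor 0
    # m_s1: ideal multispectral data of sensor 1 (training target)
    pred = mapping_net(m_r1, m_s0)     # f_S(M_R1, M_S0; theta_S)
    return criterion(pred, m_s1)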
the training process of the multi-sensor spectrum reconstruction network in step two is as follows:
the network parameters are optimized by minimizing the loss shown in the following equation:
Loss_s = L(f_M(M, M_1, M_2; θ_M), H) + λ·L(f_M(M, M_1, M_2; θ_M)·R, M)
where L is the loss function used; M, M_1, M_2 and H are, respectively, the real multispectral data of sensor 1, the reference ideal multispectral data of sensor 0, the ideal multispectral data of sensor 1, and the hyperspectral data; R is the matrix corresponding to the spectral response function; f_M is the network response; θ_M is the network parameter; and λ is a scaling factor.
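A minimal sketch of this two-term objective, assuming PyTorch, an L1 loss, image tensors of shape (batch, bands, height, width) and a spectral response matrix R of shape (hyperspectral bands, multispectral bands); recon_net stands in for the reconstruction network f_M and is an assumed module:

import torch
import torch.nn as nn

criterion = nn.L1Loss()

def reconstruction_loss(recon_net, m, m1, m2, h, R, lam=0.1):
    pred_h = recon_net(m, m1, m2)                        # f_M(M, M1, M2; theta_M)
    fidelity = criterion(pred_h, h)                      # match the hyperspectral target H
    # project the reconstructed hyperspectral cube back to the multispectral bands: f_M(...) R
    pred_m = torch.einsum("bchw,ck->bkhw", pred_h, R)
    consistency = criterion(pred_m, m)                   # spectral-response consistency with M
    return fidelity + lam * consistency                  # Loss_s with lambda as scaling factor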
10. The hyperspectral remote sensing image generation method based on the multi-sensor spectrum reconstruction network as claimed in claim 9, characterized in that: in step four, the hyperspectral image obtained in step three is corrected to obtain the corrected hyperspectral image; the specific process is as follows:
the parallel component PC_n corresponding to the nth multispectral sensor is obtained by decomposing the reconstructed hyperspectral data f(M; θ) using the spectral response function R_n;
PC_n is then replaced by the corresponding substituted parallel component APC_n, obtained from the ideal multispectral image M_n;
the final output result is the corrected hyperspectral image H_RC;
where M denotes the multispectral data corresponding to the 1st to n_0-th multispectral sensors, n_0 is the number of multispectral sensors and of multispectral data sets, and n = 1, 2, ..., n_0; f(M; θ) is the reconstructed hyperspectral data output by the network; R_n is the spectral response function of the nth multispectral sensor and R_n^T is its transpose; PC_n is the parallel component obtained by decomposing the hyperspectral image using the spectral response function of the nth multispectral sensor; APC_n is the substituted parallel component; M_n is the ideal multispectral image; H_RC is the corrected hyperspectral image; and θ is the network parameter.
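The explicit formulas for PC_n, APC_n and H_RC appear as display equations in the original claim and are not reproduced above, so the following is only a hedged sketch of one plausible realization: it assumes PC_n is the projection of each reconstructed spectrum onto the row space of R_n, that APC_n replaces this projection using the ideal multispectral image M_n, and that H_RC subtracts each PC_n and adds the corresponding APC_n. All variable names are illustrative.

import numpy as np

def correct_hyperspectral(h_rec, multispectral, responses):
    # h_rec:         (bands, H, W) reconstructed hyperspectral cube f(M; theta)
    # multispectral: list of (bands_n, H, W) ideal multispectral images M_n
    # responses:     list of (bands_n, bands) spectral response matrices R_n
    bands, height, width = h_rec.shape
    h_flat = h_rec.reshape(bands, -1)                # spectra as columns
    h_rc = h_flat.copy()
    for m_n, r_n in zip(multispectral, responses):
        proj = r_n.T @ np.linalg.pinv(r_n @ r_n.T)   # R_n^T (R_n R_n^T)^{-1}
        pc_n = proj @ (r_n @ h_flat)                 # parallel component of f(M; theta)
        apc_n = proj @ m_n.reshape(r_n.shape[0], -1) # substituted parallel component from M_n
        h_rc = h_rc - pc_n + apc_n                   # replace PC_n by APC_n
    return h_rc.reshape(bands, height, width)        # H_RC under the stated assumptions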
CN202211603912.9A 2022-12-13 2022-12-13 Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network Active CN115880152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211603912.9A CN115880152B (en) 2022-12-13 2022-12-13 Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network

Publications (2)

Publication Number Publication Date
CN115880152A CN115880152A (en) 2023-03-31
CN115880152B true CN115880152B (en) 2023-11-24

Family

ID=85767388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211603912.9A Active CN115880152B (en) 2022-12-13 2022-12-13 Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network

Country Status (1)

Country Link
CN (1) CN115880152B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960345A (en) * 2018-08-08 2018-12-07 广东工业大学 A kind of fusion method of remote sensing images, system and associated component
CN111127573A (en) * 2019-12-12 2020-05-08 首都师范大学 Wide-spectrum hyperspectral image reconstruction method based on deep learning
CN112818794A (en) * 2021-01-25 2021-05-18 哈尔滨工业大学 Hyperspectral remote sensing image generation method based on progressive space-spectrum combined depth network
WO2022222352A1 (en) * 2021-04-22 2022-10-27 海南大学 Remote-sensing panchromatic and multispectral image distributed fusion method based on residual network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO337687B1 (en) * 2011-07-08 2016-06-06 Norsk Elektro Optikk As Hyperspectral camera and method of recording hyperspectral data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tianshuai Li et al. Spectral Reconstruction Network From Multispectral Images to Hyperspectral Images: A Multitemporal Case. 2022, full text. *
Spatial-spectral fusion of GF-5 and GF-1 remote sensing images based on multi-resolution analysis; Meng Xiangchao; Sun Weiwei; Ren Kai; Yang Gang; Shao Feng; Fu Randi; Journal of Remote Sensing (No. 04); full text *

Also Published As

Publication number Publication date
CN115880152A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN112818794B (en) Hyperspectral remote sensing image generation method based on progressive space-spectrum combined depth network
CN104112263B (en) The method of full-colour image and Multispectral Image Fusion based on deep neural network
CN113327218B (en) Hyperspectral and full-color image fusion method based on cascade network
CN112861729B (en) Real-time depth completion method based on pseudo-depth map guidance
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN113673590B (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
CN109858408B (en) Ultrasonic signal processing method based on self-encoder
CN112836773A (en) Hyperspectral image classification method based on global attention residual error network
CN111191735B (en) Convolutional neural network image classification method based on data difference and multi-scale features
CN112184554A (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN108734675A (en) Image recovery method based on mixing sparse prior model
CN113902622B (en) Spectrum super-resolution method based on depth priori joint attention
CN114842351A (en) Remote sensing image semantic change detection method based on twin transforms
CN115700727A (en) Spectral super-resolution reconstruction method and system based on self-attention mechanism
CN112419192A (en) Convolutional neural network-based ISMS image restoration and super-resolution reconstruction method and device
CN110956601A (en) Infrared image fusion method and device based on multi-sensor mode coefficients and computer readable storage medium
CN115880152B (en) Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN112819769A (en) Nonlinear hyperspectral image anomaly detection algorithm based on kernel function and joint dictionary
CN111680667A (en) Remote sensing image ground object classification method based on deep neural network
CN111275624B (en) Face image super-resolution reconstruction and identification method based on multi-set typical correlation analysis
CN113724307A (en) Image registration method and device based on characteristic self-calibration network and related components
CN112241765A (en) Image classification model and method based on multi-scale convolution and attention mechanism
CN115205710B (en) Double-time-phase remote sensing image change detection method combined with color correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant