CN115880152A - Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network - Google Patents
- Publication number: CN115880152A (application CN202211603912.9A)
- Authority: CN (China)
- Prior art keywords: layer, sensor, multispectral, feature, network
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention discloses a hyperspectral remote sensing image generation method based on a multi-sensor spectral reconstruction network, and relates to hyperspectral remote sensing image generation. It aims to solve the problem that, under current imaging technology, acquiring hyperspectral images containing rich spectral bands is relatively expensive. The process is as follows: 1. construct corresponding ideal multispectral images from the spectral response functions of multispectral sensors and a hyperspectral image; select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the parameters of the sensor ideal mapping network on this data set to obtain a trained sensor ideal mapping network; 2. construct and train an ideal multi-sensor spectral reconstruction network; 3. obtain the hyperspectral image corresponding to the ideal multispectral image; 4. obtain the corrected hyperspectral image. The invention belongs to the technical field of satellite remote sensing.
Description
Technical Field
The invention belongs to the technical field of satellite remote sensing, relates to deep neural networks and spectral super-resolution technology, and in particular relates to a hyperspectral image generation method.
Background
Hyperspectral images generally contain many spectral bands spanning from the infrared to the ultraviolet. Their rich spectral information makes it easier to separate objects that look similar in individual bands, and these rich spectral characteristics are widely used in a variety of tasks.
However, owing to the limitations of imaging technology, acquiring hyperspectral images containing rich spectral bands is relatively expensive. Multispectral images typically have fewer spectral bands than hyperspectral images (usually fewer than 20), but they offer low acquisition cost, rich spatial information and continuous temporal coverage, which makes them more convenient for distinguishing image detail. At the same time, because multispectral satellites far outnumber hyperspectral satellites, a large volume of usable multispectral imagery is available.
Disclosure of Invention
The invention aims to solve the problem that acquiring hyperspectral images containing rich spectral bands is relatively expensive under current imaging technology, and provides a hyperspectral remote sensing image generation method based on a multi-sensor spectral reconstruction network.
A hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network comprises the following specific processes:
Step one: construct corresponding ideal multispectral images by using the spectral response functions of the multispectral sensors and a hyperspectral image;
select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the network parameters of the sensor ideal mapping network on this training data set to obtain a trained sensor ideal mapping network;
Step two: construct corresponding multispectral images by using the spectral response functions of several different multispectral sensors and hyperspectral images, build multispectral-hyperspectral data pairs for these sensors, and use the data pairs as training samples to construct and train an ideal multi-sensor spectral reconstruction network;
Step three: acquire a multispectral image of the region to be measured with a sensor, input it into the trained sensor ideal mapping network, and obtain the corresponding ideal multispectral image;
input the ideal multispectral image into the trained ideal multi-sensor spectral reconstruction network to obtain the corresponding hyperspectral image;
Step four: correct the hyperspectral image obtained in step three to obtain the corrected hyperspectral image.
Beneficial effects of the invention:
Multispectral images acquired by different sensors sense different band ranges and can therefore jointly provide more spectral information. It is thus meaningful to learn the mapping from multi-sensor multispectral images to hyperspectral images and then generate hyperspectral images from multispectral images as a computational alternative to direct acquisition. The generated hyperspectral images simultaneously offer high spatial, spectral and temporal resolution, which makes fine-grained, high-precision interpretation of small-scale areas feasible.
With this hyperspectral image generation technology based on multi-sensor multispectral images, large numbers of multispectral images from different sensors can be converted into corresponding hyperspectral images, i.e. a large number of new hyperspectral images can be generated. The existing multi-sensor multispectral data can thereby be processed into a hyperspectral image sequence with high spatio-temporal resolution and rich spectral bands, and a trained hyperspectral classification model can then perform integrated cross-spatio-temporal analysis of the region.
Using the trained multi-sensor spectral reconstruction network and the abundant multispectral image resources of different sensors as a basis, the invention generates a large number of hyperspectral remote sensing images, providing hyperspectral data applications with plentiful usable hyperspectral remote sensing image resources of high spatial and temporal resolution.
Description of the drawings:
FIG. 1 is a diagram of the sensor ideal mapping network, where MSI0-MSIN are real multispectral images and the ideal multispectral image is obtained after projection;
FIG. 2 is a diagram of an ideal mapping network for a sensor;
FIG. 3 is a diagram of a fully connected Res architecture;
FIG. 4 is a diagram of an ideal multi-sensor spectral reconstruction network;
FIG. 5 is a diagram of a 2D spatial feature extraction network architecture;
FIG. 6 is a block diagram of compression and excitation residual blocks;
FIG. 7 is a structure diagram of the 3D spatial-spectral feature extraction network;
FIG. 8 is a diagram of a multi-sensor information fusion network architecture;
fig. 9 is a diagram of a multi-information fusion block.
Detailed description of embodiments:
Embodiment one: in this embodiment, the hyperspectral remote sensing image generation method based on a Multi-Sensor Spectral Reconstruction Network (MSSRN) specifically comprises the following steps:
Step one: first, construct corresponding ideal multispectral images by using the spectral response functions of the multispectral sensors and a hyperspectral image;
select one sensor as the main sensor, take the ideal multispectral data of the main sensor and the real multispectral data of the other sensors as a training data set, and optimize the network parameters of the sensor ideal mapping network on this training data set to obtain a trained sensor ideal mapping network, as shown in FIG. 1;
after mapping, the spectral curves generated by different sensors tend to be consistent. Through training, the sensor ideal mapping network establishes the mapping relationship between real multispectral images and ideal multispectral images;
Step two: construct corresponding multispectral images by using the spectral response functions of several different multispectral sensors and hyperspectral images, build multispectral-hyperspectral data pairs for these sensors, and use the data pairs as training samples to construct and train an ideal multi-sensor spectral reconstruction network;
Step three: acquire a multispectral image of the region to be measured with a sensor, input it into the trained sensor ideal mapping network, and obtain the corresponding ideal multispectral image;
input the ideal multispectral image into the trained ideal multi-sensor spectral reconstruction network to obtain the corresponding hyperspectral image;
Step four: correct the hyperspectral image obtained in step three to obtain the corrected hyperspectral image.
Embodiment two: this embodiment differs from embodiment one in that step one constructs the ideal multispectral images by using the spectral response functions of the multispectral sensors and a hyperspectral image; the specific process is as follows:
1) Query the spectral response function of the multispectral sensor in use and the bands of the hyperspectral image to be generated. Let L and L_M denote the continuous spectral curve (the complete spectral curve incident on the sensor) and the multispectral curve respectively, and let R be the spectral response function of the multispectral sensor. The i-th band of L_M is then related to L and R by

L_Mi = ∫ L(λ) R_i(λ) dλ

where L_Mi is the i-th band of the multispectral curve, L(λ) is the continuous spectral curve, R_i(λ) is the spectral response function corresponding to the i-th band of the multispectral curve, and λ is the wavelength;
2) Based on linear interpolation, adjust the spectral response function of the multispectral sensor into a normalized hyperspectral-to-multispectral spectral response function R_H, which represents the mapping from the hyperspectral image to the multispectral image:

L_Mi = Σ_{n=1}^{h} L_H(λ_n) R_Hi(λ_n)

where L_H denotes the hyperspectral curve, h is the number of hyperspectral bands, λ_n is the wavelength of the n-th band of the hyperspectral sensor, and R_Hi(λ_n) is the value of the normalized spectral response function R_H for the i-th band of the corresponding multispectral curve at wavelength λ_n;
3) Multiply the hyperspectral image by the normalized hyperspectral-to-multispectral spectral response function to obtain the corresponding ideal multispectral image, L_M = L_H R_H, where the bold L_M, L_H and R_H denote the matrix forms over the whole image (italic symbols refer to single-pixel calculations, bold symbols to the matrix form of the entire image).
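The construction in steps 1)-3) can be sketched as follows. This is a minimal NumPy sketch under illustrative assumptions: the function name, band counts in the test data, and the normalization convention (each multispectral band's interpolated responses summing to 1) are assumptions, not taken from the patent.

```python
import numpy as np

def ideal_multispectral(hsi, hsi_wavelengths, srf_wavelengths, srf):
    """Project a hyperspectral image onto ideal multispectral bands.

    hsi: (pixels, h) hyperspectral image; hsi_wavelengths: (h,) band centers;
    srf: (m, len(srf_wavelengths)) multispectral sensor response per band."""
    m = srf.shape[0]
    h = len(hsi_wavelengths)
    R_H = np.zeros((h, m))
    for i in range(m):
        # linear interpolation of the i-th band response onto the HSI band grid
        R_H[:, i] = np.interp(hsi_wavelengths, srf_wavelengths, srf[i])
    # normalize so each multispectral band's responses sum to 1 (assumed convention)
    col_sums = R_H.sum(axis=0, keepdims=True)
    R_H = R_H / np.where(col_sums == 0, 1.0, col_sums)
    # L_M = L_H @ R_H over the whole image
    return hsi @ R_H
```

With the normalization above, each ideal multispectral value is a convex combination of the hyperspectral values, so the output stays within the radiometric range of the input.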
Other steps and parameters are the same as those in the first embodiment.
Embodiment three: this embodiment differs from embodiments one and two in the construction of the sensor ideal mapping network in step one. The basic principle of this deep-learning-based hyperspectral remote sensing image generation method is to construct a sensor ideal mapping network based on pixel-level information fusion.
The specific network structure is shown in fig. 2, and the sensor ideal mapping network comprises a feature extraction layer and a feature fusion layer;
the sensor ideal mapping network (total system response) can be expressed as
I Map =H Fus (H QS (I M-S )×H QR (I M-R ))
Wherein H QS 、H QR And H Fus Respectively representing respective system responses of a feature extraction layer (reference data portion) for reference data, a feature extraction layer (real data portion) for real data, and a feature fusion layer, I M-S And I M-R Respectively expressed as input reference multispectral data and real multispectral data, I Map Ideal projection multispectral data output by the ideal mapping network for the sensor;
during training, the reference data are the ideal multispectral data of the selected main sensor; in subsequent testing, the real multispectral data of the main sensor are used instead;
the feature extraction layer for the reference data is structurally identical to the feature extraction layer for the real data;
the feature extraction layer consists, in sequence, of 1 fully connected layer and 3 fully connected Res layers, and extracts features from the reference data and the real data. Its system response can be expressed as:

I_QS = H_QS-3(H_QS-2(H_QS-1(H_QS-0(I_M-S))))

I_QR = H_QR-3(H_QR-2(H_QR-1(H_QR-0(I_M-R))))

where H_QS-0 and H_QR-0 are the first fully connected layers of the feature extraction layer for the reference data and for the real data respectively; H_QS-1, H_QS-2, H_QS-3, H_QR-1, H_QR-2 and H_QR-3 are the 1st to 3rd fully connected Res layers of the two feature extraction layers; I_QS are the features extracted from the reference data and I_QR the features extracted from the real data;
where each fully connected Res layer consists, in sequence, of a fully connected layer, an activation layer, a fully connected layer and an activation layer, as shown in FIG. 3;
the feature fusion layer comprises two element-wise products and four fully connected layers, and fuses the information acquired by the feature extraction layers to obtain the projected multispectral curve. Its system response can be expressed as:

I_Map = H_Fus1(I_QS × I_QR) × H_Fus2(I_QR)

where H_Fus1 and H_Fus2 denote the system responses of the fused-feature processing and the real-feature processing respectively, and I_Map is the output of the feature fusion layer;

the fused-feature processing consists, in sequence, of 2 fully connected layers;

the real-feature processing consists, in sequence, of 2 fully connected layers.
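A minimal PyTorch sketch of the sensor ideal mapping network described above. The text fixes the layer counts (one fully connected layer plus three fully connected Res layers per branch; four fully connected layers and two element-wise products in the fusion layer); the hidden width, the residual skip connection inside each FC-Res layer, and placing the output band count in the last fusion layers are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FCRes(nn.Module):
    """Fully connected Res layer: FC -> act -> FC -> act (skip connection assumed)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.PReLU(),
            nn.Linear(dim, dim), nn.PReLU(),
        )

    def forward(self, x):
        return x + self.body(x)

class SensorIdealMapping(nn.Module):
    def __init__(self, ref_bands, real_bands, out_bands, hidden=64):
        super().__init__()
        # feature extraction: 1 FC layer followed by 3 FC-Res layers per branch
        self.extract_ref = nn.Sequential(nn.Linear(ref_bands, hidden),
                                         FCRes(hidden), FCRes(hidden), FCRes(hidden))
        self.extract_real = nn.Sequential(nn.Linear(real_bands, hidden),
                                          FCRes(hidden), FCRes(hidden), FCRes(hidden))
        # feature fusion: four FC layers in two 2-layer heads, combined by products
        self.fus1 = nn.Sequential(nn.Linear(hidden, hidden), nn.Linear(hidden, out_bands))
        self.fus2 = nn.Sequential(nn.Linear(hidden, hidden), nn.Linear(hidden, out_bands))

    def forward(self, x_ref, x_real):
        q_s = self.extract_ref(x_ref)    # I_QS
        q_r = self.extract_real(x_real)  # I_QR
        # I_Map = H_Fus1(I_QS x I_QR) x H_Fus2(I_QR)
        return self.fus1(q_s * q_r) * self.fus2(q_r)
```

Per-pixel spectra go in as vectors; the two element-wise products correspond to the two "×" operations in the formulas above.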
Other steps and parameters are the same as those in the first or second embodiment.
Embodiment four: this embodiment differs from embodiments one to three in the construction of the ideal multi-sensor spectral reconstruction network in step two. The basic principle of this deep-learning-based hyperspectral remote sensing image generation method is to construct an ideal multi-sensor spectral reconstruction network based on multi-sensor feature fusion.
The ideal multi-sensor spectrum reconstruction network comprises a feature extraction layer and a feature fusion layer;
the feature extraction layer comprises a 2D space feature extraction network and a 3D space spectrum feature extraction network;
the characteristic fusion layer comprises a multi-sensor information fusion network and a spectral characteristic processing module;
the specific network structure is shown in fig. 4, and the ideal multi-sensor spectrum reconstruction network comprises a feature extraction layer and a feature fusion layer; the ideal multi-sensor spectral reconstruction network (total system response) can be expressed as:
I MSSR =H SE (H S3D (I M )+H MSR (H S2D (I M ),H S2D1 (I M1 ),H S2D2 (I M2 )))
wherein H S2D The 2D space characteristic extraction network corresponding to the characteristic extraction layer represents a main sensor (ideal multispectral data during training, ideal multispectral data during testing and real multispectral data during actual application);
H S3D representing a 3D space spectrum feature extraction network corresponding to the main sensor feature extraction layer;
H S2D1 the 2D space characteristic extraction network corresponding to the characteristic extraction layer of the sensor 1 (ideal multispectral data during training, mapping ideal multispectral data obtained by real multispectral data through an ideal mapping network during testing and actual application);
H S2D2 a 2D space characteristic extraction network corresponding to a characteristic extraction layer of the sensor 2 (ideal multispectral data during training, mapping ideal multispectral data obtained by real multispectral data through an ideal mapping network during testing and actual application);
H MSR and H SE Respectively representing a multi-sensor information fusion network and a spectrum characteristic processing module in the characteristic fusion layer;
I M representing ideal multi-spectral data, I, input to the primary sensor M1 Representing ideal multispectral data, I, of the input sensor 1 M2 Representing ideal multi-spectral data, I, of the input sensor 2 MSSR Representing the ideal multi-sensor spectral reconstruction network output.
The multispectral data are ideal multispectral data during training and testing, and real multispectral data in actual application.
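The total system response above can be wired up as follows. Every branch below is a one-layer stand-in (an assumption, for illustration only), so that only the composition I_MSSR = H_SE(H_S3D(I_M) + H_MSR(H_S2D(I_M), H_S2D1(I_M1), H_S2D2(I_M2))) is shown; the real branches are the 2D/3D extraction and fusion networks detailed in the later embodiments.

```python
import torch
import torch.nn as nn

class MSSRN(nn.Module):
    """Wiring sketch of the ideal multi-sensor spectral reconstruction network.

    m, m1, m2: band counts of the main sensor and sensors 1 and 2;
    h: number of hyperspectral bands to reconstruct."""
    def __init__(self, m, m1, m2, h):
        super().__init__()
        self.s2d  = nn.Conv2d(m,  h, 3, padding=1)  # H_S2D  (main sensor, 2D branch)
        self.s3d  = nn.Conv2d(m,  h, 3, padding=1)  # H_S3D  (main sensor, 3D branch, stand-in)
        self.s2d1 = nn.Conv2d(m1, h, 3, padding=1)  # H_S2D1 (sensor 1)
        self.s2d2 = nn.Conv2d(m2, h, 3, padding=1)  # H_S2D2 (sensor 2)
        self.msr  = nn.Conv2d(3 * h, h, 1)          # H_MSR  (multi-sensor fusion, stand-in)
        self.se   = nn.Conv2d(h, h, 1)              # H_SE   (spectral processing, stand-in)

    def forward(self, i_m, i_m1, i_m2):
        fused = self.msr(torch.cat(
            [self.s2d(i_m), self.s2d1(i_m1), self.s2d2(i_m2)], dim=1))
        return self.se(self.s3d(i_m) + fused)       # I_MSSR
```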
Other steps and parameters are the same as those in one of the first to third embodiments.
Embodiment five: this embodiment differs from embodiments one to four in that the 2D spatial feature extraction network comprises an ascending-dimension layer, a feature extraction layer, a feature stacking layer and a feature compression layer, as shown in FIG. 5;
the ascending-dimension layer processes the original multispectral image into a high-dimensional feature image of a specified dimension, typically 256 in use. It consists, in sequence, of a single 3×3 convolutional layer and a single activation layer (PReLU):

I_2D-DA = H_2D-DA(I_M)

where I_M is the input data, H_2D-DA denotes the ascending-dimension layer processing, and I_2D-DA is the output after the ascending-dimension layer. For input data of size n×m×x×y, the output after the ascending-dimension layer is of size n×256×x×y, where n is the number of multispectral images processed per batch, m is the number of bands of the multispectral image, and x and y are the height and width of the input multispectral image;
the characteristic extraction layer is used for extracting characteristics of the multispectral image at different depths; the feature extraction layer consists of 5 compression and excitation residual blocks;
the compression and excitation residual block is shown in FIG. 6;
each compression and excitation residual block consists of a single residual block and a single compression and excitation block in sequence;
each residual block comprises a 3 × 3 convolutional layer, an active layer (PRelu), a 1 × 1 convolutional layer and an active layer (PRelu) in turn;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation of each compression and excitation residual block is as follows:
the input features are fed, in sequence, through a 3×3 convolutional layer, an activation layer (PReLU), a 1×1 convolutional layer and an activation layer (PReLU) to obtain the residual-block output features; the sum of the residual-block output features and the input features is fed, in sequence, through a global pooling layer, a fully connected layer, a ReLU activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression-and-excitation-block output features; and the residual-block output features are multiplied by the compression-and-excitation-block output features to obtain the output of the compression and excitation residual block;
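The compression and excitation residual block just described can be sketched in PyTorch roughly as follows; the channel reduction ratio r inside the excitation branch is an assumption not specified in the text.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Compression and excitation residual block (2D case).

    Residual branch: 3x3 conv -> PReLU -> 1x1 conv -> PReLU.
    Excitation branch: global pooling -> FC -> ReLU -> FC -> Sigmoid,
    applied to (residual output + input) as described in the text."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 1), nn.PReLU(),
        )
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        res_out = self.res(x)          # residual-branch output features
        gate = self.se(res_out + x)    # excitation weights from (residual + input)
        # multiply residual output by per-channel excitation weights
        return res_out * gate.unsqueeze(-1).unsqueeze(-1)
```

The block preserves feature dimensions, consistent with the statement that the compression and excitation residual blocks do not change the dimensions of the features.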
the output of the compression and excitation residual block for layers 1 through N can be expressed as:
I 2D-1 =H 2D-1 (I 2D-DA )
I 2D-N =H 2D-N (...(H 2D-1 (I 2D-DA )))
where H represents the system response of the compressed and excited residual block, H 2D-1 Representing the system response of the layer 1 compressed and excited residual block, H 2D-N Representing the system response of the N layer compressed and excited residual block, I 2D-N The characteristics of the output of the N layer compression and excitation residual block are shown, the dimension of the characteristics is not changed by the compression and excitation residual block,
the characteristic stacking layer is used for stacking the characteristics of different depths of the extracted multispectral image; the stack of features stacks features of different depths together from the spectral dimension (the channel dimension of the 2D network), which can be expressed as:
I 2D-C =[I 2D-DA ,I 2D-1 ,I 2D-2 ,...,I 2D-N ]
wherein]Representing a stack of spectral dimensions, I 2D-C Features representing the final output, the spectral dimensions of the features increase after stacking,i is the number of the compression and excitation residual blocks;
the feature compression layer compresses the stacked features to obtain the hyperspectral features. It consists, in sequence, of a single 3×3 convolutional layer and a single activation layer (PReLU), expressed as:

I_2D = H_2D-D(I_2D-C)

where H_2D-D denotes the feature compression layer processing. For input data of size n×256(i+1)×x×y, the output after the feature compression layer is of size n×h×x×y, where h is the number of required hyperspectral bands.
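Putting the four parts together, a topology-level sketch of the 2D spatial feature extraction network might look like this. Each depth block below is a plain conv + PReLU stand-in for the compression and excitation residual block, and the width is a parameter (typically 256 per the text; smaller here for illustration).

```python
import torch
import torch.nn as nn

class Spatial2DNet(nn.Module):
    """Sketch of the 2D spatial feature extraction network: ascending-dimension
    layer (3x3 conv + PReLU), N depth blocks, channel-dimension stacking of all
    intermediate features, and a 3x3 conv + PReLU compression layer down to h
    hyperspectral bands. Depth blocks are simplified stand-ins for the
    compression and excitation residual blocks."""
    def __init__(self, m_bands, h_bands, width=256, n_blocks=5):
        super().__init__()
        self.up = nn.Sequential(nn.Conv2d(m_bands, width, 3, padding=1), nn.PReLU())
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.PReLU())
            for _ in range(n_blocks)])
        # stacked channels: width * (n_blocks + 1), compressed to h_bands
        self.compress = nn.Sequential(
            nn.Conv2d(width * (n_blocks + 1), h_bands, 3, padding=1), nn.PReLU())

    def forward(self, x):
        feats = [self.up(x)]             # I_2D-DA
        for blk in self.blocks:          # I_2D-1 ... I_2D-N
            feats.append(blk(feats[-1]))
        stacked = torch.cat(feats, dim=1)  # spectral (channel) dimension stacking
        return self.compress(stacked)      # (n, h, x, y)
```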
Other steps and parameters are the same as in one of the first to fourth embodiments.
Embodiment six: this embodiment differs from embodiments one to five in that the 3D spatial-spectral feature extraction network comprises 4 stacked 3D feature processing modules and one 3D feature compression module connected in sequence (the output of each 3D feature processing module is sent to the next, and the output of the fourth 3D feature processing module is sent to the 3D feature compression module), as shown in FIG. 7;
each 3D feature processing module has a specified base spectral dimension, and the spectral dimensions increase progressively from module to module (typically 10-20-40-80-160). Each stacked 3D feature processing module comprises: an ascending-dimension layer, a feature extraction layer, a feature extension layer, a 3D feature stacking layer, a summation module and a 3D spectral feature amplification layer;
the ascending-dimension layer processes the original multispectral image into a high-dimensional feature image; in use, its dimension matches the dimension specified for the module. It consists, in sequence, of a single 3×3 convolutional layer and a single activation layer (PReLU), expressed as:

I_3D-DA = H_3D-DA(I_M)

where I_M is the input data, H_3D-DA denotes the ascending-dimension layer processing, and I_3D-DA is the output after the ascending-dimension layer. The spectral feature dimension output by the ascending-dimension layer of the first 3D feature processing module is s, and that of the k-th module is s×2^(k-1), where s is the set initial 3D dimension (typically 10) and 2^(k-1) is the cumulative magnification applied by the preceding k-1 modules;
the characteristic extraction layer is used for extracting characteristics of different depths of the multispectral image; the feature extraction layer consists of 5 compression and excitation residual blocks; the compression and excitation residual blocks are shown in fig. 6, and each compression and excitation residual block is composed of a single residual block and a single compression and excitation block in turn;
each residual block comprises a 3 × 3 convolutional layer, an active layer (PRelu), a 1 × 1 convolutional layer and an active layer (PRelu) in turn;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation of each compression and excitation residual block is as follows:
the input features are fed, in sequence, through a 3×3 convolutional layer, an activation layer (PReLU), a 1×1 convolutional layer and an activation layer (PReLU) to obtain the residual-block output features; the sum of the residual-block output features and the input features is fed, in sequence, through a global pooling layer, a fully connected layer, a ReLU activation layer, a fully connected layer and a Sigmoid activation layer to obtain the compression-and-excitation-block output features; and the residual-block output features are multiplied by the compression-and-excitation-block output features to obtain the output of the compression and excitation residual block;
the output of the compression and excitation residual block for layers 1 through N can be expressed as
I 3D-1 =H 3D-1 (I 3D-DA )
I 3D-N =H 3D-N (...(H 3D-1 (I 3D-DA )))
Where H represents the system response of the compressed and excited residual block, H 3D-1 Represents the system response of the layer 1 compression and excitation residual block, H 3D-N Representing the system response of the N layer compressed and excited residual block, I 3D-N The characteristics of the output of the N layer compression and excitation residual block are shown, the dimension of the characteristics is not changed by the compression and excitation residual block,
a feature extension layer for extending the 2D depth feature into a 3D depth feature and adding a feature dimension (channel dimension of 3D network) to the original 4D data to change into 5D dataBecomes->I 3D-N-USQ I of the upscaled layer representing the feature of the N-th layer compressed and excited residual block output after feature expansion 3D-DA Becomes I by feature expansion 3D-DA-USQ ;
A 3D feature stacking layer for stacking 3D depth features; a 3D stack of features stacks features of different depths together from the feature dimension, which can be expressed as:
I_3D-C = [I_3D-DA-USQ, I_3D-1-USQ, I_3D-2-USQ, ..., I_3D-N-USQ]

where [·] denotes stacking along the feature dimension and I_3D-C denotes the final output features; stacking does not change the spectral dimension of the features, but the feature dimension increases to i+1, where i is the number of compression and excitation residual blocks;
the summing module is used for combining the depth characteristics of the upper 3D characteristic processing module;
I 3D-S =I 3D-C +I 3D-L0
wherein I 3D-L0 Features acquired by an upper 3D feature processing module; i is 3D-S Representing the features after the summation process;
the 3D spectral feature amplification layer is used for amplifying the spectral dimension of the extracted 3D depth feature; the 3D spectral feature amplification layer consists of 1 3D deconvolution layer and is used for improving the spectral dimension of data;
I 3D-L =H 3D-L (I 3D-S )
wherein H 3D-L Representing 3D spectral feature magnification layer processing; i is 3D-L Representing amplified 3D spectral features I 3D-S ;
after passing through the 4 groups of stacked 3D feature processing modules, the data are input into the 3D feature compression module;
depending on the difference between the input multispectral dimension and the output hyperspectral dimension, the 3D feature compression module comprises 1 or 2 3D convolution modules (each 3D convolution module comprises, in order, a convolution layer and an activation function) to compress the spectral dimension of the 3D features to that of the hyperspectral features:
I_3D = H_3D-D(I_3D-S)
where I_3D-S represents the feature after summation, H_3D-D represents the 3D feature compression module processing, and I_3D represents the output feature of the 3D feature compression module.
For the given input data, the 3D feature compression module produces the output, from which the redundant feature dimension is then removed; here t is the number of stacked 3D feature processing modules.
Other steps and parameters are the same as those in one of the first to fifth embodiments.
Embodiment seven: this embodiment differs from embodiments one to six in that the spectral feature processing module consists of a single compression and excitation block and is configured to perform spectrum-level adjustment on the sum of the outputs of the 2D network and the 3D network;
the single compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer.
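A minimal numpy sketch of such a compression and excitation (squeeze-and-excitation) block follows; the channel count, reduction ratio and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(x, w1, w2):
    """Global pooling -> fully connected -> ReLU -> fully connected ->
    sigmoid, then channel-wise rescaling of the input x: (channels, H, W)."""
    squeezed = x.mean(axis=(1, 2))           # global average pooling
    hidden = np.maximum(w1 @ squeezed, 0.0)  # fully connected + ReLU
    scale = sigmoid(w2 @ hidden)             # fully connected + sigmoid
    return x * scale[:, None, None]          # excitation: rescale channels

rng = np.random.default_rng(0)
channels, reduction = 8, 2                   # illustrative sizes
x = rng.standard_normal((channels, 5, 5))
w1 = rng.standard_normal((channels // reduction, channels))
w2 = rng.standard_normal((channels, channels // reduction))
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 5, 5)
```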
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Embodiment eight: this embodiment differs from embodiments one to seven in that, as shown in fig. 8, the multi-sensor information fusion network sequentially comprises 1 multi-sensor information preprocessing layer, 3 multi-information fusion blocks and 1 information post-processing layer;
the multi-sensor information preprocessing layer comprises 3 2D convolution modules that process the multi-sensor feature blocks acquired by the 2D spatial feature extraction networks into features with similar semantics; it operates on the intermediate features of the 2D spatial feature extraction networks (the results of the feature stacking layers), denoted I_2D-C, I_2D-C1 and I_2D-C2 for the respective sensors, which are processed as follows:
I_MT-P = H_MT-P(I_2D-C)
I_MT-P1 = H_MT-P1(I_2D-C1)
I_MT-P2 = H_MT-P2(I_2D-C2)
where I_2D-C is the stacked feature obtained at the feature stacking layer of the 2D spatial feature extraction network for the multispectral data input by the main sensor, H_MT-P is the system response of the 2D convolution module corresponding to the main sensor (each 2D convolution module comprises, in order, a convolution layer and an activation function), and I_MT-P is the output feature for the main sensor; I_2D-C1 is the stacked feature for the multispectral data input by sensor 1, H_MT-P1 is the system response of the 2D convolution module corresponding to sensor 1, and I_MT-P1 is the output feature for sensor 1; I_2D-C2 is the stacked feature for the multispectral data input by sensor 2, H_MT-P2 is the system response of the 2D convolution module corresponding to sensor 2, and I_MT-P2 is the output feature for sensor 2;
all three outputs have the same dimensions; the multi-information fusion blocks then perform a preliminary fusion of the three output features:
I_MT-PF = H_MT-F(I_MT-P + I_MT-P1 + I_MT-P2)
I_MTI-PC-End = H_MT-F(H_MT-F(I_MT-PF))
where I_MT-PF is the output after a single multi-information fusion block, I_MTI-PC-End is the output after the 3 multi-information fusion blocks, and H_MT-F is the response function of a multi-information fusion block;
the information post-processing layer comprises an activation layer and outputs the combination block produced by the last multi-information fusion block after post-processing:
I_MT = H_MT-A(I_MTI-PC-End)
where I_MTI-PC-End is the combination block output by the last multi-information fusion block and H_MT-A is the response of the activation function.
The multi-information fusion block comprises a feature processing module, a feature fusion module and a feature output module;
the multi-information fusion block includes five inputs and five outputs, as shown in fig. 9, the five inputs are information block 1, information block 2, information block 3, fusion block, and combination block, respectively, and the five outputs are information block 1, information block 2, information block 3, fusion block, and combination block, respectively;
the three information blocks of the first multi-information fusion block are I_MT-P, I_MT-P1 and I_MT-P2 respectively, the fusion block is I_MT-PF, and the combination block is a data block of consistent dimensions filled with 0;
five inputs of other multi-information fusion blocks are respectively five outputs of the previous stage;
the feature processing module comprises 3 compression and excitation residual blocks, which process information block 1, information block 2 and information block 3 respectively:
I_MTI-P1 = H_MTI-P1(I_MTI-Input1)
I_MTI-P2 = H_MTI-P2(I_MTI-Input2)
I_MTI-P3 = H_MTI-P3(I_MTI-Input3)
where I_MTI-Input1 is information block 1 and I_MTI-P1 its output; I_MTI-Input2 is information block 2 and I_MTI-P2 its output; I_MTI-Input3 is information block 3 and I_MTI-P3 its output; H_MTI-P1, H_MTI-P2 and H_MTI-P3 are compression and excitation blocks;
the compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the feature fusion module fuses the multi-sensor features:
I_MTI-PF = H_MTI-PF(H_MTI-P1(I_MTI-Input1) + H_MTI-P2(I_MTI-Input2) + H_MTI-P3(I_MTI-Input3) + H_MTI-PFF(I_MTI-InputF))
where H_MTI-PFF, H_MTI-P1, H_MTI-P2 and H_MTI-P3 are the system responses of the convolution blocks processing the fusion block, information block 1, information block 2 and information block 3 respectively; H_MTI-PF is the system response of the activation function applied after summing the outputs of the above 4 blocks; I_MTI-PF is the fusion result and I_MTI-InputF is the fusion block;
the feature output module comprises a single convolution block (the convolution block comprises, in order, a convolution layer and an activation function) and an addition layer, which further fuse the fused feature with the previous-stage feature:
I_MTI-PC = I_MTI-InputC + H_MTI-PC(I_MTI-PF)
where I_MTI-InputC is the input combination block, I_MTI-PC is the output of the feature output module, and H_MTI-PC is the single convolution block.
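The data flow of one multi-information fusion block can be sketched as below; the per-channel linear maps stand in for the convolution and compression-and-excitation blocks, and all sizes are illustrative assumptions:

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def conv1x1(x, w):
    # Per-pixel linear map over channels (a 1x1 convolution); x: (C, H, W).
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(1)
c, h, w = 4, 6, 6
info_blocks = [rng.standard_normal((c, h, w)) for _ in range(3)]  # blocks 1-3
fusion_in = rng.standard_normal((c, h, w))  # fusion block input
combo_in = np.zeros((c, h, w))              # first-stage combination block (zero-filled)
weights = [rng.standard_normal((c, c)) for _ in range(5)]

# Feature fusion: activation over the sum of the four processed blocks.
fused = relu(sum(conv1x1(b, wt)
                 for b, wt in zip(info_blocks + [fusion_in], weights[:4])))

# Feature output module: convolution block plus residual addition layer.
combo_out = combo_in + relu(conv1x1(fused, weights[4]))
print(combo_out.shape)  # (4, 6, 6)
```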
Other steps and parameters are the same as those in one of the first to seventh embodiments.
Embodiment nine: this embodiment differs from embodiments one to eight in the training process of the sensor ideal mapping network in step one; the specific process is as follows:
optimizing network parameters by minimizing loss as shown in the following equation
Loss = L(f_S(M_R1, M_S0; θ_S), M_S1)
where L is the loss function used; M_R1, M_S0 and M_S1 are the real multispectral data of sensor 1 to be projected, the ideal multispectral data of the reference sensor 0, and the ideal multispectral data of sensor 1, respectively; θ_S is the network parameter and f_S is the network response;
the training process of the multi-sensor spectrum reconstruction network in the second step is as follows:
optimizing network parameters by minimizing loss as shown in the following equation
Loss_s = L(f_M(M, M_1, M_2; θ_M), H) + λL(f_M(M, M_1, M_2; θ_M)R, M)
where L is the loss function used; M, M_1, M_2 and H are the real multispectral data of sensor 1, the ideal multispectral data of the reference sensor 0, the ideal multispectral data of sensor 1, and the hyperspectral data, respectively; R is the matrix corresponding to the spectral response function; f_M is the network response; θ_M is the network parameter; λ is a scaling factor, typically set to 0.25.
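The two-term loss above can be sketched as follows; the choice of L as a mean absolute (L1) error, and the random stand-in data, are assumptions for illustration (the patent does not specify the form of L):

```python
import numpy as np

def l1(a, b):
    # Assumed instance of the loss function L: mean absolute error.
    return float(np.abs(a - b).mean())

rng = np.random.default_rng(2)
pixels, bands_hs, bands_ms = 100, 77, 4

H_true = rng.random((pixels, bands_hs))   # hyperspectral data H
H_pred = rng.random((pixels, bands_hs))   # network output f_M(M, M1, M2; theta_M)
M = rng.random((pixels, bands_ms))        # multispectral data compared against
R = rng.random((bands_hs, bands_ms))
R /= R.sum(axis=0, keepdims=True)         # spectral-response matrix, normalised

lam = 0.25                                # scaling factor lambda from the text
loss = l1(H_pred, H_true) + lam * l1(H_pred @ R, M)
print(loss >= 0.0)  # True
```

The second term projects the reconstructed hyperspectral data back through the spectral response matrix and penalises its mismatch with the multispectral input, which is what ties the reconstruction to the observed bands.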
Other steps and parameters are the same as those in one to eight of the embodiments.
Embodiment ten: this embodiment differs from embodiments one to nine in that, in step four, the hyperspectral image obtained in step three is corrected to obtain a corrected hyperspectral image; the specific process is as follows:
the parallel component PC_n corresponding to the nth multispectral sensor is computed, replaced with the corresponding parallel component APC_n, and the final output result H_RC is obtained;
where M_1 to M_n0 are the multispectral data corresponding to the respective multispectral sensors, n_0 is the number of multispectral sensors and multispectral data, and n = 1, 2, ..., n_0; f(M; θ) is the reconstructed hyperspectral data output by the network; R_n is the spectral response function of the nth multispectral sensor and R_n^T is its transpose; PC_n is the parallel component obtained by decomposing the hyperspectral image using the spectral response function of the nth multispectral sensor; APC_n is the parallel component after replacement (computed from the ideal multispectral image and the spectral response function of the multispectral sensor); M_n is the ideal multispectral image; H_RC is the corrected hyperspectral image; θ is the network parameter.
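The decomposition formulas themselves appear as images in the patent and are not recoverable here; one plausible reading, sketched below, takes the parallel component as the orthogonal projection of each spectrum onto the subspace spanned by the sensor's spectral response and replaces it with the component implied by the ideal multispectral image. This is an assumed realization, not the patent's exact formula:

```python
import numpy as np

rng = np.random.default_rng(3)
pixels, bands_hs, bands_ms = 50, 77, 4

H = rng.random((pixels, bands_hs))        # reconstructed hyperspectral f(M; theta)
R = rng.random((bands_hs, bands_ms))      # spectral response of sensor n
M_ideal = H @ R + 0.01 * rng.standard_normal((pixels, bands_ms))

# Orthogonal projector onto the column space of R (assumed decomposition).
P = R @ np.linalg.pinv(R)                 # (bands_hs, bands_hs)

PC = H @ P                                # parallel component PC_n of H
APC = M_ideal @ np.linalg.pinv(R)         # replacement APC_n from ideal data
H_rc = H - PC + APC                       # corrected hyperspectral image H_RC

# After correction, projecting H_rc back through R reproduces M_ideal.
print(np.allclose(H_rc @ R, M_ideal))  # True
```

Under this reading, the correction forces the corrected hyperspectral image to be exactly consistent with the ideal multispectral measurements while leaving the component of the spectrum outside the sensor's subspace untouched.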
Other steps and parameters are the same as those in one of the first to ninth embodiments.
The following examples were employed to demonstrate the beneficial effects of the present invention:
3 hyperspectral remote sensing images (0501, 0628, 0724) acquired by the Ziyuan-1 02D (ZY1-02D) satellite hyperspectral sensor, together with 3 groups of images formed from 12 multispectral remote sensing images acquired at similar times by the Gaofen-1 (GF-1) wide-field sensor, the Gaofen-6 (GF-6) wide-field sensor, the Sentinel-2 sensor and the ZY1-02D multispectral sensor, were used as the data set. The ZY1-02D hyperspectral data have a spatial resolution of 30 meters; the spectral resolution is about 10 nm over 76 bands in the visible and near-infrared range (0.39-1.04 μm) and about 20 nm over 80 bands in the short-wave infrared range (1-2.5 μm). 77 noise-free bands covering 390-1040 nm were selected as the hyperspectral bands to be generated. Two groups of images were used for training and the remaining group for testing. During training, the hyperspectral images, the down-sampled multispectral images and the real multispectral images were fed into the network as image blocks with a spatial size of 16 × 16; training used the ADAM optimizer with an initial learning rate of 0.0005, decayed by 10% per epoch down to a minimum of 0.00002. During testing, the data were first input into the sensor ideal mapping network and then into the ideal multi-sensor spectral reconstruction network. The similarity between the generated and the real hyperspectral images was measured by the root mean square error (RMSE), the mean relative absolute error (MRAE) and the spectral angle mapper (SAM); the smaller the RMSE, MRAE and SAM, the higher the reconstruction accuracy.
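The learning-rate schedule used in training (a 10% reduction per epoch with a floor of 0.00002, here assumed to be multiplicative) can be written as:

```python
def learning_rate(epoch, lr0=0.0005, decay=0.9, lr_min=0.00002):
    """ADAM learning rate at a given epoch: 10% decay per epoch, floored."""
    return max(lr0 * decay ** epoch, lr_min)

print(learning_rate(0))    # 0.0005
print(learning_rate(100))  # 2e-05
```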
TABLE 1 similarity measurement on four sets of data
As can be seen from Table 1, compared with single-sensor spectral reconstruction, the hyperspectral remote sensing image generation method based on the multi-sensor spectral reconstruction network achieves smaller RMSE, MRAE and SAM, showing that after combining multi-sensor information the hyperspectral images generated by the method are more similar to the real hyperspectral images, and that the spectral features of the hyperspectral images can be effectively generated.
In summary, to obtain hyperspectral remote sensing images conveniently and at low cost, this embodiment provides a hyperspectral remote sensing image generation method based on a multi-sensor spectral reconstruction network, which can effectively generate corresponding hyperspectral remote sensing images from the multispectral images of different sensors.
Claims (10)
1. A hyperspectral remote sensing image generation method based on a multi-sensor spectrum reconstruction network is characterized by comprising the following steps: the method comprises the following specific processes:
firstly, constructing a corresponding ideal multispectral image by using a spectral response function of a multispectral sensor and a hyperspectral image;
selecting a sensor as a main sensor, taking ideal multispectral data of the main sensor and real multispectral data of other sensors as a training data set, and optimizing network parameters of an ideal sensor mapping network through the training data set to obtain a trained ideal sensor mapping network;
constructing corresponding multispectral images by using a plurality of different multispectral sensor spectral response functions and hyperspectral images, constructing multispectral-hyperspectral data pairs of the multispectral sensors, taking the multispectral-hyperspectral data pairs as training samples, and constructing and training an ideal multisensor spectral reconstruction network;
acquiring a multispectral image of the region to be detected through a sensor, inputting the multispectral image of the region to be detected into a trained sensor ideal mapping network, and acquiring an ideal multispectral image corresponding to the multispectral image of the region to be detected;
inputting the ideal multispectral image into an ideal multisensor spectrum reconstruction network after training to obtain a hyperspectral image corresponding to the ideal multispectral image;
and step four, correcting the hyperspectral image obtained in the step three to obtain a corrected hyperspectral image.
2. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 1 is characterized by comprising the following steps of: in the first step, a corresponding ideal multispectral image is constructed by using a multispectral sensor spectral response function and a hyperspectral image; the specific process is as follows:
1) Let L and L_M be the continuous spectral curve and the multispectral curve respectively, and let R be the spectral response function of the multispectral sensor; the relationship between the ith band of L_M and L and R can then be expressed as L_Mi = ∫ L(λ) R_i(λ) dλ
where L_Mi is the ith band of the multispectral curve, L(λ) is the continuous spectral curve, R_i(λ) is the spectral response function corresponding to the ith band of the multispectral curve, and λ is the wavelength;
2) Based on the principle of linear interpolation, the spectral response function of the multispectral sensor is adjusted into a normalized hyperspectral-to-multispectral spectral response function R_H, which represents the mapping from the hyperspectral image to the multispectral image,
where L_H represents the hyperspectral curve, h is the number of hyperspectral bands, λ_n is the wavelength of the nth band of the hyperspectral sensor, and R_Hi(λ_n) is the value of the normalized spectral response function R_H for the ith band of the corresponding multispectral curve at wavelength λ_n;
3) The hyperspectral image is multiplied by the normalized hyperspectral-to-multispectral spectral response function to obtain the corresponding ideal multispectral image, L_M = L_H R_H, where L_M, L_H and R_H here denote the matrix forms of the multispectral curve, the hyperspectral image and the normalized spectral response function, respectively.
3. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 2 is characterized in that: the sensor ideal mapping network in the first step comprises a feature extraction layer and a feature fusion layer;
the sensor ideal mapping network can be expressed as
I_Map = H_Fus(H_QS(I_M-S) × H_QR(I_M-R))
where H_QS, H_QR and H_Fus represent the system responses of the feature extraction layer for the reference data, the feature extraction layer for the real data and the feature fusion layer respectively; I_M-S and I_M-R denote the input reference multispectral data and the real multispectral data respectively; I_Map is the mapped multispectral data output by the sensor ideal mapping network;
the structure of the feature extraction layer aiming at the reference data is consistent with that of the feature extraction layer aiming at the real data;
the feature extraction layer sequentially comprises 1 fully connected layer and 3 fully connected Res layers and performs feature extraction on the reference data and the real data; it can be expressed as:
I_QS = H_QS-3(H_QS-2(H_QS-1(H_QS-0(I_M-S))))
I_QR = H_QR-3(H_QR-2(H_QR-1(H_QR-0(I_M-R))))
where H_QS-0 and H_QR-0 are the 1st fully connected layers of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; H_QS-1, H_QS-2, H_QS-3, H_QR-1, H_QR-2 and H_QR-3 are the 1st to 3rd fully connected Res layers of the feature extraction layer for the reference data and of the feature extraction layer for the real data, respectively; I_QS is the feature extracted from the reference data and I_QR the feature extracted from the real data;
each fully connected Res layer consists, in order, of a fully connected layer, an activation layer, a fully connected layer and an activation layer;
the feature fusion layer can be expressed as:
I_Map = H_Fus1(I_QS × I_QR) × H_Fus2(I_QR)
where H_Fus1 and H_Fus2 represent the system responses of the fusion-feature processing and of the real-feature processing respectively, and I_Map is the feature output by the feature fusion layer;
the fusion-feature processing consists, in order, of 2 fully connected layers;
the real-feature processing consists, in order, of 2 fully connected layers.
4. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 3, characterized by comprising the following steps: the ideal multi-sensor spectrum reconstruction network in the second step comprises a feature extraction layer and a feature fusion layer;
the feature extraction layer comprises a 2D space feature extraction network and a 3D space spectrum feature extraction network;
the characteristic fusion layer comprises a multi-sensor information fusion network and a spectrum characteristic processing module;
the ideal multi-sensor spectral reconstruction network can be expressed as:
I_MSSR = H_SE(H_S3D(I_M) + H_MSR(H_S2D(I_M), H_S2D1(I_M1), H_S2D2(I_M2)))
where H_S2D represents the 2D spatial feature extraction network corresponding to the feature extraction layer of the main sensor;
H_S3D represents the 3D spatial-spectral feature extraction network corresponding to the feature extraction layer of the main sensor;
H_S2D1 represents the 2D spatial feature extraction network corresponding to the feature extraction layer of sensor 1;
H_S2D2 represents the 2D spatial feature extraction network corresponding to the feature extraction layer of sensor 2;
H_MSR and H_SE represent the multi-sensor information fusion network and the spectral feature processing module in the feature fusion layer, respectively;
I_M represents the ideal multispectral data input for the main sensor, I_M1 the ideal multispectral data input for sensor 1, I_M2 the ideal multispectral data input for sensor 2, and I_MSSR the output of the ideal multi-sensor spectral reconstruction network.
5. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 4 is characterized in that: the 2D spatial feature extraction network includes: the device comprises a dimensionality increasing layer, a feature extraction layer, a feature stacking layer and a feature compression layer;
the dimensionality increasing layer sequentially consists of a single 3 multiplied by 3 convolution layer and a single activation layer;
I_2D-DA = H_2D-DA(I_M)
where I_M is the input data, I_2D-DA is the output of the dimension-raising layer, and H_2D-DA is the dimension-raising layer processing;
the feature extraction layer consists of 5 compression and excitation residual blocks;
each compression and excitation residual block consists of a single residual block and a single compression and excitation block in sequence;
each residual block comprises a 3 × 3 convolutional layer, an active layer, a 1 × 1 convolutional layer and an active layer in sequence;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation of each compression and excitation residual block is as follows:
inputting the input characteristics into a 3 x 3 convolutional layer, an active layer, a 1 x 1 convolutional layer and an active layer in sequence to obtain output characteristics of a residual block, inputting the result of adding the output characteristics of the residual block and the input characteristics into a global pooling layer, a full connection layer, a Relu active layer, a full connection layer and a Sigmoid active layer in sequence to obtain output characteristics of a compression and excitation block, and multiplying the output characteristics of the residual block and the output characteristics of the compression and excitation block to obtain the output result of the compression and excitation residual block;
the output of the compression and excitation residual block for layers 1 through N can be expressed as:
I_2D-1 = H_2D-1(I_2D-DA)
I_2D-N = H_2D-N(...(H_2D-1(I_2D-DA)))
where H represents the system response of a compression and excitation residual block, H_2D-1 represents the system response of the layer-1 compression and excitation residual block, H_2D-N represents the system response of the layer-N compression and excitation residual block, and I_2D-N represents the feature output by the layer-N compression and excitation residual block;
the feature stacking layer can be expressed as:
I_2D-C = [I_2D-DA, I_2D-1, I_2D-2, ..., I_2D-N]
where [ ] represents stacking along the spectral dimension and I_2D-C represents the final output feature;
the feature compression layer consists, in order, of a single 3 × 3 convolution layer and a single activation layer, and is expressed as:
I_2D = H_2D-D(I_2D-C)
where H_2D-D represents the feature compression layer processing.
6. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 5 is characterized in that: the 3D space spectrum feature extraction network comprises 4 groups of 3D feature processing modules and a 3D feature compression module;
each stacked 3D feature processing module includes: the system comprises a dimensionality increasing layer, a feature extraction layer, a feature expansion layer, a 3D feature stacking layer, a summation module and a 3D spectral feature amplification layer;
the dimensionality increasing layer sequentially consists of a single 3 multiplied by 3 convolution layer and a single activation layer; expressed as:
I_3D-DA = H_3D-DA(I_M)
where I_M is the input data, H_3D-DA is the dimension-raising layer processing, and I_3D-DA is the output of the dimension-raising layer;
the feature extraction layer consists of 5 compression and excitation residual blocks; each compression and excitation residual block consists of a single residual block and a single compression and excitation block in sequence;
each residual block comprises a 3 × 3 convolutional layer, an active layer, a 1 × 1 convolutional layer and an active layer in sequence;
each compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer;
the connection relation of each compression and excitation residual block is as follows:
inputting the input characteristics into a 3 x 3 convolutional layer, an active layer, a 1 x 1 convolutional layer and an active layer in sequence to obtain output characteristics of a residual block, inputting the result of adding the output characteristics of the residual block and the input characteristics into a global pooling layer, a full connection layer, a Relu active layer, a full connection layer and a Sigmoid active layer in sequence to obtain output characteristics of a compression and excitation block, and multiplying the output characteristics of the residual block and the output characteristics of the compression and excitation block to obtain the output result of the compression and excitation residual block;
the output of the compression and excitation residual block for layers 1 through N can be expressed as
I_3D-1 = H_3D-1(I_3D-DA)
I_3D-N = H_3D-N(...(H_3D-1(I_3D-DA)))
where H represents the system response of a compression and excitation residual block, H_3D-1 represents the system response of the layer-1 compression and excitation residual block, H_3D-N represents the system response of the layer-N compression and excitation residual block, and I_3D-N represents the feature output by the layer-N compression and excitation residual block;
a feature extension layer extends the 2D depth features into 3D depth features, adding a feature dimension so that the original 4-dimensional data becomes 5-dimensional data; I_3D-N-USQ represents the feature output by the layer-N compression and excitation residual block after feature extension, and I_3D-DA becomes I_3D-DA-USQ after feature extension;
the 3D feature stacking layer can be expressed as:
I_3D-C = [I_3D-DA-USQ, I_3D-1-USQ, I_3D-2-USQ, ..., I_3D-N-USQ]
where [ ] represents stacking along the feature dimension and I_3D-C represents the final output feature;
the summation module combines the depth features of the preceding 3D feature processing module:
I_3D-S = I_3D-C + I_3D-L0
where I_3D-L0 is the feature acquired by the preceding 3D feature processing module and I_3D-S represents the feature after summation;
the 3D spectral feature amplification layer consists of 1 3D deconvolution layer and raises the spectral dimension of the data:
I_3D-L = H_3D-L(I_3D-S)
where H_3D-L represents the 3D spectral feature amplification layer processing and I_3D-L is the amplified 3D spectral feature obtained from I_3D-S;
after passing through the 4 groups of stacked 3D feature processing modules, the data are input into the 3D feature compression module;
the 3D feature compression module comprises 1 or 2 3D convolution modules to compress the spectral dimension of the 3D features to that of the hyperspectral features:
I_3D = H_3D-D(I_3D-S)
where I_3D-S represents the feature after summation, H_3D-D represents the 3D feature compression module processing, and I_3D represents the output feature of the 3D feature compression module.
7. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 6 is characterized in that: the spectral feature processing module consists of a single compression and excitation block;
the single compression and excitation block sequentially comprises a global pooling layer, a full connection layer, a Relu activation layer, a full connection layer and a Sigmoid activation layer.
8. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 7 is characterized in that: the multi-sensor information fusion network sequentially comprises 1 multi-sensor information preprocessing layer, 3 multi-information fusion blocks and 1 information post-processing layer;
the multi-sensor information preprocessing layer operates on the intermediate features of the 2D spatial feature extraction networks, denoted I_2D-C, I_2D-C1 and I_2D-C2 for the respective sensors, which are processed as follows:
I_MT-P = H_MT-P(I_2D-C)
I_MT-P1 = H_MT-P1(I_2D-C1)
I_MT-P2 = H_MT-P2(I_2D-C2)
where I_2D-C is the stacked feature obtained at the feature stacking layer of the 2D spatial feature extraction network for the multispectral data input by the main sensor, H_MT-P is the system response of the 2D convolution module corresponding to the main sensor, and I_MT-P is the output feature for the main sensor; I_2D-C1 is the stacked feature for the multispectral data input by sensor 1, H_MT-P1 is the system response of the 2D convolution module corresponding to sensor 1, and I_MT-P1 is the output feature for sensor 1; I_2D-C2 is the stacked feature for the multispectral data input by sensor 2, H_MT-P2 is the system response of the 2D convolution module corresponding to sensor 2, and I_MT-P2 is the output feature for sensor 2;
the multi-information fusion block performs preliminary fusion on the three output characteristics:
I_MT-PF = H_MT-F(I_MT-P + I_MT-P1 + I_MT-P2)
I_MTI-PC-End = H_MT-F(H_MT-F(I_MT-PF))
where I_MT-PF is the output after a single multi-information fusion block, I_MTI-PC-End is the output after the 3 multi-information fusion blocks, and H_MT-F is the response function of a multi-information fusion block;
the information post-processing layer comprises an activation layer and outputs the combination block produced by the last multi-information fusion block after post-processing:
I_MT = H_MT-A(I_MTI-PC-End)
where I_MTI-PC-End is the combination block output by the last multi-information fusion block and H_MT-A is the response of the activation function.
9. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 8, characterized in that the training process of the sensor ideal mapping network in step one is as follows:
optimizing network parameters by minimizing loss as shown in the following equation
Loss = L(f_S(M_R1, M_S0; θ_S), M_S1)
where L is the loss function used; M_R1, M_S0 and M_S1 are the real multispectral data of sensor 1 to be projected, the ideal multispectral data of the reference sensor 0, and the ideal multispectral data of sensor 1, respectively; θ_S is the network parameter and f_S is the network response;
the training process of the multi-sensor spectrum reconstruction network in the second step is as follows:
optimizing network parameters by minimizing loss as shown in the following equation
Loss s =L(f M (M,M 1 ,M 2 ;θ M ),H)+λL(f M (M,M 1 ,M 2 ;θ M )R,M)
where L is the loss function used; M, M_1, M_2 and H are, respectively, the real multispectral data of sensor 1, the reference ideal multispectral data of sensor 0, the ideal multispectral data of sensor 1, and the hyperspectral data; R is the matrix corresponding to the spectral response function, f_M is the network response, θ_M is the network parameter, and λ is a scaling factor.
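The two-term loss can be sketched as follows; the choice of mean absolute error for the unspecified L and the flattened pixels-by-bands shapes are assumptions, with `H_pred` standing in for f_M(M, M_1, M_2; θ_M):

```python
import numpy as np

def reconstruction_loss(H_pred, H, R, M, lam=0.1):
    """Loss_s = L(H_pred, H) + lambda * L(H_pred @ R, M).
    H_pred, H: (pixels, hyperspectral bands); R: (hyperspectral bands,
    multispectral bands); M: (pixels, multispectral bands)."""
    fidelity = np.abs(H_pred - H).mean()       # match the hyperspectral target
    spectral = np.abs(H_pred @ R - M).mean()   # spectral-response consistency
    return fidelity + lam * spectral

rng = np.random.default_rng(1)
H = rng.random((10, 20))          # hyperspectral target, 20 bands
R = rng.random((20, 4))           # spectral response matrix, 4 MS bands
M = H @ R                         # consistent multispectral observation
loss = reconstruction_loss(H, H, R, M)  # zero when both terms are satisfied
```

The second term penalizes reconstructions whose simulated multispectral projection f_M(...)R drifts from the observed multispectral data M.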
10. The method for generating the hyperspectral remote sensing image based on the multi-sensor spectral reconstruction network according to claim 9 is characterized in that: in the fourth step, the hyperspectral image obtained in the third step is corrected to obtain a corrected hyperspectral image; the specific process is as follows:
the parallel component corresponding to the nth multispectral sensor is
Replacement by corresponding parallel components
The final output result is
wherein M_1 to M_n0 are the multispectral data corresponding to the multispectral sensors, M = (M_1, M_2, ..., M_n0), n_0 is the number of multispectral sensors and of multispectral data, and n = 1, 2, …, n_0; f(M; θ) is the reconstructed hyperspectral data output by the network, R_n is the spectral response function of the nth multispectral sensor, R_n^T is the transpose of R_n, PC_n is the parallel component obtained by decomposing the hyperspectral image using the spectral response function of the nth multispectral sensor, APC_n is the substituted parallel component, M_n is the ideal multispectral image, H_RC is the corrected hyperspectral image, and θ is a network parameter.
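A minimal sketch of this correction step, under an assumed decomposition: the claim does not spell out how PC_n is obtained from R_n, so a least-squares projection onto the columns of R_n, R_n(R_n^T R_n)^{-1} R_n^T, is used here as a stand-in; shapes follow the f(M; θ)R convention of the step-two loss:

```python
import numpy as np

def parallel_correct(H_rec, Ms, Rs):
    """For each sensor n, the parallel component PC_n of the reconstructed
    spectra along the columns of R_n is swapped for the component APC_n
    implied by the ideal multispectral image M_n. The projection form is
    an assumption, not taken from the claim."""
    H_rc = H_rec.copy()                    # (pixels, hyperspectral bands)
    for M_n, R_n in zip(Ms, Rs):           # R_n: (B_h, b_n); M_n: (pixels, b_n)
        G = np.linalg.inv(R_n.T @ R_n)     # (b_n, b_n) inverse Gram matrix
        PC_n = H_rc @ R_n @ G @ R_n.T      # parallel component of current spectra
        APC_n = M_n @ G @ R_n.T            # component derived from ideal M_n
        H_rc = H_rc - PC_n + APC_n         # substitute the parallel part
    return H_rc

rng = np.random.default_rng(0)
H = rng.random((5, 6))                     # reconstructed cube, 6 bands
R = rng.random((6, 2))                     # one sensor's response matrix
H_RC = parallel_correct(H, [H @ R], [R])   # ideal M_n already consistent
```

When the ideal multispectral image already equals f(M; θ)R_n, the substitution is a no-op; otherwise the component of the spectra visible to sensor n is corrected toward M_n while the orthogonal remainder is left untouched.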
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211603912.9A CN115880152B (en) | 2022-12-13 | 2022-12-13 | Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115880152A true CN115880152A (en) | 2023-03-31 |
CN115880152B CN115880152B (en) | 2023-11-24 |
Family
ID=85767388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211603912.9A Active CN115880152B (en) | 2022-12-13 | 2022-12-13 | Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115880152B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140293062A1 (en) * | 2011-07-08 | 2014-10-02 | Norsk Elektro Optikk As | Hyperspectral Camera and Method for Acquiring Hyperspectral Data |
CN108960345A (en) * | 2018-08-08 | 2018-12-07 | 广东工业大学 | A kind of fusion method of remote sensing images, system and associated component |
CN111127573A (en) * | 2019-12-12 | 2020-05-08 | 首都师范大学 | Wide-spectrum hyperspectral image reconstruction method based on deep learning |
CN112818794A (en) * | 2021-01-25 | 2021-05-18 | 哈尔滨工业大学 | Hyperspectral remote sensing image generation method based on progressive space-spectrum combined depth network |
WO2022222352A1 (en) * | 2021-04-22 | 2022-10-27 | 海南大学 | Remote-sensing panchromatic and multispectral image distributed fusion method based on residual network |
Non-Patent Citations (2)
Title |
---|
TIANSHUAI LI et al., "Spectral Reconstruction Network From Multispectral Images to Hyperspectral Images: A Multitemporal Case" * |
MENG Xiangchao; SUN Weiwei; REN Kai; YANG Gang; SHAO Feng; FU Randi: "Spatial-spectral fusion of GF-5 and GF-1 remote sensing images based on multiresolution analysis", Journal of Remote Sensing (遥感学报), no. 04 * |
Also Published As
Publication number | Publication date |
---|---|
CN115880152B (en) | 2023-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112818794B (en) | Hyperspectral remote sensing image generation method based on progressive space-spectrum combined depth network | |
CN110782462B (en) | Semantic segmentation method based on double-flow feature fusion | |
CN104112263B (en) | The method of full-colour image and Multispectral Image Fusion based on deep neural network | |
CN111126256B (en) | Hyperspectral image classification method based on self-adaptive space-spectrum multi-scale network | |
CN111080567A (en) | Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network | |
CN113327218B (en) | Hyperspectral and full-color image fusion method based on cascade network | |
CN113673590B (en) | Rain removing method, system and medium based on multi-scale hourglass dense connection network | |
CN112861729B (en) | Real-time depth completion method based on pseudo-depth map guidance | |
CN110288524B (en) | Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism | |
CN111191735B (en) | Convolutional neural network image classification method based on data difference and multi-scale features | |
CN111429466A (en) | Space-based crowd counting and density estimation method based on multi-scale information fusion network | |
CN111429349A (en) | Hyperspectral image super-resolution method based on spectrum constraint countermeasure network | |
CN115700727A (en) | Spectral super-resolution reconstruction method and system based on self-attention mechanism | |
CN113902622B (en) | Spectrum super-resolution method based on depth priori joint attention | |
CN112419192B (en) | Convolutional neural network-based ISMS image restoration and super-resolution reconstruction method and device | |
CN114821261A (en) | Image fusion algorithm | |
CN117252761A (en) | Cross-sensor remote sensing image super-resolution enhancement method | |
CN115345866A (en) | Method for extracting buildings from remote sensing images, electronic equipment and storage medium | |
CN115880152B (en) | Hyperspectral remote sensing image generation method based on multi-sensor spectrum reconstruction network | |
CN115909077A (en) | Hyperspectral image change detection method based on unsupervised spectrum unmixing neural network | |
CN113724307B (en) | Image registration method and device based on characteristic self-calibration network and related components | |
Li et al. | Hyperspectral and Panchromatic images Fusion Based on The Dual Conditional Diffusion Models | |
CN111275624B (en) | Face image super-resolution reconstruction and identification method based on multi-set typical correlation analysis | |
CN114241297A (en) | Remote sensing image classification method based on multi-scale pyramid space independent convolution | |
CN112241765A (en) | Image classification model and method based on multi-scale convolution and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||