CN113920043A - Double-current remote sensing image fusion method based on residual channel attention mechanism - Google Patents
Double-current remote sensing image fusion method based on residual channel attention mechanism
- Publication number: CN113920043A (application CN202111156702.5A)
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06N3/045 — Neural networks; combinations of networks
- G06N3/048 — Neural networks; activation functions
- G06N3/08 — Neural networks; learning methods
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
Abstract
The invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism. A convolutional neural network first extracts features from a panchromatic image and a low-resolution multispectral remote sensing image respectively, and the features are then fused to form a compact feature map. A residual attention network is constructed; it uses the attention mechanism to model the interdependencies between feature channels and adaptively adjusts the features of each channel, so that the network can concentrate on the more useful channels and its discriminative learning ability is improved. Multi-residual connections are adopted: long residual connections allow residual learning of shallow layers, and the long and short residual connections together allow a large amount of shallow information to pass through identity-based skip connections, simplifying the flow of information. Finally, after reconstruction by a deconvolution layer and a convolution layer, a high-quality remote sensing image is generated. The method is of significance for the field of remote sensing image fusion.
Description
Technical Field
The invention relates to the technical field of remote sensing image fusion, in particular to a double-current remote sensing image fusion method based on a residual channel attention mechanism.
Background
Remote sensing image fusion is an algorithm that fuses a high-resolution panchromatic remote sensing image (PAN image) and a low-resolution multispectral remote sensing image (LMS image) into a high-resolution multispectral remote sensing image. From such an image, the reflection spectrum of each pixel on the earth's surface can be calculated to obtain rich information, which supports subsequent remote sensing scene segmentation, classification and feature extraction, with applications such as forest resource surveys, ground feature classification, precision agriculture and weather forecasting. However, due to the limitations of current hardware, it is difficult for a single sensor to obtain a remote sensing image that is both high-resolution and multispectral; only a single-band panchromatic image of the surface and a multi-band multispectral remote sensing image can be obtained separately. The two images carry different but complementary information, and panchromatic sharpening was developed as a key technology in remote sensing image fusion in order to obtain high-resolution multispectral remote sensing images. As remote sensing images grow in importance, fusion algorithms continue to improve; how to fuse as much of the spatial and spectral information of the panchromatic and multispectral remote sensing images as possible to improve the fusion effect is the key concern in remote sensing image fusion.
Many advanced deep-learning-based approaches with great potential have been proposed in recent years. A deep learning model is built from multiple transform layers; in each layer the input data is linearly filtered to produce output data, and stacking the layers forms an overall transform with high nonlinearity. Deep learning, especially convolutional neural networks (CNNs), provides better transformation modeling, which facilitates fitting complex transforms. During training, parameters are updated under the supervision of training samples and the fitting accuracy improves; in remote sensing image fusion, features extracted by deep learning methods have stronger expressive ability than those extracted by traditional methods. Inspired by the strong capability of deep learning in computer vision, several methods have been proposed. Based on a three-layer CNN structure, the CNN-based remote sensing image fusion algorithm PNN (Pansharpening by CNN) was the first to be proposed; it significantly improved the performance of remote sensing image fusion and generates high-resolution multispectral remote sensing images. DRPNN (Pansharpening by Deep Residual CNN) applies a residual-connected deep network to remote sensing image fusion; with the support of the residual connection framework, a very deep convolutional network can be formed that does not degrade easily, improving both fusion accuracy and network performance. TFNet (Two-stream Fusion Network) extracts features of the PAN and MS images with a two-channel CNN and strengthens deep features by learning shallow features through residual connections and identity-based skip connections, so that feature learning is strengthened and the performance of the fusion network is improved.
Disclosure of Invention
Therefore, the invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism. A residual attention network is constructed; through the channel attention mechanism in this network, the network concentrates on the more useful channels and enhances its learning ability, finally yielding a high-resolution multispectral remote sensing image.
The technical scheme of the invention is realized as follows:
the double-current remote sensing image fusion method based on the residual channel attention mechanism comprises the following steps:
s1, extracting the features of the panchromatic image and the low-resolution multispectral remote sensing image by using a convolutional neural network, and splicing the two images to obtain a splicing feature;
step S2, constructing a residual error attention network, wherein the residual error attention network comprises a residual error attention module, and the residual error attention module comprises a channel attention mechanism;
step S3, inputting the splicing characteristics into a residual error attention network for convolution processing to obtain initial characteristics, carrying out weighted distribution processing on the initial characteristics by a residual error attention module according to a channel attention mechanism to obtain new characteristics, and obtaining reinforced characteristics according to the new characteristics;
and step S4, the enhanced features are subjected to deconvolution layer size enlargement, and then the features after enlargement are reconstructed through a convolution layer, so that a multispectral remote sensing image with high resolution is obtained.
Preferably, in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
Preferably, the residual attention network of step S2 further comprises residual attention groups; a residual attention group comprises several residual blocks, long residual connections and short residual connections; the short residual connections stack the residual blocks, and the long and short residual connections allow shallow information to be propagated directly backward through identity mapping.
Preferably, the specific steps of step S3 include:
Step S31, inputting the splicing feature F_(b-1) into the residual attention network and obtaining an initial feature X after two convolutions;
Step S32, inputting the initial feature X into the residual attention module, obtaining a new feature through the channel attention mechanism, and obtaining an enhanced feature F_b from the new feature and the splicing feature F_(b-1).
Preferably, the specific expression of step S31 is:

X_(b-1) = δ(W_1 * F_(b-1) + b_1)
X_b = δ(W_2 * X_(b-1) + b_2)

wherein X_(b-1) is the output obtained after the first convolution of the splicing feature F_(b-1), X_b is the output obtained after the second convolution (the initial feature X is one such X_b), W_1 and W_2 are the weights of the first and second convolutional layers, b_1 and b_2 are the biases of the first and second convolutional layers, the convolution kernel size is 3 × 3, and δ(·) denotes the ReLU activation function.
Preferably, the specific expression of step S32 is:

F_b = CA(X_b) + F_(b-1)

wherein CA(·) denotes the channel attention operation.
Preferably, in step S3, the specific steps by which the residual attention module performs weighted distribution processing on the initial feature according to the channel attention mechanism and obtains the enhanced feature include:
Step S33, acquiring the number of channels C, performing global average pooling on the input initial feature X, and obtaining a channel description z;
Step S34, passing the channel description z sequentially through a down-sampling (dimension-reduction) layer and an up-sampling (dimension-ascending) layer to obtain a channel statistic w, which contains the weight coefficient w_c of each channel;
Step S35, multiplying the weight coefficient w_c by the initial feature X to obtain the new feature.
Preferably, in step S33, for channel index c ∈ (1, 2, ..., C) the initial feature is X = [x_1, x_2, ..., x_C], and the channel description z_c of the c-th channel is:

z_c = f_GP(x_c) = (1 / (H × W)) Σ_{i=1..H} Σ_{j=1..W} x_c(i, j)

wherein f_GP(·) is the global average pooling function, H × W is the size of the feature map, and x_c(i, j) is the value of the c-th channel feature x_c at position (i, j).
Preferably, the specific expression of the channel statistic w in step S34 is:

w = S(W_U δ(W_D z))

wherein S(·) denotes the sigmoid activation function, δ(·) denotes the ReLU activation function, W_D is the set of weights of the dimension-reduction convolutional layer, and W_U is the set of weights of the dimension-ascending convolutional layer.
wherein w_c and x_c are respectively the weight coefficient and the initial feature of the c-th channel, and their product gives the new feature of the c-th channel.
Compared with the prior art, the invention has the following beneficial effects:
The invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism. A convolutional neural network is adopted as a feature extractor to represent the panchromatic image and the low-resolution multispectral remote sensing image, and the two extracted features are fused and spliced into a splicing feature. A residual attention network is then constructed, in which a residual attention module containing a channel attention mechanism is arranged; the channel attention mechanism learns the interdependencies among channels to recalibrate the channel features, so that the spatial and spectral information in the images can be extracted in a concentrated manner to comprehensively reconstruct the panchromatic-sharpened image, finally yielding a high-resolution multispectral remote sensing image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only preferred embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a block flow diagram of a dual-flow remote sensing image fusion method based on a residual channel attention mechanism according to the present invention;
FIG. 2 is a schematic flow chart of a double-current remote sensing image fusion method based on a residual channel attention mechanism according to the present invention;
FIG. 3 is a flow chart of a residual attention network of the dual-flow remote sensing image fusion method based on the residual channel attention mechanism of the present invention;
FIG. 4 is a flow chart of a channel attention mechanism of the dual-flow remote sensing image fusion method based on a residual channel attention mechanism of the present invention;
FIG. 5 is a comparison between the double-current remote sensing image fusion method based on the residual channel attention mechanism and other remote sensing image fusion algorithms.
Detailed Description
For a better understanding of the technical content of the present invention, a specific embodiment is provided below, and the present invention is further described with reference to the accompanying drawings.
Referring to fig. 1 to 4, the double-current remote sensing image fusion method based on the residual channel attention mechanism provided by the invention comprises the following steps:
Step S1, extracting features of the panchromatic image and the low-resolution multispectral remote sensing image by using a convolutional neural network, and splicing the two sets of features to obtain a splicing feature;
Step S2, constructing a residual attention network, wherein the residual attention network comprises a residual attention module, and the residual attention module comprises a channel attention mechanism;
Step S3, inputting the splicing feature into the residual attention network for convolution processing to obtain an initial feature, performing weighted distribution processing on the initial feature by the residual attention module according to the channel attention mechanism to obtain a new feature, and obtaining an enhanced feature from the new feature;
Step S4, enlarging the enhanced feature with a deconvolution layer and reconstructing the enlarged feature through a convolution layer, so as to obtain a high-resolution multispectral remote sensing image.
In the double-current remote sensing image fusion method based on the residual channel attention mechanism of the invention, a convolutional neural network is first adopted as a feature extractor to represent the panchromatic image and the low-resolution multispectral remote sensing image, and the two are then spliced along the channel dimension to form a compact feature representation, namely the splicing feature. The panchromatic image carries spatial information and the low-resolution multispectral remote sensing image carries spectral information, so feature extraction with a convolutional neural network allows the feature map to be well reconstructed. After the splicing feature is obtained, it is input into the constructed residual attention network, which comprises a plurality of residual attention modules, each containing a channel attention mechanism. By modeling the interdependencies among feature channels, the channel attention mechanism can adaptively assign a different weight to each channel; this mechanism allows the network to concentrate on the more useful channels and enhances its learning ability. The enhanced feature processed by the residual attention network is finally enlarged by a deconvolution layer and reconstructed by a convolution layer to obtain the final high-resolution multispectral remote sensing image.
Preferably, in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
The size of the panchromatic image is generally 256 × 256 while the size of the multispectral remote sensing image is generally 64 × 64, so before feature extraction with the convolutional neural network the panchromatic image needs to be downsampled so that its size matches the low-resolution multispectral remote sensing image. To retain features and prevent information loss, the convolutional neural network serving as the feature extractor does not use pooling layers or batch normalization; the ReLU activation is used, and the fusion strategy is realized by simply splicing the extracted features together. The splicing feature F_(b-1) obtained after fusion splicing is expressed as follows:
The input panchromatic image and low-resolution multispectral remote sensing image are denoted X_p and X_m, and the features extracted from them by the convolutional neural network are denoted F_p^1 and F_m^1, where the superscript indicates the layer from which the features are extracted. Then:

F_p^2 = δ(W_2 * δ(W_1 * X_p + b_1) + b_2)   (and similarly F_m^2 for X_m)
F_(b-1) = [F_p^2, F_m^2]   (channel-wise splicing)

wherein W_1 and W_2 respectively represent the weights of the first and second convolutional layers, b_1 and b_2 are respectively the biases of the first and second convolutional layers, the convolution kernel size is 3 × 3, and the superscript 2 represents the convolution output of layer 2.
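As an illustrative sketch (not the patented implementation), the channel-wise splicing of the two extracted feature maps reduces to a single concatenation; the shapes and variable names below are hypothetical:

```python
import numpy as np

# Hypothetical stand-ins for the two CNN feature extractors: a PAN patch and
# an LMS patch are each mapped to a 32-channel, 64x64 feature map
# (channels-first). The sizes are illustrative, not taken from the patent.
rng = np.random.default_rng(0)
feat_pan = rng.standard_normal((32, 64, 64))  # features of the panchromatic image
feat_lms = rng.standard_normal((32, 64, 64))  # features of the multispectral image

# Splicing feature F_(b-1): concatenate along the channel axis, so spatial
# detail from PAN and spectral detail from LMS both survive unchanged.
splicing_feature = np.concatenate([feat_pan, feat_lms], axis=0)
```

Concatenation, unlike averaging, keeps both streams intact and leaves it to the later attention stage to weight the channels.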
Preferably, the residual attention network of step S2 further comprises residual attention groups; a residual attention group comprises several residual blocks, long residual connections and short residual connections; the short residual connections stack the residual blocks, and the long and short residual connections allow shallow information to be propagated directly backward through identity mapping.
Multiple residual connections can learn shallow features to enhance deep features, and the flow of information is facilitated because the long and short residual connections allow shallow information to propagate directly backward through the identity mapping.
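A minimal sketch of this long/short residual connection pattern, with hypothetical all-zero transforms so the identity paths are easy to verify by hand:

```python
import numpy as np

def residual_block(x, transform):
    # Short residual connection: output = transform(x) + x (identity skip).
    return transform(x) + x

def residual_group(x, blocks):
    # Stack blocks joined by short skips, then add one long skip around the
    # whole group, so shallow information flows back via identity mappings.
    y = x
    for transform in blocks:
        y = residual_block(y, transform)
    return y + x  # long residual connection

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4, 4))
# Degenerate blocks whose transform is all-zero: every skip is pure identity,
# so the stacked blocks return x and the long skip adds x again, giving 2x.
zero_transform = lambda v: np.zeros_like(v)
out = residual_group(x, [zero_transform] * 3)
```

With real convolutional transforms in place of `zero_transform`, the same wiring lets gradients and shallow features bypass every block.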
Preferably, the specific steps of step S3 include:
Step S31, inputting the splicing feature F_(b-1) into the residual attention network and obtaining an initial feature X after two convolutions;
the specific expression of the step S31 is as follows:
wherein Xb-1For splicing features Fb-1The output, X, obtained after the first convolutionbFor the output obtained after the second convolution, the initial characteristic X is XbOne of (1), W1And W2Weights of the first and second convolutional layers, respectively, b1And b2Represents the bias of the first and second convolutional layers, 3 × 3 represents the size of the convolutional kernel, and δ (·) represents the ReLU activation function.
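The two 3 × 3 convolutions of step S31 can be sketched for a single channel as follows; the toy `conv3x3` helper, the sizes and the random weights are hypothetical stand-ins, not the patent's actual multi-channel layers:

```python
import numpy as np

def conv3x3(x, w, b):
    # Toy single-channel 3x3 'same' convolution (zero padding), standing in
    # for the W * F + b terms of the patent's expression.
    H, W = x.shape
    padded = np.pad(x, 1)
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w) + b
    return out

relu = lambda v: np.maximum(v, 0.0)  # the delta(.) activation

rng = np.random.default_rng(2)
F_prev = rng.standard_normal((8, 8))   # splicing feature F_(b-1), one channel
w1, b1 = rng.standard_normal((3, 3)), 0.1
w2, b2 = rng.standard_normal((3, 3)), 0.1

X_mid = relu(conv3x3(F_prev, w1, b1))  # X_(b-1) = delta(W_1 * F_(b-1) + b_1)
X_b = relu(conv3x3(X_mid, w2, b2))     # X_b     = delta(W_2 * X_(b-1) + b_2)
```

Zero 'same' padding keeps the spatial size fixed through both convolutions, matching the fact that the initial feature X has the same resolution as the splicing feature.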
Step S32, inputting the initial feature X into the residual attention module, obtaining a new feature through the channel attention mechanism, and obtaining the enhanced feature F_b from the new feature and the splicing feature F_(b-1).
the specific expression of the step S32 is as follows:
Fb=CA(Xb)+Fb-1;
After the splicing feature is input into the residual attention network, the enhanced feature is obtained under the action of the multi-residual connections and the channel attention mechanism. Thanks to the channel attention mechanism, each channel can be adaptively assigned a different weight by modeling the interdependencies among feature channels; this mechanism allows the network to concentrate on the more useful channels and enhances its learning ability.
Preferably, in step S3, the specific steps by which the residual attention module performs weighted distribution processing on the initial feature according to the channel attention mechanism and obtains the enhanced feature include:
Step S33, acquiring the number of channels C, performing global average pooling on the input initial feature X to convert the global spatial information into channel descriptors, and obtaining the channel description z;
the number of channels C ═ 1, 2., C), the initial characteristic X ═ X1,X2,...Xc]Channel description of the c-th channel zcThe specific expression of (A) is as follows:
wherein f isGP(. H) is the global average pooling function, H, W is the size of the feature map, xc(i, j) is the c-th layerCharacteristic xcThe value at (i, j).
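For a channels-first array, the per-channel global average pooling z_c = f_GP(x_c) reduces to a mean over the two spatial axes; a small NumPy sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
C, H, W = 16, 8, 8                  # hypothetical channel count and map size
X = rng.standard_normal((C, H, W))  # initial feature, channels-first

# z_c = (1 / (H*W)) * sum over i, j of x_c(i, j): one scalar per channel.
z = X.mean(axis=(1, 2))
```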
Global average pooling is introduced as an aggregation technique: the channel dependencies are fully captured from the aggregated information, and a sigmoid activation function is then introduced to learn the nonlinear interactions between channels.
Step S34, passing the channel description z sequentially through a down-sampling (dimension-reduction) layer and an up-sampling (dimension-ascending) layer to obtain the channel statistic w, which contains the weight coefficient w_c of each channel;
The specific expression of the channel statistic w in step S34 is:

w = S(W_U δ(W_D z))

wherein S(·) denotes the sigmoid activation function, δ(·) denotes the ReLU activation function, W_D is the set of weights of the dimension-reduction convolutional layer, and W_U is the set of weights of the dimension-ascending convolutional layer.
The dimension-reduction convolutional layer reduces the number of channels by a ratio r; the reduced signal is activated by a ReLU activation function, and the dimension-ascending convolutional layer then increases the number of channels by a factor of r, yielding the weight coefficient w_c of each channel and hence the final channel statistic w, which is used to rescale the initial feature X.
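The squeeze-and-re-expand computation w = S(W_U δ(W_D z)) can be sketched as follows; the reduction ratio r and the random weight matrices are hypothetical:

```python
import numpy as np

def channel_statistics(z, W_D, W_U):
    # w = S(W_U delta(W_D z)): squeeze the C-dim description to C/r values
    # with the dimension-reduction weights, apply ReLU, re-expand to C values
    # with the dimension-ascending weights, then squash with a sigmoid.
    h = np.maximum(W_D @ z, 0.0)
    return 1.0 / (1.0 + np.exp(-(W_U @ h)))

rng = np.random.default_rng(4)
C, r = 16, 4                            # r is the reduction ratio
z = rng.standard_normal(C)              # channel description from pooling
W_D = rng.standard_normal((C // r, C))  # C -> C/r (dimension reduction)
W_U = rng.standard_normal((C, C // r))  # C/r -> C (dimension ascending)
w = channel_statistics(z, W_D, W_U)
```

The sigmoid keeps every weight coefficient in [0, 1], so rescaling can only attenuate or preserve a channel, never amplify it unboundedly.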
Step S35, multiplying the weight coefficient w_c by the initial feature X to obtain the new feature, wherein w_c and x_c are respectively the weight coefficient and the initial feature of the c-th channel, and their product gives the new feature of the c-th channel.
The whole adjustment process of the channel attention mechanism is, in effect, a re-weighting of the features of the different channels.
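The final re-weighting of step S35 is a per-channel scalar multiplication, which NumPy broadcasting expresses directly; the shapes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
C, H, W = 16, 8, 8
X = rng.standard_normal((C, H, W))  # initial feature
w = rng.uniform(0.0, 1.0, size=C)   # per-channel weights from the attention path

# New feature: each channel's H x W map is scaled by its own weight w_c.
X_new = w[:, None, None] * X
```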
Compared with the prior art, the invention provides a double-current fusion architecture: a convolutional neural network extracts features from the panchromatic image and the multispectral remote sensing image respectively, and the features are then fused into a compact feature map that simultaneously represents the spatial and spectral information of the panchromatic and multispectral images. In the convolution process each convolution operator has only a local receptive field and cannot make full use of context, so the obtained features lack contextual information; moreover, traditional algorithms treat every channel of the feature map equally during fusion and ignore the interdependencies among the channels of the feature maps. The invention instead uses an attention mechanism to model the interdependencies among feature channels and adaptively adjust the features of each channel, so that the proposed network concentrates on the more useful channels and its discriminative learning ability is improved. Meanwhile, multi-residual connections are adopted so that the network can adapt to a deeper structure: long residual connections allow shallow residual learning; in each residual module several residual blocks are stacked with short residual connections; and the long residual connections, short residual connections and residual groups allow a large amount of shallow information to pass through identity-based skip connections, simplifying the flow of information.
To demonstrate the effectiveness of the invention, its fusion effect was compared with that of other remote sensing image fusion methods (PCA, MTF_GLP, PNN and the like); the experimental data set is as follows:
after the experiment is carried out according to the experimental data set, the fusion method and the evaluation index parameters of each image are shown in table 1 (RCAMTNet is the invention), and the comparison of the fusion results and the images shown in fig. 5 shows that the method provided by the invention is obviously superior to the existing remote sensing image method according to the comparison of various evaluation indexes shown in table 1 and fig. 5.
TABLE 1
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A double-current remote sensing image fusion method based on a residual channel attention mechanism, characterized by comprising the following steps:
Step S1, extracting features of the panchromatic image and the low-resolution multispectral remote sensing image by using a convolutional neural network, and splicing the two sets of features to obtain a splicing feature;
Step S2, constructing a residual attention network, wherein the residual attention network comprises a residual attention module, and the residual attention module comprises a channel attention mechanism;
Step S3, inputting the splicing feature into the residual attention network for convolution processing to obtain an initial feature, performing weighted distribution processing on the initial feature by the residual attention module according to the channel attention mechanism to obtain a new feature, and obtaining an enhanced feature from the new feature;
Step S4, enlarging the enhanced feature with a deconvolution layer and reconstructing the enlarged feature through a convolution layer, so as to obtain a high-resolution multispectral remote sensing image.
2. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, characterized in that, in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
3. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, characterized in that the residual attention network of step S2 further comprises a residual attention group, the residual attention group comprising a number of residual attention blocks, a long residual connection, and short residual connections; the residual attention blocks are stacked and bridged by the short residual connections, and the long and short residual connections allow shallow information to propagate directly through identity mappings.
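The long/short skip-connection layout of claim 3 can be sketched as follows. Plain convolutional blocks stand in for the patent's residual attention blocks, and the channel and block counts are assumptions:

```python
import torch
import torch.nn as nn

class ResidualAttentionGroup(nn.Module):
    """Sketch of claim 3: stacked residual blocks, each wrapped in a short
    skip, with one long skip around the whole group so shallow information
    propagates directly via identity mappings."""
    def __init__(self, channels=32, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(n_blocks))

    def forward(self, x):
        y = x
        for block in self.blocks:
            y = y + block(y)   # short residual connection per block
        return x + y           # long residual connection around the group

g = ResidualAttentionGroup()
x = torch.randn(2, 32, 8, 8)
print(g(x).shape)  # torch.Size([2, 32, 8, 8])
```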
4. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, characterized in that step S3 specifically comprises:
Step S31: inputting the concatenated feature F_{b-1} into the residual attention network, and obtaining an initial feature X after two convolutions;
5. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 4, characterized in that the specific expression of step S31 is:
X_{b-1} = δ(W_1^{3×3} * F_{b-1} + b_1)
X_b = δ(W_2^{3×3} * X_{b-1} + b_2)
wherein X_{b-1} is the output obtained after the concatenated feature F_{b-1} passes through the first convolution, X_b is the output obtained after the second convolution, the initial feature X is taken as X_b, W_1 and W_2 are the weights of the first and second convolutional layers respectively, b_1 and b_2 denote the biases of the first and second convolutional layers, the superscript 3×3 denotes the size of the convolution kernel, and δ(·) denotes the ReLU activation function.
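A minimal sketch of the claim-5 expressions using functional 3×3 convolutions; the channel counts (8 in, 32 out) are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

# Sketch of claim 5: X_{b-1} = ReLU(W1 * F_{b-1} + b1),
# X_b = ReLU(W2 * X_{b-1} + b2), both with 3x3 kernels.
torch.manual_seed(0)
F_prev = torch.randn(1, 8, 16, 16)            # concatenated feature F_{b-1}
W1, b1 = torch.randn(32, 8, 3, 3), torch.zeros(32)
W2, b2 = torch.randn(32, 32, 3, 3), torch.zeros(32)

X_prev = F.relu(F.conv2d(F_prev, W1, b1, padding=1))  # first convolution
X_b = F.relu(F.conv2d(X_prev, W2, b2, padding=1))     # second: initial feature X
print(X_b.shape)  # torch.Size([1, 32, 16, 16])
```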
7. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, characterized in that, in step S3, the specific steps by which the residual attention module performs weighted distribution on the initial feature according to the channel attention mechanism and obtains the enhanced feature comprise:
Step S33: obtaining the channel number C, performing global average pooling on the input initial feature X, and obtaining a channel description z;
Step S34: passing the channel description z through a channel-dimension reduction layer and a channel-dimension ascending layer in sequence to obtain a channel statistic w, the channel statistic w containing a weight coefficient w_c for each channel;
8. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, characterized in that, in step S33, for channel index c ∈ {1, 2, ..., C} and initial feature X = [x_1, x_2, ..., x_C], the channel description z_c of the c-th channel is expressed as:
z_c = f_GP(x_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j)
wherein f_GP(·) is the global average pooling function, H and W are the height and width of the feature map, and x_c(i, j) is the value of the c-th channel feature x_c at position (i, j).
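The global average pooling of claim 8 reduces each H×W channel map to a single scalar; a minimal NumPy check with assumed shapes:

```python
import numpy as np

# Claim 8's channel descriptor: z_c = f_GP(x_c) = (1/(H*W)) * sum_ij x_c(i, j),
# i.e. global average pooling per channel. Shapes here are illustrative.
C, H, W = 4, 8, 8
X = np.random.rand(C, H, W)          # initial feature X = [x_1, ..., x_C]
z = X.sum(axis=(1, 2)) / (H * W)     # channel description z, one scalar per channel

assert np.allclose(z, X.mean(axis=(1, 2)))  # same as the mean over each map
print(z.shape)  # (4,)
```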
9. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, characterized in that the specific expression of the channel statistic w in step S34 is:
w = S(W_U δ(W_D z))
where S(·) denotes the sigmoid activation function, δ(·) denotes the ReLU activation function, W_D is the weight set of the channel-dimension reduction convolutional layer, and W_U is the weight set of the channel-dimension ascending convolutional layer.
10. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, characterized in that the new feature of step S35 is expressed as:
x'_c = w_c · x_c
wherein w_c and x_c are respectively the weight coefficient and the initial feature of the c-th channel, and x'_c is the new feature of the c-th channel.
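Claims 7 to 10 together describe a squeeze-and-excitation style channel attention. A NumPy sketch with an assumed reduction ratio r = 2 and illustrative shapes; fully connected weight matrices stand in for the 1×1 convolutional layers W_D and W_U:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Sketch of claims 7-10 end to end: global average pooling -> channel
# dimension reduced then restored -> sigmoid gives per-channel weights w_c
# -> features rescaled channel-wise. Ratio r=2 and shapes are assumptions.
rng = np.random.default_rng(0)
C, H, W, r = 8, 6, 6, 2
X = rng.random((C, H, W))                  # initial feature

z = X.mean(axis=(1, 2))                    # S33: channel description z
W_D = rng.standard_normal((C // r, C))     # channel-dimension reduction weights
W_U = rng.standard_normal((C, C // r))     # channel-dimension ascending weights
w = sigmoid(W_U @ np.maximum(W_D @ z, 0))  # S34: w = S(W_U . ReLU(W_D . z))
X_new = w[:, None, None] * X               # claim 10: new feature w_c * x_c

print(X_new.shape)  # (8, 6, 6)
```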
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111156702.5A CN113920043A (en) | 2021-09-30 | 2021-09-30 | Double-current remote sensing image fusion method based on residual channel attention mechanism |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113920043A (en) | 2022-01-11 |
Family
ID=79237667
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111156702.5A Pending CN113920043A (en) | 2021-09-30 | 2021-09-30 | Double-current remote sensing image fusion method based on residual channel attention mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113920043A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114332592A (en) * | 2022-03-11 | 2022-04-12 | 中国海洋大学 | Ocean environment data fusion method and system based on attention mechanism |
CN114511470A (en) * | 2022-04-06 | 2022-05-17 | 中国科学院深圳先进技术研究院 | Attention mechanism-based double-branch panchromatic sharpening method |
CN114511470B (en) * | 2022-04-06 | 2022-07-08 | 中国科学院深圳先进技术研究院 | Attention mechanism-based double-branch panchromatic sharpening method |
CN114547017A (en) * | 2022-04-27 | 2022-05-27 | 南京信息工程大学 | Meteorological big data fusion method based on deep learning |
CN114972812A (en) * | 2022-06-02 | 2022-08-30 | 华侨大学 | Non-local attention learning method based on structural similarity |
CN117788472A (en) * | 2024-02-27 | 2024-03-29 | 南京航空航天大学 | Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm |
CN117788472B (en) * | 2024-02-27 | 2024-05-14 | 南京航空航天大学 | Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113920043A (en) | Double-current remote sensing image fusion method based on residual channel attention mechanism | |
CN112836773B (en) | Hyperspectral image classification method based on global attention residual error network | |
CN109064396B (en) | Single image super-resolution reconstruction method based on deep component learning network | |
CN111311518B (en) | Image denoising method and device based on multi-scale mixed attention residual error network | |
WO2021018163A1 (en) | Neural network search method and apparatus | |
CN108830813A (en) | A kind of image super-resolution Enhancement Method of knowledge based distillation | |
CN111275618A (en) | Depth map super-resolution reconstruction network construction method based on double-branch perception | |
CN111325165B (en) | Urban remote sensing image scene classification method considering spatial relationship information | |
CN110059728B (en) | RGB-D image visual saliency detection method based on attention model | |
CN111695467A (en) | Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion | |
CN107316054A (en) | Non-standard character recognition methods based on convolutional neural networks and SVMs | |
CN111951164B (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
Luo et al. | Lattice network for lightweight image restoration | |
CN113066065B (en) | No-reference image quality detection method, system, terminal and medium | |
CN115496658A (en) | Lightweight image super-resolution reconstruction method based on double attention mechanism | |
CN113066037B (en) | Multispectral and full-color image fusion method and system based on graph attention machine system | |
CN112561028A (en) | Method for training neural network model, and method and device for data processing | |
Hu et al. | Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution | |
CN112270366B (en) | Micro target detection method based on self-adaptive multi-feature fusion | |
CN115660955A (en) | Super-resolution reconstruction model, method, equipment and storage medium for efficient multi-attention feature fusion | |
CN112734643A (en) | Lightweight image super-resolution reconstruction method based on cascade network | |
CN114937202A (en) | Double-current Swin transform remote sensing scene classification method | |
CN115331104A (en) | Crop planting information extraction method based on convolutional neural network | |
CN116168197A (en) | Image segmentation method based on Transformer segmentation network and regularization training | |
CN115526779A (en) | Infrared image super-resolution reconstruction method based on dynamic attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||