CN113920043A - Double-current remote sensing image fusion method based on residual channel attention mechanism - Google Patents

Info

Publication number
CN113920043A
Authority
CN
China
Prior art keywords
residual
channel
remote sensing
attention mechanism
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111156702.5A
Other languages
Chinese (zh)
Inventor
黄梦醒
刘适
毋媛媛
冯思玲
冯文龙
张雨
吴迪
黎贞凤
贺陈耔都
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan University
Original Assignee
Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan University filed Critical Hainan University
Priority to CN202111156702.5A priority Critical patent/CN113920043A/en
Publication of CN113920043A publication Critical patent/CN113920043A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging


Abstract

The invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism. A convolutional neural network extracts features from a panchromatic image and a low-resolution multispectral remote sensing image respectively, and the features are then fused to form a compact feature map. A residual attention network is then constructed; it uses a channel attention mechanism to model the interdependence between feature channels and adaptively adjust the features of each channel, so that the network can concentrate on the more useful channels and its discriminative learning ability is improved. Multi-residual connection is adopted: long residual connections allow residual learning of shallow layers, and long and short residual connections together allow a large amount of shallow information to pass through identity-based skip connections, simplifying the flow of information. Finally, after reconstruction by a deconvolution layer and a convolution layer, a high-quality remote sensing image is generated, which is of great significance to the field of remote sensing image fusion.

Description

Double-current remote sensing image fusion method based on residual channel attention mechanism
Technical Field
The invention relates to the technical field of remote sensing image fusion, in particular to a double-current remote sensing image fusion method based on a residual channel attention mechanism.
Background
Remote sensing image fusion is an algorithm that fuses a high-resolution panchromatic remote sensing image (PAN image) and a low-resolution multispectral remote sensing image (LMS image) into a high-resolution multispectral remote sensing image. From the high-resolution multispectral image, the reflection spectrum of each pixel on the earth surface can be calculated to obtain rich information, which supports subsequent remote sensing scene segmentation, classification and feature extraction in applications such as forest resource investigation, ground feature classification, precision agriculture and weather forecasting. However, due to the limitations of current hardware, it is difficult for a single sensor to obtain a remote sensing image with both high spatial and high spectral resolution; only a single-band panchromatic image of the earth surface and a multi-band multispectral remote sensing image can be obtained separately. The two images carry different but complementary information, and panchromatic sharpening has therefore developed into a key technology in remote sensing image fusion for obtaining high-resolution multispectral remote sensing images. With the increasing importance of remote sensing images, fusion algorithms continue to improve; how to fuse the spatial information of the panchromatic image and the spectral information of the multispectral remote sensing image as fully as possible, so as to improve the fusion effect, is the key concern of remote sensing image fusion.
Many advanced deep-learning-based approaches with great potential have been proposed in recent years. A deep learning model is built from multiple transform layers; in each layer the input data is linearly filtered to produce output data, and the stacked layers form an overall transform with high nonlinearity. Deep learning, especially convolutional neural networks (CNNs), provides better transformation modeling, which facilitates fitting complex transforms. During training, parameters are updated under the supervision of training samples and the fitting precision improves, so the features extracted by deep learning methods in remote sensing image fusion have stronger expressive ability than those extracted by traditional methods. Inspired by the strong ability of deep learning in the field of computer vision, several methods have emerged. A CNN-based remote sensing image fusion algorithm, PNN (Pansharpening by CNN), built on a three-layer CNN structure, was first proposed; it significantly improved the performance of remote sensing image fusion and generates high-resolution multispectral remote sensing images. DRPNN (pansharpening by Deep Residual CNN) applies a residual-connected deep network to remote sensing image fusion: with the support of the residual connection framework, a very deep convolutional network can be formed, the network does not degrade easily, the fusion precision is improved, and the network performance is also improved. TFNet (Two-stream Fusion Network) extracts features of the PAN and MS images with a two-channel CNN, learns shallow features through residual connections, and then strengthens deep features through identity-based skip connections, so that feature learning is strengthened and the performance of the fusion network is improved.
Disclosure of Invention
Therefore, the invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism. A residual attention network is constructed; through the channel attention mechanism inside it, the network is made to concentrate on the more useful channels and its learning ability is enhanced, so that a high-resolution multispectral remote sensing image can finally be obtained.
The technical scheme of the invention is realized as follows:
The double-current remote sensing image fusion method based on the residual channel attention mechanism comprises the following steps:
step S1, extracting features of the panchromatic image and the low-resolution multispectral remote sensing image respectively by using a convolutional neural network, and splicing the two sets of features to obtain a spliced feature;
step S2, constructing a residual attention network, wherein the residual attention network comprises a residual attention module, and the residual attention module comprises a channel attention mechanism;
step S3, inputting the spliced feature into the residual attention network for convolution processing to obtain an initial feature, performing weighted distribution processing on the initial feature by the residual attention module according to the channel attention mechanism to obtain a new feature, and obtaining an enhanced feature from the new feature;
and step S4, enlarging the size of the enhanced feature through a deconvolution layer, and then reconstructing the enlarged feature through a convolution layer to obtain a high-resolution multispectral remote sensing image.
Preferably, in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
Preferably, the residual attention network of step S2 further includes a residual attention group, the residual attention group includes several residual blocks, a long residual connection and short residual connections; the short residual connections stack the residual blocks, and the long and short residual connections allow shallow information to be propagated directly backward through identity mapping.
Preferably, the specific steps of step S3 include:
step S31, inputting the spliced feature F_{b-1} into the residual attention network and obtaining an initial feature X after two convolutions;
step S32, inputting the initial feature X into the residual attention module, obtaining a new feature X̂_b through the channel attention mechanism, and obtaining the enhanced feature F_b from the new feature X̂_b and the spliced feature F_{b-1}.
Preferably, the specific expression of step S31 is:
X_{b-1} = δ(W_1^{3×3} * F_{b-1} + b_1);
X_b = δ(W_2^{3×3} * X_{b-1} + b_2);
wherein X_{b-1} is the output obtained after the first convolution of the spliced feature F_{b-1}, X_b is the output obtained after the second convolution, the initial feature X is one of the X_b, W_1 and W_2 are respectively the weights of the first and second convolutional layers, b_1 and b_2 represent the biases of the first and second convolutional layers, 3×3 represents the size of the convolution kernel, and δ(·) represents the ReLU activation function.
Preferably, the specific expression of step S32 is:
F_b = CA(X_b) + F_{b-1};
where CA(·) represents the channel attention mechanism function, and the new feature is X̂_b = CA(X_b).
Preferably, in step S3, the specific steps in which the residual attention module performs weighted distribution processing on the initial feature according to the channel attention mechanism and obtains the enhanced feature include:
step S33, obtaining the number of channels C, performing global average pooling on the input initial feature X, and obtaining a channel description z;
step S34, passing the channel description z through a dimension-reducing layer and a dimension-restoring layer in turn to obtain channel statistics w, the channel statistics w containing the weight coefficient w_c of each channel;
step S35, multiplying the weight coefficient w_c by the initial feature X to obtain the new feature X̂_b.
Preferably, in step S33, with channel index c = 1, 2, ..., C and initial feature X = [x_1, x_2, ..., x_C], the channel description z_c of the c-th channel is:
z_c = f_GP(x_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j);
wherein f_GP(·) is the global average pooling function, H×W is the size of the feature map, and x_c(i, j) is the value of the c-th feature map x_c at position (i, j).
Preferably, the specific expression of the channel statistics w in step S34 is:
w = S(W_U δ(W_D z));
where S(·) denotes the sigmoid activation function, δ(·) denotes the ReLU activation function, W_D is the weight set of the dimension-reducing convolutional layer, and W_U is the weight set of the dimension-restoring convolutional layer.
Preferably, the specific expression of the new feature in step S35 is:
x̂_c = w_c · x_c;
wherein w_c and x_c are respectively the weight coefficient and the initial feature of the c-th channel.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a double-current remote sensing image fusion method based on a residual channel attention mechanism, which adopts a convolutional neural network as a feature extractor to represent a full-color image and a multispectral remote sensing image with low resolution, fuses and splices the two extracted features into spliced features, and then constructs the residual attention network, wherein a residual attention module containing the channel attention mechanism is arranged in the residual attention network, and the channel attention mechanism can learn the interdependency among channels to recalibrate the channel features, so that the spatial information and the spectral information in the image can be extracted in a concentrated manner to comprehensively reconstruct a panchromatic sharpened image, and the multispectral remote sensing image with high resolution can be obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only preferred embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a block flow diagram of a dual-flow remote sensing image fusion method based on a residual channel attention mechanism according to the present invention;
FIG. 2 is a schematic flow chart of a double-current remote sensing image fusion method based on a residual channel attention mechanism according to the present invention;
FIG. 3 is a flow chart of a residual attention network of the dual-flow remote sensing image fusion method based on the residual channel attention mechanism of the present invention;
FIG. 4 is a flow chart of a channel attention mechanism of the dual-flow remote sensing image fusion method based on a residual channel attention mechanism of the present invention;
FIG. 5 is a comparison diagram of other remote sensing image fusion algorithms of the dual-flow remote sensing image fusion method based on the residual channel attention mechanism.
Detailed Description
For a better understanding of the technical content of the present invention, a specific embodiment is provided below, and the present invention is further described with reference to the accompanying drawings.
Referring to fig. 1 to 4, the method for fusing dual-flow remote sensing images based on the residual channel attention mechanism provided by the invention comprises the following steps:
s1, extracting the features of the panchromatic image and the low-resolution multispectral remote sensing image by using a convolutional neural network, and splicing the two images to obtain a splicing feature;
step S2, constructing a residual error attention network, wherein the residual error attention network comprises a residual error attention module, and the residual error attention module comprises a channel attention mechanism;
step S3, inputting the splicing characteristics into a residual error attention network for convolution processing to obtain initial characteristics, carrying out weighted distribution processing on the initial characteristics by a residual error attention module according to a channel attention mechanism to obtain new characteristics, and obtaining reinforced characteristics according to the new characteristics;
and step S4, the enhanced features are subjected to deconvolution layer size enlargement, and then the features after enlargement are reconstructed through a convolution layer, so that a multispectral remote sensing image with high resolution is obtained.
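The four steps above can be sketched at the shape level as follows. The image sizes, the average-pool/nearest-neighbour stand-ins for the learned resampling layers, and the identity stand-in for the residual attention network of steps S2–S3 are all illustrative assumptions, not the patent's actual layers.

```python
import numpy as np

def avg_downsample(img, factor):
    # average pooling as a stand-in for downsampling the PAN image (step S1)
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def nearest_upsample(feat, factor):
    # nearest-neighbour enlargement as a stand-in for the deconvolution layer (step S4)
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

pan = np.random.rand(256, 256)      # single-band panchromatic image
ms = np.random.rand(4, 64, 64)      # 4-band low-resolution multispectral image

pan_small = avg_downsample(pan, 4)                       # S1: match the MS size
stacked = np.concatenate([pan_small[None], ms], axis=0)  # S1: splice on the channel axis
enhanced = stacked                                       # S2/S3: residual attention network (elided)
fused = nearest_upsample(enhanced, 4)                    # S4: enlarge back to full resolution
print(fused.shape)  # (5, 256, 256)
```

The spliced tensor has one PAN channel plus the four MS bands; the real network would of course transform `stacked` rather than pass it through unchanged.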
In the double-current remote sensing image fusion method based on the residual channel attention mechanism, a convolutional neural network is first adopted as the feature extractor to represent the panchromatic image and the low-resolution multispectral remote sensing image, and the two are then spliced on the channel dimension to form a compact feature representation, namely the spliced feature. Because the panchromatic image and the low-resolution multispectral remote sensing image are the carriers of the spatial and spectral information, the feature map can be well reconstructed when the convolutional neural network performs feature extraction. After the spliced feature is obtained, it is input into the constructed residual attention network, which comprises several residual attention modules; each residual attention module contains a channel attention mechanism that can adaptively give each channel a different weight by modeling the interdependence among feature channels. This mechanism allows the network to concentrate on the more useful channels and enhances its learning ability. The enhanced feature processed by the residual attention network is then convolved twice to obtain the final high-resolution multispectral remote sensing image.
Preferably, in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
The size of the panchromatic image is generally 256×256 and the size of the multispectral remote sensing image is generally 64×64, so before feature extraction with the convolutional neural network, the panchromatic image needs to be downsampled so that its size matches that of the low-resolution multispectral remote sensing image. In order to retain features and prevent information loss, the convolutional neural network serving as the feature extractor does not use a pooling layer; convolution, batch normalization and ReLU are simply connected together, and the two feature streams are spliced to realize the fusion strategy. The spliced feature F_{b-1} after fusion splicing is:
F_{b-1} = concat(F_p^2, F_m^2);
The input panchromatic image and low-resolution multispectral remote sensing image are denoted by X_p and X_m, and the features extracted by the convolutional neural network by F_p^l and F_m^l, where the superscript l indicates the feature extracted from layer l. The expressions are:
F_p^1 = δ(W_1^{3×3} * X_p + b_1);
F_p^2 = δ(W_2^{3×3} * F_p^1 + b_2);
F_m^1 = δ(W_1^{3×3} * X_m + b_1);
F_m^2 = δ(W_2^{3×3} * F_m^1 + b_2);
wherein W_1 and W_2 respectively represent the weights of the first and second convolutional layers, b_1 and b_2 respectively the biases of the first and second convolutional layers, 3×3 the size of the convolution kernel, and F_p^2 and F_m^2 the convolution outputs of layer 2.
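A rough NumPy sketch of this two-convolution extraction and splicing. The map size (16×16), the 8 feature channels per stream, and the random weights are illustrative assumptions, not the patent's configuration; batch normalization is omitted.

```python
import numpy as np

def conv3x3(x, w, b):
    # 'same' 3x3 convolution of a (C, H, W) map with weights (O, C, 3, 3), then ReLU δ(·)
    C, H, W = x.shape
    O = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((O, H, W))
    for o in range(O):
        for c in range(C):
            for di in range(3):
                for dj in range(3):
                    out[o] += w[o, c, di, dj] * xp[c, di:di + H, dj:dj + W]
        out[o] += b[o]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
Xp = rng.random((1, 16, 16))   # panchromatic input X_p (after downsampling)
Xm = rng.random((4, 16, 16))   # multispectral input X_m
# two convolutions per stream (F^1 then F^2), 8 feature channels each
Fp = conv3x3(conv3x3(Xp, 0.1 * rng.standard_normal((8, 1, 3, 3)), np.zeros(8)),
             0.1 * rng.standard_normal((8, 8, 3, 3)), np.zeros(8))
Fm = conv3x3(conv3x3(Xm, 0.1 * rng.standard_normal((8, 4, 3, 3)), np.zeros(8)),
             0.1 * rng.standard_normal((8, 8, 3, 3)), np.zeros(8))
F = np.concatenate([Fp, Fm], axis=0)   # spliced feature F_{b-1}
print(F.shape)  # (16, 16, 16)
```

The channel-axis concatenation at the end is the fusion strategy: the two streams keep their own weights but share one compact feature map afterward.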
Preferably, the residual attention network of step S2 further includes a residual attention group, the residual attention group includes several residual blocks, a long residual connection and short residual connections; the short residual connections stack the residual blocks, and the long and short residual connections allow shallow information to be propagated directly backward through identity mapping.
Multiple residual connections can learn shallow features to enhance deep features, and the flow of information is facilitated because the long and short residual connections allow shallow information to propagate directly backward through the identity mapping.
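A toy illustration of this long/short residual wiring. The block body here is a stand-in (`tanh` scaling), not the patent's actual convolution-plus-attention module; the point is only how the identity paths are arranged.

```python
import numpy as np

def block(x, scale):
    # toy residual-block body (stand-in for convolution + channel attention)
    return np.tanh(scale * x)

def residual_group(x, n_blocks=3):
    group_input = x                      # start of the long residual connection
    for k in range(n_blocks):
        x = block(x, 0.1 * (k + 1)) + x  # short residual connection around each block
    return x + group_input               # long residual connection over the whole group

x = np.ones((4, 8, 8))
y = residual_group(x)
print(y.shape)  # (4, 8, 8)
```

If every block body contributed nothing (`n_blocks=0`), the group would reduce to the two identity paths alone, i.e. `residual_group(x, 0)` equals `2 * x` — this is the sense in which shallow information passes through the skip connections unchanged.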
Preferably, the specific steps of step S3 include:
step S31, inputting the spliced feature F_{b-1} into the residual attention network and obtaining an initial feature X after two convolutions;
The specific expression of step S31 is:
X_{b-1} = δ(W_1^{3×3} * F_{b-1} + b_1);
X_b = δ(W_2^{3×3} * X_{b-1} + b_2);
wherein X_{b-1} is the output obtained after the first convolution of the spliced feature F_{b-1}, X_b is the output obtained after the second convolution, the initial feature X is one of the X_b, W_1 and W_2 are respectively the weights of the first and second convolutional layers, b_1 and b_2 represent the biases of the first and second convolutional layers, 3×3 represents the size of the convolution kernel, and δ(·) represents the ReLU activation function.
step S32, inputting the initial feature X into the residual attention module, obtaining a new feature X̂_b through the channel attention mechanism, and obtaining the enhanced feature F_b from the new feature X̂_b and the spliced feature F_{b-1};
The specific expression of step S32 is:
F_b = CA(X_b) + F_{b-1};
where CA(·) represents the channel attention mechanism function, and the new feature is X̂_b = CA(X_b).
After the spliced feature is input into the residual attention network, the enhanced feature is obtained under the combined action of the multi-residual connections and the channel attention mechanism. Owing to the channel attention mechanism, each channel can be adaptively given a different weight by modeling the interdependence among feature channels; this mechanism allows the network to concentrate on the more useful channels and enhances its learning ability.
Preferably, in step S3, the specific steps in which the residual attention module performs weighted distribution processing on the initial feature according to the channel attention mechanism and obtains the enhanced feature include:
step S33, obtaining the number of channels C, performing global average pooling on the input initial feature X, converting the global spatial information into a channel descriptor, and obtaining the channel description z;
With channel index c = 1, 2, ..., C and initial feature X = [x_1, x_2, ..., x_C], the channel description z_c of the c-th channel is:
z_c = f_GP(x_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j);
wherein f_GP(·) is the global average pooling function, H×W is the size of the feature map, and x_c(i, j) is the value of the c-th feature map x_c at (i, j).
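The global average pooling step can be checked numerically; a tiny example with C = 2 channels and H = W = 4 (illustrative values):

```python
import numpy as np

# C = 2 channels, each a 4x4 feature map
X = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
z = X.mean(axis=(1, 2))   # z_c = (1/(H*W)) * sum over i, j of x_c(i, j)
print(z)  # [ 7.5 23.5]
```

Each entry of `z` is a single scalar summary of one channel, which is what the bottleneck of step S34 consumes.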
Besides global pooling, more sophisticated aggregation techniques could be introduced; here channel dependence is fully captured from the aggregated information through global average pooling, and a sigmoid activation function is then introduced to learn the nonlinear interaction between channels.
step S34, passing the channel description z through a dimension-reducing layer and a dimension-restoring layer in turn to obtain the channel statistics w, the channel statistics w containing the weight coefficient w_c of each channel;
The specific expression of the channel statistics w in step S34 is:
w = S(W_U δ(W_D z));
where S(·) denotes the sigmoid activation function, δ(·) denotes the ReLU activation function, W_D is the weight set of the dimension-reducing convolutional layer, and W_U is the weight set of the dimension-restoring convolutional layer.
The dimension-reducing convolutional layer reduces the number of channels by a reduction ratio r; the reduced signal is activated by the ReLU activation function, and the dimension-restoring convolutional layer then increases the number of channels back by a factor of r. In this way the weight coefficient w_c of each channel is obtained, and the final channel statistics w are used to rescale the initial feature X.
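A small numerical sketch of this squeeze-and-excite-style bottleneck, computing w = S(W_U δ(W_D z)). The channel count C = 8 and reduction ratio r = 4 are illustrative assumptions, and the 1×1 convolutions are modelled as plain matrix multiplications on the channel vector.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

C, r = 8, 4                              # channel count and reduction ratio r (assumed)
rng = np.random.default_rng(1)
W_D = rng.standard_normal((C // r, C))   # dimension-reducing weights (C -> C/r)
W_U = rng.standard_normal((C, C // r))   # dimension-restoring weights (C/r -> C)
z = rng.random(C)                        # channel descriptor from global average pooling

w = sigmoid(W_U @ np.maximum(W_D @ z, 0.0))  # w = S(W_U δ(W_D z))
print(w.shape)  # (8,)
```

The sigmoid keeps every coefficient in (0, 1), so w acts as a per-channel gate rather than an unbounded gain.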
step S35, multiplying the weight coefficient w_c by the initial feature X to obtain the new feature X̂_b.
The specific expression of the new feature in step S35 is:
x̂_c = w_c · x_c;
wherein w_c and x_c are respectively the weight coefficient and the initial feature of the c-th channel.
The whole adjustment process of the channel attention mechanism is, in effect, a re-weighting of the features of the different channels.
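Putting the three steps together — pooling, bottleneck, and rescaling — a minimal NumPy sketch of the whole re-weighting process (channel counts and weight shapes are illustrative, not taken from the patent):

```python
import numpy as np

def channel_attention(X, W_D, W_U):
    # pool -> bottleneck -> sigmoid -> rescale each channel of X with shape (C, H, W)
    z = X.mean(axis=(1, 2))                                    # global average pooling
    w = 1.0 / (1.0 + np.exp(-(W_U @ np.maximum(W_D @ z, 0))))  # channel statistics w
    return w[:, None, None] * X                                # x̂_c = w_c · x_c

rng = np.random.default_rng(2)
X = rng.random((8, 16, 16))
X_hat = channel_attention(X, rng.standard_normal((2, 8)), rng.standard_normal((8, 2)))

ratio = X_hat[3] / X[3]   # every pixel of a channel is scaled by the same w_c
print(np.allclose(ratio, ratio[0, 0]))  # True
```

The check at the end makes the "re-weighting" nature explicit: within one channel the spatial pattern is untouched, and only the channel's overall magnitude changes.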
Compared with the prior art, the invention provides a double-current fusion architecture. A convolutional neural network extracts features from the panchromatic image and the multispectral remote sensing image respectively, and the features are then fused to form a compact feature map that simultaneously represents the spatial and spectral information of the panchromatic and multispectral images. In the convolution process, each convolution operator has only a local receptive field and cannot make full use of context, so the obtained features lack contextual information; moreover, traditional algorithms treat each channel of the feature map equally during fusion and ignore the interdependence of the channels among the feature maps. The invention uses an attention mechanism to model the interdependence among feature channels and adaptively adjust the features of each channel, so that the proposed network concentrates on the more useful channels and its discriminative learning ability is improved. Meanwhile, multi-residual connection is adopted so that the network can adapt to a deeper structure: the long residual connection allows shallow residual learning; in each residual module, several residual blocks are stacked with short residual connections; and the long residual connections, short residual connections and residual groups allow a large amount of shallow information to pass through identity-based skip connections, simplifying the flow of information.
In order to demonstrate the effectiveness of the invention, the fusion effect of the invention is compared with that of other remote sensing image fusion methods (PCA, MTF_GLP, PNN and the like). After the experiments are carried out on the experimental data set, the evaluation index parameters of each fusion method are shown in Table 1 (RCAMTNet is the invention). From the comparison of the evaluation indexes in Table 1 and the fusion result images shown in FIG. 5, the method provided by the invention is clearly superior to the existing remote sensing image fusion methods.
TABLE 1
(The experimental data set and Table 1 appear only as images in the source publication and are not reproduced here.)
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. The double-current remote sensing image fusion method based on the residual channel attention mechanism is characterized by comprising the following steps of:
step S1, extracting features of the panchromatic image and the low-resolution multispectral remote sensing image respectively by using a convolutional neural network, and splicing the two sets of features to obtain a spliced feature;
step S2, constructing a residual attention network, wherein the residual attention network comprises a residual attention module, and the residual attention module comprises a channel attention mechanism;
step S3, inputting the spliced feature into the residual attention network for convolution processing to obtain an initial feature, performing weighted distribution processing on the initial feature by the residual attention module according to the channel attention mechanism to obtain a new feature, and obtaining an enhanced feature from the new feature;
and step S4, enlarging the size of the enhanced feature through a deconvolution layer, and then reconstructing the enlarged feature through a convolution layer to obtain a high-resolution multispectral remote sensing image.
2. The method for fusing dual-flow remote sensing images based on the residual channel attention mechanism according to claim 1, wherein in step S1, before feature extraction is performed on the panchromatic image, the panchromatic image is downsampled to match the size of the low-resolution multispectral remote sensing image.
3. The method for dual-flow remote sensing image fusion based on the residual channel attention mechanism according to claim 1, wherein the residual attention network of step S2 further comprises a residual attention group, the residual attention group comprising several residual blocks, a long residual connection and short residual connections, the short residual connections stacking the residual blocks, and the long and short residual connections allowing shallow information to propagate directly backward through identity mapping.
4. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, wherein the specific steps of step S3 comprise:
step S31, inputting the spliced feature $F_{b-1}$ into the residual attention network and obtaining an initial feature $X$ after two convolutions;
step S32, inputting the initial feature $X$ into the residual attention module, obtaining a new feature $\tilde{X}$ through the channel attention mechanism, and obtaining the enhanced feature $F_b$ from the new feature $\tilde{X}$ and the spliced feature $F_{b-1}$.
5. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 4, wherein the specific expression of step S31 is:

$X_{b-1} = \delta(W_1^{3\times 3} * F_{b-1} + b_1)$
$X_b = \delta(W_2^{3\times 3} * X_{b-1} + b_2)$

wherein $X_{b-1}$ is the output obtained after the first convolution of the spliced feature $F_{b-1}$, $X_b$ is the output obtained after the second convolution, the initial feature $X$ is $X_b$, $W_1$ and $W_2$ are the weights of the first and second convolutional layers respectively, $b_1$ and $b_2$ denote the biases of the first and second convolutional layers, $3\times 3$ is the size of the convolution kernel, and $\delta(\cdot)$ denotes the ReLU activation function.
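The two-convolution step of S31 can be rendered in NumPy for a single channel; unit stride and zero padding of 1 are assumptions, since the claim only fixes the 3×3 kernel and the ReLU.

```python
import numpy as np

def conv3x3_relu(x, w, b):
    """'Same' 3x3 convolution followed by ReLU: delta(W * x + b)."""
    H, W = x.shape
    xp = np.pad(x, 1)  # zero padding of 1 (assumption)
    out = np.zeros_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w) + b
    return np.maximum(out, 0.0)  # ReLU activation delta(.)

F_prev = np.ones((5, 5))               # spliced feature F_{b-1} (toy values)
w1, b1 = np.full((3, 3), 1 / 9), 0.0   # first-layer weight and bias
w2, b2 = np.full((3, 3), 1 / 9), 0.0   # second-layer weight and bias
X_prev = conv3x3_relu(F_prev, w1, b1)  # X_{b-1}: output of the first convolution
X_b = conv3x3_relu(X_prev, w2, b2)     # X_b: output of the second convolution (= initial feature X)
```

With an all-ones input and an averaging kernel, interior pixels stay at 1.0 while border pixels shrink because the zero padding leaks into their windows.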
6. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 5, wherein the specific expression of step S32 is:

$F_b = CA(X_b) + F_{b-1}$

wherein $CA(\cdot)$ represents the channel attention mechanism function, and the new feature $\tilde{X} = CA(X_b)$.
7. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 1, wherein the specific steps by which the residual attention module in step S3 performs weighted distribution processing on the initial feature according to the channel attention mechanism and obtains the enhanced feature comprise:
step S33, acquiring the number of channels C, and performing global average pooling on the input initial feature $X$ to obtain a channel description $z$;
step S34, passing the channel description $z$ through a dimension-reducing layer and a dimension-raising layer in sequence to obtain a channel statistic $w$, the channel statistic $w$ containing the weight coefficient $w_c$ of each channel;
step S35, multiplying the weight coefficient $w_c$ by the initial feature $X$ to obtain the new feature $\tilde{X}$.
8. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, wherein in step S33 the channel index $c \in \{1, 2, \ldots, C\}$, the initial feature $X = [x_1, x_2, \ldots, x_C]$, and the specific expression of the channel description $z_c$ of the c-th channel is:

$z_c = f_{GP}(x_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j)$

wherein $f_{GP}(\cdot)$ is the global average pooling function, $H \times W$ is the size of the feature map, and $x_c(i, j)$ is the value of the c-th channel feature $x_c$ at position $(i, j)$.
9. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, wherein the specific expression of the channel statistic $w$ in step S34 is:

$w = S(W_U \, \delta(W_D z))$

wherein $S(\cdot)$ denotes the sigmoid activation function, $\delta(\cdot)$ denotes the ReLU activation function, $W_D$ is the weight set of the dimension-reducing convolutional layer, and $W_U$ is the weight set of the dimension-raising convolutional layer.
10. The dual-stream remote sensing image fusion method based on the residual channel attention mechanism according to claim 7, wherein the specific expression of the new feature $\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_C]$ in step S35 is:

$\tilde{x}_c = w_c \cdot x_c$

wherein $w_c$ and $x_c$ are the weight coefficient and the initial feature of the c-th channel, respectively.
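Steps S33 to S35 amount to a squeeze-and-excitation style channel attention: pool, bottleneck, sigmoid, reweight. A NumPy sketch follows; the reduction ratio r=2 and random weights are assumptions for illustration only.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(X, W_D, W_U):
    """X: (C, H, W). Returns w_c * x_c per channel."""
    z = X.mean(axis=(1, 2))                       # S33: global average pooling -> channel description z
    w = sigmoid(W_U @ np.maximum(W_D @ z, 0.0))   # S34: w = S(W_U delta(W_D z))
    return w[:, None, None] * X                   # S35: x~_c = w_c * x_c

C, r = 4, 2                        # channel count and assumed reduction ratio
X = np.random.rand(C, 8, 8)        # toy initial feature
W_D = np.random.rand(C // r, C)    # dimension-reducing layer weights
W_U = np.random.rand(C, C // r)    # dimension-raising layer weights
X_new = channel_attention(X, W_D, W_U)
```

Because the sigmoid bounds every weight coefficient to (0, 1), each channel of the non-negative input can only be attenuated, never amplified.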
CN202111156702.5A 2021-09-30 2021-09-30 Double-current remote sensing image fusion method based on residual channel attention mechanism Pending CN113920043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111156702.5A CN113920043A (en) 2021-09-30 2021-09-30 Double-current remote sensing image fusion method based on residual channel attention mechanism


Publications (1)

Publication Number Publication Date
CN113920043A true CN113920043A (en) 2022-01-11

Family

ID=79237667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111156702.5A Pending CN113920043A (en) 2021-09-30 2021-09-30 Double-current remote sensing image fusion method based on residual channel attention mechanism

Country Status (1)

Country Link
CN (1) CN113920043A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332592A (en) * 2022-03-11 2022-04-12 中国海洋大学 Ocean environment data fusion method and system based on attention mechanism
CN114511470A (en) * 2022-04-06 2022-05-17 中国科学院深圳先进技术研究院 Attention mechanism-based double-branch panchromatic sharpening method
CN114511470B (en) * 2022-04-06 2022-07-08 中国科学院深圳先进技术研究院 Attention mechanism-based double-branch panchromatic sharpening method
CN114547017A (en) * 2022-04-27 2022-05-27 南京信息工程大学 Meteorological big data fusion method based on deep learning
CN114972812A (en) * 2022-06-02 2022-08-30 华侨大学 Non-local attention learning method based on structural similarity
CN117788472A (en) * 2024-02-27 2024-03-29 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm
CN117788472B (en) * 2024-02-27 2024-05-14 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination