CN114066755B - Remote sensing image thin cloud removing method and system based on full-band feature fusion - Google Patents
- Publication number: CN114066755B
- Application: CN202111332467.2A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 5/00 Image enhancement or restoration; G06T 5/73 Deblurring; Sharpening
- G06F 18/25 Fusion techniques; G06F 18/253 Fusion techniques of extracted features
- G06N 3/045 Combinations of networks
- G06N 3/08 Learning methods
- G06T 2207/10032 Satellite or aerial image; Remote sensing
- G06T 2207/10036 Multispectral image; Hyperspectral image
- G06T 2207/20021 Dividing image into blocks, subimages or windows
- G06T 2207/20081 Training; Learning
- G06T 2207/20084 Artificial neural networks [ANN]
Abstract
The invention discloses a remote sensing image thin cloud removing method and system based on full-band feature fusion, wherein the method comprises the following steps: performing thin cloud removal on a multispectral remote sensing image to be processed by using a trained thin cloud removal network. The trained thin cloud removal network is obtained through the following steps: acquiring multispectral remote sensing images of the same region under cloudy and cloud-free conditions to obtain a training set and a test set; sampling to obtain the spatial and spectral features of the image's spectral bands at different resolutions; obtaining feature maps of the image under the cloudy and the cloud-free conditions through feature fusion; calculating a multi-path supervision loss and optimizing the preset network parameters of the thin cloud removal network; and training and testing the optimized thin cloud removal network with the training set and test set to obtain the trained network. The method removes thin cloud with high precision and small error, improves markedly on prior-art methods, and has wide application to multispectral remote sensing images.
Description
Technical Field
The invention relates to a remote sensing image thin cloud removing method and system based on full-band feature fusion, and belongs to the technical field of remote sensing image thin cloud removal.
Background
With the launch of ever more remote sensing satellites, the massive data they acquire provides abundant information for vegetation health monitoring, disaster monitoring, land cover classification and the like. However, thin cloud remains an important factor degrading the quality of remote sensing images, so thin cloud removal is an essential preprocessing step for remote sensing imagery. Current satellite sensors offer more and more spectral bands, generally with high spatial resolution in the visible and near-infrared bands and low resolution in the short-wave infrared bands.
Although current deep learning methods greatly exceed traditional methods in accuracy, deep-learning-based methods generally handle multispectral image data containing thin cloud in one of two ways: first, performing thin cloud removal using only the high-resolution bands; second, resampling the bands of different resolutions to a common spatial resolution with a hand-designed sampling function before training. The first approach cannot fully exploit the spectral information of the multispectral image; in the second, the hand-designed resampling function is highly subjective and can damage the texture information of the sampled image.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a remote sensing image thin cloud removing method and system based on full-band feature fusion that improves thin cloud removal capability. To achieve this purpose, the invention adopts the following technical scheme:
in a first aspect, the invention provides a remote sensing image thin cloud removing method based on full-band feature fusion, which comprises the following steps:
acquiring a multispectral remote sensing image to be processed;
performing thin cloud removal on the multispectral remote sensing image to be processed by using the trained thin cloud removal network, and outputting the multispectral remote sensing image with the thin cloud removed;
the trained thin cloud removal network is obtained through the following steps:
acquiring multispectral remote sensing images under the conditions of cloud and no cloud in the same region, and preprocessing the acquired images to obtain a training set and a test set;
sampling the obtained image by using a pre-constructed convolutional neural network to obtain spatial characteristics and spectral characteristics of spectral bands with different resolutions of the image;
fusing the obtained spatial and spectral features by using a pre-constructed two-way feature fusion module to obtain a feature map of the image under the cloudy condition and a feature map under the cloud-free condition; the pre-constructed two-way feature fusion module further comprises a global dilated residual module, which uses the bands little affected by thin cloud to complement the spatial and spectral features of the bands in the input features heavily affected by thin cloud;
calculating a multi-path supervision loss based on the feature map of the image under the cloudy condition and the feature map under the cloud-free condition, and optimizing the preset network parameters of the thin cloud removal network;
and training and testing the optimized thin cloud removal network by utilizing the training set and the testing set to obtain the trained thin cloud removal network.
With reference to the first aspect, further, the preprocessing the acquired image includes:
segmenting the obtained image into small blocks;
carrying out manual visual interpretation on the small blocks, placing the cloudy image blocks into a cloudy folder and the cloud-free image blocks into a cloud-free folder;
dividing the image blocks in the cloudy folder into a cloudy training set and a cloudy test set, and the image blocks in the cloud-free folder into a cloud-free training set and a cloud-free test set; the cloudy and cloud-free training sets form the training set, and the cloudy and cloud-free test sets form the test set.
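This preprocessing can be sketched as simple patch-grid and split arithmetic. The 384-pixel window and the 20:4 scene split are taken from the embodiment described later; the function names and the use of a fixed ratio are illustrative assumptions, not the patent's implementation.

```python
import math

def patch_grid(height, width, patch=384):
    """Count the non-overlapping patch x patch windows in a scene.

    384 px windows follow the embodiment; edge remainders are dropped.
    """
    return (height // patch) * (width // patch)

def split_train_test(pair_ids, train_ratio=20 / 24):
    """Split cloudy/cloud-free pair identifiers into training and test subsets.

    Defaults to the 20:4 scene split used in the embodiment.
    """
    n_train = math.floor(len(pair_ids) * train_ratio)
    return pair_ids[:n_train], pair_ids[n_train:]

# A Sentinel-2 10 m band tile is 10980 x 10980 px, giving 28 x 28 full windows.
n_patches = patch_grid(10980, 10980)
train_ids, test_ids = split_train_test(list(range(24)))
```

The manual visual-interpretation step (sorting patches into cloudy and cloud-free folders) has no algorithmic equivalent and is omitted here.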
With reference to the first aspect, further, the pre-constructed convolutional neural network includes high, medium, and low resolution branches;
the high-, medium- and low-resolution branches each downsample the input image at the corresponding resolution, and the features output by the high-resolution branch are concatenated on the channel dimension with the features output by the medium-resolution branch to obtain first features;
the medium-resolution branch downsamples the first features and outputs second features, which are concatenated on the channel dimension with the features output by the low-resolution branch;
and outputting the spatial characteristics and the spectral characteristics of the spectral bands with different resolutions of the image.
With reference to the first aspect, further, the pre-constructed two-way feature fusion module comprises 2 parallel depthwise convolution branches and a 1x1 convolutional layer;
the input features are convolved by the 2 depthwise convolution branches respectively to obtain 2 groups of output features;
the 2 groups of output features are concatenated on the channel dimension;
the 1x1 convolutional layer compresses the number of concatenated feature channels to the same number as the input features.
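At the shape level, the module can be sketched as follows. The strides (2 on one branch, 1 then 2 on the other) are taken from the embodiment's description; the function name and the example sizes are illustrative assumptions.

```python
def two_way_fusion_shapes(h, w, c):
    """Trace tensor shapes through the two-way feature fusion module.

    Branch 1: one depthwise 3x3 conv, stride 2      -> (h/2, w/2, c)
    Branch 2: depthwise 3x3 stride 1, then stride 2 -> (h/2, w/2, c)
    The branches are concatenated on channels (2c) and a 1x1 conv
    compresses them back to the input channel count c.
    """
    branch1 = (h // 2, w // 2, c)
    branch2 = (h // 2, w // 2, c)
    concat = (branch1[0], branch1[1], branch1[2] + branch2[2])
    fused = (concat[0], concat[1], c)  # output of the 1x1 convolution
    return concat, fused

concat_shape, fused_shape = two_way_fusion_shapes(384, 384, 64)
```

Because the 1x1 convolution restores the input channel count, the module adds multi-scale context without growing the feature width, which is the "no extra parameters" property claimed in the description.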
With reference to the first aspect, further, the global dilated residual module comprises 2 parallel groups of 3D convolutional layers and dilated convolutional layers, the input of each dilated convolutional layer being connected to the output of a 3D convolutional layer;
the input features are processed by one group of 3D convolutional layers and the result is fed into its dilated convolutional layer; the output features of the dilated convolutional layer are added to the input features to obtain first complemented features;
the first complemented features are processed by the other group of 3D convolutional layers and the result is fed into its dilated convolutional layer; the output features of that dilated convolutional layer are added to its input features to obtain second complemented features, namely the spatial and spectral features of the bands in the input features heavily affected by thin cloud.
With reference to the first aspect, preferably, the cascaded global dilated residual structure can eliminate the gridding artifacts caused by dilated convolution.
With reference to the first aspect, further, the multi-path supervision loss is calculated as:
L = L_h + L_m + L_l + C·(L_edge_h + L_edge_m + L_edge_l)   (1)
In formula (1), L denotes the multi-path supervision loss; L_h, L_m and L_l denote the thin cloud removal losses of the high-, medium- and low-resolution images; L_edge_h, L_edge_m and L_edge_l denote the recovery losses of the high-, medium- and low-resolution thin cloud edge features; C is a weight coefficient.
With reference to the first aspect, preferably, the weight coefficient is 0.01.
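A minimal NumPy sketch of formula (1), using the mean square error for every term as the embodiment does and C = 0.01 as the default weight; the dictionary keys and function names are illustrative assumptions.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def multipath_loss(preds, targets, edge_preds, edge_targets, c=0.01):
    """Multi-path supervision loss of formula (1).

    preds/targets hold the thin cloud removal results and cloud-free
    references at the high ('h'), medium ('m') and low ('l') resolutions;
    edge_preds/edge_targets hold the corresponding edge features.
    """
    image_loss = sum(mse(preds[k], targets[k]) for k in ('h', 'm', 'l'))
    edge_loss = sum(mse(edge_preds[k], edge_targets[k]) for k in ('h', 'm', 'l'))
    return image_loss + c * edge_loss
```

With identical predictions and references every term vanishes and the loss is zero; the small weight keeps the edge terms from dominating the image reconstruction terms.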
In a second aspect, the present invention provides a remote sensing image thin cloud removing system based on full-band feature fusion, including:
an acquisition module: configured to acquire a multispectral remote sensing image to be processed;
an output module: configured to perform thin cloud removal on the multispectral remote sensing image to be processed using the trained thin cloud removal network and output the multispectral remote sensing image with the thin cloud removed.
In combination with the second aspect, further, the output module includes a network processing module for training a thin cloud removal network, the network processing module includes:
a preprocessing module: configured to acquire multispectral remote sensing images of the same region under cloudy and cloud-free conditions and preprocess the acquired images to obtain a training set and a test set;
a sampling module: configured to sample the acquired images with the pre-constructed convolutional neural network to obtain the spatial and spectral features of the image's spectral bands at different resolutions;
a feature fusion module: configured to fuse the obtained spatial and spectral features with the pre-constructed two-way feature fusion module to obtain a feature map of the image under the cloudy condition and a feature map under the cloud-free condition; the pre-constructed two-way feature fusion module further comprises a global dilated residual module, which uses the bands little affected by thin cloud to complement the spatial and spectral features of the bands in the input features heavily affected by thin cloud;
an optimization module: configured to calculate the multi-path supervision loss based on the feature maps of the image under the cloudy and cloud-free conditions and to optimize the preset network parameters of the thin cloud removal network;
a training and testing module: configured to train and test the optimized thin cloud removal network using the training set and test set to obtain the trained thin cloud removal network.
In a third aspect, the invention provides a remote sensing image thin cloud removing device based on full-band feature fusion, comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method of the first aspect.
In a fourth aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
Compared with the prior art, the remote sensing image thin cloud removing method and system based on full-band feature fusion provided by the embodiment of the invention have the beneficial effects that:
acquiring a multispectral remote sensing image to be processed; performing thin cloud removal on it using the trained thin cloud removal network and outputting the multispectral remote sensing image with the thin cloud removed; this improves the thin cloud removal capability, with high removal precision and small error;
in training the thin cloud removal network, the acquired images are sampled with a pre-constructed convolutional neural network to obtain the spatial and spectral features of the image's spectral bands at different resolutions; the invention replaces the hand-designed image resampling method with a convolutional neural network, which can automatically learn the optimal image sampling parameters for each band of the multispectral image according to the target and can fuse the spectral features of spectral bands of different resolutions;
the obtained spatial and spectral features are fused by a pre-constructed two-way feature fusion module to obtain a feature map of the image under the cloudy condition and a feature map under the cloud-free condition; multi-scale features can be extracted from the input spectral bands and fused without increasing the parameter count;
the pre-constructed two-way feature fusion module further comprises a global dilated residual module, which uses the bands little affected by thin cloud to complement the spatial and spectral features of the bands in the input features heavily affected by thin cloud, supplementing the information lost by dilated convolution with few parameters;
the multi-path supervision loss is calculated from the feature maps of the image under the cloudy and cloud-free conditions and the preset network parameters of the thin cloud removal network are optimized; this supervises thin cloud removal at high, medium and low resolutions and improves the thin cloud removal capability, across resolutions, of the remote sensing image thin cloud removal method fusing full-band spectral features.
Drawings
Fig. 1 is a flowchart of a remote sensing image thin cloud removing method based on full-band feature fusion according to an embodiment of the present invention;
fig. 2 is a network structure diagram of a remote sensing image thin cloud removing method based on full-band feature fusion according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the hybrid separable convolution in the two-way feature fusion module of the remote sensing image thin cloud removing method based on full-band feature fusion according to the first embodiment of the present invention;
fig. 4 is a schematic structural diagram of the global dilated residual module in the remote sensing image thin cloud removing method based on full-band feature fusion according to the first embodiment of the present invention;
fig. 5 is a structural diagram of a remote sensing image thin cloud removing system based on full-band feature fusion according to a second embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The first embodiment is as follows:
as shown in fig. 1, an embodiment of the present invention provides a remote sensing image thin cloud removing method based on full-band feature fusion, including:
acquiring a multispectral remote sensing image to be processed;
performing thin cloud removal on the multispectral remote sensing image to be processed by using the trained thin cloud removal network, and outputting the multispectral remote sensing image with the thin cloud removed;
the trained thin cloud removal network is obtained through the following steps:
acquiring multispectral remote sensing images under the conditions of cloud and no cloud in the same region, and preprocessing the acquired images to obtain a training set and a test set;
sampling the obtained image by using a pre-constructed convolutional neural network to obtain spatial characteristics and spectral characteristics of spectral bands with different resolutions of the image;
fusing the obtained spatial and spectral features by using a pre-constructed two-way feature fusion module to obtain a feature map of the image under the cloudy condition and a feature map under the cloud-free condition; the pre-constructed two-way feature fusion module further comprises a global dilated residual module, which uses the bands little affected by thin cloud to complement the spatial and spectral features of the bands in the input features heavily affected by thin cloud;
calculating a multi-path supervision loss based on the feature map of the image under the cloudy condition and the feature map under the cloud-free condition, and optimizing the preset network parameters of the thin cloud removal network;
and training and testing the optimized thin cloud removal network by utilizing the training set and the testing set to obtain the trained thin cloud removal network.
The thin cloud removal network is trained on a computer configured as follows: AMD Ryzen 9 3950X 16-core processor (3.49 GHz), NVIDIA GeForce RTX 3090 graphics processor, 64 GB of memory, Windows 10 operating system. The remote sensing image thin cloud removal network fusing full-band spectral features is implemented on the TensorFlow 2.0 deep learning framework.
The specific training process is as follows:
step 1: acquiring multispectral remote sensing images under the conditions of cloud existence and no cloud existence in the same area, and preprocessing the acquired images to obtain a training set and a testing set.
24 pairs of cloudy and cloud-free Sentinel-2 images are acquired and the RGB bands are composited; each scene is cut into 384x384 windows by a Python script, and the cloudy and cloud-free image blocks are sorted into cloudy and cloud-free folders by manual visual interpretation. 20 pairs are assigned to training data and 4 pairs to validation data. In the training data, the four 10 m bands are cut into 384x384 windows, the six 20 m bands into 192x192 windows and the three 60 m bands into 64x64 windows; the 10 m, 20 m and 60 m cloudy and cloud-free slices at the same location form one group of training data, giving 15680 groups of training slices in all.
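The three window sizes are mutually consistent: each covers the same 3840 m of ground at its band's resolution, as this small check shows (the function name is illustrative):

```python
def window_for_band(resolution_m, ground_extent_m=3840):
    """Pixel window covering a fixed ground extent at a band's resolution.

    A 384 px window at 10 m spans 3840 m, so the 20 m and 60 m bands
    need 192 px and 64 px windows to cover the same area.
    """
    assert ground_extent_m % resolution_m == 0
    return ground_extent_m // resolution_m

window_sizes = {r: window_for_band(r) for r in (10, 20, 60)}
```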
Step 2: sample the acquired images with the pre-constructed convolutional neural network to obtain the spatial and spectral features of the image's spectral bands at different resolutions.
The pre-constructed convolutional neural network comprises high-, medium- and low-resolution branches. Each branch downsamples the input image at its corresponding resolution; the features output by the high-resolution branch are concatenated on the channel dimension with the features output by the medium-resolution branch to obtain first features; the medium-resolution branch downsamples the first features and outputs second features, which are concatenated on the channel dimension with the features output by the low-resolution branch; the spatial and spectral features of the image's spectral bands at different resolutions are then output.
Specifically, as shown in fig. 3, the high-, medium- and low-resolution branches are each formed by an ordinary convolutional network and a two-way feature fusion module. Every input branch first passes through a convolutional layer that extracts raw features, with a 3x3 kernel and stride 1, so the output size matches the input size. The high-resolution branch output is sent to a Parallel Down-sample Residual Block (PDRB), which comprises a two-way feature fusion block and a max-pooling layer with a 3x3 kernel and stride 2, downsampling the input by a factor of 4 in area; its output is concatenated on the channel dimension with the output of the medium-resolution branch. The result then passes through a Fusion Down-sample Residual Block (FDRB), which comprises one two-way feature fusion path and two max-pooling paths with 3x3 kernels and stride 3, downsampling by a factor of 9; the outputs of the paths are summed and passed to the next layer, where they are concatenated on the channel dimension with the output of the low-resolution branch, fusing the outputs of all branches. A further parallel down-sample residual block (3x3 kernel, stride 2) then downsamples by another factor of 4.
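The downsampling factors line up the three branches spatially, which can be verified with a little arithmetic. The slice sizes come from Step 1 of the embodiment; the function name is illustrative.

```python
def branch_fusion_sizes(px_10m=384):
    """Spatial sizes as the high-resolution branch is aligned for fusion.

    The stride-2 PDRB halves each side (area / 4), matching the 20 m
    branch; the stride-3 FDRB then divides each side by 3 (area / 9),
    matching the 60 m branch, so features can be concatenated on channels.
    """
    after_pdrb = px_10m // 2      # aligned with the 192 px 20 m slices
    after_fdrb = after_pdrb // 3  # aligned with the 64 px 60 m slices
    return after_pdrb, after_fdrb

sizes_after = branch_fusion_sizes()
```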
Replacing the hand-designed image resampling method with a convolutional neural network allows optimal image sampling parameters to be learned automatically for each band of the multispectral image according to the target, and allows the spectral features of spectral bands of different resolutions to be fused.
Step 3: fuse the obtained spatial and spectral features with the pre-constructed two-way feature fusion module to obtain the feature map of the image under the cloudy condition and the feature map under the cloud-free condition.
Step 3.1: construct the two-way feature fusion module.
The pre-constructed two-way feature fusion module comprises 2 parallel depthwise convolution branches and a 1x1 convolutional layer; the input features are convolved by the 2 depthwise convolution branches respectively to obtain 2 groups of output features; the 2 groups are concatenated on the channel dimension; and the 1x1 convolutional layer compresses the number of concatenated feature channels to the same number as the input features.
As shown in fig. 2, specifically, one two-way feature fusion module comprises two processing paths: the first contains one depthwise convolution with a 3x3 kernel and stride 2; the second contains two depthwise convolutions with 3x3 kernels and strides 1 and 2. Finally the results of the two paths are concatenated on the channel dimension and fused through a 1x1 convolution to form the output.
This step extracts multi-scale features from the input spectral bands and fuses them without increasing the parameter count.
Step 3.2: and constructing a global hole residual error module.
The global cavity residual error module comprises 2 groups of 3D convolutional layers and cavity convolutional layers which are parallel, wherein the input end of each cavity convolutional layer is connected with the output end of the 3D convolutional layer; processing the input characteristics by utilizing a group of 3D convolution layers, and inputting a processing result into the void convolution layer; adding the output characteristics of the cavity convolution layer and the input characteristics to obtain first completion characteristics; processing the first completion characteristics by using the other group of 3D convolution layers, and inputting a processing result into the hole convolution layer; adding the output characteristics of the void convolution layer and the input characteristics to obtain second complementary characteristics, namely the spatial characteristics and the spectral characteristics of the wave band which is greatly influenced by the thin cloud in the input characteristics
As shown in FIG. 4, specifically, the convolution kernel of the globally shared convolutional layer is shared across all channels, and its effective size is (2·Rate + 1) x (2·Rate + 1), where Rate is the dilation rate of the dilated convolution, set to 2, 3 and 4 from bottom to top. The dilated convolutional layer has only 3x3 trainable parameters, while its effective receptive field is (2·Rate + 1) x (2·Rate + 1). The input of each basic unit first passes through the globally shared convolutional layer and then through the dilated convolutional layer; the input and output of the dilated convolutional layer are added and fed into the next basic unit, and the input and output of the shared dilated convolution module are added to form a cascaded residual. Here, a feature fusion channel is constructed from six dilated residual modules to extract features.
This step extracts information from the input features with few parameters, and the cascaded residual structure eliminates the gridding artifacts caused by dilated convolution.
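A minimal NumPy sketch of one dilated residual unit follows. The single shared 3x3 kernel per unit and the 'same' padding are assumptions for illustration; in the described network, six such units are cascaded inside the feature fusion channel:

```python
import numpy as np

def shared_dilated_conv(x, kernel, rate):
    """3x3 dilated convolution whose 9 weights are shared across all channels.
    The 3x3 taps are spread `rate` pixels apart, so the effective footprint
    is (2*rate + 1) x (2*rate + 1) while only 3x3 weights are trainable."""
    c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (rate, rate), (rate, rate)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            # sample 3 taps per axis, `rate` pixels apart
            patch = xp[:, i:i + 2 * rate + 1:rate, j:j + 2 * rate + 1:rate]
            out[:, i, j] = (patch * kernel).sum(axis=(1, 2))
    return out

def dilated_residual_unit(x, kernel, rate):
    # adding the input back (residual) is what suppresses gridding artifacts
    return x + shared_dilated_conv(x, kernel, rate)

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 12, 12))
y = x
for rate in (2, 3, 4):  # dilation rates set "from bottom to top" in the text
    y = dilated_residual_unit(y, rng.standard_normal((3, 3)), rate)
print(y.shape)  # (6, 12, 12): spatial size preserved
```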
Step 4: calculating a multi-path supervision loss based on the image feature map under the cloudy condition and the image feature map under the cloud-free condition, and optimizing the preset network parameters of the thin cloud removal network.
Calculating the multipath supervision loss by the following formula:
L = L_h + L_m + L_l + C(L_edge_h + L_edge_m + L_edge_l)    (1)
In formula (1), L denotes the multi-path supervision loss; L_h, L_m and L_l denote the high-, medium- and low-resolution thin cloud removal losses respectively; L_edge_h, L_edge_m and L_edge_l denote the recovery losses of the high-, medium- and low-resolution thin cloud edge features respectively; C denotes a weight coefficient.
Specifically, the feature map of the cloudy image and the feature map of the cloud-free image are input into the feature fusion channel formed by the global dilated residual modules. The overall structure is based on U-Net: the up-sampling stages correspond to the input branches, and by setting different strides for the transposed convolutions the features are up-sampled to different resolutions, outputting thin cloud removal results at high, medium and low resolution respectively. The mean square error is used as the loss function; the cloud masks and thin cloud removal results at each resolution are substituted into it, yielding the high-, medium- and low-resolution thin cloud removal losses L_h, L_m and L_l. The recovery losses of the edge features, L_edge_h, L_edge_m and L_edge_l, are added to the loss function, and the image and edge losses are combined with different weight coefficients to obtain the multi-path supervision loss; the network parameters are then optimized by the back-propagation algorithm.
This step supervises thin cloud removal at all three resolutions (high, medium and low), improving the removal capability of the full-band feature-fusion method across resolutions.
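The multi-path loss of formula (1) can be sketched as follows. The finite-difference edge operator and the weight C = 0.1 are illustrative assumptions; the text only states that MSE is used and that the edge recovery losses receive their own weight coefficient:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def edge_map(img):
    """Simple finite-difference edge magnitude (stand-in for the edge feature)."""
    gx = np.abs(np.diff(img, axis=-1, append=img[..., -1:]))
    gy = np.abs(np.diff(img, axis=-2, append=img[..., -1:, :]))
    return gx + gy

def multipath_loss(preds, targets, c=0.1):
    """L = L_h + L_m + L_l + C * (L_edge_h + L_edge_m + L_edge_l)."""
    img_loss = sum(mse(preds[k], targets[k]) for k in ('h', 'm', 'l'))
    edge_loss = sum(mse(edge_map(preds[k]), edge_map(targets[k]))
                    for k in ('h', 'm', 'l'))
    return img_loss + c * edge_loss

rng = np.random.default_rng(2)
# toy targets at three resolutions: 32x32, 16x16, 8x8
targets = {k: rng.random((32 >> s, 32 >> s)) for s, k in enumerate(('h', 'm', 'l'))}
preds = {k: v + 0.01 * rng.standard_normal(v.shape) for k, v in targets.items()}
print(multipath_loss(preds, targets))    # small positive value
print(multipath_loss(targets, targets))  # 0.0 for a perfect prediction
```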
Step 5: training and testing the optimized thin cloud removal network with the training set and the test set to obtain the trained thin cloud removal network.
The thin cloud removal network fusing full-band spectral features is trained with the training data from step 1. All convolution kernels in the network are initialized from a Gaussian distribution with mean 0 and variance 0.01, and the biases are initialized to a fixed value of 0.0. The Adam optimization algorithm is used with a batch size of 16 and an initial learning rate of 0.002; the learning rate is kept constant for the first 50000 iterations and thereafter multiplied by 0.98 every 100 iterations. In actual training, model accuracy is verified once every 5 epochs, and the model essentially converges after 40 epochs.
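The learning-rate schedule can be expressed as a small helper. Reading the decay as "multiply by 0.98 once per 100 iterations after iteration 50000" is an interpretation of the translated text:

```python
def learning_rate(iteration, base_lr=0.002, hold=50_000,
                  decay_every=100, gamma=0.98):
    """Constant for the first `hold` iterations, then multiplied by
    `gamma` once every `decay_every` iterations."""
    if iteration < hold:
        return base_lr
    return base_lr * gamma ** ((iteration - hold) // decay_every)

for it in (0, 49_999, 50_000, 50_100, 60_000):
    print(it, learning_rate(it))
```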
Following the above steps, an effective thin cloud removal network is established on a convolutional backbone, using the full-band spectral features to detect clouds in remote sensing images, and good thin cloud removal accuracy is achieved.
The method can be extended to thin cloud removal tasks on other multispectral remote sensing images of the same type; it is only necessary to set appropriate sampling parameters and to judge, according to the task, whether the thin cloud removal network needs to be retrained. If so, training data of paired cloudy and cloud-free multispectral images is built according to step 1, and the network is retrained to obtain a thin cloud removal network suitable for those multispectral remote sensing images.
Example two:
as shown in fig. 5, an embodiment of the present invention provides a remote sensing image thin cloud removing system based on full-band feature fusion, including:
an acquisition module: used for acquiring the multispectral remote sensing image to be processed;
an output module: used for performing thin cloud removal on the multispectral remote sensing image to be processed with the trained thin cloud removal network, and outputting the multispectral remote sensing image with the thin cloud removed.
The output module includes a network processing module for training a thin cloud removal network, the network processing module including:
a preprocessing module: used for acquiring multispectral remote sensing images of the same region under cloudy and cloud-free conditions, and preprocessing the acquired images to obtain a training set and a test set;
a sampling module: used for sampling the acquired images with the pre-constructed convolutional neural network to obtain the spatial and spectral features of the image's spectral bands at different resolutions;
a feature fusion module: used for fusing the obtained spatial and spectral features with the pre-constructed two-way feature fusion module to respectively obtain the image feature map under the cloudy condition and the image feature map under the cloud-free condition; the pre-constructed two-way feature fusion module further comprises a global dilated residual module, which completes the spatial and spectral features of the bands in the input features that are strongly affected by thin cloud using the bands that are weakly affected;
an optimization module: used for calculating the multi-path supervision loss based on the image feature maps under the cloudy and cloud-free conditions, and optimizing the preset network parameters of the thin cloud removal network;
a training and testing module: used for training and testing the optimized thin cloud removal network with the training set and the test set to obtain the trained thin cloud removal network.
Example three:
the embodiment of the invention provides a remote sensing image thin cloud removing device based on full-wave-band feature fusion, which comprises a processor and a storage medium, wherein the processor is used for processing a remote sensing image;
the storage medium is to store instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method of embodiment one.
Example four:
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the method of embodiment one.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (6)
1. A remote sensing image thin cloud removing method based on full-waveband feature fusion is characterized by comprising the following steps:
acquiring a multispectral remote sensing image to be processed;
performing thin cloud removal on the multispectral remote sensing image to be processed by using the trained thin cloud removal network, and outputting the multispectral remote sensing image with the thin cloud removed;
the trained thin cloud removal network is obtained through the following steps:
acquiring multispectral remote sensing images under the conditions of cloud and no cloud in the same region, and preprocessing the acquired images to obtain a training set and a test set;
sampling the acquired image by using a pre-constructed convolutional neural network to obtain spatial characteristics and spectral characteristics of spectral bands with different resolutions of the image; wherein the pre-constructed convolutional neural network comprises high, medium and low resolution branches; the high-resolution branch, the medium-resolution branch and the low-resolution branch respectively carry out down-sampling on the corresponding resolution of the input image, and the output characteristics of the high-resolution branch and the characteristics output by the medium-resolution branch are connected on a channel to obtain first characteristics; the medium-resolution branch downsamples the first characteristic and outputs a second characteristic, and the second characteristic is connected with the characteristic output by the low-resolution branch on a channel; outputting the spatial characteristics and the spectral characteristics of the spectral bands with different resolutions of the image;
fusing the obtained spatial features and spectral features with a pre-constructed two-way feature fusion module to respectively obtain an image feature map under the cloudy condition and an image feature map under the cloud-free condition; the pre-constructed two-way feature fusion module comprises two parallel depthwise convolution branches and a 1x1 convolutional layer; the features are convolved by the two depthwise convolution paths to obtain two groups of output features; the two groups of output features are concatenated on the channel dimension; the 1x1 convolutional layer compresses the number of concatenated channels to the same number as the input features; a global dilated residual module completes the spatial and spectral features of the bands in the input features that are strongly affected by thin cloud using the bands that are weakly affected; the global dilated residual module comprises two groups of 3D convolutional layers and dilated convolutional layers, the input of each dilated convolutional layer being connected to the output of its 3D convolutional layer; the input features are processed by one group of 3D convolutional layers and the result is fed into the dilated convolutional layer; the output of the dilated convolutional layer is added to the input features to obtain first completed features; the first completed features are processed by the other group of 3D convolutional layers and the result is fed into its dilated convolutional layer; the output of the dilated convolutional layer is added to the input to obtain second completed features, namely the spatial and spectral features of the bands in the input features that are strongly affected by thin cloud;
calculating multi-path supervision loss based on the image characteristic diagram under the cloud condition and the image characteristic diagram under the cloud-free condition, and optimizing the preset network parameters of the thin cloud removal network;
and training and testing the optimized thin cloud removal network by utilizing the training set and the testing set to obtain the trained thin cloud removal network.
2. The remote sensing image thin cloud removing method based on full-band feature fusion according to claim 1, wherein preprocessing the acquired images comprises:
segmenting the obtained image into small blocks;
carrying out manual visual interpretation on the small blocks, putting the image blocks with clouds into a cloud folder, and putting the image blocks without clouds into a cloud-free folder;
dividing image blocks in a cloud folder into a cloud training set and a cloud test set, and dividing image blocks in a non-cloud folder into a non-cloud training set and a non-cloud test set; the training set is formed by the cloud training set and the non-cloud training set, and the test set is formed by the cloud test set and the non-cloud test set.
3. The remote sensing image thin cloud removing method based on full-band feature fusion of claim 1, wherein the multi-path supervision loss is calculated by the following formula:
L = L_h + L_m + L_l + C(L_edge_h + L_edge_m + L_edge_l)    (1)
In formula (1), L denotes the multi-path supervision loss; L_h, L_m and L_l denote the high-, medium- and low-resolution thin cloud removal losses respectively; L_edge_h, L_edge_m and L_edge_l denote the recovery losses of the high-, medium- and low-resolution thin cloud edge features respectively; C denotes a weight coefficient.
4. A remote sensing image thin cloud removing system based on full-wave band feature fusion is characterized by comprising:
an acquisition module: the multispectral remote sensing image processing method comprises the steps of obtaining a multispectral remote sensing image to be processed;
an output module: used for performing thin cloud removal on the multispectral remote sensing image to be processed with the trained thin cloud removal network, and outputting the multispectral remote sensing image with the thin cloud removed;
wherein the output module comprises a network processing module for training a thin cloud removal network, the network processing module comprising:
a preprocessing module: used for acquiring multispectral remote sensing images of the same region under cloudy and cloud-free conditions, and preprocessing the acquired images to obtain a training set and a test set;
a sampling module: used for sampling the acquired images with the pre-constructed convolutional neural network to obtain the spatial and spectral features of the image's spectral bands at different resolutions; wherein the pre-constructed convolutional neural network comprises high-, medium- and low-resolution branches; each branch down-samples the input image at its corresponding resolution, and the output features of the high-resolution branch are concatenated on the channel dimension with the features output by the medium-resolution branch to obtain first features; the medium-resolution branch down-samples the first features and outputs second features, which are concatenated on the channel dimension with the features output by the low-resolution branch; the spatial and spectral features of the image's spectral bands at different resolutions are output;
a feature fusion module: used for fusing the obtained spatial and spectral features with a pre-constructed two-way feature fusion module to respectively obtain an image feature map under the cloudy condition and an image feature map under the cloud-free condition; the pre-constructed two-way feature fusion module comprises two parallel depthwise convolution branches and a 1x1 convolutional layer; the features are convolved by the two depthwise convolution paths to obtain two groups of output features; the two groups of output features are concatenated on the channel dimension; the 1x1 convolutional layer compresses the number of concatenated channels to the same number as the input features; a global dilated residual module completes the spatial and spectral features of the bands in the input features that are strongly affected by thin cloud using the bands that are weakly affected; the global dilated residual module comprises two groups of 3D convolutional layers and dilated convolutional layers, the input of each dilated convolutional layer being connected to the output of its 3D convolutional layer; the input features are processed by one group of 3D convolutional layers and the result is fed into the dilated convolutional layer; the output of the dilated convolutional layer is added to the input features to obtain first completed features; the first completed features are processed by the other group of 3D convolutional layers and the result is fed into its dilated convolutional layer; the output of the dilated convolutional layer is added to the input to obtain second completed features, namely the spatial and spectral features of the bands in the input features that are strongly affected by thin cloud;
an optimization module: used for calculating the multi-path supervision loss based on the image feature maps under the cloudy and cloud-free conditions, and optimizing the preset network parameters of the thin cloud removal network;
a training and testing module: used for training and testing the optimized thin cloud removal network with the training set and the test set to obtain the trained thin cloud removal network.
5. A remote sensing image thin cloud removing device based on full-wave band feature fusion is characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of the method according to any one of claims 1 to 3.
6. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111332467.2A CN114066755B (en) | 2021-11-11 | 2021-11-11 | Remote sensing image thin cloud removing method and system based on full-band feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114066755A CN114066755A (en) | 2022-02-18 |
CN114066755B true CN114066755B (en) | 2023-02-14 |
Family
ID=80275214
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111332467.2A Active CN114066755B (en) | 2021-11-11 | 2021-11-11 | Remote sensing image thin cloud removing method and system based on full-band feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114066755B (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||