CN113379618A - Optical remote sensing image cloud removing method based on residual dense connection and feature fusion - Google Patents
- Publication number
- CN113379618A (application number CN202110491313.1A)
- Authority
- CN
- China
- Prior art keywords
- cloud
- network model
- remote sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
- G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition
- G06F18/2415—Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/253—Fusion techniques of extracted features
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/20076—Probabilistic image processing
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to an optical remote sensing image cloud removal method based on residual dense connections and feature fusion, comprising the following steps: the method is designed around a generator network model and a discriminator network model, where the generator is responsible for producing clear cloud-free images and the discriminator plays an adversarial game with the generator, thereby improving the generator's cloud-removal performance; the generator removes clouds from a remote sensing image through a feature extraction part built from residual dense connection units and a feature reconstruction part built from residual dense connection units and feature fusion units; the discriminator is responsible for distinguishing cloud-removed images from real cloud-free images and providing gradients for adversarial training of the network; the generator is trained with a linear combination of a mean absolute error loss function, a perceptual loss function, and an adversarial loss function, and the discriminator is trained with a cross-entropy loss function; with the trained model parameters, a cloud-free remote sensing image can be restored from a single cloudy remote sensing image.
Description
Technical Field
The invention relates to the technical fields of image processing and deep learning, and in particular to an optical remote sensing image cloud removal method based on residual dense connections and feature fusion.
Background
In recent years, remote sensing technology has developed rapidly and is applied in many fields, such as ecological monitoring, weather prediction, disaster prediction, and traffic monitoring. Optical remote sensing images captured by satellites can be occluded by clouds and haze, causing quality degradation, color distortion, loss of key information, and other problems that seriously hinder the analysis and processing of the image content.
Traditional cloud removal techniques for remote sensing images mainly include physical model methods, homomorphic filtering, and wavelet transforms. A physical model method analyzes the physical model of cloud formation, estimates the model parameters from assumed prior knowledge, and finally inverts the model to recover a cloud-free image; however, such algorithms have obvious limitations, cannot be applied in all situations, and give poor results when the assumed prior fails. Homomorphic filtering assumes that clouds and haze lie in the low-frequency information of the remote sensing image and removes their influence by enhancing the high-frequency information. The wavelet transform method exploits the frequency difference between clouds and ground information: wavelet decomposition yields coefficients at different resolutions, and the detail and approximation coefficients at each resolution are processed to restore the image. However, such filtering methods can blur surface information while processing the image, which has adverse effects.
With the development of convolutional neural networks (CNNs), more and more deep learning algorithms have achieved excellent results in computer vision tasks such as object detection, object recognition, and image processing. Deep learning offers a broad approach to remote sensing image cloud removal, whose main aims are to remove the influence of clouds and haze, recover as much detail as possible, and improve image quality and clarity; this technology is of great practical significance.
In summary, a cloud removal technique designed with deep learning avoids the limitations of physical models and is simple, practical, and broadly applicable: by learning the mapping between cloudy and cloud-free remote sensing images, a clear cloud-free image can be obtained from a single cloudy image without additional assumed information. An optical remote sensing image cloud removal method that does not depend on a physical model therefore has strong practical value and application prospects.
Disclosure of Invention
The invention provides an optical remote sensing image cloud removal method based on residual dense connections and feature fusion. The method is designed around a generator network model and a discriminator network model: the generator uses residual dense connection units and feature fusion units to remove clouds from remote sensing images, while the discriminator improves cloud-removal performance through adversarial training. In this way the mapping function between cloudy and cloud-free remote sensing images is learned adaptively, and a cloud-free image can be recovered directly from a cloudy one. A detailed description is given below.
The technical scheme of the invention is as follows: an optical remote sensing image cloud removal method based on residual dense connections and feature fusion, comprising the following steps:
the method is designed around a generator network model and a discriminator network model, where the generator is responsible for producing clear cloud-free images and the discriminator plays an adversarial game with the generator, thereby improving the generator's cloud-removal performance;
the generator removes clouds from a remote sensing image through a feature extraction part and a feature reconstruction part: the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
the discriminator is responsible for distinguishing cloud-removed images from real cloud-free images and providing gradients for adversarial training of the network;
the generator is trained with a linear combination of a mean absolute error loss function, a perceptual loss function, and an adversarial loss function, and the discriminator is trained with a cross-entropy loss function;
with the trained model parameters, a cloud-free remote sensing image can be restored from a single cloudy remote sensing image.
The generator network model is specifically as follows:
the generator is built from convolution, deconvolution, and the LReLU, ReLU, and Tanh activation functions;
the generator comprises two parts: image feature extraction and image feature reconstruction;
the image feature extraction part consists of m residual dense connection units and includes convolutions and LReLU activation functions;
the image feature reconstruction part consists of n residual dense connection units and n feature fusion units and includes deconvolutions, ReLU activation functions, and a Tanh activation function;
a residual dense connection unit consists of f densely connected convolutions and g residual learning connections: the dense connections among the f convolutions realize feature reuse and maximize the flow of feature information between layers, while residual learning prevents the vanishing-gradient problem;
the feature fusion unit uses receptive fields of sizes a × a and b × b to perform multi-scale feature extraction and fusion.
The discriminator network model is specifically as follows:
the discriminator is built from convolution, batch normalization, the LReLU activation function, and the Sigmoid activation function, and consists of q convolutional layers;
the discriminator takes real cloud-free images as positive samples and cloud-removed images as negative samples, uses the cloudy image as reference information, and learns to distinguish the two through training;
the discriminator and the generator are trained alternately so that gradients are continually provided to the generator, keeping the two models in an adversarial game.
The mean absolute error loss function is:
Loss_M = ||E - G(F)||_1
where G(·) denotes the output of the generator, E the cloud-free image, F the cloudy image, and G(F) the cloud-removed image.
The perceptual loss function is:
Loss_P = ||β(E) - β(G(F))||_2^2
where β(·) denotes the feature map output by convolutional layer 2_2 of VGG16.
The adversarial loss function is:
Loss_A = ∑[-log D(F, G(F))]
where D(·) denotes the output of the discriminator and D(F, G(F)) denotes the discriminator output for the cloud-removed image G(F) with the cloudy image F as reference information.
The cross-entropy loss function is:
Loss_D = ∑[-log D(F, E) - log(1 - D(F, G(F)))]
where D(F, E) denotes the discriminator output for the cloud-free image E with the cloudy image F as reference information.
The beneficial effects of the technical scheme provided by the invention are:
1. The method does not depend on the assumptions or derivation of a physical model and thus avoids its limitations; it learns the mapping function between cloudy and cloud-free remote sensing images through a deep convolutional neural network and has strong applicability.
2. No extra information is required: a cloud-removed remote sensing image is obtained from a single cloudy remote sensing image, so the method is simple and easy to apply.
3. The cloud-removal results are natural and the method is efficient.
Drawings
FIG. 1 is a flow chart of the optical remote sensing image cloud removal method based on residual dense connections and feature fusion;
FIG. 2 is a schematic diagram of the generator network model structure;
FIG. 3 is a schematic diagram of the residual dense connection unit structure;
FIG. 4 is a schematic diagram of the feature fusion unit structure;
FIG. 5 is a schematic diagram of the discriminator network model structure;
FIG. 6 shows a cloudy remote sensing image and the corresponding cloud-removed image from the experimental results;
FIG. 7 shows another cloudy remote sensing image and the corresponding cloud-removed image from the experimental results.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the invention are described in further detail below with reference to FIGS. 1-7.
Example 1
To achieve high-quality cloud removal for optical remote sensing images, an embodiment of the invention provides an optical remote sensing image cloud removal method based on residual dense connections and feature fusion, described in detail below with reference to FIG. 1:
101: the method is designed around a generator network model and a discriminator network model, where the generator is responsible for producing clear cloud-free images and the discriminator plays an adversarial game with the generator, thereby improving the generator's cloud-removal performance;
102: the generator removes clouds from a remote sensing image through a feature extraction part and a feature reconstruction part: the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
103: the discriminator is responsible for distinguishing cloud-removed images from real cloud-free images and providing gradients for adversarial training of the network;
104: the generator is trained with a linear combination of a mean absolute error loss function, a perceptual loss function, and an adversarial loss function, and the discriminator is trained with a cross-entropy loss function;
105: with the trained model parameters, a cloud-free remote sensing image can be restored from a single cloudy remote sensing image;
The specific steps of step 101 are as follows:
1) an adversarial learning strategy is adopted, and the optical remote sensing image cloud removal method is realized by a generator network model and a discriminator network model;
2) the generator is responsible for producing a clear cloud-free image; its input is a cloudy image F and its output is a cloud-removed image G(F);
3) the discriminator is responsible for distinguishing the cloud-free image E from the cloud-removed image G(F) and playing an adversarial game with the generator to improve its cloud-removal performance; its input is the cloud-free image E or the cloud-removed image G(F), and its output is the probability that the input is the cloud-free image E.
The specific steps of building the generator network model in step 102 are as follows:
1) the generator network model is built from convolution, deconvolution, the LReLU activation function (slope α), the ReLU activation function, and the Tanh activation function, and comprises an image feature extraction part and an image feature reconstruction part; the structure is shown in FIG. 2;
2) the image feature extraction part consists of m residual dense connection units (RDCUs) and includes convolutions and LReLU activation functions; the image feature reconstruction part consists of n residual dense connection units and n feature fusion units (FFUs) and includes deconvolutions, ReLU activation functions, and a Tanh activation function; the feature extraction part reduces the feature map size with stride-2 convolutions, and the feature reconstruction part enlarges it with deconvolutions; the feature extraction part uses LReLU activation functions, the last layer of the reconstruction part uses a Tanh activation function, and the remaining layers use ReLU activation functions;
3) the residual dense connection unit is composed of f densely connected convolutions and g residual learning connections; the structure is shown in FIG. 3. Feature reuse is achieved through the dense connections among the f convolutions, maximizing the flow of feature information between layers; the number of feature channels is then reduced by a 1 × 1 convolution, and finally residual learning prevents the vanishing-gradient problem. All convolutions in the unit have stride 1; except for the final 1 × 1 convolution, all kernels are 4 × 4. The structure of the residual dense connection unit is not limited to this;
4) the feature fusion unit extracts and fuses features using receptive fields of sizes a × a and b × b; the structure is shown in FIG. 4. The unit first performs multi-scale feature extraction and fusion with the a × a and b × b receptive fields, then reduces the number of features with a 1 × 1 convolution; all strides are 1. The structure of the feature fusion unit is not limited to this;
5) unless otherwise specified, the remaining convolutions have kernel size d.
The specific steps for the discriminator network model in step 103 are as follows:
1) the discriminator network model is built from convolution, batch normalization (BN), the LReLU activation function (slope α), and the Sigmoid activation function, and consists of q convolutional layers; the last two convolutions have stride 1 and the rest stride 2; the last layer uses a Sigmoid activation function and the rest use LReLU activation functions; the structure is shown in FIG. 5;
2) the discriminator is trained with real cloud-free images E as positive samples and cloud-removed images G(F) as negative samples, using the cloudy image F as reference information, and outputs a probability in the range [0, 1]: when the input is a real cloud-free image E it should output a high probability, and otherwise a low probability;
3) the discriminator and the generator are trained alternately so that gradients are continually provided to the generator, keeping the two models in an adversarial game.
The specific steps of constructing the loss functions in step 104 are as follows:
1) the generator is trained with a linear combination of a mean absolute error loss function, a perceptual loss function, and an adversarial loss function, and the discriminator is trained with a cross-entropy loss function, as described below;
2) the mean absolute error loss function is shown in equation (1):
Loss_M = ||E - G(F)||_1    (1)
where G(·) denotes the output of the generator, E the cloud-free image, F the cloudy image, and G(F) the cloud-removed image;
3) the perceptual loss function is shown in equation (2):
Loss_P = ||β(E) - β(G(F))||_2^2    (2)
where β(·) denotes the feature map output by convolutional layer 2_2 of VGG16;
4) the adversarial loss function is shown in equation (3):
Loss_A = ∑[-log D(F, G(F))]    (3)
where D(·) denotes the output of the discriminator and D(F, G(F)) denotes the discriminator output for the cloud-removed image G(F) with the cloudy image F as reference information;
5) the total loss function for training the generator is a linear combination of the three, as shown in equation (4):
Loss_G = λLoss_M + βLoss_P + αLoss_A    (4)
where λ, β, and α are the weights of Loss_M, Loss_P, and Loss_A, respectively;
6) the discriminator is trained with a cross-entropy loss function, as shown in equation (5):
Loss_D = ∑[-log D(F, E) - log(1 - D(F, G(F)))]    (5)
where D(F, E) denotes the discriminator output for the cloud-free image E with the cloudy image F as reference information.
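A minimal PyTorch sketch of equations (1)-(5), assuming torch is available; the argument names, the specific weight values, and the numerical-stability epsilon are our additions (the patent leaves the weights λ, β, α unspecified):

```python
import torch

def generator_loss(E, G_F, beta_E, beta_GF, D_fake, lam=100.0, mu=1.0, nu=0.5):
    """Equations (1)-(4). beta_E / beta_GF stand for the VGG16 conv2_2
    feature maps beta(E) and beta(G(F)); D_fake is D(F, G(F)).
    lam, mu, nu are placeholder weights for the patent's lambda, beta, alpha."""
    eps = 1e-8
    loss_m = torch.sum(torch.abs(E - G_F))           # eq. (1): ||E - G(F)||_1
    loss_p = torch.sum((beta_E - beta_GF) ** 2)      # eq. (2): perceptual loss
    loss_a = torch.sum(-torch.log(D_fake + eps))     # eq. (3): adversarial loss
    return lam * loss_m + mu * loss_p + nu * loss_a  # eq. (4): linear combination

def discriminator_loss(D_real, D_fake, eps=1e-8):
    """Equation (5): cross-entropy loss; D_real = D(F, E), D_fake = D(F, G(F))."""
    return torch.sum(-torch.log(D_real + eps) - torch.log(1.0 - D_fake + eps))
```

Minimizing equation (5) drives D(F, E) toward 1 and D(F, G(F)) toward 0, while the adversarial term in equation (3) pushes the generator to make D(F, G(F)) large, which is the game described above.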
The specific step of step 105 is as follows: using the trained model parameters, a cloud-free remote sensing image is generated from a single cloudy remote sensing image.
Example 2
The scheme of Example 1 is described in detail below with concrete parameter settings, with reference to the drawings and calculation formulas:
201: the method is designed around a generator network model and a discriminator network model, where the generator is responsible for producing clear cloud-free images and the discriminator plays an adversarial game with the generator, thereby improving the generator's cloud-removal performance;
202: the generator removes clouds from a remote sensing image through a feature extraction part and a feature reconstruction part: the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
203: the discriminator is responsible for distinguishing cloud-removed images from real cloud-free images and providing gradients for adversarial training of the network;
204: the generator is trained with a linear combination of a mean absolute error loss function, a perceptual loss function, and an adversarial loss function, and the discriminator is trained with a cross-entropy loss function;
205: with the trained model parameters, a cloud-free remote sensing image can be restored from a single cloudy remote sensing image;
The specific steps of step 201 are as follows:
1) an adversarial learning strategy is adopted, and the optical remote sensing image cloud removal method is realized by a generator network model and a discriminator network model;
2) the generator is responsible for producing a clear cloud-free image; its input is a cloudy image F and its output is a cloud-removed image G(F);
3) the discriminator is responsible for distinguishing the cloud-free image E from the cloud-removed image G(F) and playing an adversarial game with the generator to improve its cloud-removal performance; its input is the cloud-free image E or the cloud-removed image G(F), and its output is the probability that the input is the cloud-free image E.
The specific steps of building the generator network model in step 202 are as follows:
1) the generator network model is built from convolution, deconvolution, the LReLU activation function (slope 0.2), the ReLU activation function, and the Tanh activation function, and comprises an image feature extraction part and an image feature reconstruction part; the structure is shown in FIG. 2;
2) the image feature extraction part consists of 3 residual dense connection units (RDCUs) and includes convolutions and LReLU activation functions; the image feature reconstruction part consists of 3 residual dense connection units and 3 feature fusion units (FFUs) and includes deconvolutions, ReLU activation functions, and a Tanh activation function; the feature extraction part reduces the feature map size with stride-2 convolutions, and the feature reconstruction part enlarges it with deconvolutions; the feature extraction part uses LReLU activation functions, the last layer of the reconstruction part uses a Tanh activation function, and the remaining layers use ReLU activation functions;
3) the residual dense connection unit is composed of 3 densely connected convolutions and 1 residual learning connection; the structure is shown in FIG. 3. Feature reuse is achieved through the dense connections among the 3 convolutions, maximizing the flow of feature information between layers; the number of feature channels is then reduced by a 1 × 1 convolution, and finally residual learning prevents the vanishing-gradient problem. All convolutions in the unit have stride 1; except for the final 1 × 1 convolution, all kernels are 4 × 4. The structure of the residual dense connection unit is not limited to this;
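The residual dense connection unit just described can be sketched in PyTorch as follows; the channel counts, the growth rate, and the asymmetric zero padding (used so the patent's even 4 × 4 kernels preserve spatial size at stride 1) are our assumptions:

```python
import torch
import torch.nn as nn

class RDCU(nn.Module):
    """Sketch of a residual dense connection unit (FIG. 3): three densely
    connected convolutions, a 1x1 channel-reduction convolution, and a
    residual skip connection."""
    def __init__(self, channels=64, growth=32):
        super().__init__()
        def conv4(cin, cout):
            # 4x4 kernel, stride 1; padding (1,2,1,2) keeps H x W unchanged
            return nn.Sequential(nn.ZeroPad2d((1, 2, 1, 2)),
                                 nn.Conv2d(cin, cout, kernel_size=4))
        self.c1 = conv4(channels, growth)
        self.c2 = conv4(channels + growth, growth)
        self.c3 = conv4(channels + 2 * growth, growth)
        self.reduce = nn.Conv2d(channels + 3 * growth, channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # dense connections: each conv sees the concatenation of all
        # previous feature maps, maximizing feature information flow
        f1 = self.act(self.c1(x))
        f2 = self.act(self.c2(torch.cat([x, f1], dim=1)))
        f3 = self.act(self.c3(torch.cat([x, f1, f2], dim=1)))
        out = self.reduce(torch.cat([x, f1, f2, f3], dim=1))
        return out + x  # residual learning against vanishing gradients
```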
4) the feature fusion unit extracts and fuses features using receptive fields of sizes 3 × 3 and 5 × 5; the structure is shown in FIG. 4. The unit first performs multi-scale feature extraction and fusion with the 3 × 3 and 5 × 5 receptive fields, then reduces the number of features with a 1 × 1 convolution; all strides are 1. The structure of the feature fusion unit is not limited to this;
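A corresponding sketch of the feature fusion unit, with parallel 3 × 3 and 5 × 5 branches whose outputs are concatenated and reduced by a 1 × 1 convolution; the channel counts and the use of concatenation for "fusion" are our assumptions:

```python
import torch
import torch.nn as nn

class FFU(nn.Module):
    """Sketch of a feature fusion unit (FIG. 4): multi-scale extraction with
    3x3 and 5x5 receptive fields, fused and reduced by a 1x1 convolution;
    all strides are 1."""
    def __init__(self, channels=64):
        super().__init__()
        self.b3 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.b5 = nn.Conv2d(channels, channels, 5, stride=1, padding=2)
        self.reduce = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        fused = torch.cat([self.act(self.b3(x)), self.act(self.b5(x))], dim=1)
        return self.reduce(fused)  # 1x1 conv reduces the number of features
```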
5) unless otherwise specified, the kernel size of the remaining convolutions is 4 × 4.
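The residual dense connection unit and feature fusion unit described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the channel width `ch` is an illustrative assumption (the patent specifies kernel sizes and strides but not channel counts), and the growth pattern of the dense connections is one plausible reading of FIG. 3.

```python
import torch
import torch.nn as nn

class RDCU(nn.Module):
    """Residual dense connection unit (cf. FIG. 3): three densely connected
    4x4 stride-1 convolutions, a 1x1 convolution to compress channels,
    and a residual connection to counter the vanishing-gradient problem.
    The channel width `ch` is an illustrative assumption."""
    def __init__(self, ch):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 4, stride=1, padding="same")
        self.c2 = nn.Conv2d(2 * ch, ch, 4, stride=1, padding="same")
        self.c3 = nn.Conv2d(3 * ch, ch, 4, stride=1, padding="same")
        self.compress = nn.Conv2d(4 * ch, ch, 1)  # 1x1 conv reduces channel count
        self.act = nn.LeakyReLU(0.2)              # LReLU with slope 0.2

    def forward(self, x):
        f1 = self.act(self.c1(x))
        f2 = self.act(self.c2(torch.cat([x, f1], dim=1)))       # dense connections:
        f3 = self.act(self.c3(torch.cat([x, f1, f2], dim=1)))   # each layer sees all
        out = self.compress(torch.cat([x, f1, f2, f3], dim=1))  # earlier features
        return out + x                                          # residual learning

class FFU(nn.Module):
    """Feature fusion unit (cf. FIG. 4): parallel 3x3 and 5x5 receptive
    fields for multi-scale extraction, fused and compressed by a 1x1 conv."""
    def __init__(self, ch):
        super().__init__()
        self.b3 = nn.Conv2d(ch, ch, 3, stride=1, padding=1)
        self.b5 = nn.Conv2d(ch, ch, 5, stride=1, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.fuse(torch.cat([self.act(self.b3(x)),
                                    self.act(self.b5(x))], dim=1))
```

Both units preserve the spatial size and channel count of their input, so they can be stacked freely inside the extraction and reconstruction stages.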
The specific steps of the discrimination network model in step 203 are as follows:
1) the discrimination network model is built based on convolution, Batch Normalization (BN), LReLU activation functions (slope of 0.2) and a Sigmoid activation function, and consists of 5 convolutional layers; the last two convolution strides are 1 and the remaining strides are 2; the last layer's activation function is Sigmoid and the rest are LReLU; the structure is shown in FIG. 5;
2) the discrimination network model takes the real cloud-free image E as a positive sample and the de-clouded image G(F) as a negative sample for training, adopts the cloud image F as reference information, and outputs a probability in the range [0, 1]. Specifically, when the input is the real cloud-free image E, a high probability is output; otherwise, a low probability is output;
3) the discrimination network model and the generation network model are trained alternately, so that the discrimination network continuously provides a gradient for the generation network, and the two remain in an adversarial game.
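The discriminator of steps 1)-3) can be sketched as below. The layer strides, count, BN and activations follow the description; the channel widths (64 to 512) and the 4x4 kernels on every layer are illustrative assumptions, and concatenating the cloud image F with the candidate image is one plausible way of using F as reference information.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the discrimination network (cf. FIG. 5): 5 convolutional
    layers with BN and LReLU (slope 0.2); the last two convolutions use
    stride 1, the rest stride 2; the final activation is Sigmoid.
    Channel widths are illustrative assumptions."""
    def __init__(self, in_ch=6):  # 3 channels of F + 3 of the candidate (E or G(F))
        super().__init__()
        def block(cin, cout, stride, bn=True):
            layers = [nn.Conv2d(cin, cout, 4, stride, 1)]
            if bn:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, 64, 2, bn=False),  # stride-2 layers downsample
            *block(64, 128, 2),
            *block(128, 256, 2),
            *block(256, 512, 1),             # first stride-1 layer
            nn.Conv2d(512, 1, 4, 1, 1),      # second stride-1 layer
            nn.Sigmoid(),                    # probability in [0, 1]
        )

    def forward(self, f, x):
        # the cloud image f serves as reference information for judging x
        return self.net(torch.cat([f, x], dim=1))
```

The output is a spatial map of probabilities rather than a single scalar, which is a common design for conditional discriminators; either reading is compatible with the text.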
The specific steps of constructing the loss function in step 204 are as follows:
1) a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function is adopted to train the generation network, and a cross entropy loss function is adopted to train the discrimination network, specifically described as follows;
2) the mean absolute error loss function is shown in formula (1), the perceptual loss function in formula (2), and the adversarial loss function in formula (3); the total loss function for training the generation network is a linear combination of the three, as shown in formula (4); the cross entropy loss function used to train the discrimination network is shown in formula (5); details are not repeated here.
3) Preferably, the weights are set to λ = 100.0, β = 1.0 and α = 0.5.
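The loss functions of steps 1)-3) can be sketched as follows, with the preferred weights λ = 100.0, β = 1.0, α = 0.5. Two caveats: the L1 form of the perceptual loss is an assumption (the text names VGG16 conv2_2 features but the extracted page omits the formula), and the VGG feature extraction itself is left to the caller.

```python
import torch

def mae_loss(e, g_f):
    """Formula (1): mean absolute error between cloud-free E and de-clouded G(F)."""
    return torch.mean(torch.abs(e - g_f))

def perceptual_loss(feat_e, feat_gf):
    """Formula (2): distance between VGG16 conv2_2 feature maps β(E) and β(G(F)).
    The L1 distance here is an assumption; the original formula is not shown."""
    return torch.mean(torch.abs(feat_e - feat_gf))

def adversarial_loss(d_fake):
    """Formula (3): -log D(F, G(F)), pushing the generator to fool D."""
    return torch.mean(-torch.log(d_fake + 1e-8))

def generator_loss(e, g_f, feat_e, feat_gf, d_fake,
                   lam=100.0, beta=1.0, alpha=0.5):
    """Formula (4): linear combination with the preferred weights."""
    return (lam * mae_loss(e, g_f)
            + beta * perceptual_loss(feat_e, feat_gf)
            + alpha * adversarial_loss(d_fake))

def discriminator_loss(d_real, d_fake):
    """Formula (5): standard binary cross entropy, pushing
    D(F, E) toward 1 and D(F, G(F)) toward 0."""
    return torch.mean(-torch.log(d_real + 1e-8)
                      - torch.log(1 - d_fake + 1e-8))
```

During alternating training, the discriminator step minimizes `discriminator_loss` on a batch of real and generated pairs, then the generator step minimizes `generator_loss` with the discriminator frozen.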
Wherein, the specific step of step 205 is: using the trained model parameters, a cloud-free remote sensing image can be restored from only a single cloud-covered remote sensing image.
Example 3
The feasibility of the schemes in embodiments 1 and 2 is verified below with experimental data:
two cloud-covered remote sensing images are selected from the public RICE remote sensing image data set, and experiments are carried out with the cloud removing method of the present invention; the results are shown in FIGS. 6 and 7. It can be seen that the cloud removing effect of the method is natural and thorough, and the cloud removing task for optical remote sensing images is well accomplished.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and the above-described embodiments of the present invention are merely provided for description and do not represent the merits of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (9)
1. An optical remote sensing image cloud removing method based on residual dense connection and feature fusion, characterized by comprising the following steps:
the optical remote sensing image cloud removing method is designed with a generation network model and a discrimination network model, wherein the generation network model is responsible for generating clear cloud-free images, and the discrimination network model is responsible for playing an adversarial game with the generation network, thereby improving the cloud removing performance of the generation network;
the generation network model realizes cloud removal of the remote sensing image on the basis of a feature extraction part and a feature reconstruction part; the feature extraction part is built with residual dense connection units, and the feature reconstruction part is built with residual dense connection units and feature fusion units;
the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and providing a gradient for the generation network to carry out adversarial training;
a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function is adopted to train the generation network, and a cross entropy loss function is adopted to train the discrimination network;
and recovering a cloud-free remote sensing image from only one cloud-covered remote sensing image by adopting the trained model parameters.
2. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 1, wherein the generation network model is specifically:
the generation network model is built based on convolution, deconvolution, an LReLU activation function, a ReLU activation function and a Tanh activation function;
the generation network model comprises two parts: image feature extraction and image feature reconstruction.
3. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 2, wherein the image feature extraction part and the image feature reconstruction part are specifically:
the image feature extraction part consists of m residual dense connection units and comprises convolution and an LReLU activation function;
the image feature reconstruction part consists of n residual dense connection units and n feature fusion units, and comprises deconvolution, a ReLU activation function and a Tanh activation function.
4. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 3, wherein the residual dense connection unit and the feature fusion unit are specifically:
the residual dense connection unit consists of f densely connected convolutions and g residual learning connections; feature multiplexing is realized through the dense connection of the f convolutions, the feature information flow between layers is maximized, and residual learning is adopted to prevent the gradient vanishing problem;
the feature fusion unit adopts receptive fields of sizes a × a and b × b to complete the multi-scale feature extraction and fusion task.
5. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 1, wherein the discrimination network model is specifically:
the discrimination network model is built based on convolution, batch normalization, an LReLU activation function and a Sigmoid activation function, and consists of q convolutional layers;
the discrimination network model takes a real cloud-free image and a de-clouded image as a positive sample and a negative sample respectively, adopts a cloud image as reference information, and learns to distinguish the two through training;
the discrimination network model and the generation network model are trained alternately, so that the discrimination network continuously provides a gradient for the generation network, and the two remain in an adversarial game.
6. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 2, wherein the mean absolute error loss function adopted by the generation network model is specifically:
Loss_M = ||E - G(F)||_1
wherein G(·) represents the output of the generation network, E represents the cloud-free image, F represents the cloud image, and G(F) represents the de-clouded image.
7. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 2, wherein the perceptual loss function adopted by the generation network model is specifically:
in the formula, β(·) represents the feature map output by convolutional layer conv2_2 of VGG16.
8. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 2, wherein the adversarial loss function adopted by the generation network model is specifically:
Loss_A = ∑[-log D(F, G(F))]
in the formula, D(·) represents the output of the discrimination network, and D(F, G(F)) represents the output of the de-clouded image G(F) through the discrimination network when the cloud image F is used as reference information.
9. The optical remote sensing image cloud removing method based on residual dense connection and feature fusion according to claim 5, wherein the cross entropy loss function adopted by the discrimination network model is specifically:
Loss_D = ∑[-log D(F, E) - log(1 - D(F, G(F)))]
in the formula, D(F, E) represents the output of the cloud-free image E through the discrimination network when the cloud image F is used as reference information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110491313.1A CN113379618B (en) | 2021-05-06 | 2021-05-06 | Optical remote sensing image cloud removing method based on residual dense connection and feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379618A true CN113379618A (en) | 2021-09-10 |
CN113379618B CN113379618B (en) | 2024-04-12 |
Family
ID=77570395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110491313.1A Active CN113379618B (en) | 2021-05-06 | 2021-05-06 | Optical remote sensing image cloud removing method based on residual dense connection and feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113379618B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472818A (en) * | 2018-10-17 | 2019-03-15 | 天津大学 | A kind of image defogging method based on deep neural network |
CN110570371A (en) * | 2019-08-28 | 2019-12-13 | 天津大学 | image defogging method based on multi-scale residual error learning |
CN110599401A (en) * | 2019-08-19 | 2019-12-20 | 中国科学院电子学研究所 | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium |
KR102095443B1 (en) * | 2019-10-17 | 2020-05-26 | 엘아이지넥스원 주식회사 | Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning |
CN111383192A (en) * | 2020-02-18 | 2020-07-07 | 清华大学 | SAR-fused visible light remote sensing image defogging method |
CN112150395A (en) * | 2020-10-15 | 2020-12-29 | 山东工商学院 | Encoder-decoder network image defogging method combining residual block and dense block |
Non-Patent Citations (1)
Title |
---|
LIU Yuhang: "Research on Image Dehazing Algorithms Based on Generative Adversarial Networks", Wanfang Dissertation Database, pages 1 - 75 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113935917A (en) * | 2021-10-14 | 2022-01-14 | 中国石油大学(华东) | Optical remote sensing image thin cloud removing method based on cloud picture operation and multi-scale generation countermeasure network |
CN114140357A (en) * | 2021-12-02 | 2022-03-04 | 哈尔滨工程大学 | Multi-temporal remote sensing image cloud region reconstruction method based on cooperative attention mechanism |
CN114140357B (en) * | 2021-12-02 | 2024-04-19 | 哈尔滨工程大学 | Multi-temporal remote sensing image cloud zone reconstruction method based on cooperative attention mechanism |
CN115294392A (en) * | 2022-08-09 | 2022-11-04 | 安徽理工大学 | Visible light remote sensing image cloud removing method and system based on generated network model |
CN115294392B (en) * | 2022-08-09 | 2023-05-09 | 安徽理工大学 | Visible light remote sensing image cloud removal method and system based on network model generation |
CN116168302A (en) * | 2023-04-25 | 2023-05-26 | 耕宇牧星(北京)空间科技有限公司 | Remote sensing image rock vein extraction method based on multi-scale residual error fusion network |
CN116168302B (en) * | 2023-04-25 | 2023-07-14 | 耕宇牧星(北京)空间科技有限公司 | Remote sensing image rock vein extraction method based on multi-scale residual error fusion network |
Also Published As
Publication number | Publication date |
---|---|
CN113379618B (en) | 2024-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liang et al. | ResWCAE: Biometric Pattern Image Denoising Using Residual Wavelet-Conditioned Autoencoder | |
Tian et al. | Deep learning on image denoising: An overview | |
CN113379618A (en) | Optical remote sensing image cloud removing method based on residual dense connection and feature fusion | |
CN107506712B (en) | Human behavior identification method based on 3D deep convolutional network | |
CN111915530B (en) | End-to-end-based haze concentration self-adaptive neural network image defogging method | |
CN112507997B (en) | Face super-resolution system based on multi-scale convolution and receptive field feature fusion | |
CN110796625B (en) | Image compressed sensing reconstruction method based on group sparse representation and weighted total variation | |
Zhang et al. | Efficient feature learning and multi-size image steganalysis based on CNN | |
CN111275637A (en) | Non-uniform motion blurred image self-adaptive restoration method based on attention model | |
CN110443761B (en) | Single image rain removing method based on multi-scale aggregation characteristics | |
CN111915592A (en) | Remote sensing image cloud detection method based on deep learning | |
CN112270654A (en) | Image denoising method based on multi-channel GAN | |
CN111783890B (en) | Small pixel countermeasure sample defense method for image recognition process | |
CN112365414A (en) | Image defogging method based on double-path residual convolution neural network | |
CN106952317A (en) | Based on the high spectrum image method for reconstructing that structure is sparse | |
Sungheetha et al. | A novel CapsNet based image reconstruction and regression analysis | |
CN116051408B (en) | Image depth denoising method based on residual error self-coding | |
CN112418041A (en) | Multi-pose face recognition method based on face orthogonalization | |
Tseng et al. | An interpretable compression and classification system: Theory and applications | |
CN109003247B (en) | Method for removing color image mixed noise | |
CN111738939B (en) | Complex scene image defogging method based on semi-training generator | |
Alsayyh et al. | A Novel Fused Image Compression Technique Using DFT, DWT, and DCT. | |
CN111047537A (en) | System for recovering details in image denoising | |
Meena et al. | A novel method to distinguish photorealistic computer generated images from photographic images | |
CN114926348A (en) | Device and method for removing low-illumination video noise |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||