CN113379618B - Optical remote sensing image cloud removing method based on residual dense connection and feature fusion - Google Patents

Optical remote sensing image cloud removing method based on residual dense connection and feature fusion

Info

Publication number
CN113379618B
Authority
CN
China
Prior art keywords
cloud
network model
image
remote sensing
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110491313.1A
Other languages
Chinese (zh)
Other versions
CN113379618A (en)
Inventor
刘宇航 (Liu Yuhang)
佘玉成 (She Yucheng)
杨志 (Yang Zhi)
王晓宇 (Wang Xiaoyu)
王丹丹 (Wang Dandan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Dongfanghong Satellite Co Ltd
Original Assignee
Aerospace Dongfanghong Satellite Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Dongfanghong Satellite Co Ltd
Priority to CN202110491313.1A
Publication of CN113379618A
Application granted
Publication of CN113379618B


Classifications

    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/253 Fusion techniques of extracted features
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an optical remote sensing image cloud removal method based on residual dense connection and feature fusion, which comprises the following steps: the method is designed around a generating network model and a discrimination network model, where the generating network model is responsible for generating a clear cloud-free image and the discrimination network model plays an adversarial game with the generating network, improving its cloud removal performance; the generating network model realizes remote sensing image cloud removal through a feature extraction part and a feature reconstruction part, where the feature extraction part is built from residual dense connection units and the feature reconstruction part is built from residual dense connection units and feature fusion units; the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and provides gradients to the generating network for adversarial training; the generating network is trained with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and the discrimination network is trained with a cross entropy loss function; with the trained model parameters, a cloud-free remote sensing image can be recovered from just a single cloud-contaminated remote sensing image.

Description

Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
Technical Field
The invention relates to the technical fields of image processing and deep learning, and in particular to an optical remote sensing image cloud removal method based on residual dense connection and feature fusion.
Background
In recent years, remote sensing technology has developed rapidly and been applied in many fields, such as ecological monitoring, weather prediction, disaster prediction and traffic monitoring. Optical remote sensing images captured by remote sensing satellites can be occluded by clouds in the air, which causes quality degradation, color distortion, loss of key information and a series of other problems, and seriously hinders the analysis and processing of remote sensing image content.
Traditional remote sensing image cloud removal techniques mainly include physical model methods, homomorphic filtering and wavelet transformation. Physical model methods analyze the physical model of cloud formation, estimate its parameters from assumed prior knowledge, and finally invert the physical model to obtain a cloud-free image; however, such algorithms have obvious limitations, cannot cover all cases, and give unsatisfactory results when the assumed priors fail. Homomorphic filtering assumes that cloud and fog are distributed in the low-frequency information of the remote sensing image, and removes their influence by enhancing the high-frequency information. The wavelet transform method exploits the frequency difference between cloud and ground information: wavelet decomposition yields wavelet coefficients at different resolutions, the detail and approximation coefficients at each resolution are processed, and the image is then restored. However, such filtering methods may blur the surface information while processing the image, which is an adverse effect.
With the development of convolutional neural networks (Convolutional Neural Networks, CNN), more and more deep learning algorithms have achieved very good results in computer vision, for example in target detection, target recognition and image processing. Deep learning provides a broad line of thought for remote sensing image cloud removal, whose main aims are to remove the influence of cloud and fog, recover as much of the detail information in the remote sensing image as possible, and improve image quality and clarity.
In summary, designing a remote sensing image cloud removal technique with deep learning avoids the limitations of physical models; the approach is simple, practical and widely applicable. By learning the mapping between cloud-contaminated and cloud-free remote sensing images, a clear cloud-free remote sensing image can be obtained from a single cloud-contaminated remote sensing image without additional assumed information. An optical remote sensing image cloud removal method independent of a physical model therefore has very strong practical value and application prospects.
Disclosure of Invention
The invention provides an optical remote sensing image cloud removal method based on residual dense connection and feature fusion. The method is designed around a generating network model and a discrimination network model: the generating network model uses residual dense connection units and feature fusion units to realize remote sensing image cloud removal, and the discrimination network model improves the cloud removal performance through adversarial training.
The technical scheme of the invention is as follows: an optical remote sensing image cloud removal method based on residual dense connection and feature fusion, the method comprising:
designing the optical remote sensing image cloud removal method with a generating network model and a discrimination network model, wherein the generating network model is responsible for generating a clear cloud-free image, and the discrimination network model plays an adversarial game with the generating network to improve the cloud removal performance of the generating network;
the generating network model realizes remote sensing image cloud removal with a feature extraction part and a feature reconstruction part, wherein the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and provides gradients to the generating network for adversarial training;
training the generating network with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and training the discrimination network with a cross entropy loss function;
with the trained model parameters, a cloud-free remote sensing image can be recovered from only a single cloud-contaminated remote sensing image.
The generating network model is specifically as follows:
the generating network model is built from convolution, deconvolution, LReLU activation functions, ReLU activation functions and a Tanh activation function;
the generating network model comprises two parts: image feature extraction and image feature reconstruction;
the image feature extraction part consists of m residual dense connection units and includes convolutions and LReLU activation functions;
the image feature reconstruction part consists of n residual dense connection units and n feature fusion units and includes deconvolutions, ReLU activation functions and a Tanh activation function;
the residual dense connection unit consists of f densely connected convolutions and g residual connections; feature reuse is realized through the f densely connected convolutions, maximizing the flow of feature information between layers, and residual learning is adopted to prevent vanishing gradients;
the feature fusion unit adopts receptive fields of size a×a and b×b to complete the multi-scale feature extraction and fusion task.
The discrimination network model is specifically as follows:
the discrimination network model is built from convolution, batch normalization, LReLU activation functions and a Sigmoid activation function, and consists of q convolutional layers;
the discrimination network model takes the real cloud-free image as the positive sample and the de-clouded image as the negative sample, takes the cloud image as reference information, and learns through training to distinguish the real cloud-free image from the de-clouded image;
the discrimination network model and the generating network model are trained alternately; the discrimination network continuously provides gradients to the generating network, and the two remain in an adversarial game throughout.
The mean absolute error loss function is specifically:
Loss_M = ||E - G(F)||_1
where G(·) denotes the output of the generating network, E denotes the cloud-free image, F denotes the cloud image, and G(F) denotes the de-clouded image;
the perceptual loss function is specifically:
where β (·) represents the signature of the convolutional layer 2_2 output of VGG 16;
the countermeasures loss function is specifically:
Loss A =∑[-logD(F,G(F))]
wherein D (·) represents the output of the discrimination network, and D (F, G (F)) represents the output of the de-clouding image G (F) through the discrimination network when the cloud image F is used as reference information;
the cross entropy loss function is specifically:
Loss D =∑[-logD(F,E)+log(1-D(F,G(F)))]
in the formula, D (F, E) represents the output of the cloud picture E through the discrimination network when the cloud picture F is used as reference information.
The technical scheme provided by the invention has the following beneficial effects:
1. the method does not depend on the assumptions and derivation of a physical model, avoiding the limitations of physical models; the mapping function from cloud-contaminated to cloud-free remote sensing images is learned by a deep convolutional neural network, so the method has strong applicability;
2. no additional information needs to be provided; the cloud-free remote sensing image can be obtained from just a single cloud-contaminated remote sensing image, so the method is simple, practical and easy to operate;
3. the cloud removal effect of the invention is natural and its efficiency is high.
Drawings
FIG. 1 is a flow chart of the optical remote sensing image cloud removal method based on residual dense connection and feature fusion;
FIG. 2 is a schematic diagram of the generating network model structure;
FIG. 3 is a schematic diagram of the residual dense connection unit structure;
FIG. 4 is a schematic diagram of the feature fusion unit;
FIG. 5 is a schematic diagram of the discrimination network model;
FIG. 6 shows a cloud-contaminated remote sensing image and the recovered cloud-free remote sensing image from the experimental results;
FIG. 7 shows another cloud-contaminated remote sensing image and recovered cloud-free remote sensing image from the experimental results.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to FIGS. 1 to 7.
Example 1
In order to realize high-quality optical remote sensing image cloud removal, an embodiment of the invention provides an optical remote sensing image cloud removal method based on residual dense connection and feature fusion, which is described in detail below with reference to FIG. 1:
101: designing the optical remote sensing image cloud removal method with a generating network model and a discrimination network model, wherein the generating network model is responsible for generating a clear cloud-free image, and the discrimination network model plays an adversarial game with the generating network to improve the cloud removal performance of the generating network;
102: the generating network model realizes remote sensing image cloud removal with a feature extraction part and a feature reconstruction part, wherein the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
103: the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and provides gradients to the generating network for adversarial training;
104: training the generating network with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and training the discrimination network with a cross entropy loss function;
105: with the trained model parameters, a cloud-free remote sensing image can be recovered from only a single cloud-contaminated remote sensing image;
the specific steps in the step 101 are as follows:
1) Adopting an countermeasure learning strategy, and realizing an optical remote sensing image cloud removal method by generating a network model and judging the network model;
2) The network model is generated to generate a clear cloud-free image, wherein the cloud image F is input, and the cloud image G (F) is output;
3) The judging network model is responsible for distinguishing the cloud-free image E and the cloud-free image G (F), performing game with the generating network, improving the cloud-free performance of the generating network, inputting the cloud-free image E or the cloud-free image G (F), and outputting the probability of the cloud-free image E.
The generating network model in step 102 is specified as follows:
1) The generating network model is built from convolution, deconvolution, LReLU activation functions (slope α), ReLU activation functions and a Tanh activation function, and comprises two parts, image feature extraction (Feature Extraction) and image feature reconstruction (Feature Reconstruction); the structure is shown in FIG. 2;
2) The image feature extraction part consists of m residual dense connection units (Residual Densely Connected Unit, RDCU) and includes convolutions and LReLU activation functions; the image feature reconstruction part consists of n residual dense connection units and n feature fusion units (Feature Fusion Unit, FFU) and includes deconvolutions, ReLU activation functions and a Tanh activation function; the feature extraction part reduces the feature map size with convolutions of stride 2, and the feature reconstruction part enlarges the feature map size with deconvolutions of stride 1/2 (equivalently, transposed convolutions of stride 2); the image feature extraction part uses LReLU activation functions, the last layer of the image feature reconstruction part uses a Tanh activation function, and the remaining layers use ReLU activation functions;
3) The residual dense connection unit is composed of f densely connected convolutions and g residual connections; the structure is shown in FIG. 3. Feature reuse is realized through the dense connection of the f convolutions, maximizing the flow of feature information between layers; the number of feature channels is then reduced by one 1×1 convolution, and finally residual learning is adopted to prevent vanishing gradients. All convolution strides in the unit are 1, and all convolution kernels are 4×4 except the last, which is 1×1. The structure of the residual dense connection unit is not limited to this; a minimal code sketch is given below;
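For illustration, the following is a minimal PyTorch sketch of such a residual dense connection unit; it is a sketch only, and the channel widths (channels, growth) are assumptions that the patent does not specify.

```python
import torch
import torch.nn as nn

class RDCU(nn.Module):
    """Residual Densely Connected Unit: f densely connected 4x4 convolutions,
    a 1x1 channel-reduction convolution, and a residual (skip) connection."""
    def __init__(self, channels=64, growth=32, f=3, slope=0.2):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(f):
            # 4x4 kernel, stride 1; 'same' padding keeps the spatial size
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=4, stride=1, padding='same'),
                nn.LeakyReLU(slope, inplace=True)))
            in_ch += growth  # dense connectivity: each layer sees all earlier outputs
        # final 1x1 convolution reduces the feature channels back to `channels`
        self.reduce = nn.Conv2d(in_ch, channels, kernel_size=1, stride=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        # residual learning: add the unit input to prevent vanishing gradients
        return x + self.reduce(torch.cat(feats, dim=1))
```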
4) The feature fusion unit adopts receptive fields of size a×a and b×b for feature extraction and fusion; the structure is shown in FIG. 4. The unit first completes multi-scale feature extraction and fusion with the a×a and b×b receptive fields, and then reduces the number of features with a 1×1 convolution; all convolution strides are 1. The structure of the feature fusion unit is not limited to this; a minimal code sketch follows;
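Likewise, a minimal sketch of the feature fusion unit, defaulting to a = 3 and b = 5 as in Example 2 below; the channel width is again an illustrative assumption.

```python
import torch
import torch.nn as nn

class FFU(nn.Module):
    """Feature Fusion Unit: parallel a x a and b x b receptive fields, fused by
    channel concatenation and reduced with a 1x1 convolution (all strides 1)."""
    def __init__(self, channels=64, a=3, b=5):
        super().__init__()
        # two parallel branches with different receptive field sizes
        self.branch_a = nn.Conv2d(channels, channels, kernel_size=a, stride=1, padding=a // 2)
        self.branch_b = nn.Conv2d(channels, channels, kernel_size=b, stride=1, padding=b // 2)
        # 1x1 convolution reduces the fused features back to `channels`
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1, stride=1)

    def forward(self, x):
        # multi-scale feature extraction, then fusion by concatenation
        return self.reduce(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
```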
5) Except where specifically noted, the convolution kernel sizes of the remaining convolutions are all d×d.
The discrimination network model in step 103 is specified as follows:
1) The discrimination network model is built from convolution, batch normalization (Batch Normalization, BN), LReLU activation functions (slope α) and a Sigmoid activation function, and consists of q convolutional layers; the strides of the final two convolutions are 1 and the remaining strides are 2; the activation function of the last layer is Sigmoid and the rest are LReLU; the structure is shown in FIG. 5;
2) The discrimination network model is trained with the real cloud-free image E as the positive sample and the de-clouded image G(F) as the negative sample, takes the cloud image F as reference information, and outputs a probability in the range [0, 1]; specifically, a high probability is output when the input is the real cloud-free image E, and a low probability otherwise;
3) The discrimination network model and the generating network model are trained alternately; the discrimination network continuously provides gradients to the generating network, and the two remain in an adversarial game throughout. A minimal code sketch of such a discriminator is given below.
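As an aid to understanding, here is a minimal PyTorch sketch of such a discrimination network, with q = 5 convolutional layers as in Example 2 below; the 4×4 kernels and channel widths are assumptions. The cloud image F enters as reference information by channel-wise concatenation with the image being judged.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """q convolutional layers with batch normalization and LReLU; the last two
    convolutions use stride 1, the rest stride 2; Sigmoid probability output."""
    def __init__(self, in_channels=6, base=64, q=5, slope=0.2):
        super().__init__()
        layers, ch = [], in_channels
        for i in range(q - 1):
            stride = 1 if i >= q - 2 else 2   # last two convolutions: stride 1
            out_ch = base * min(2 ** i, 8)
            layers += [nn.Conv2d(ch, out_ch, kernel_size=4, stride=stride, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.LeakyReLU(slope, inplace=True)]
            ch = out_ch
        # final stride-1 convolution maps to one channel, then a Sigmoid probability
        layers += [nn.Conv2d(ch, 1, kernel_size=4, stride=1, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, cloud, image):
        # the cloud image F is concatenated as reference information
        return self.net(torch.cat([cloud, image], dim=1))
```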
The specific steps for constructing the loss functions in step 104 are as follows:
1) The generating network is trained with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and the discrimination network is trained with a cross entropy loss function, as described below;
2) The mean absolute error loss function is shown in formula (1):
Loss_M = ||E - G(F)||_1 (1)
where G(·) denotes the output of the generating network, E denotes the cloud-free image, F denotes the cloud image, and G(F) denotes the de-clouded image;
3) The perceptual loss function is shown in formula (2):
Loss_P = ||β(E) - β(G(F))||_2^2 (2)
where β(·) denotes the feature map output by convolutional layer conv2_2 of VGG16;
4) The adversarial loss function is shown in formula (3):
Loss_A = Σ[-log D(F, G(F))] (3)
where D(·) denotes the output of the discrimination network, and D(F, G(F)) denotes the output of the discrimination network for the de-clouded image G(F) when the cloud image F is used as reference information;
5) The total loss function for training the generating network is a linear combination of the above three functions, as shown in formula (4):
Loss_G = λ·Loss_M + β·Loss_P + α·Loss_A (4)
where λ, β and α are the weights of Loss_M, Loss_P and Loss_A respectively;
6) The discrimination network is trained with the cross entropy loss function shown in formula (5):
Loss_D = Σ[-log D(F, E) - log(1 - D(F, G(F)))] (5)
where D(F, E) denotes the output of the discrimination network for the cloud-free image E when the cloud image F is used as reference information.
The specific step of step 105 is as follows: generating a cloud-free remote sensing image from a single cloud remote sensing image using the trained model parameters.
Example 2
The scheme of Example 1 is described in detail below with reference to the specific drawings and calculation formulas:
201: designing the optical remote sensing image cloud removal method with a generating network model and a discrimination network model, wherein the generating network model is responsible for generating a clear cloud-free image, and the discrimination network model plays an adversarial game with the generating network to improve the cloud removal performance of the generating network;
202: the generating network model realizes remote sensing image cloud removal with a feature extraction part and a feature reconstruction part, wherein the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
203: the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and provides gradients to the generating network for adversarial training;
204: training the generating network with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and training the discrimination network with a cross entropy loss function;
205: with the trained model parameters, a cloud-free remote sensing image can be recovered from only a single cloud-contaminated remote sensing image;
the specific steps in step 201 are as follows:
1) Adopting an countermeasure learning strategy, and realizing an optical remote sensing image cloud removal method by generating a network model and judging the network model;
2) The network model is generated to generate a clear cloud-free image, wherein the cloud image F is input, and the cloud image G (F) is output;
3) The judging network model is responsible for distinguishing the cloud-free image E and the cloud-free image G (F), performing game with the generating network, improving the cloud-free performance of the generating network, inputting the cloud-free image E or the cloud-free image G (F), and outputting the probability of the cloud-free image E.
The generating network model in step 202 is specified as follows:
1) The generating network model is built from convolution, deconvolution, LReLU activation functions (slope 0.2), ReLU activation functions and a Tanh activation function, and comprises two parts, image feature extraction (Feature Extraction) and image feature reconstruction (Feature Reconstruction); the structure is shown in FIG. 2;
2) The image feature extraction part consists of 3 residual dense connection units (Residual Densely Connected Unit, RDCU) and includes convolutions and LReLU activation functions; the image feature reconstruction part consists of 3 residual dense connection units and 3 feature fusion units (Feature Fusion Unit, FFU) and includes deconvolutions, ReLU activation functions and a Tanh activation function; the feature extraction part reduces the feature map size with convolutions of stride 2, and the feature reconstruction part enlarges the feature map size with deconvolutions of stride 1/2 (equivalently, transposed convolutions of stride 2); the image feature extraction part uses LReLU activation functions, the last layer of the image feature reconstruction part uses a Tanh activation function, and the remaining layers use ReLU activation functions;
3) The residual dense connection unit is composed of 3 densely connected convolutions and 1 residual connection; the structure is shown in FIG. 3. Feature reuse is realized through the dense connection of the 3 convolutions, maximizing the flow of feature information between layers; the number of feature channels is then reduced by one 1×1 convolution, and finally residual learning is adopted to prevent vanishing gradients. All convolution strides in the unit are 1, and all convolution kernels are 4×4 except the last, which is 1×1. The structure of the residual dense connection unit is not limited to this;
4) The feature fusion unit adopts receptive fields of size 3×3 and 5×5 for feature extraction and fusion; the structure is shown in FIG. 4. The unit first completes multi-scale feature extraction and fusion with the 3×3 and 5×5 receptive fields, and then reduces the number of features with a 1×1 convolution; all convolution strides are 1. The structure of the feature fusion unit is not limited to this;
5) Except where specifically noted, the convolution kernel sizes of the remaining convolutions are all 4×4. A minimal code sketch assembling the whole generating network of this example is given below.
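For illustration, the following minimal PyTorch sketch assembles the generating network of this example from the RDCU and FFU classes sketched in Example 1 above (assumed to be in scope); the channel widths and the ordering of units within each stage are assumptions rather than prescriptions of the patent.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Feature extraction: 3 stages of stride-2 convolution + LReLU + RDCU.
    Feature reconstruction: 3 stages of RDCU + FFU + stride-1/2 deconvolution
    (ConvTranspose2d with stride 2); ReLU inside, Tanh on the last layer."""
    def __init__(self, in_channels=3, base=64, slope=0.2):
        super().__init__()
        enc, ch = [], in_channels
        for i in range(3):                      # m = 3 extraction stages
            out_ch = base * 2 ** i              # 64, 128, 256 feature channels
            enc += [nn.Conv2d(ch, out_ch, kernel_size=4, stride=2, padding=1),
                    nn.LeakyReLU(slope, inplace=True),
                    RDCU(out_ch)]
            ch = out_ch
        self.extract = nn.Sequential(*enc)
        dec = []
        for i in range(3):                      # n = 3 reconstruction stages
            out_ch = in_channels if i == 2 else ch // 2
            dec += [RDCU(ch), FFU(ch),
                    nn.ConvTranspose2d(ch, out_ch, kernel_size=4, stride=2, padding=1)]
            dec += [nn.Tanh()] if i == 2 else [nn.ReLU(inplace=True)]
            ch = out_ch
        self.reconstruct = nn.Sequential(*dec)

    def forward(self, cloud_image):
        # maps a cloud image F to a de-clouded image G(F)
        return self.reconstruct(self.extract(cloud_image))
```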
The discrimination network model in step 203 is specified as follows:
1) The discrimination network model is built from convolution, batch normalization (Batch Normalization, BN), LReLU activation functions (slope 0.2) and a Sigmoid activation function, and consists of 5 convolutional layers; the strides of the last two convolutions are 1 and the remaining strides are 2; the activation function of the last layer is Sigmoid and the rest are LReLU; the structure is shown in FIG. 5;
2) The discrimination network model is trained with the real cloud-free image E as the positive sample and the de-clouded image G(F) as the negative sample, takes the cloud image F as reference information, and outputs a probability in the range [0, 1]; specifically, a high probability is output when the input is the real cloud-free image E, and a low probability otherwise;
3) The discrimination network model and the generating network model are trained alternately; the discrimination network continuously provides gradients to the generating network, and the two remain in an adversarial game throughout.
The specific steps for constructing the loss functions in step 204 are as follows:
1) The generating network is trained with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and the discrimination network is trained with a cross entropy loss function, as described below;
2) The mean absolute error loss function is shown in formula (1), the perceptual loss function in formula (2) and the adversarial loss function in formula (3); the total loss function for training the generating network is the linear combination in formula (4), and the cross entropy loss function used to train the discrimination network is shown in formula (5); they are not repeated here.
3) Preferably, the weights are set to λ = 100.0, β = 1.0 and α = 0.5. A minimal code sketch of these loss functions is given below.
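A minimal PyTorch sketch of formulas (1) to (5) with the preferred weights follows. It is illustrative only: the squared L2 distance for the perceptual term, the torchvision slice used to reach VGG16's conv2_2 output, and the small epsilon added for numerical stability are assumptions.

```python
import torch
import torch.nn.functional as F_nn
import torchvision

# frozen VGG16 feature extractor playing the role of beta(.) in formula (2)
vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
beta = vgg.features[:9].eval()          # layers up to and including conv2_2 + ReLU
for p in beta.parameters():
    p.requires_grad_(False)

lam, beta_w, alpha = 100.0, 1.0, 0.5    # preferred weights of this example
eps = 1e-8                              # numerical stability inside the logs

def generator_loss(E, F_img, G_F, D):
    """Loss_G = lambda*Loss_M + beta*Loss_P + alpha*Loss_A, formulas (1)-(4)."""
    loss_m = torch.mean(torch.abs(E - G_F))            # mean absolute error (1)
    loss_p = F_nn.mse_loss(beta(G_F), beta(E))         # perceptual loss (2)
    loss_a = -torch.log(D(F_img, G_F) + eps).sum()     # adversarial loss (3)
    return lam * loss_m + beta_w * loss_p + alpha * loss_a

def discriminator_loss(E, F_img, G_F, D):
    """Cross entropy of formula (5): real pair scored high, de-clouded pair low."""
    real = D(F_img, E)
    fake = D(F_img, G_F.detach())       # do not backpropagate into the generator
    return (-torch.log(real + eps) - torch.log(1.0 - fake + eps)).sum()
```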
The specific step of step 205 is: with the trained model parameters, a cloud-free remote sensing image can be recovered from only a single cloud-contaminated remote sensing image. A sketch of the alternating training step and of single-image inference follows.
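Finally, a minimal sketch of the alternating training step and of single-image inference (step 205); the optimizer settings, the dataloader of paired images, and the checkpoint path are illustrative assumptions, not values given by the patent.

```python
import torch

G, D = Generator(), Discriminator()     # the sketches assembled above
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for cloud, clear in dataloader:         # paired (F, E) cloud / cloud-free images
    # discrimination network step: separate real cloud-free from de-clouded images
    fake = G(cloud)
    opt_d.zero_grad()
    discriminator_loss(clear, cloud, fake, D).backward()
    opt_d.step()
    # generating network step: fool the discriminator while matching the target
    opt_g.zero_grad()
    generator_loss(clear, cloud, G(cloud), D).backward()
    opt_g.step()

# step 205: a single cloud image tensor of shape (1, 3, H, W) suffices at inference
G.load_state_dict(torch.load("generator.pth"))
G.eval()
with torch.no_grad():
    cloud_free = G(cloud_image)
```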
Example 3
The schemes of Examples 1 and 2 are validated with experimental data as follows:
Two cloud-contaminated remote sensing images were selected from the public RICE remote sensing image dataset, and experiments were carried out with the cloud removal method described above; the results are shown in FIGS. 6 and 7. It can be seen that the cloud removal effect of the method is natural and thorough, and the optical remote sensing image cloud removal task is accomplished well.
Those skilled in the art will appreciate that the drawings are schematic representations of only one preferred embodiment, and that the above-described embodiment numbers are merely for illustration purposes and do not represent advantages or disadvantages of the embodiments.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (1)

1. An optical remote sensing image cloud removal method based on residual dense connection and feature fusion, characterized by comprising:
designing the optical remote sensing image cloud removal method with a generating network model and a discrimination network model, wherein the generating network model is responsible for generating a clear cloud-free image, and the discrimination network model plays an adversarial game with the generating network to improve the cloud removal performance of the generating network;
the generating network model realizes remote sensing image cloud removal with a feature extraction part and a feature reconstruction part, wherein the feature extraction part is built from residual dense connection units, and the feature reconstruction part is built from residual dense connection units and feature fusion units;
the discrimination network model is responsible for distinguishing the de-clouded image from the real cloud-free image and provides gradients to the generating network for adversarial training;
training the generating network with a linear combination of a mean absolute error loss function, a perceptual loss function and an adversarial loss function, and training the discrimination network with a cross entropy loss function;
recovering a cloud-free remote sensing image from a single cloud-contaminated remote sensing image with the trained model parameters;
wherein the generating network model is specifically as follows:
the generating network model is built from convolution, deconvolution, LReLU activation functions, ReLU activation functions and a Tanh activation function;
the generating network model comprises two parts: image feature extraction and image feature reconstruction;
the image feature extraction part and the reconstruction part specifically include:
the image feature extraction part consists of m residual dense connection units and includes convolutions and LReLU activation functions;
the image feature reconstruction part consists of n residual dense connection units and n feature fusion units and includes deconvolutions, ReLU activation functions and a Tanh activation function;
the residual dense connection unit and the feature fusion unit are specifically as follows:
the residual dense connection unit consists of a densely connected module of f convolutions and g residual learning modules; feature reuse is realized through the densely connected module of f convolutions, maximizing the flow of feature information between layers, and residual learning is adopted to prevent the vanishing gradient problem;
the feature fusion unit adopts receptive fields of size a×a and b×b to complete the multi-scale feature extraction and fusion task;
the discrimination network model is specifically as follows:
the discrimination network model is built from convolution, batch normalization, LReLU activation functions and a Sigmoid activation function, and consists of q convolutional layers;
the discrimination network model takes the real cloud-free image as the positive sample and the de-clouded image as the negative sample, takes the cloud image as reference information, and learns through training to distinguish the real cloud-free image from the de-clouded image;
the discrimination network model and the generating network model are trained alternately; the discrimination network continuously provides gradients to the generating network, and the two remain in an adversarial game throughout;
the mean absolute error loss function adopted for the generating network model is specifically:
Loss_M = ||E - G(F)||_1
where G(·) denotes the output of the generating network, E denotes the cloud-free image, F denotes the cloud image, and G(F) denotes the de-clouded image;
the perceptual loss function adopted for the generating network model is specifically:
Loss_P = ||β(E) - β(G(F))||_2^2
where β(·) denotes the feature map output by convolutional layer conv2_2 of VGG16;
the adversarial loss function adopted for the generating network model is specifically:
Loss_A = Σ[-log D(F, G(F))]
where D(·) denotes the output of the discrimination network, and D(F, G(F)) denotes the output of the discrimination network for the de-clouded image G(F) when the cloud image F is used as reference information;
the cross entropy loss function adopted by the discrimination network model is specifically:
Loss_D = Σ[-log D(F, E) - log(1 - D(F, G(F)))]
where D(F, E) denotes the output of the discrimination network for the cloud-free image E when the cloud image F is used as reference information.
CN202110491313.1A 2021-05-06 2021-05-06 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion Active CN113379618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110491313.1A CN113379618B (en) 2021-05-06 2021-05-06 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110491313.1A CN113379618B (en) 2021-05-06 2021-05-06 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion

Publications (2)

Publication Number Publication Date
CN113379618A CN113379618A (en) 2021-09-10
CN113379618B (en) 2024-04-12

Family

ID=77570395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110491313.1A Active CN113379618B (en) 2021-05-06 2021-05-06 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion

Country Status (1)

Country Link
CN (1) CN113379618B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140357B (en) * 2021-12-02 2024-04-19 哈尔滨工程大学 Multi-temporal remote sensing image cloud zone reconstruction method based on cooperative attention mechanism
CN115294392B (en) * 2022-08-09 2023-05-09 安徽理工大学 Visible light remote sensing image cloud removal method and system based on network model generation
CN116168302B (en) * 2023-04-25 2023-07-14 耕宇牧星(北京)空间科技有限公司 Remote sensing image rock vein extraction method based on multi-scale residual error fusion network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
KR102095443B1 (en) * 2019-10-17 2020-05-26 엘아이지넥스원 주식회사 Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning
CN111383192A (en) * 2020-02-18 2020-07-07 清华大学 SAR-fused visible light remote sensing image defogging method
CN112150395A (en) * 2020-10-15 2020-12-29 山东工商学院 Encoder-decoder network image defogging method combining residual block and dense block

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning
KR102095443B1 (en) * 2019-10-17 2020-05-26 엘아이지넥스원 주식회사 Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning
CN111383192A (en) * 2020-02-18 2020-07-07 清华大学 SAR-fused visible light remote sensing image defogging method
CN112150395A (en) * 2020-10-15 2020-12-29 山东工商学院 Encoder-decoder network image defogging method combining residual block and dense block

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Dehazing Algorithms Based on Generative Adversarial Networks; Liu Yuhang; Wanfang Dissertation Database; pp. 1-75 *

Also Published As

Publication number Publication date
CN113379618A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
Tian et al. Deep learning on image denoising: An overview
CN113379618B (en) Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
CN109543502B (en) Semantic segmentation method based on deep multi-scale neural network
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN108805002B (en) Monitoring video abnormal event detection method based on deep learning and dynamic clustering
CN111275637A (en) Non-uniform motion blurred image self-adaptive restoration method based on attention model
CN110443761B (en) Single image rain removing method based on multi-scale aggregation characteristics
Zhang et al. Efficient feature learning and multi-size image steganalysis based on CNN
CN110503613B (en) Single image-oriented rain removing method based on cascade cavity convolution neural network
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
Chen et al. Densely connected convolutional neural network for multi-purpose image forensics under anti-forensic attacks
CN115272681B (en) Ocean remote sensing image semantic segmentation method and system based on high-order feature class decoupling
CN114821432B (en) Video target segmentation anti-attack method based on discrete cosine transform
CN113627543A (en) Anti-attack detection method
CN107239827B (en) Spatial information learning method based on artificial neural network
CN113139618B (en) Robustness-enhanced classification method and device based on integrated defense
CN109003247B (en) Method for removing color image mixed noise
CN114626042A (en) Face verification attack method and device
Tian et al. Convolutional neural networks for steganalysis via transfer learning
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
CN116977694A (en) Hyperspectral countermeasure sample defense method based on invariant feature extraction
CN111047537A (en) System for recovering details in image denoising
Xu et al. SFRNet: Feature extraction-fusion steganalysis network based on squeeze-and-excitation block and RepVgg Block
Li et al. Distribution-transformed network for impulse noise removal

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant