CN114936972A - Remote sensing image thin cloud removing method based on multi-path perception gradient - Google Patents
Remote sensing image thin cloud removing method based on multi-path perception gradient
- Publication number
- CN114936972A (application CN202210357921.8A)
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/77 — Retouching; Inpainting; Scratch removal
- G06N3/045 — Combinations of networks
- G06N3/048 — Activation functions
- G06N3/08 — Learning methods
- G06T5/73 — Deblurring; Sharpening
- G06T7/13 — Edge detection
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/82 — Arrangements for image or video recognition or understanding using neural networks
- G06T2207/10032 — Satellite or aerial image; Remote sensing
- G06T2207/20081 — Training; Learning
Abstract
The invention discloses a remote sensing image thin cloud removing method based on multi-path perception gradient, comprising the following steps: establishing a remote sensing image thin cloud removal data set and dividing it into a training set, a validation set and a test set in a given proportion; constructing a perceptual gradient extraction module for extracting thin cloud features of the image; building a cloud layer thickness estimation module for adaptively estimating the cloud layer thickness; constructing a remote sensing image thin cloud removal network that converts a single thin-cloud remote sensing image into a clear remote sensing image; training the network on the data set with losses comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function; and importing the trained model parameters into the network and inputting a single thin-cloud remote sensing image to realize thin cloud removal.
Description
Technical Field
The invention relates to the technical fields of image processing and deep learning, and in particular to a remote sensing image thin cloud removing method based on multi-path perception gradient.
Background
Optical remote sensing images captured by remote sensing satellites are often affected by cloud layers, which occlude key image content, destroy detail information and distort color. This greatly reduces the usability of optical remote sensing images, severely hinders their interpretation and prevents many remote sensing applications from proceeding. Images affected by thick cloud have little usable value, whereas thin-cloud remote sensing images can have the cloud influence removed by suitable technical means, facilitating subsequent image processing and application.
Traditional remote sensing image thin cloud removal methods rely on image filtering, statistical priors and similar techniques: they remove the cloud influence by filtering the image, or derive statistical prior information by analyzing the differences between cloudy and cloud-free images to complete the thin cloud removal task. Such methods have obvious limitations and cannot adapt to complex, variable conditions.
With the rapid development of deep neural networks, remote sensing image cloud removal methods built on deep convolutional neural networks have attracted wide attention. A convolutional neural network can extract image features and reconstruct image content to remove cloud and haze from remote sensing images; the difficulty lies in designing the network and its modules so that the extracted features suit cloud removal while keeping the restored image real and natural. By adaptively learning the mapping between thin-cloud remote sensing images and clear remote sensing images, a neural network can remove the thin cloud from a thin-cloud remote sensing image.
Disclosure of Invention
The technical problem solved by the invention is as follows: a remote sensing image thin cloud removing method based on multi-path perception gradient is provided that is simple and feasible, requires no complex assumptions or priors, and can directly recover a cloud-free image from a cloudy image.
The technical scheme of the invention is as follows: a remote sensing image thin cloud removing method based on multi-path perception gradient comprises the following steps:
1) establishing a remote sensing image thin cloud removal data set, which comprises thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, and dividing it proportionally into a training set, a validation set and a test set;
2) constructing a perception gradient extraction module for extracting image thin cloud characteristics;
3) building a cloud layer thickness estimation module for adaptively estimating the cloud layer thickness;
4) constructing a remote sensing image thin cloud removal network based on the perception gradient extraction module obtained in the step 2) and the cloud layer thickness estimation module obtained in the step 3), wherein the remote sensing image thin cloud removal network is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
5) training a remote sensing image thin cloud removal network by using the data set obtained in the step 1), wherein the used loss functions comprise a characteristic loss function, a gradient loss function and a cloud layer thickness loss function;
6) importing the model parameters obtained after training into the remote sensing image thin cloud removal network, and inputting a single thin-cloud remote sensing image to realize thin cloud removal.
In step 1), establishing the remote sensing image thin cloud removal data set specifically comprises:
11) selecting n clear remote sensing images R and generating simulated thin cloud to obtain thin-cloud remote sensing images C and cloud layer thickness images T; cropping the remote sensing images into images of size N×N, so that the clear remote sensing images R, thin-cloud remote sensing images C and cloud layer thickness images T with corresponding relations form the remote sensing image thin cloud removal data set, recorded as {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the sequence number of an image, m is the number of images, and i and m are positive integers;
12) dividing the remote sensing image thin cloud removal data set in the proportion p1:p2:p3 into a training set, a validation set and a test set, where p1, p2 and p3 are positive integers, and p1 > p2, p1 > p3.
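As a hedged illustration of step 12), the proportional split can be sketched as follows; the concrete ratio 8:1:1, the shuffling and the random seed are assumptions for the example, not values taken from the patent:

```python
import numpy as np

def split_dataset(num_samples, p1, p2, p3, seed=0):
    """Split sample indices into train/validation/test sets in ratio p1:p2:p3."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)        # shuffle image indices
    total = p1 + p2 + p3
    n_train = num_samples * p1 // total
    n_val = num_samples * p2 // total
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Example: m = 1000 image triplets {R_i, C_i, T_i}, split 8:1:1
train, val, test = split_dataset(1000, 8, 1, 1)
```

The integer division keeps the three subsets disjoint and exhaustive, which matches the requirement p1 > p2 and p1 > p3 when a large training share is chosen.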
In step 2), the perceptual gradient extraction module built specifically comprises a perceptual feature extraction unit, a gradient information extraction unit, a residual feature extraction unit and a residual connection.
The perceptual feature extraction unit specifically adopts a VGG19 network to extract image features, simulating the human visual system's extraction of perceptual-level image features; the n2-th output of the n1-th layer of the VGG19 network is used as perceptual feature information for the subsequent thin cloud removal task, where n1 and n2 are positive integers.
The gradient information extraction unit specifically adopts a Sobel operator filter to perform a convolution operation with stride d1 on the feature map, extracting image gradient information; the gradient information contains cloud-layer-related features, where d1 is a positive integer.
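A minimal NumPy sketch of Sobel-based gradient extraction on a single-channel map; the stride d1 = 1, the zero padding and the reduction to gradient magnitude are illustrative assumptions, not parameters stated in the patent:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """'Same' 2-D cross-correlation (deep-learning 'convolution'), stride 1, zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_gradient(img):
    """Gradient magnitude of a single-channel feature map."""
    gx = conv2d(img, SOBEL_X)   # horizontal gradient
    gy = conv2d(img, SOBEL_Y)   # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)
```

A vertical step edge produces a strong response only near the edge, which is why such gradients highlight cloud boundaries and texture transitions.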
The residual feature extraction unit consists of e residual units; each residual unit comprises s1 convolution + ReLU activation function combinations, 1 feature calibration unit and 1 residual connection, with convolution kernels of size f×f and stride d2, where e, s1, f and d2 are all positive integers.
The feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. The output of branch 1 is α_s; the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the number of channels, where g, z and x are positive integers.
Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit.
Branch 3 assigns the same weight to all pixels in each channel of the feature map to realize channel-level feature calibration; it consists of an average pooling unit, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling averages the pixel values of each channel, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit copies and expands the feature map from 1×1×C to W×H×C, i.e. each 1×1 value is copied into W×H identical values, so the input and output feature map size and number of channels of branch 3 remain unchanged. The output of branch 3 is α_c; the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers.
The output α_out of the feature calibration unit is the element-wise product of the output feature maps of the 3 branches:
α_out = α_s ⊙ α_in ⊙ α_c
where α_out is the output of the feature calibration unit, α_s is the output of branch 1, α_in is the output of branch 2, and α_c is the output of branch 3.
In step 3), the cloud layer thickness estimation module built comprises an edge feature extraction part and a feature calibration part and is used to adaptively estimate the cloud layer thickness; its input is a thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and the feature map φ_out.
The edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction unit of step 2) and a residual unit; the convolution kernel size adopted by the branches increases gradually. For the r-th branch, the result of passing the input through the gradient information extraction unit and the residual unit is recorded as φ_r, with convolution kernel size (2r+1)×(2r+1), where r ∈ (1, …, w), and r and w are positive integers. Corresponding pixels of the outputs of adjacent branches are summed, giving w−1 summed branches in total; the result of the j-th summed branch is recorded as δ_j:
δ_j = φ_j ⊕ φ_{j+1}
where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ represents element-wise summation of feature maps at corresponding positions.
The feature calibration part consists of w−1 feature calibration units; for the i-th branch of the feature calibration part, the output of the feature calibration unit is recorded as π_i:
π_i = FC(δ_i)
where i ∈ (1, …, w−1) and FC(·) represents the output of the feature calibration unit.
The outputs of the feature calibration part are concatenated along the channel dimension; the result, recorded as φ_out, is the 1st output of the cloud layer thickness estimation module:
φ_out = concat(π_1, …, π_{w−1})
where concat(·) represents concatenation of feature maps along the channel dimension.
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module:
T̂ = ReLU(conv(φ_out))
where conv(·) represents a convolution with kernel size l×l and stride d3, ReLU(·) denotes the ReLU activation function, and l and d3 are positive integers.
In step 4), constructing the remote sensing image thin cloud removal network specifically comprises:
building, from the perceptual gradient extraction module of step 2), the cloud layer thickness estimation module of step 3), the residual feature extraction unit and a Tanh activation function, a remote sensing image thin cloud removal network based on multi-path perceptual gradient that converts a single thin-cloud remote sensing image into a clear remote sensing image. The input of the network is a thin-cloud remote sensing image C, and the outputs are the predicted clear remote sensing image R̂ and the predicted cloud layer thickness image T̂.
In step 5), the feature loss function L_F is specifically:
L_F = (1/O) Σ_{u=1}^{O} [1/(W·H·C)] Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |θ_u(R)_{q,t,y} − θ_u(R̂)_{q,t,y}|
where θ(·) represents an output feature map of the VGG19 network, u is the index of a VGG19 convolutional layer, q, t and y are the indices along the length, width and channels of the feature map, O is the number of VGG19 layers used, W, H and C are the length, width and number of channels of the feature map, R is the clear remote sensing image, R̂ is the predicted clear remote sensing image, u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers.
In step 5), the gradient loss function L_G is specifically:
L_G = [1/(W·H·C)] Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |∇(R)_{q,t,y} − ∇(R̂)_{q,t,y}|
where ∇(·) represents the image gradient extracted with a Prewitt operator, q, t and y are the indices along the length, width and channels of the feature map, W, H and C are the length, width and number of channels of the feature map, q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers.
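A sketch of the gradient loss under the same L1 reconstruction, using per-channel Prewitt gradient magnitude; the reduction of the two directional responses to a magnitude is an illustrative assumption:

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def _filter2d(img, kernel):
    """'Same' 2-D cross-correlation, stride 1, zero padding (single channel)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def prewitt_gradient(img):
    """Prewitt gradient magnitude for an image of shape (W, H, C)."""
    chans = []
    for c in range(img.shape[2]):
        gx = _filter2d(img[:, :, c], PREWITT_X)
        gy = _filter2d(img[:, :, c], PREWITT_Y)
        chans.append(np.sqrt(gx ** 2 + gy ** 2))
    return np.stack(chans, axis=2)

def gradient_loss(r, r_hat):
    """Mean absolute difference between the Prewitt gradients of R and R_hat."""
    return np.mean(np.abs(prewitt_gradient(r) - prewitt_gradient(r_hat)))
```

Matching gradients rather than raw pixels pushes the network to preserve edges that thin cloud tends to blur.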
In step 5), the cloud layer thickness loss function L_R is specifically:
L_R = [1/(W·H·C)] Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |T_{q,t,y} − T̂_{q,t,y}|
where T represents the cloud layer thickness image and T̂ represents the predicted cloud layer thickness image.
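The three loss terms are typically combined into a single training objective as a weighted sum; the λ weights below are illustrative assumptions, not values given in the patent:

```python
def total_loss(l_f, l_g, l_r, lam_f=1.0, lam_g=0.5, lam_r=0.5):
    """Weighted sum of the feature, gradient and cloud-thickness losses.
    The lambda weights are hypothetical hyperparameters, not from the source."""
    return lam_f * l_f + lam_g * l_g + lam_r * l_r

loss = total_loss(2.0, 1.0, 4.0)
```

In practice such weights are tuned on the validation set so that no single term dominates the optimization.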
In step 6), thin cloud removal specifically comprises: training the remote sensing image thin cloud removal network on the remote sensing image thin cloud removal data set of claim 2, using loss functions comprising the feature loss function, the gradient loss function and the cloud layer thickness loss function, for b training generations; the model parameters obtained after training are imported into the remote sensing image thin cloud removal network, and a single thin-cloud remote sensing image is input to complete remote sensing image thin cloud removal.
The technical scheme provided by the invention has the beneficial effects that:
1. Traditional remote sensing image thin cloud removal methods rely on filtering or prior assumptions and physical model derivation, and their limitations are obvious; the present method requires no complex assumptions or priors;
2. The method removes thin cloud from a single remote sensing image, requiring no other images as reference; the thin cloud removal network is trained with the feature loss function, gradient loss function and cloud layer thickness loss function to achieve high-quality single-image thin cloud removal; the method is simple, feasible and efficient;
3. The method is robust, can be deployed in embedded devices as an image preprocessing step to achieve real-time remote sensing image thin cloud removal, has a wide application range, and produces real and natural thin cloud removal results.
Drawings
FIG. 1 is a flow chart of a remote sensing image thin cloud removing method based on multi-path perception gradient;
FIG. 2 is a schematic diagram of a perceptual gradient extraction module;
FIG. 3 is a schematic diagram of a residual feature extraction unit;
FIG. 4 is a schematic diagram of a feature calibration unit;
FIG. 5 is a schematic diagram of a cloud layer thickness estimation module;
FIG. 6 is a schematic diagram of a remote sensing image thin cloud removal network structure based on multi-path perceptual gradient.
Detailed Description
The method comprises the following steps:
1) establishing a remote sensing image thin cloud removal data set, wherein the data set comprises thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, and a training set, a validation set and a test set are formed in a certain proportion;
2) constructing a perception gradient extraction module, which comprises a perception feature extraction unit, a gradient information extraction unit and a residual error feature extraction unit, and is used for extracting image thin cloud features;
3) building a cloud layer thickness estimation module, which comprises an edge feature extraction part and a feature calibration part and is used for adaptively estimating the cloud layer thickness;
4) constructing a remote sensing image thin cloud removal network based on the perception gradient extraction module in the step 2) and the cloud layer thickness estimation module in the step 3), wherein the remote sensing image thin cloud removal network is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
5) training a remote sensing image thin cloud removal network by adopting the remote sensing image thin cloud removal data set in the step 1), wherein the used loss functions comprise a characteristic loss function, a gradient loss function and a cloud layer thickness loss function;
6) importing the model parameters obtained after training into the remote sensing image thin cloud removal network, and inputting a single thin-cloud remote sensing image to realize thin cloud removal.
The image thin cloud removal data set of step 1) is specifically established as follows:
11) Select n clear remote sensing images R and generate simulated thin cloud to obtain thin-cloud remote sensing images C and cloud layer thickness images T. Because remote sensing images are large, they are cropped into images of size N×N; the clear remote sensing images R, thin-cloud remote sensing images C and cloud layer thickness images T with corresponding relations form the remote sensing image thin cloud removal data set, recorded as {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the sequence number of an image, m is the number of images, and i and m are positive integers;
12) The remote sensing image thin cloud removal data set is divided in the proportion p1:p2:p3 into a training set, a validation set and a test set, used for training, validating and testing the method of the invention, where p1, p2 and p3 are positive integers, and p1 > p2, p1 > p3.
Wherein, the perceptual gradient extraction module in the step 2) is specifically:
as shown in fig. 2, a Perceptual Gradient Extraction module is built, which includes a Perceptual Feature Extraction unit (PFE), a Gradient Information Extraction unit (GIE), a Residual Feature Extraction unit (RFE), and a Residual connection, and is used for extracting image thin cloud features.
The perceptual feature extraction unit specifically includes:
the perception feature extraction unit PFE adopts a VGG19 network to extract image features, simulates a human visual system to extract features of an image perception layer, and adopts a VGG19 network n 1 Layer n 2 The output result is used as perception characteristic information for a subsequent thin cloud removal task, wherein n is 1 And n 2 Is a positive integer.
Wherein, the gradient information extraction unit specifically comprises:
the gradient information extraction unit GIE adopts a Sobel operator filter to make steps d on the characteristic diagram 1 The convolution operation of the image is used for extracting image gradient information, the gradient information comprises more cloud layer related characteristics, the thin cloud removal is facilitated,wherein d is 1 Is a positive integer.
Wherein, the residual error feature extraction unit specifically comprises:
As shown in fig. 3, the Residual feature extraction unit RFE is composed of e Residual Units (RU); each residual unit includes s1 convolution + ReLU activation function combinations, 1 Feature Calibration unit (FC) and 1 residual connection, with convolution kernels of size f×f and stride d2, where e, s1, f and d2 are all positive integers.
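One residual unit RU can be sketched as below, with s1 = 2 conv + ReLU stages, the feature calibration step reduced to identity, and a residual addition. The f×f convolutions are stood in for by (C, C) channel-mixing matrix products — a simplification for illustration, not the patent's layers:

```python
import numpy as np

def residual_unit(x, w1, w2):
    """x: feature map of shape (W, H, C); w1, w2: (C, C) channel-mixing weights
    standing in for f x f convolutions. The output keeps the input shape."""
    h = np.maximum(x @ w1, 0.0)   # convolution + ReLU, stage 1
    h = np.maximum(h @ w2, 0.0)   # convolution + ReLU, stage 2
    # the feature calibration unit FC is omitted (identity) in this sketch
    return x + h                  # residual learning: skip connection

C = 3
x = np.ones((2, 2, C))
eye = np.eye(C)
y = residual_unit(x, eye, eye)   # identity weights: the unit doubles the input
```

The skip connection means the stacked convolutions only need to learn a correction to the input features, which eases optimization in deep stacks of e such units.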
Wherein the characteristic calibration unit specifically comprises:
21) As shown in FIG. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out;
22) Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it specifically consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. The output of branch 1 is α_s; the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the number of channels, where g, z and x are positive integers;
23) Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit FC;
24) Branch 3 assigns the same weight to all pixels in each channel of the feature map to realize channel-level feature calibration; it specifically consists of average pooling, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling averages the pixel values of each channel, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit copies and expands the feature map from 1×1×C to W×H×C, i.e. each 1×1 value is copied into W×H identical values, so the input and output feature map size and number of channels of branch 3 remain unchanged. The output of branch 3 is α_c; the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers. The output of the feature calibration unit FC is the element-wise product of the output feature maps of the 3 branches:
α_out = α_s ⊙ α_in ⊙ α_c
where α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, and α_c is the output of branch 3.
In step 3), the cloud layer thickness estimation module specifically includes:
31) As shown in fig. 5, a cloud layer thickness estimation module is constructed, comprising an edge feature extraction part and a feature calibration part and used to adaptively estimate the cloud layer thickness; its input is a thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and the feature map φ_out;
32) The edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction unit GIE of step 2) and a residual unit RU; the convolution kernel size adopted by the branches increases gradually. For the r-th branch, the result of passing the input through GIE and RU is recorded as φ_r, with convolution kernel size (2r+1)×(2r+1), where r ∈ (1, …, w), and r and w are positive integers. The outputs of adjacent branches are summed at corresponding pixels, giving w−1 summed branches in total; the result of the j-th summed branch is recorded as δ_j:
δ_j = φ_j ⊕ φ_{j+1}
where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ represents element-wise summation of feature maps at corresponding positions;
33) The feature calibration part consists of w−1 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is recorded as π_i:
π_i = FC(δ_i)
where i ∈ (1, …, w−1) and FC(·) represents the output of the feature calibration unit FC;
34) The outputs of the feature calibration part FC are concatenated along the channel dimension; the result, recorded as φ_out, is the 1st output of the cloud layer thickness estimation module:
φ_out = concat(π_1, …, π_{w−1})
where concat(·) represents concatenation of feature maps along the channel dimension;
the feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module:
T̂ = ReLU(conv(φ_out))
where conv(·) represents a convolution with kernel size l×l and stride d3, ReLU(·) denotes the ReLU activation function, and l and d3 are positive integers.
In the step 4), the remote sensing image thin cloud removal network specifically comprises the following steps:
41) as shown in fig. 6, a remote sensing image thin cloud removal network based on multi-path perception gradient is built by adopting the perception gradient extraction module in the step 2), the cloud layer thickness estimation module in the step 3), the residual error feature extraction unit RFE and the Tanh activation function, and is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
42) The input of the multi-path perceptual gradient remote sensing image thin cloud removal network is a thin cloud remote sensing image C, and its outputs are the predicted clear remote sensing image R̂ and the predicted cloud layer thickness image T̂.
Wherein, in step 5), the feature loss function L_F is specifically:

L_F = Σ_{u=1}^{O} (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |θ_u(R)_{q,t,y} − θ_u(R̂)_{q,t,y}|

in the formula, θ(·) represents an output feature map of the VGG19 network, u represents the convolutional layer index of the VGG19 network, q, t and y represent the feature map length, width and channel indices, O represents the number of VGG19 layers used, W, H and C represent the feature map length, width and channel size, R represents the clear remote sensing image, R̂ represents the predicted clear remote sensing image, u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers.
Wherein, in step 5), the gradient loss function L_G is specifically:

L_G = (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |∇(R)_{q,t,y} − ∇(R̂)_{q,t,y}|

where ∇(·) represents the image gradient extracted using the Prewitt operator, q, t and y represent the feature map length, width and channel indices, W, H and C represent the feature map length, width and channel size, q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers.
Wherein, in step 5), the cloud layer thickness loss function L_R is specifically:

L_R = (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |T_{q,t,y} − T̂_{q,t,y}|

where T represents the cloud layer thickness image and T̂ represents the predicted cloud layer thickness image.
in the step 6), the thin cloud removing method specifically comprises the following steps:
and (4) importing the model parameters obtained by training the generation b into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal. In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
The embodiment of the invention provides a remote sensing image thin cloud removing method based on multi-path perception gradient, and the method is described in detail in the following description with reference to fig. 1:
101: establishing a remote sensing image thin cloud removal data set, wherein the data set comprises a thin cloud remote sensing image, a clear remote sensing image and a cloud layer thickness image, and a training set, a verification set and a test set are formed according to a certain proportion;
102: constructing a perception gradient extraction module, which comprises a perception feature extraction unit, a gradient information extraction unit and a residual error feature extraction unit, and is used for extracting image thin cloud features;
103: building a cloud layer thickness estimation module, which comprises an edge feature extraction part and a feature calibration part and is used for adaptively estimating the cloud layer thickness;
104: constructing a remote sensing image thin cloud removal network based on the sensing gradient extraction module in the step 102 and the cloud layer thickness estimation module in the step 103, wherein the remote sensing image thin cloud removal network is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
105: training a remote sensing image thin cloud removal network by adopting the remote sensing image thin cloud removal data set in the step 101, wherein the used loss functions comprise a characteristic loss function, a gradient loss function and a cloud layer thickness loss function;
106: and importing the model parameters obtained after training into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal.
The specific steps in step 101 are as follows:
1) Select n clear remote sensing images R and generate simulated thin clouds to obtain thin cloud remote sensing images C and cloud layer thickness images T. Since the remote sensing images are large, they are cut into N×N images. The clear remote sensing image R, thin cloud remote sensing image C and cloud layer thickness image T in corresponding relation form the remote sensing image thin cloud removal data set, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
2) The remote sensing image thin cloud removal data set is divided in the ratio p_1:p_2:p_3 into a training set, a validation set and a test set, for training, validation and testing of the method of the invention, where p_1, p_2 and p_3 are positive integers with p_1 > p_2 and p_1 > p_3.
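The p_1:p_2:p_3 partition described above can be sketched as follows; the random shuffling and the toy value m = 20 are illustrative assumptions:

```python
import numpy as np

m = 20                         # total image triplets {R_i, C_i, T_i} (toy value)
p1, p2, p3 = 6, 2, 2           # split ratio p1:p2:p3 with p1 > p2 and p1 > p3

# Shuffle the triplet indices, then cut them proportionally.
indices = np.random.default_rng(0).permutation(m)
total = p1 + p2 + p3
n_train = m * p1 // total
n_val = m * p2 // total

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]
```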
The specific steps in step 102 are as follows:
1) as shown in fig. 2, a Perceptual Gradient Extraction module is constructed, which includes a Perceptual Feature Extraction unit (PFE), a Gradient Information Extraction unit (GIE), a Residual Feature Extraction unit (RFE), and a Residual connection, and is used for extracting image thin cloud features;
2) The perceptual feature extraction unit PFE uses a VGG19 network to extract image features, simulating the human visual system's extraction of perceptual-level image features; the n_2-th output result of the n_1-th layer of the VGG19 network is used as perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers;
3) The gradient information extraction unit GIE applies a Sobel operator filter to the feature map as a convolution operation with stride d_1, in order to extract image gradient information; this gradient information contains more cloud-related features and is beneficial for removing thin clouds, where d_1 is a positive integer;
4) As shown in fig. 3, the Residual Feature Extraction unit RFE consists of e Residual Units (RU), each of which comprises s_1 convolution + ReLU activation function combinations, 1 Feature Calibration unit (FC) and 1 residual learning connection, where the convolution kernel sizes are f×f and the stride is d_2, and e, s_1, f and d_2 are all positive integers;
5) As shown in fig. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to achieve pixel-level feature calibration. It consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination; the output of branch 1 is α_s, the convolution kernel size is z×z and the stride is x. Branch 1 does not change the feature map size or the number of channels, where g, z and x are positive integers.
Branch 2 performs no operation, and its output remains the unit input α_in. Branch 3 assigns the same weight to all pixels within each channel of the feature map to achieve channel-level feature calibration. It consists of an average pooling unit, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean of the pixel values of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit copies the feature map from size 1×1×C back to W×H×C, i.e. each 1×1 value is copied into W×H identical values, so branch 3 keeps the input and output feature map size and channel count unchanged. The output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers.
The output of the feature calibration unit FC is the result of element-wise multiplication of the output feature maps of the 3 branches over corresponding pixels, as follows:

α_out = α_s ⊙ α_in ⊙ α_c

in the formula, α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, and α_c is the output of branch 3.
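A minimal NumPy sketch of the three-branch feature calibration unit follows; the convolution stacks of branches 1 and 3 are replaced by single random 1×1 projections (hypothetical stand-ins), keeping only the structure described above: per-pixel Sigmoid weights, an identity branch, channel-wise pooled weights broadcast back to full size, and an element-wise product:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_calibration(alpha_in, rng):
    """Toy sketch of the FC unit on a (W, H, C) feature map."""
    W, H, C = alpha_in.shape
    # Branch 1: per-pixel weights in (0, 1) via a projection + Sigmoid.
    w1 = rng.standard_normal((C, C)) * 0.1
    alpha_s = sigmoid(alpha_in @ w1)                       # (W, H, C)
    # Branch 2: identity, passes alpha_in through unchanged.
    # Branch 3: mean-pool each channel to 1x1xC, weight it, then broadcast
    # back to WxHxC (the "feature size expansion" copy step).
    pooled = alpha_in.mean(axis=(0, 1), keepdims=True)     # (1, 1, C)
    w3 = rng.standard_normal((C, C)) * 0.1
    alpha_c = np.broadcast_to(sigmoid(pooled @ w3), alpha_in.shape)
    # Output: element-wise product of the three branch outputs.
    return alpha_s * alpha_in * alpha_c

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
y = feature_calibration(x, rng)
```

Because both weight branches pass through a Sigmoid, every output magnitude is bounded by the corresponding input magnitude, which is the attenuating behavior the calibration relies on.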
Wherein, the specific steps in step 103 are as follows:
1) As shown in fig. 5, a cloud layer thickness estimation module is constructed, comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness; its input is the thin cloud remote sensing image C, and its outputs are the predicted cloud layer thickness T̂ and the feature map φ_out;
2) As shown in fig. 5, the edge feature extraction part comprises w branches. Each branch has the same structure, consisting of the gradient information extraction part GIE and a residual unit RU from step 102, and the convolution kernel size used by the branches increases gradually. For the r-th branch, the result of passing the input through GIE and RU is denoted φ_r, with a convolution kernel size of (2r+1)×(2r+1), where r ∈ (1, …, w), and r and w are positive integers. The outputs of adjacent branches are summed pixel-wise, so there are w−1 summed branches in total; the result of the j-th summed branch is denoted δ_j, as follows:

δ_j = φ_j ⊕ φ_{j+1}    (2)

where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ represents element-wise summation of feature maps at corresponding positions;
3) As shown in fig. 5, the feature calibration part consists of w−1 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is denoted π_i, as follows:

π_i = FC(δ_i)    (3)

where i ∈ (1, …, w−1) and FC(·) represents the output of the feature calibration unit FC;
4) The outputs of the feature calibration units FC are concatenated along the channel dimension; the result, denoted φ_out, serves as the 1st output of the cloud layer thickness estimation module:

φ_out = concat(π_1, …, π_{w−1})    (4)

where concat(·) represents concatenation of feature maps along the channel dimension;
The feature map φ_out is fed through a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, which serves as the 2nd output of the cloud layer thickness estimation module:

T̂ = ReLU(conv(φ_out))    (5)

where conv(·) represents a convolution with kernel size l×l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers.
Wherein, the specific steps in step 104 are as follows:
1) as shown in fig. 6, a remote sensing image thin cloud removal network based on multi-path sensing gradient is built by adopting a sensing gradient extraction module in step 102, a cloud layer thickness estimation module in step 103, a residual error feature extraction unit RFE and a Tanh activation function, and is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
2) The input of the multi-path perceptual gradient remote sensing image thin cloud removal network is a thin cloud remote sensing image C, and its outputs are the predicted clear remote sensing image R̂ and the predicted cloud thickness image T̂.
Wherein, the specific steps in step 105 are as follows:
1) The remote sensing image thin cloud removal network is trained using the data set from step 101; the loss functions used for training comprise a feature loss function, a gradient loss function and a cloud layer thickness loss function, and training runs for b generations in total. The specific function forms are as follows;
2) The feature loss function L_F is specifically:

L_F = Σ_{u=1}^{O} (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |θ_u(R)_{q,t,y} − θ_u(R̂)_{q,t,y}|    (6)

in the formula, θ(·) represents an output feature map of the VGG19 network, u represents the convolutional layer index of the VGG19 network, q, t and y represent the feature map length, width and channel indices, O represents the number of VGG19 layers used, W, H and C represent the feature map length, width and channel size, R represents the clear remote sensing image, R̂ represents the predicted clear remote sensing image, u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers;
3) The gradient loss function L_G is specifically:

L_G = (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |∇(R)_{q,t,y} − ∇(R̂)_{q,t,y}|    (7)

where ∇(·) represents the image gradient extracted using the Prewitt operator, q, t and y represent the feature map length, width and channel indices, W, H and C represent the feature map length, width and channel size, q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers;
4) The cloud layer thickness loss function L_R is specifically:

L_R = (1/(W·H·C)) Σ_{q=1}^{W} Σ_{t=1}^{H} Σ_{y=1}^{C} |T_{q,t,y} − T̂_{q,t,y}|    (8)

where T represents the cloud layer thickness image and T̂ represents the predicted cloud layer thickness image;
5) The overall loss function L is a weighted sum of the above loss functions, specifically:

L = L_F + σ·L_G + λ·L_R    (9)

in the formula, σ and λ are weight coefficients.
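A toy NumPy sketch of the weighted loss above follows; the VGG19 feature term L_F is set to zero here because it requires a pretrained network, and only the horizontal Prewitt kernel is used for brevity (both are simplifying assumptions for illustration):

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def grad(img):
    """Horizontal Prewitt gradient, valid mode (the full loss would also
    use the vertical kernel)."""
    H, W = img.shape
    return np.array([[np.sum(img[i:i+3, j:j+3] * PREWITT_X)
                      for j in range(W - 2)] for i in range(H - 2)])

def total_loss(R, R_hat, T, T_hat, sigma=10.0, lam=5.0):
    # L_G: mean absolute difference of image gradients.
    L_G = np.abs(grad(R) - grad(R_hat)).mean()
    # L_R: mean absolute error on the cloud thickness map.
    L_R = np.abs(T - T_hat).mean()
    # L_F (VGG19 feature loss) needs a pretrained network; set to 0 here.
    L_F = 0.0
    return L_F + sigma * L_G + lam * L_R

rng = np.random.default_rng(0)
R = rng.random((8, 8))
T = rng.random((8, 8))
```

With identical predictions and targets the loss is exactly zero, and any gradient or thickness mismatch raises it.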
Wherein, the specific steps in step 106 are as follows: import the model parameters obtained after training for b generations into the remote sensing image thin cloud removal network, and input a single thin cloud remote sensing image to achieve thin cloud removal.
Example 2
The embodiment of the invention provides a remote sensing image thin cloud removing method based on multi-path perception gradient, and the method is described in detail in the following description with reference to fig. 1:
201: establishing a remote sensing image thin cloud removal data set, wherein the data set comprises a thin cloud remote sensing image, a clear remote sensing image and a cloud layer thickness image, and a training set, a verification set and a test set are formed according to a certain proportion;
202: constructing a perception gradient extraction module which comprises a perception characteristic extraction unit, a gradient information extraction unit and a residual characteristic extraction unit and is used for extracting image thin cloud characteristics;
203: building a cloud layer thickness estimation module, which comprises an edge feature extraction part and a feature calibration part and is used for adaptively estimating the cloud layer thickness;
204: constructing a remote sensing image thin cloud removing network based on the sensing gradient extraction module in the step 202 and the cloud layer thickness estimation module in the step 203, wherein the remote sensing image thin cloud removing network is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
205: training a remote sensing image thin cloud removal network by adopting the remote sensing image thin cloud removal data set in the step 201, wherein the used loss functions comprise a characteristic loss function, a gradient loss function and a cloud layer thickness loss function;
206: and importing the model parameters obtained after training into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal.
Wherein, the specific steps in step 201 are as follows:
1) Select 200 clear remote sensing images R and generate simulated thin clouds to obtain thin cloud remote sensing images C and cloud layer thickness images T. Since the remote sensing images are large, they are cut into images of size 256×256. The clear remote sensing image R, thin cloud remote sensing image C and cloud layer thickness image T in corresponding relation form the remote sensing image thin cloud removal data set, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, the number of images m is 4000, and i is a positive integer;
2) The remote sensing image thin cloud removal data set is divided in the ratio p_1:p_2:p_3 into a training set, a validation set and a test set, for training, validation and testing of the method of the invention, where p_1:p_2:p_3 is set to 6:2:2.
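With m = 4000 images and the 6:2:2 ratio above, the split sizes work out as follows (a simple arithmetic check):

```python
m = 4000                 # total image triplets in the data set
p1, p2, p3 = 6, 2, 2     # training : validation : test ratio
total = p1 + p2 + p3

n_train = m * p1 // total   # 2400 training triplets
n_val = m * p2 // total     # 800 validation triplets
n_test = m - n_train - n_val  # 800 test triplets (remainder)
```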
Wherein, the specific steps in step 202 are as follows:
1) as shown in fig. 2, a Perceptual Gradient Extraction module is constructed, which includes a Perceptual Feature Extraction unit (PFE), a Gradient Information Extraction unit (GIE), a Residual Feature Extraction unit (RFE), and a Residual connection, and is used for extracting image thin cloud features;
2) The perceptual feature extraction unit PFE uses a VGG19 network to extract image features, simulating the human visual system's extraction of perceptual-level image features; the 2nd output result of the 3rd layer of the VGG19 network is used as perceptual feature information for the subsequent thin cloud removal task;
3) The gradient information extraction unit GIE applies a Sobel operator filter to the feature map as a convolution operation with stride 1, in order to extract image gradient information; this gradient information contains more cloud-related features and is beneficial for removing thin clouds;
4) As shown in fig. 3, the Residual Feature Extraction unit RFE consists of 6 Residual Units (RU), each of which comprises 6 convolution + ReLU activation function combinations, 1 Feature Calibration unit (FC) and 1 residual learning connection, where the convolution kernel sizes are 3×3 and the stride is 1;
5) As shown in fig. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to achieve pixel-level feature calibration. It consists of 5 convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination; the output of branch 1 is α_s, the convolution kernel size is 3×3 and the stride is 1. Branch 1 does not change the feature map size or the number of channels.
Branch 2 performs no operation, and its output remains the unit input α_in. Branch 3 assigns the same weight to all pixels within each channel of the feature map to achieve channel-level feature calibration. It consists of an average pooling unit, 5 convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean of the pixel values of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit copies the feature map from size 1×1×C back to W×H×C, i.e. each 1×1 value is copied into W×H identical values, so branch 3 keeps the input and output feature map size and channel count unchanged. The output of branch 3 is α_c, the convolution kernel size is 3×3 and the stride is 1.
The output of the feature calibration unit FC is the result of element-wise multiplication of the output feature maps of the 3 branches over corresponding pixels, as follows:

α_out = α_s ⊙ α_in ⊙ α_c    (1)

in the formula, α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, and α_c is the output of branch 3.
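The Sobel-based gradient information extraction of step 202 (a stride-1 convolution of the feature map with Sobel kernels) can be sketched numerically as follows; the 6×6 step-edge test image is an illustrative assumption:

```python
import numpy as np

# Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel, stride=1):
    """Valid-mode 2-D correlation with the given stride (d_1 in the text),
    as used in CNN convolution layers."""
    kh, kw = kernel.shape
    H, W = img.shape
    return np.array([[np.sum(img[i:i+kh, j:j+kw] * kernel)
                      for j in range(0, W - kw + 1, stride)]
                     for i in range(0, H - kh + 1, stride)])

img = np.zeros((6, 6))
img[:, 3:] = 1.0                 # vertical step edge in the right half
gx = conv2d(img, SOBEL_X, stride=1)
gy = conv2d(img, SOBEL_Y, stride=1)
```

The horizontal kernel responds strongly along the vertical edge while the vertical kernel stays silent, which is the edge-selective behavior the GIE relies on.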
Wherein, the specific steps in step 203 are:
1) As shown in fig. 5, a cloud layer thickness estimation module is constructed, comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness; its input is the thin cloud remote sensing image C, and its outputs are the predicted cloud layer thickness T̂ and the feature map φ_out;
2) As shown in fig. 5, the edge feature extraction part comprises 6 branches. Each branch has the same structure, consisting of the gradient information extraction part GIE and a residual unit RU from step 202, and the convolution kernel size used by the branches increases gradually. For the r-th branch, the result of passing the input through GIE and RU is denoted φ_r, with a convolution kernel size of (2r+1)×(2r+1), where r ∈ (1, …, 6) and r is a positive integer. The outputs of adjacent branches are summed pixel-wise, so there are 5 summed branches in total; the result of the j-th summed branch is denoted δ_j, as follows:

δ_j = φ_j ⊕ φ_{j+1}    (2)

where j ∈ (1, …, 5), j is a positive integer, and ⊕ represents element-wise summation of feature maps at corresponding positions;
3) As shown in fig. 5, the feature calibration part consists of 5 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is denoted π_i, as follows:

π_i = FC(δ_i)    (3)

where i ∈ (1, …, 5) and FC(·) represents the output of the feature calibration unit FC;
4) The outputs of the feature calibration units FC are concatenated along the channel dimension; the result, denoted φ_out, serves as the 1st output of the cloud layer thickness estimation module:

φ_out = concat(π_1, …, π_5)    (4)

where concat(·) represents concatenation of feature maps along the channel dimension;
The feature map φ_out is fed through a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, which serves as the 2nd output of the cloud layer thickness estimation module:

T̂ = ReLU(conv(φ_out))    (5)

where conv(·) represents a convolution with kernel size 3×3 and stride 1, and ReLU(·) denotes the ReLU activation function.
Wherein, the specific steps in step 204 are as follows:
1) as shown in fig. 6, a remote sensing image thin cloud removal network based on multi-channel sensing gradients is constructed by adopting the sensing gradient extraction module in step 202, the cloud layer thickness estimation module in step 203, the residual error feature extraction unit RFE and the Tanh activation function, and is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
2) The input of the multi-path perceptual gradient remote sensing image thin cloud removal network is a thin cloud remote sensing image C, and its outputs are the predicted clear remote sensing image R̂ and the predicted cloud thickness image T̂.
Wherein, the specific steps in step 205 are:
1) The remote sensing image thin cloud removal network is trained using the data set from step 201; the loss functions used for training comprise a feature loss function, a gradient loss function and a cloud layer thickness loss function, and training runs for 200 generations in total. The specific function forms are as follows;
2) The feature loss function L_F is as shown in formula (6): θ(·) represents an output feature map of the VGG19 network, u represents the convolutional layer index of the VGG19 network, q, t and y represent the feature map length, width and channel indices, the convolutional feature maps of the first 10 layers of VGG19 are used, the feature map length, width and channel size are all 128, R represents the clear remote sensing image, R̂ represents the predicted clear remote sensing image, u ∈ (1, …, 10), q ∈ (1, …, 128), t ∈ (1, …, 128) and y ∈ (1, …, 128). The gradient loss function L_G is as shown in formula (7): ∇(·) represents the image gradient extracted using the Prewitt operator, q, t and y represent the feature map length, width and channel indices, the feature map length, width and channel size are all 128, q ∈ (1, …, 128), t ∈ (1, …, 128) and y ∈ (1, …, 128). The cloud layer thickness loss function L_R is as shown in formula (8). The overall loss function L is the weighted sum of the above loss functions, as shown in formula (9), with σ set to 10.0 and λ set to 5.0.
Wherein, the specific steps in step 206 are as follows: and importing the model parameters obtained by training for 200 generations into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and the above-described embodiments of the present invention are merely provided for description and do not represent the merits of the embodiments. The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.
Claims (13)
1. A remote sensing image thin cloud removing method based on multi-path perception gradient is characterized by comprising the following steps:
1) establishing a remote sensing image thin cloud removal data set which comprises a thin cloud remote sensing image, a clear remote sensing image and a cloud layer thickness image, and forming a training set, a verification set and a test set in proportion;
2) constructing a perception gradient extraction module for extracting image thin cloud characteristics;
3) building a cloud layer thickness estimation module for adaptively estimating the cloud layer thickness;
4) constructing a remote sensing image thin cloud removing network based on the sensing gradient extraction module obtained in the step 2) and the cloud layer thickness estimation module obtained in the step 3), wherein the remote sensing image thin cloud removing network is used for converting a single thin cloud remote sensing image into a clear remote sensing image;
5) training a remote sensing image thin cloud removal network by using the data set obtained in the step 1), wherein the used loss functions comprise a characteristic loss function, a gradient loss function and a cloud layer thickness loss function;
6) and importing the model parameters obtained after training into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal.
2. The remote sensing image thin cloud removing method based on multi-path perceptual gradient according to claim 1, wherein in the step 1), the remote sensing image thin cloud removing data set specifically comprises:
11) Selecting n clear remote sensing images R and generating simulated thin clouds to obtain thin cloud remote sensing images C and cloud layer thickness images T; cutting the remote sensing images into images of size N×N, and forming the remote sensing image thin cloud removal data set from the clear remote sensing image R, thin cloud remote sensing image C and cloud layer thickness image T in corresponding relation, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
12) Dividing the remote sensing image thin cloud removal data set in the ratio p_1:p_2:p_3 into a training set, a validation set and a test set, where p_1, p_2 and p_3 are positive integers, and p_1 > p_2, p_1 > p_3.
3. The remote sensing image thin cloud removing method based on multi-path perceptual gradient according to claim 1, wherein in the step 2), the set up perceptual gradient extraction module specifically comprises a perceptual feature extraction unit, a gradient information extraction unit, a residual feature extraction unit and a residual connection.
4. The remote sensing image thin cloud removing method based on multi-path perceptual gradient of claim 3, wherein the perceptual feature extraction unit specifically adopts a VGG19 network to extract image features, simulating the human visual system's extraction of perceptual-level image features, and the n_2-th output result of the n_1-th layer of the VGG19 network is used as perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers.
5. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 3, wherein the gradient information extraction unit specifically applies a Sobel operator filter to the feature map as a convolution operation with stride d_1, in order to extract image gradient information, where the gradient information comprises cloud-related features and d_1 is a positive integer.
6. The method for removing the thin cloud of the remote sensing image based on the multi-path perceptual gradient as claimed in claim 3, wherein the residual feature extraction unit is composed of e residual units, each residual unit comprising s_1 convolution + ReLU activation function combinations, 1 feature calibration unit and 1 residual learning connection, where the convolution kernel sizes are f×f and the stride is d_2, and e, s_1, f and d_2 are all positive integers.
7. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 6, wherein the feature calibration unit is composed of 3 branches and performs an image feature calibration task, the unit input being α_in and the output being α_out;
branch 1 assigns a weight to each pixel of the feature map to achieve pixel-level feature calibration; it consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination, the output of branch 1 is α_s, the convolution kernel size is z×z and the stride is x, and branch 1 does not change the feature map size or the number of channels, where g, z and x are positive integers;
branch 2 performs no operation, and its output remains the feature calibration unit input α_in;
branch 3 assigns the same weight to all pixels within each channel of the feature map to achieve channel-level feature calibration; it is composed of an average pooling unit, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit; average pooling takes the mean of the pixel values of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit copies the feature map from size 1×1×C back to W×H×C, i.e. each 1×1 value is copied into W×H identical values, so branch 3 keeps the input and output feature map size and channel count unchanged; the output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers;
The output α_out of the feature calibration unit is the product of the corresponding pixels of the output feature maps of the 3 branches, as follows:

α_out = α_s ⊗ α_in ⊗ α_c

where α_out is the output of the feature calibration unit, α_s is the output result of branch 1, α_in is the output result of branch 2, α_c is the output result of branch 3, and ⊗ denotes element-wise multiplication.
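The three-branch calibration above admits a minimal NumPy sketch. The learned convolution + Sigmoid stacks of branches 1 and 3 are collapsed here into plain sigmoid gates; this is an illustrative assumption, not the patented network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_calibration(alpha_in):
    # alpha_in: feature map of shape (W, H, C).
    W, H, C = alpha_in.shape
    # Branch 1: per-pixel gate in (0, 1) -- stand-in for the conv+Sigmoid stack.
    alpha_s = sigmoid(alpha_in)
    # Branch 2: identity, output is alpha_in itself.
    # Branch 3: average-pool each channel to 1x1xC, gate it, copy back to WxHxC.
    pooled = alpha_in.mean(axis=(0, 1), keepdims=True)        # 1 x 1 x C
    alpha_c = np.broadcast_to(sigmoid(pooled), (W, H, C))     # W x H x C
    # Output: element-wise product of the three branch outputs.
    return alpha_s * alpha_in * alpha_c
```

Note that branch 3 gives every pixel of a channel the same weight (the broadcast of the pooled gate), while branch 1 weights each pixel individually, matching the pixel-level/channel-level split described in claim 7.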
8. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 7, wherein in step 3) the constructed cloud layer thickness estimation module comprises an edge feature extraction part and a feature calibration part and is used to adaptively estimate the cloud layer thickness; the input of the cloud layer thickness estimation module is a thin cloud remote sensing image C, and its outputs are the predicted cloud layer thickness and the feature map φ_out;
The edge feature extraction part comprises w branches; each branch has the same structure, consisting of the gradient information extraction part in step 2) and one residual unit, and the convolution kernel sizes adopted by the branches increase progressively. For the r-th branch, the result of passing the input through the gradient information extraction part and the residual unit is denoted φ_r, with convolution kernel size (2r+1) × (2r+1), where r ∈ (1, …, w) and r and w are positive integers. The outputs of every two adjacent branches are summed at corresponding pixels, yielding w-1 summed branches in total; the result of the j-th summed branch is denoted δ_j, as follows:

δ_j = φ_j ⊕ φ_{j+1}

where j ∈ (1, …, w-1), j and w are positive integers, and ⊕ denotes element-wise summation of corresponding feature map positions;
The feature calibration part consists of w-1 feature calibration units; for the i-th branch of the feature calibration part, the output of its feature calibration unit is denoted π_i, as follows:

π_i = FC(δ_i);

where i ∈ (1, …, w-1) and FC(·) denotes the output of the feature calibration unit;
The outputs of the feature calibration part are concatenated along the channel dimension, and the result, denoted φ_out, serves as the 1st output of the cloud layer thickness estimation module:

φ_out = concat(π_1, …, π_{w-1});

where concat(·) denotes cascading the feature maps along the channel dimension;
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness, which serves as the 2nd output of the cloud layer thickness estimation module:

predicted cloud layer thickness = ReLU(conv(φ_out))

where conv(·) denotes a convolution with kernel size l × l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers.
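The adjacent-branch summation and channel concatenation of claim 8 can be sketched directly in NumPy (the gradient information extraction, residual units and convolutions themselves are omitted; only the wiring between branches is shown):

```python
import numpy as np

def branch_kernel_size(r):
    # Kernel size of the r-th edge-feature branch: (2r+1) x (2r+1).
    return 2 * r + 1

def pairwise_branch_sums(branch_outputs):
    # delta_j = phi_j (+) phi_{j+1}: element-wise sums of adjacent branch
    # outputs, giving w-1 results for w branches.
    return [branch_outputs[j] + branch_outputs[j + 1]
            for j in range(len(branch_outputs) - 1)]

def concat_channels(feature_maps):
    # phi_out = concat(pi_1, ..., pi_{w-1}): cascade along the channel axis.
    return np.concatenate(feature_maps, axis=-1)
```

For w = 3 branches, the sums produce 2 intermediate maps, and concatenating them doubles the channel count relative to a single branch output.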
9. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 8, wherein in step 4) the construction of the remote sensing image thin cloud removal network specifically comprises:
constructing a remote sensing image thin cloud removal network based on multi-path perceptual gradient from the perceptual gradient extraction module in step 2), the cloud layer thickness estimation module in step 3), the residual feature extraction unit and a Tanh activation function, the network being used to convert a single thin cloud remote sensing image into a clear remote sensing image; the input of the remote sensing image thin cloud removal network based on multi-path perceptual gradient is a thin cloud remote sensing image C, and its outputs are the predicted clear remote sensing image and the predicted cloud layer thickness image.
10. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 9, wherein in step 5) the feature loss function L_F is computed over VGG19 feature maps; in the formula, θ(·) denotes an output feature map of the VGG19 network, u denotes the convolutional layer index of the VGG19 network, q, t and y denote the length, width and channel indices of the feature map, O denotes the number of VGG19 layers used, W, H and C denote the length, width and channel size of the feature map, R denotes the clear remote sensing image against which the predicted clear remote sensing image is compared, u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are all positive integers.
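A hedged NumPy sketch of the feature loss: the patent's exact formula is not reproduced in this text, so an L1 (mean absolute) form averaged over the O feature maps is assumed, and the VGG19 feature extractor θ is replaced by directly supplied feature maps.

```python
import numpy as np

def feature_loss(feats_R, feats_R_hat):
    # feats_R / feats_R_hat: lists of O feature maps theta_u(R), theta_u(R_hat),
    # each of shape (W, H, C); the patent obtains them from VGG19 conv layers.
    # The L1 norm and averaging over O layers are assumptions of this sketch.
    O = len(feats_R)
    total = 0.0
    for fR, fRh in zip(feats_R, feats_R_hat):
        W, H, C = fR.shape
        total += np.abs(fR - fRh).sum() / (W * H * C)
    return total / O
```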
11. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 10, wherein in step 5) the gradient loss function L_G is specifically as follows: in the formula, the image gradient is extracted with the Prewitt operator, q, t and y denote the length, width and channel indices of the feature map, W, H and C denote the length, width and channel size of the feature map, q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are all positive integers.
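A NumPy sketch of the gradient loss under the same caveat: the Prewitt operator is as stated in the claim, while the L1 averaging form is an assumption, since the formula itself is elided in this text.

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def filter2d_valid(img, kernel):
    # Plain "valid" 2-D cross-correlation (no padding), enough for this sketch.
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def prewitt_gradient(img):
    # |G_x| + |G_y| with the Prewitt operator.
    return (np.abs(filter2d_valid(img, PREWITT_X))
            + np.abs(filter2d_valid(img, PREWITT_Y)))

def gradient_loss(R, R_hat):
    # L_G sketch: mean absolute difference between the Prewitt gradients of
    # the clear image R and the predicted image R_hat, accumulated per channel.
    assert R.shape == R_hat.shape
    diff, n = 0.0, 0
    for y in range(R.shape[2]):
        g = prewitt_gradient(R[:, :, y]) - prewitt_gradient(R_hat[:, :, y])
        diff += np.abs(g).sum()
        n += g.size
    return diff / n
```

The loss vanishes for identical images and grows with edge mismatch, which is the behaviour the claim relies on to preserve ground-object edges after cloud removal.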
12. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 11, wherein in step 5) the cloud layer thickness loss function L_R is specifically:
13. The remote sensing image thin cloud removing method based on multi-path perceptual gradient as claimed in claim 12, wherein in step 6) the thin cloud removal specifically comprises: training the remote sensing image thin cloud removal network with the remote sensing image thin cloud removal data set of claim 2, the loss functions used in training comprising the feature loss function, the gradient loss function and the cloud layer thickness loss function; training is carried out for b epochs, the model parameters obtained after training are loaded into the remote sensing image thin cloud removal network, and a single thin cloud remote sensing image is input to complete the thin cloud removal.
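The training objective of claim 13 combines the three losses. A sketch, assuming an L1 cloud layer thickness loss and illustrative weighting coefficients (neither is specified in this text):

```python
import numpy as np

def thickness_loss(T, T_hat):
    # L_R sketch: mean absolute error between the true and predicted cloud
    # layer thickness maps. The L1 form is an assumption; the patent's
    # formula is elided in this text.
    return np.abs(T - T_hat).mean()

def total_training_loss(L_F, L_G, L_R, w_f=1.0, w_g=1.0, w_r=1.0):
    # Weighted sum of the feature, gradient and thickness losses used in
    # step 6). The weights w_f, w_g, w_r are illustrative placeholders,
    # not values from the patent.
    return w_f * L_F + w_g * L_G + w_r * L_R
```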
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210357921.8A CN114936972A (en) | 2022-04-06 | 2022-04-06 | Remote sensing image thin cloud removing method based on multi-path perception gradient |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114936972A true CN114936972A (en) | 2022-08-23 |
Family
ID=82861747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210357921.8A Pending CN114936972A (en) | 2022-04-06 | 2022-04-06 | Remote sensing image thin cloud removing method based on multi-path perception gradient |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114936972A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460739A (en) * | 2018-03-02 | 2018-08-28 | 北京航空航天大学 | Thin cloud removal method for remote sensing images based on a generative adversarial network
CN108931825A (en) * | 2018-05-18 | 2018-12-04 | 北京航空航天大学 | Remote sensing image cloud thickness detection method based on ground object clarity
CN109191400A (en) * | 2018-08-30 | 2019-01-11 | 中国科学院遥感与数字地球研究所 | Method for removing thin cloud from remote sensing images using a generative adversarial network
US20210174149A1 (en) * | 2018-11-20 | 2021-06-10 | Xidian University | Feature fusion and dense connection-based method for infrared plane object detection |
Non-Patent Citations (1)
Title |
---|
Pei Ao; Chen Guifen; Li Haoyue; Wang Bing: "Cloud removal method for optical remote sensing images based on an improved CGAN network", Transactions of the Chinese Society of Agricultural Engineering, no. 14, 23 July 2020 (2020-07-23) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tian et al. | Deep learning on image denoising: An overview | |
CN108171701B (en) | Saliency detection method based on U-Net and adversarial learning | |
CN110992275B (en) | Refined single-image rain removal method based on a generative adversarial network | |
CN108475415B (en) | Method and system for image processing | |
CN105701508A (en) | Global-local optimization model based on multistage convolution neural network and significant detection algorithm | |
CN110969124A (en) | Two-dimensional human body posture estimation method and system based on lightweight multi-branch network | |
CN110443761B (en) | Single image rain removing method based on multi-scale aggregation characteristics | |
CN113344806A (en) | Image defogging method and system based on global feature fusion attention network | |
Kim et al. | Deeply aggregated alternating minimization for image restoration | |
CN109559315B (en) | Water surface segmentation method based on multipath deep neural network | |
Chen et al. | Densely connected convolutional neural network for multi-purpose image forensics under anti-forensic attacks | |
CN110503613 (en) | Single-image rain removal method based on cascaded dilated convolutional neural networks | |
CN114746895A (en) | Noise reconstruction for image denoising | |
CN111179196B (en) | Multi-resolution depth network image highlight removing method based on divide-and-conquer | |
CN104103052A (en) | Sparse representation-based image super-resolution reconstruction method | |
CN112365414A (en) | Image defogging method based on double-path residual convolution neural network | |
CN113379618B (en) | Optical remote sensing image cloud removing method based on residual dense connection and feature fusion | |
CN115861833A (en) | Real-time remote sensing image cloud detection method based on double-branch structure | |
Vu et al. | Unrolling of deep graph total variation for image denoising | |
CN116052016A (en) | Fine segmentation detection method for remote sensing image cloud and cloud shadow based on deep learning | |
CN110503608B (en) | Image denoising method based on multi-view convolutional neural network | |
CN115601236A (en) | Remote sensing image super-resolution reconstruction method based on characteristic information distillation network | |
CN112257741A (en) | Method for detecting generative anti-false picture based on complex neural network | |
CN114663303A (en) | Neural network-based remote sensing image cloud layer distinguishing and removing method | |
CN105957025A (en) | Inconsistent image blind restoration method based on sparse representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||