CN115809970A - Deep learning cloud removing method based on SAR-optical remote sensing image combination - Google Patents


Info

Publication number
CN115809970A
Authority
CN
China
Prior art keywords
sar
cloud
optical
image
model
Prior art date
Legal status
Pending
Application number
CN202211651396.7A
Other languages
Chinese (zh)
Inventor
刘润东
黄友菊
罗恒
龙超俊
吴慧
韩广萍
农志铣
祖琪
Current Assignee
Guangxi Institute Of Natural Resources Remote Sensing
Original Assignee
Guangxi Institute Of Natural Resources Remote Sensing
Priority date
Application filed by Guangxi Institute Of Natural Resources Remote Sensing
Priority to CN202211651396.7A
Publication of CN115809970A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a deep learning cloud removal method based on SAR-optical remote sensing image combination, comprising the following specific steps: constructing an SAR and corresponding optical remote sensing image data set, which is divided into a training sample data set, a verification sample data set and a test sample data set; building an SAR-optical migration model, training and tuning the migration model to convergence, inputting the SAR images of the test sample data set into the migration model to generate pseudo-optical images, and completing the SAR-optical migration; building an optical image cloud region reconstruction model, training and tuning the reconstruction model to convergence, inputting the paired pseudo-optical and cloud-covered images of the test sample data set into the reconstruction model, and generating cloud-free optical images. The method makes full use of the cloud-penetrating property of SAR imagery, provides more reliable reference information for cloud removal, improves the reliability of the network's reconstruction of cloud-covered areas, and accomplishes thick cloud removal from remote sensing images with high efficiency and accuracy.

Description

Deep learning cloud removing method based on SAR-optical remote sensing image combination
[ technical field ]
The invention relates to a deep learning remote sensing image thick cloud removing method based on SAR-optical remote sensing image combination.
[ background of the invention ]
With the development of scientific technology, remote sensing technology as a long-distance and wide-range earth observation means plays an important role in geography, land survey and most of the geoscience disciplines. However, since cloud layer occlusion often inevitably occurs during the observation process, a large amount of missing information exists in the optical remote sensing image data. The existence of clouds (particularly thick clouds) greatly reduces the effective utilization rate of remote sensing image data, and further influences the downstream industrial application such as image mosaic, change detection, terrain classification and the like. In view of this, image cloud removal is the basis and key for subsequent application of remote sensing data, and the method for removing thick clouds in the remote sensing images has great significance.
Traditional remote sensing image cloud removal methods usually take multi-temporal optical auxiliary data as reference to obtain the correlation between an auxiliary image and the target cloud-covered image. However, in practical production applications optical auxiliary data are difficult to acquire: auxiliary data acquired close in time to the target image are usually themselves covered by cloud, while data with a longer time interval differ more from the target image in land cover, spectra and other information. Researchers have therefore introduced SAR data into cloud removal methods; although this overcomes the defects of purely optical approaches, SAR data bring additional interference such as speckle noise.
Deep learning is a typical machine learning framework developed from traditional neural networks; it can extract high-level features of remote sensing images and is well suited to the task of removing thick clouds from them. The generative adversarial network (GAN) is an advanced and effective deep learning architecture with clear advantages in the field of image processing: it can directly generate image information through adversarial learning, effectively reducing image distortion and noise. Meanwhile, the network structure is robust and generalizes well, giving it a good research prospect in the field of remote sensing image thick cloud removal.
At present, thick cloud removal methods based on generative adversarial networks achieve high accuracy, but problems remain, mainly in two respects:
on one hand, the fitting capability of the networks is insufficient for remote sensing images with complicated ground features, so the cloud-removed images suffer from ground feature distortion, noise and similar problems; on the other hand, current methods depend heavily on optical image quality and do not fully exploit the spatial and spectral information of the image, so the cloud-removed images have low accuracy and can hardly serve downstream production applications.
[ summary of the invention ]
Aiming at the problems, the invention provides a deep learning cloud removing method based on SAR-optical remote sensing image combination, which is used for solving the problem that the dependence of the cloud removing method based on the optical image on the image quality is too large, fully combines the advantages of the SAR image and the optical image, provides sufficient auxiliary information for a thick cloud removing network model, and improves the efficiency and the precision of thick cloud removing of the remote sensing image.
The invention is realized by the following technical scheme, and provides a deep learning cloud removing method based on SAR-optical remote sensing image combination, which comprises the following steps:
s1, data preprocessing: inputting SAR and optical remote sensing images to be processed, carrying out geographic coordinate registration, data enhancement and normalization pretreatment, and obtaining an SAR-optical remote sensing image data set corresponding to geographic coordinates;
s2, constructing a SAR-optical migration sample data set: on the basis of the SAR-optical remote sensing image data set constructed in the S1, three types of SAR-optical paired migration sample data sets are constructed, wherein the three types of SAR-optical paired migration sample data sets are respectively as follows: training a sample data set, verifying the sample data set and testing the sample data set;
s3, building, training and adjusting an SAR-optical image migration model: constructing an SAR-optical image migration model, inputting SAR training samples into the SAR-optical image migration model in batches, training the migration model by taking paired optical training samples as guidance and adopting an adaptive moment estimation optimization algorithm, estimating migration model precision on a verification sample data set in the training process, adjusting optimization model weight, and completing model convergence after multiple times of complete training;
s4, generating a pseudo-optical image of SAR-optical migration: inputting the SAR image in the test sample data set of the S2 based on the transfer model which completes training in the S3, transferring the SAR image into a pseudo-optical image with three channels of RGB, and using the pseudo-optical image as auxiliary data for reconstructing information of the cloud coverage area;
s5, constructing a cloud region information reconstruction sample data set: the reconstruction data set comprises a real optical image, a pseudo optical image obtained in S4 and a random simulated cloud coverage image, and is divided into the following parts in proportion: a cloud region training sample data set, a cloud region verification sample data set and a cloud region test sample data set;
s6, building, training, adjusting and training a cloud area information reconstruction model: building a cloud region information reconstruction model, inputting pseudo optical images and simulated cloud coverage images in pairs in batches into the reconstruction model, training a migration model by using a self-adaptive moment estimation optimization algorithm and taking a real optical image as a guide, evaluating the precision of the migration model on a verification sample data set in the training process, adjusting the weight of the optimization model, and finishing model convergence after multiple times of complete training;
s7, removing thick clouds in the test sample data set: and (4) inputting the pseudo-optical images and cloud coverage images which are paired in a cloud area test sample data set in batch based on the reconstruction model which is trained in the S6, generating cloud-free optical images, and completing a task of removing thick clouds.
In particular, the S1 is specifically performed according to the following method:
s11, inputting SAR and optical remote sensing images in the same region and different time phases, performing geographic registration processing, and generating a position-matched SAR-optical image pair;
s12, inputting SAR-optical image data generated in S11 in pairs, cutting the SAR-optical image data into image blocks with the size of P multiplied by P, and enhancing the data through random angle rotation, equal scaling and Gaussian noise increasing;
s13, inputting the SAR-optical image data subjected to the enhancement processing in pairs S12, processing the SAR-optical image data by adopting a normalization method, normalizing the image pixel value to a range of 0-1, and obtaining an SAR-optical remote sensing image data set corresponding to the geographic coordinates.
Specifically, the migration sample data set in S2 comprises SAR images and optical images matched by geographic position, divided proportionally into three types of SAR-optical paired migration sample data sets: a training sample data set (60%), a verification sample data set (10%) and a test sample data set (30%).
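The 60/10/30 split can be illustrated with a short NumPy sketch (a hypothetical helper; the patent does not state whether samples are shuffled before splitting, so a seeded shuffle is assumed):

```python
import numpy as np

def split_indices(n_samples, ratios=(0.6, 0.1, 0.3), seed=0):
    """Shuffle sample indices and split them into train/verification/test
    subsets in the 60/10/30 proportion stated in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = round(n_samples * ratios[0])
    n_val = round(n_samples * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
```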
Specifically, the S3 is specifically performed according to the following method:
s31, building an SAR-optical image migration model by adopting an encoding-decoding structure, wherein the encoding-decoding structure comprises: five encoders and corresponding decoders thereof, and four residual connecting modules are additionally added at the network bottleneck, wherein the encoders are the combination of a convolution layer, an example normalization layer and a leakage correction linear unit which perform down-sampling operation, the decoders comprise an anti-convolution layer, an example normalization layer and a correction linear unit which perform up-sampling operation, and each encoder is associated with the corresponding decoder through jump connection;
s32, selecting a SAR-optical matched remote sensing image training sample data set, inputting the SAR-optical matched remote sensing image training sample data set into a built SAR-optical image migration model in batches, and calculating an output value of the model in a forward direction, wherein the batch size is set as B;
s33, calculating a loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
L_{mae1} = \frac{1}{N} \sum_{i=1}^{N} \left| I_{fake}^{(i)} - I_{real}^{(i)} \right|   (1)

in formula (1), L_{mae1} is the mean absolute error loss value, N is the number of samples, I_{fake} is the generated pseudo-optical image, and I_{real} is the real optical image;
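Formula (1) amounts to a mean absolute error; a minimal NumPy sketch (variable names follow the text, the toy data are illustrative):

```python
import numpy as np

def mae_loss(fake, real):
    """Mean absolute error between generated pseudo-optical images and
    real optical images, averaged over samples and pixels (formula 1)."""
    return np.mean(np.abs(fake - real))

# Two 2x2 single-band "images"
I_fake = np.array([[0.2, 0.4], [0.6, 0.8]])
I_real = np.array([[0.0, 0.4], [0.6, 1.0]])
loss = mae_loss(I_fake, I_real)
```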
s34, the optimizer uses a self-adaptive moment estimation gradient descent algorithm to minimize network loss and update and optimize various parameters in a network model;
s35, after each training iteration, the performance of the migration model in the verification sample data set is evaluated accurately, the parameters of the network model are adjusted according to the change of the model migration accuracy, and the network model convergence is realized after multiple times of adjustment and training.
Specifically, the S5 is specifically performed according to the following method:
s51, generating an island-shaped cloud mask in a random mode, covering the cloud mask into the real optical image, and generating a simulated cloud covering image;
s52 divides the reconstructed data set into: the cloud region training sample data set comprises a 60% cloud region training sample data set, a 10% cloud region verification sample data set and a 30% cloud region testing sample data set.
Specifically, the S6 is specifically performed according to the following method:
s61, a cloud area information reconstruction model is built, the reconstruction model adopts a generation countermeasure network framework, the generation countermeasure network framework is composed of a generator and a discriminator, the generator adopts a coding-decoding structure the same as that of the SAR-optical migration model, the cloud coverage image is reconstructed into a cloud-free image, the discriminator model adopts a five-layer convolution network structure, the first layer to the fourth layer are combinations of convolution layers and example normalization and correction linear units and are used for extracting the characteristics of the image, and the last module of the discriminator is a single-layer convolution layer and is used for outputting an identification result;
s62, selecting a cloud area information reconstruction training sample data set, inputting the cloud area information reconstruction training sample data set into a built cloud area information reconstruction model in batches, and calculating an output value of the model in a forward direction, wherein the batch size is set to be B;
s63, calculating a loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
Loss = \lambda_1 L_{mae2} + \lambda_2 L_{perc} + \lambda_3 L_{style} + \lambda_4 L_{tv}   (2),

in formula (2), Loss is the total loss function; L_{mae2}, L_{perc}, L_{style}, L_{tv} are respectively the mean absolute error loss, perceptual feature loss, perceptual style loss and total variation loss of the cloud region, and \lambda_1, \lambda_2, \lambda_3, \lambda_4 are their respective weight hyperparameters;
s64, training is carried out alternately by the generator and the discriminator, and the generator and the discriminator continuously improve the performance of the generator and the discriminator in the counterstudy process along with the continuous increase of the training times until Nash equilibrium is reached; the optimizer uses a self-adaptive moment estimation gradient descent algorithm to minimize network loss and update various parameters in an optimized network model;
s65, evaluating and verifying model precision: after each training iteration, the precision of the cloud region information reconstruction model is evaluated on the verification sample data set, the network model parameters are adjusted according to changes in the model reconstruction accuracy, and network model convergence is achieved after multiple rounds of training.
In particular, the cloud region mean absolute error loss L_{mae2} in S63 is expressed as the minimized distance between the true value I_{gt} and the predicted value I_{pred}, calculated as follows:

L_{mae2} = \frac{1}{M} \sum_{i=1}^{M} \left| I_{pred}^{(i)} - I_{gt}^{(i)} \right|   (3)

in formula (3), M is the number of samples, I_{gt} is the true cloud-free image, and I_{pred} is the cloud-removed image generated by the reconstruction model, i.e. the predicted value;
the perceptual feature loss L_{perc} of the cloud region penalizes predictions that are not perceptually similar to the labels by defining a distance measure between activation maps of a pre-trained network, calculated by the following formula:

L_{perc} = \sum_{j} \left\| \phi_j(I_{pred}) - \phi_j(I_{gt}) \right\|_1   (4)
in formula (4), \phi corresponds to the openly available VGG-19 pre-trained network; the activation maps are also applied to calculate the perceptual style loss between the predicted values and the true values in the cloud region of the image, where the perceptual style loss of the cloud region is calculated as follows:

L_{style} = \sum_{j} \left\| G_j^{\phi}(I_{pred}) - G_j^{\phi}(I_{gt}) \right\|_1   (5)

in formula (5), G^{\phi} is the Gram matrix constructed from the activation maps \phi, used to constrain style characteristics between images and to weaken spectral distortion;
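The Gram matrix used by the perceptual style loss can be computed from an activation map as follows (the 1/(C·H·W) normalization is a common convention assumed here, not stated in the text):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an activation map phi (C x H x W), as used by the
    perceptual style loss in formula (5) to compare style statistics
    between predicted and real images."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (C * H * W)   # normalisation by feature count (assumption)

feat = np.arange(12, dtype=np.float64).reshape(2, 2, 3)  # 2 channels, 2x3 map
G = gram_matrix(feat)
```

The resulting C × C matrix is symmetric and discards spatial layout, which is why it captures "style" rather than structure.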
the total variation loss L_{tv} is a common regularization term used to maintain image smoothness during training of the reconstruction model and to effectively suppress noise; it is calculated according to the following formula:

L_{tv} = \sum_{i,j} \left( \left| x_{i+1,j} - x_{i,j} \right| + \left| x_{i,j+1} - x_{i,j} \right| \right)^{\beta}   (6)

in formula (6), x_{i,j} is the pixel value at position (i, j) in the image, and \beta is used to adjust the degree of total variation.
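Formula (6) can be sketched in NumPy as follows (shown with β = 1; the neighbour-difference form is reconstructed from the standard total variation definition):

```python
import numpy as np

def tv_loss(img, beta=1.0):
    """Total variation regulariser (formula 6): sums the magnitudes of
    vertical and horizontal pixel differences; beta adjusts the
    degree of total variation."""
    dh = np.abs(np.diff(img, axis=0))   # vertical neighbour differences
    dw = np.abs(np.diff(img, axis=1))   # horizontal neighbour differences
    return np.sum(dh ** beta) + np.sum(dw ** beta)

flat = np.full((4, 4), 0.5)      # a constant image has zero TV loss
noisy = flat.copy()
noisy[1, 1] = 1.0                # a single noisy pixel raises it
```

Penalizing this term during training pushes the generator toward locally smooth, low-noise reconstructions.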
The invention provides a deep learning cloud removal method based on SAR-optical remote sensing image combination. It adopts SAR-to-optical image migration to generate pseudo-optical images, greatly weakening the noise interference of SAR data while adding auxiliary image information, which ensures the reliability of thick cloud removal; it then builds an image reconstruction model that further processes the rough pseudo-optical image to generate a fine cloud-free image, improving the accuracy of thick cloud removal. Compared with existing deep-learning-based thick cloud removal methods for remote sensing images, the proposed method produces more accurate and more usable images.
[ description of the drawings ]
FIG. 1 is a flow chart of a deep learning cloud removing method based on SAR-optical remote sensing image combination of the present invention;
FIG. 2 is a structural diagram of an SAR-optical image migration model constructed by the method provided by the invention;
FIG. 3 is a graph of the results of processing a migration data set using a trained SAR-optical image migration model of the present invention and a comparative example of the results of processing other migration methods under the same data;
FIG. 4 is a structural diagram of a cloud region information reconstruction model constructed by the method provided by the invention;
fig. 5 is a comparative example diagram of the result of processing the reconstructed data set by using the trained cloud region information reconstruction model of the present invention and the processing result of other cloud removal methods under the same data.
[ detailed description ]
It should be noted that this work was supported by the National Natural Science Foundation of China Youth Fund project "Knowledge representation and reuse for remote sensing information extraction based on geographic ontology: taking urban surface coverage as an example" (National Natural Science Foundation of China, 2021-2023, No. 42001331), and by the Guangxi research and development project "Key technology and application demonstration of intelligent monitoring of Guangxi natural resources by satellite remote sensing" (Guike AB22080080). Both projects involve remote sensing technology, which requires standardized cloud-free imagery as support; this illustrates the importance of developing remote sensing cloud removal technology. In view of this, the present invention provides a deep learning cloud removal method based on SAR-optical remote sensing image combination; to make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, the invention provides a deep learning cloud removal method based on SAR-optical remote sensing image combination. Its application premise is that SAR image data exist for the same geographical area as the cloud-covered optical image. The method adopts an SAR-optical combined thick cloud removal network comprising two steps, SAR-optical migration and cloud region reconstruction, which makes full use of the cloud-penetrating property of SAR imagery, provides more reliable reference information for cloud removal, and effectively removes thick clouds from the image. The specific implementation is as follows:
s1, data preprocessing: selecting a sentinel No. 1 SAR image and a sentinel No. 2 optical image in the same region as a data source, carrying out geographic coordinate registration, data enhancement and normalization preprocessing, and obtaining an SAR-optical remote sensing image data set corresponding to geographic coordinates, wherein the method is implemented according to the following steps:
s11, inputting SAR and optical remote sensing images in the same region and different time phases, performing geographic registration processing, and generating a position-matched SAR-optical image pair;
s12, inputting in pairs the SAR-optical image data generated in S11, cutting them into image blocks of size P × P, and achieving data enhancement through random-angle rotation applied with 20% probability, proportional scaling and addition of Gaussian noise;
s13, inputting in pairs the SAR-optical remote sensing image data enhanced in S12, processing them with the max-min normalization method, normalizing image pixel values to the range 0-1, and generating a normalized SAR-optical remote sensing image data set corresponding to geographic coordinates.
S2, constructing SAR-optical migration sample data sets: on the basis of the SAR-optical remote sensing image data set constructed in S1, three types of SAR-optical paired migration sample data sets are constructed; the SAR images and optical images matched by geographic position are divided proportionally into: a training sample data set (60%), a verification sample data set (10%) and a test sample data set (30%).
S3, building, training and tuning the SAR-optical image migration model: building the SAR-optical image migration model shown in FIG. 2, inputting SAR training samples into it in batches, training the migration model with the paired optical training samples as guidance using the adaptive moment estimation optimization algorithm, evaluating the migration model precision on the verification sample data set during training, adjusting and optimizing the model weights, and achieving model convergence after multiple full training passes, which is implemented according to the following method:
s31, referring to fig. 2, an SAR-optical image migration model is built with an encoder-decoder structure, which comprises five encoders and their corresponding decoders, with four residual connection modules additionally added at the network bottleneck; each encoder is a combination of a down-sampling convolution layer, an instance normalization layer and a leaky rectified linear unit, each decoder comprises an up-sampling deconvolution layer, an instance normalization layer and a rectified linear unit, and each encoder is linked to its corresponding decoder through a skip connection;
s32, selecting a SAR-optical matched remote sensing image training sample data set, inputting the SAR-optical matched remote sensing image training sample data set into a built SAR-optical image migration model in batches, and calculating an output value of the model in a forward direction, wherein the batch size is set as B;
s33, calculating a loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
L_{mae1} = \frac{1}{N} \sum_{i=1}^{N} \left| I_{fake}^{(i)} - I_{real}^{(i)} \right|   (1)

in formula (1), L_{mae1} is the mean absolute error loss value, N is the number of samples, I_{fake} is the generated pseudo-optical image, and I_{real} is the real optical image;
s34, the optimizer uses the adaptive moment estimation (Adam) gradient descent algorithm to minimize the network loss and update and optimize the parameters of the network model;
s35, evaluating and verifying model precision: after each training iteration, the precision of the migration model is evaluated on the verification sample data set, the network model parameters are adjusted according to changes in the migration accuracy, and network model convergence is achieved after multiple rounds of tuning and training.
S4, generating a pseudo-optical image of SAR-optical migration: based on the migration model trained in S3, the SAR images in the test sample data set of S2 are input and migrated into pseudo-optical images with three RGB channels, used as auxiliary data for reconstructing information in the cloud coverage area. Referring to fig. 3, fig. 3 shows the migration results of currently popular deep learning migration methods and of the method of the present invention on images of buildings, farmland, mountains and other scenes. As can be seen from fig. 3, because the migration model provided by the invention introduces an encoder-decoder structure and residual connection modules, compared with other migration methods it greatly reduces the noise interference of the SAR data itself across various surface feature types while adding auxiliary spatial and spectral information, which helps improve the reliability of the subsequent thick cloud removal.
S5, constructing the cloud region information reconstruction sample data set: the reconstruction data set comprises the real optical images, the pseudo-optical images obtained in S4 and randomly simulated cloud coverage images, and is constructed according to the following steps:
s51, generating an island-shaped cloud mask in a random mode, covering the cloud mask into the real optical image, and generating a simulated cloud covering image;
s52 divides the reconstructed data set into: the cloud region training sample data set is 60%, the cloud region verification sample data set is 10%, and the cloud region testing sample data set is 30%.
S6, building, training and tuning the cloud region information reconstruction model: building the cloud region information reconstruction model shown in fig. 4, inputting the paired pseudo-optical images and simulated cloud coverage images into the reconstruction model in batches, training the reconstruction model with the real optical images as guidance using the adaptive moment estimation optimization algorithm, evaluating the reconstruction model precision on the verification sample data set during training, adjusting and optimizing the model weights, and achieving model convergence after multiple full training passes, which is implemented according to the following steps:
s61, a cloud region information reconstruction model is built as shown in figure 4; the reconstruction model adopts a generative adversarial network framework composed of a generator and a discriminator; the generator adopts the same encoder-decoder structure as the SAR-optical migration model and reconstructs the cloud coverage image into a cloud-free image; the discriminator adopts a five-layer convolutional network structure, where the first to fourth layers are combinations of convolution layers, instance normalization and rectified linear units used to extract image features, and the last module of the discriminator is a single convolution layer used to output the discrimination result;
s62, selecting a cloud area information reconstruction training sample data set, inputting the cloud area information reconstruction training sample data set into a built cloud area information reconstruction model in batches, and calculating an output value of the model in a forward direction, wherein the batch size is set to be B;
s63, calculating a loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
Loss = \lambda_1 L_{mae2} + \lambda_2 L_{perc} + \lambda_3 L_{style} + \lambda_4 L_{tv}   (2),

in formula (2), Loss is the total loss function; L_{mae2}, L_{perc}, L_{style}, L_{tv} are respectively the mean absolute error loss, perceptual feature loss, perceptual style loss and total variation loss of the cloud region, and \lambda_1, \lambda_2, \lambda_3, \lambda_4 are their respective weight hyperparameters;
the cloud region average absolute error loss L mae2 Expressed as true value I gt And a predicted value I pred Is minimized, which is calculated as follows:
Figure BDA0004010823060000101
in formula (3), M is the number of samples, I gt For true cloudless images, L pred The cloud removal image generated by the reconstruction model is a predicted value;
the perceptual feature loss L_{perc} of the cloud region penalizes predictions that are not perceptually similar to the labels by defining a distance measure between activation maps of a pre-trained network, calculated as follows:

L_{perc} = \sum_{j} \left\| \phi_j(I_{pred}) - \phi_j(I_{gt}) \right\|_1   (4)
in formula (4), \phi corresponds to the openly available VGG-19 pre-trained network; the activation maps are also applied to calculate the perceptual style loss between the predicted values and the true values of the images in the cloud region, where the perceptual style loss is calculated as follows:

L_{style} = \sum_{j} \left\| G_j^{\phi}(I_{pred}) - G_j^{\phi}(I_{gt}) \right\|_1   (5)

in formula (5), G^{\phi} is the Gram matrix constructed from the activation maps \phi, used to constrain style characteristics between images and to weaken spectral distortion;
the total variation loss L_tv is a common regularization term used to maintain image smoothness during the training of the reconstruction model and to effectively suppress noise; it is calculated as follows:
L_tv = Σ_{i,j} ((x_{i+1,j} - x_{i,j})^2 + (x_{i,j+1} - x_{i,j})^2)^(β/2)    (6),
in equation (6), x_{i,j} is the pixel value at position (i, j) in the image, and β adjusts the degree of total variation;
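For illustration only, the four loss terms in equations (2) to (6) can be sketched in NumPy. The VGG-19 activation map φ is replaced here by a toy gradient feature so the sketch stays self-contained, and the weights λ1 to λ4 are hypothetical values not published in this document:

```python
import numpy as np

def l_mae(pred, gt):
    # Eq. (3): mean absolute error between the prediction and the
    # cloud-free ground truth
    return np.mean(np.abs(pred - gt))

def toy_phi(img):
    # Hypothetical stand-in for a VGG-19 activation map: horizontal
    # image gradients serve as "features" in this sketch
    return img[:, 1:] - img[:, :-1]

def gram(feat):
    # Gram matrix of a (C, H, W) feature map, used to constrain style
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    return f @ f.T / f.shape[1]

def l_perc(pred, gt):
    # Eq. (4): L1 distance between activation maps
    return np.mean(np.abs(toy_phi(pred) - toy_phi(gt)))

def l_style(pred, gt):
    # Eq. (5): L1 distance between Gram matrices of the activation maps
    return np.mean(np.abs(gram(toy_phi(pred)[None]) - gram(toy_phi(gt)[None])))

def l_tv(img, beta=2.0):
    # Eq. (6): total variation; beta adjusts the degree of smoothing
    dh = img[1:, :-1] - img[:-1, :-1]
    dw = img[:-1, 1:] - img[:-1, :-1]
    return np.sum((dh ** 2 + dw ** 2) ** (beta / 2))

def total_loss(pred, gt, lam=(1.0, 0.05, 120.0, 0.1)):
    # Eq. (2): weighted sum of the four terms; the weights lam are
    # illustrative choices, not values taken from this document
    return (lam[0] * l_mae(pred, gt) + lam[1] * l_perc(pred, gt)
            + lam[2] * l_style(pred, gt) + lam[3] * l_tv(pred))
```

A perfect, constant prediction yields zero loss; any residual between prediction and truth, or any high-frequency noise in the prediction, raises the total.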
s64, the generator and the discriminator are trained alternately; as the number of training iterations increases, the two continuously improve their respective performance through adversarial learning until Nash equilibrium is reached; the optimizer uses the adaptive moment estimation gradient descent algorithm to minimize the network loss and to update and optimize the parameters of the network model;
s65, evaluating the model on the verification set: after each training iteration, the performance of the cloud region information reconstruction model is evaluated on the verification sample data set, the network model parameters are adjusted according to the change in reconstruction accuracy, and network model convergence is achieved after multiple rounds of training.
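The adaptive moment estimation (Adam) update used by the optimizer in s64 follows a standard form; the NumPy sketch below uses the commonly cited default hyperparameters rather than values taken from this document, and drives a toy quadratic loss toward its minimum:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Update biased first- and second-moment estimates of the gradient
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Bias-correct the moments, then take the parameter step
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize the toy loss f(theta) = theta^2, whose gradient is 2*theta
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 3001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

In the reconstruction model the gradient would come from back-propagating the total loss of equation (2) instead of this toy function.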
S7, removing thick clouds from the test sample data set: based on the reconstruction model trained in S6, the paired pseudo-optical and cloud-covered images in the cloud region test sample data set are input in batches, and cloud-free optical images are generated, completing the thick cloud removal task.
Referring to fig. 5, fig. 5 compares the cloud removal results of the proposed method with those of a current advanced deep learning cloud removal method. As can be seen from fig. 5, because the proposed method introduces the pseudo-optical image generated by SAR migration as reference information and adopts the cloud region information reconstruction model, compared with the existing deep-learning-based thick cloud removal method, the cloud region edges transition more naturally, the ground objects are reconstructed with higher fidelity, and the cloud-removed image has the best visual effect.
In summary, the method for removing thick clouds from remote sensing images provided by the invention first makes an SAR and corresponding optical remote sensing image data set, comprising a training sample data set, a verification sample data set and a test sample data set; secondly, an SAR-optical migration model is built, trained and optimized to convergence, and the SAR images in the test sample data set are input into the migration model to generate pseudo-optical images, completing the SAR-optical migration; then an optical image cloud region reconstruction model is built, trained and optimized to convergence, the paired pseudo-optical and cloud-covered images in the test sample data set are input into the reconstruction model, and cloud-free optical images are generated, finally completing the thick cloud removal.

Claims (7)

1. A deep learning cloud removing method based on SAR-optical remote sensing image combination is characterized by comprising the following steps:
s1, data preprocessing: inputting an SAR and an optical remote sensing image to be processed, carrying out geographic coordinate registration, data enhancement and normalization preprocessing, and obtaining an SAR-optical remote sensing image data set corresponding to geographic coordinates;
s2, constructing SAR-optical migration sample data sets: on the basis of the SAR-optical remote sensing image data set constructed in S1, constructing three SAR-optical paired migration sample data sets, which are respectively: a training sample data set, a verification sample data set and a test sample data set;
s3, building, training and tuning an SAR-optical image migration model: building an SAR-optical image migration model, inputting the SAR training samples into the SAR-optical image migration model in batches, training the migration model by using the adaptive moment estimation optimization algorithm with the paired optical training samples as guidance, evaluating the accuracy of the migration model on the verification sample data set during training, adjusting and optimizing the model weights, and completing model convergence after multiple full training passes;
s4, generating pseudo-optical images by SAR-optical migration: based on the migration model trained in S3, the SAR images in the test sample data set of S2 are input and migrated into three-channel RGB pseudo-optical images, which serve as auxiliary data for reconstructing information in the cloud-covered areas;
s5, constructing a cloud region information reconstruction sample data set: the reconstruction data set comprises a real optical image, a pseudo optical image obtained in S4 and a random simulated cloud coverage image, and is divided into the following parts in proportion: a cloud region training sample data set, a cloud region verification sample data set and a cloud region test sample data set;
s6, building, training and tuning a cloud region information reconstruction model: building a cloud region information reconstruction model, inputting the paired pseudo-optical and simulated cloud-covered images into the reconstruction model in batches, training the reconstruction model by using the adaptive moment estimation optimization algorithm with the real optical images as guidance, evaluating the accuracy of the reconstruction model on the verification sample data set during training, adjusting and optimizing the model weights, and completing model convergence after multiple full training passes;
s7, removing thick clouds from the test sample data set: based on the reconstruction model trained in S6, the paired pseudo-optical and cloud-covered images in the cloud region test sample data set are input in batches, and cloud-free optical images are generated, completing the thick cloud removal task.
2. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 1, wherein S1 is implemented according to the following method:
s11, inputting SAR and optical remote sensing images of the same region at different time phases, performing geographic registration processing, and generating position-matched SAR-optical image pairs;
s12, inputting the SAR-optical image data generated in S11 in pairs, cutting them into image blocks of size P × P, and enhancing the data through random-angle rotation, proportional scaling and addition of Gaussian noise;
s13, inputting the SAR-optical image data enhanced in S12 in pairs, processing them with a normalization method to normalize the image pixel values into the range 0-1, and obtaining the SAR-optical remote sensing image data set with corresponding geographic coordinates.
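The tiling, enhancement and normalization steps above can be sketched in NumPy; the tile size, the 90-degree rotation standing in for the patent's random-angle rotation, and the noise level are hypothetical choices for illustration:

```python
import numpy as np

def tile(img, p):
    # S12: cut a registered image into non-overlapping P x P blocks
    # (a ragged border, if any, is discarded in this sketch)
    h, w = img.shape[0] // p * p, img.shape[1] // p * p
    return [img[i:i + p, j:j + p]
            for i in range(0, h, p) for j in range(0, w, p)]

def augment(block, rng, sigma=0.01):
    # S12: data enhancement; a random 90-degree rotation stands in for
    # random-angle rotation, followed by additive Gaussian noise
    block = np.rot90(block, k=int(rng.integers(0, 4)))
    return block + rng.normal(0.0, sigma, block.shape)

def normalize(block):
    # S13: min-max normalization of pixel values into the range [0, 1]
    lo, hi = block.min(), block.max()
    return (block - lo) / (hi - lo + 1e-12)
```

In practice the same geometric augmentation must be applied identically to each SAR block and its paired optical block so the pair stays registered.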
3. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 1, wherein the migration sample data sets in S2 comprise: the geographically matched SAR and optical images are divided proportionally into three SAR-optical paired migration sample data sets: a training sample data set of 60%, a verification sample data set of 10% and a test sample data set of 30%.
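The 60% / 10% / 30% split described in this claim can be sketched as a shuffled partition of the paired samples; the shuffling seed below is arbitrary:

```python
import numpy as np

def split_dataset(pairs, seed=0):
    # Shuffle the paired SAR-optical samples, then split 60% / 10% / 30%
    # into training, verification and test sets
    idx = np.random.default_rng(seed).permutation(len(pairs))
    n_tr, n_va = int(0.6 * len(pairs)), int(0.1 * len(pairs))
    train = [pairs[i] for i in idx[:n_tr]]
    val = [pairs[i] for i in idx[n_tr:n_tr + n_va]]
    test = [pairs[i] for i in idx[n_tr + n_va:]]
    return train, val, test
```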
4. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 1, wherein S3 is implemented according to the following method:
s41, building an SAR-optical image migration model with an encoding-decoding structure, wherein the encoding-decoding structure comprises: five encoders and their corresponding decoders, with four additional residual connection modules at the network bottleneck; each encoder is a combination of a convolution layer performing a down-sampling operation, an instance normalization layer and a leaky rectified linear unit; each decoder comprises a deconvolution layer performing an up-sampling operation, an instance normalization layer and a rectified linear unit; and each encoder is associated with its corresponding decoder through a skip connection;
s42, selecting the SAR-optical paired remote sensing image training sample data set, inputting it in batches into the built SAR-optical image migration model, and computing the output values of the model in a forward pass, wherein the batch size is set to B;
s43, calculating the loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
L_mae1 = (1/N) · Σ_{n=1}^{N} |I_fake - I_real|    (1),
in equation (1), L_mae1 is the mean absolute error loss, N is the number of samples, I_fake is the generated pseudo-optical image, and I_real is the real optical image;
s44, the optimizer uses the adaptive moment estimation gradient descent algorithm to minimize the network loss and to update and optimize the parameters of the network model;
s45, after each training iteration, the performance of the migration model is evaluated on the verification sample data set, the network model parameters are adjusted according to the change in migration accuracy, and network model convergence is achieved after multiple rounds of adjustment and training.
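For illustration only, the data flow of the five-level encoding-decoding structure with skip connections described above can be sketched in NumPy, with 2x2 average pooling standing in for the strided encoder convolutions and nearest-neighbour upsampling for the decoder deconvolutions; the learned layers and the residual bottleneck modules are omitted, so only shapes and connectivity are shown:

```python
import numpy as np

def down(x):
    # Stand-in for a stride-2 encoder convolution: 2x2 average pooling
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    # Stand-in for a decoder deconvolution: nearest-neighbour upsampling
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encode_decode(x, levels=5):
    skips = []
    for _ in range(levels):          # encoder path
        skips.append(x)
        x = down(x)
    # (four residual bottleneck modules omitted in this sketch)
    for skip in reversed(skips):     # decoder path with skip connections
        x = up(x) + skip
    return x
```

With five levels the input side length must be divisible by 2^5 = 32, and the output shape matches the input shape, as required for image-to-image migration.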
5. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 1, wherein S5 is implemented according to the following method:
s51, generating an island-shaped cloud mask in a random manner, overlaying the cloud mask onto the real optical image, and generating a simulated cloud-covered image;
s52 divides the reconstructed data set into: 60% of cloud region training sample data set, 10% of cloud region verification sample data set and 30% of cloud region test sample data set.
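One hedged way to realize the random island-shaped cloud mask of S51 is to threshold smoothed random noise so that the masked pixels form contiguous patches; the blur radius, threshold and cloud brightness below are hypothetical choices, not values specified in this document:

```python
import numpy as np

def cloud_mask(shape, threshold=0.55, blur=5, seed=0):
    # Box-blur uniform noise, then threshold it; the surviving True
    # pixels form contiguous "islands" rather than isolated speckles
    rng = np.random.default_rng(seed)
    noise = rng.random(shape)
    pad = blur // 2
    padded = np.pad(noise, pad, mode="edge")
    smooth = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            smooth[i, j] = padded[i:i + blur, j:j + blur].mean()
    return smooth > threshold

def simulate_cloud_cover(optical, mask, cloud_value=1.0):
    # S51: overwrite the masked pixels with a bright, cloud-like value
    out = optical.copy()
    out[mask] = cloud_value
    return out
```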
6. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 4, wherein S6 is implemented according to the following method:
s61, building a cloud region information reconstruction model, wherein the reconstruction model adopts a generative adversarial network framework composed of a generator and a discriminator; the generator adopts the same encoding-decoding structure as the SAR-optical migration model and reconstructs the cloud-covered image into a cloud-free image; the discriminator model adopts a five-layer convolutional network structure, in which the first to fourth layers are combinations of a convolution layer, instance normalization and a rectified linear unit used to extract image features, and the last module of the discriminator is a single convolution layer used to output the discrimination result;
s62, selecting a cloud area information reconstruction training sample data set, inputting the cloud area information reconstruction training sample data set into the built cloud area information reconstruction model of S61 in batches, and calculating the output value of the model in a forward direction, wherein the batch size is set as B;
s63, calculating a loss function of the network model and performing back propagation, wherein the loss function is calculated according to the following formula:
Loss = λ1·L_mae2 + λ2·L_perc + λ3·L_style + λ4·L_tv    (2),
in equation (2), Loss is the total loss function; L_mae2, L_perc, L_style and L_tv are the cloud region mean absolute error loss, the perceptual feature loss, the perceptual style loss and the total variation loss, respectively; and λ1, λ2, λ3 and λ4 are the corresponding weight hyperparameters;
s64, the generator and the discriminator are trained alternately; as the number of training iterations increases, the two continuously improve their respective reconstruction and discrimination performance through adversarial learning until Nash equilibrium is reached; the optimizer uses the adaptive moment estimation gradient descent algorithm to minimize the network loss and to update and optimize the parameters of the network model;
s65, evaluating the model on the verification set: after each training iteration, the performance of the cloud region information reconstruction model is evaluated on the verification sample data set, the network model parameters are adjusted according to the change in reconstruction accuracy, and network model convergence is achieved after multiple rounds of training.
7. The SAR-optical remote sensing image combination-based deep learning cloud removing method according to claim 6, wherein the cloud region mean absolute error loss L_mae2 in S63 minimizes the absolute difference between the true value I_gt and the predicted value I_pred, and is calculated as follows:
L_mae2 = (1/M) · Σ_{m=1}^{M} |I_gt - I_pred|    (3),
in equation (3), M is the number of samples, I_gt is the true cloud-free image, and I_pred is the predicted image, namely the cloud-removed image generated by the reconstruction model;
the perceptual feature loss L_perc penalizes prediction results that are dissimilar to the label in perceptual features by measuring distances between activation maps of a pre-trained network, and is calculated as follows:
L_perc = Σ_p ||φ_p(I_pred) - φ_p(I_gt)||_1    (4),
in equation (4), φ_p is the activation map of the p-th selected layer of the public VGG-19 pre-trained network; the same activation maps are also applied to calculate the perceptual style loss between the predicted and true images in the cloud region, and the cloud region perceptual style loss is calculated as follows:
L_style = Σ_p ||G_φ(φ_p(I_pred)) - G_φ(φ_p(I_gt))||_1    (5),
in equation (5), G_φ is the Gram matrix constructed from the activation map φ; it constrains the style features between images and weakens image spectral distortion;
the total variation loss L_tv is a common regularization term used to maintain image smoothness during the training of the reconstruction model and to effectively suppress noise; it is calculated as follows:
L_tv = Σ_{i,j} ((x_{i+1,j} - x_{i,j})^2 + (x_{i,j+1} - x_{i,j})^2)^(β/2)    (6),
in equation (6), x_{i,j} is the pixel value at position (i, j) in the image, and β is used to adjust the degree of total variation.
CN202211651396.7A 2022-12-21 2022-12-21 Deep learning cloud removing method based on SAR-optical remote sensing image combination Pending CN115809970A (en)

Publications (1)

Publication Number Publication Date
CN115809970A true CN115809970A (en) 2023-03-17



Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823664A (en) * 2023-06-30 2023-09-29 中国地质大学(武汉) Remote sensing image cloud removal method and system
CN116823664B (en) * 2023-06-30 2024-03-01 中国地质大学(武汉) Remote sensing image cloud removal method and system
CN116524374A (en) * 2023-07-03 2023-08-01 江苏省地质调查研究院 Satellite image real-time processing and distributing method and system
CN116524374B (en) * 2023-07-03 2023-09-26 江苏省地质调查研究院 Satellite image real-time processing and distributing method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination