CN114331882B - Generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images - Google Patents


Info

Publication number
CN114331882B
CN114331882B · CN202111570455.3A
Authority
CN
China
Prior art keywords
resolution
network
cloud
input
branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111570455.3A
Other languages
Chinese (zh)
Other versions
CN114331882A (en)
Inventor
Li Jun (李俊)
Zhou Mingwei (周名威)
Sheng Qinghong (盛庆红)
Wang Bo (王博)
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202111570455.3A
Publication of CN114331882A
Application granted
Publication of CN114331882B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for removing thin cloud from remote sensing images using a generative adversarial network that fuses multispectral features, comprising the following steps: acquire pairs of cloudy and cloud-free multispectral remote sensing images of the same region taken at short time intervals; build a generator network with multiple input and output branches; build a discriminator network with multiple input branches; feed the bands of the cloudy image, at their different resolutions, into the corresponding generator branches to obtain cloud-removed images at each resolution; and feed the corresponding bands of the cloud-removed and cloud-free images into the discriminator branches to judge whether the cloud-removed images are clear. The trained generator can then be applied directly to thin cloud removal from multispectral remote sensing images. The method removes thin cloud effectively in every band and improves the generator's thin cloud removal capability across bands of different resolutions.

Description

Generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for removing thin cloud from remote sensing images using a generative adversarial network that fuses multispectral features.
Background
Cloud cover has always been a major factor limiting the usability of optical remote sensing satellite imagery. Clouds can be divided into thick and thin clouds. Thick clouds block the optical signal completely and cannot be removed from a single scene alone. Thin clouds, however, transmit part of the ground-object signal to the satellite sensor, and the underlying signal can be recovered from this partially transmitted component. Because thin cloud has a different transmittance in each band, the less-affected bands of a cloudy multispectral image can help recover the information in the more strongly affected bands.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images, which removes thin cloud effectively in every band and improves the generator's thin cloud removal capability across bands of different resolutions.
To solve this technical problem, the invention provides a method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features, comprising the following steps:
s01: acquiring multiple pairs of cloudy and cloud-free multispectral remote sensing images; slicing the bands of each resolution with sliding windows and strides scaled in the same ratio as the resolutions; and grouping the different-resolution bands of the same region together, wherein the slices of different resolutions within a group share the same index;
s02: dividing the slices at the resolution of the visible bands into cloudy slices and cloud-free slices, then assigning the slices at the other resolutions to the cloudy or cloud-free set according to their slice indices, and splitting the data set 4:1 into a training set and a test set;
s03: constructing a generator network with multi-resolution inputs and outputs;
s04: constructing a discriminator network with multi-resolution inputs;
s05: training the generator network and the discriminator network with the training set prepared in step S02; when the losses of both networks have converged to a small value, training is complete, and the resulting generator network can remove thin cloud from multispectral remote sensing images;
s06: feeding the test set prepared in step S02 to the generator network trained in step S05 to test its thin cloud removal effect.
Preferably, in step S02, when the images are sliced, windows of different sizes are used for the different bands according to the resolution ratio, so as to obtain image sets of different resolutions that correspond exactly to the same region.
Preferably, in step S03, the generator network comprises multiple input/output branches and a cascaded feature-fusion channel. The input branches handle bands of different resolutions, and each branch first processes its input with a convolutional layer that preserves resolution. Every branch except the lowest-resolution branch then uses a further convolutional layer to resample, reducing its resolution to match the input of the next branch, and is concatenated with that branch's output along the channel dimension, until the lowest resolution has been processed. The cascaded feature-fusion channel comprises several convolution, deconvolution, and skip-connection convolution modules that fully fuse the features of all resolutions. Each output branch uses deconvolution to raise the features back to the resolution of the corresponding input branch, then a convolution to produce an output with the specified number of channels.
Preferably, in step S03, the generator network with multi-resolution inputs and outputs processes the high- and medium-resolution bands with convolutions, fuses them with the low-resolution band features, extracts and fuses features at different levels with several cascaded convolution-deconvolution modules, restores the high- and medium-resolution features with deconvolution, and finally outputs a thin cloud removal result with the specified number of channels using a convolution.
Preferably, in step S04, the discriminator network comprises multiple input branches and a feature-extraction channel of successive convolutions. The input branches separately process the cloud-removed bands at each resolution, each branch consisting of one convolutional layer. The convolution of the first branch reduces its resolution to match the input of the second branch, while the convolutions of the later branches leave resolution unchanged. The output of the first branch is concatenated with the output of the second branch along the channel dimension and fed jointly to a convolutional layer that reduces the resolution to that of the next branch. Once all branches have been processed, the result is fed to a feature-extraction channel of three convolutional layers, which finally outputs a judgment of whether the input images at all resolutions contain cloud.
Preferably, in step S04, the discriminator network with multi-resolution inputs can simultaneously judge whether cloud remains in the different-resolution bands of the same region, thereby supervising and improving the thin cloud removal effect in each resolution band.
Preferably, in step S05, the generative adversarial network is trained with thin cloud removal training data sets at the different resolutions, so that the multiple-input multiple-output generator network acquires thin cloud removal capability in bands of every resolution.
Preferably, in step S06, the trained generator network is tested with thin cloud removal test data sets at the different resolutions, measuring its thin cloud removal capability in bands of different resolutions so that the network's training parameters can be adjusted and optimized.
The invention has the following beneficial effects: the multiple-input multiple-output generator network automatically fuses the spectral features of bands of different resolutions, removing thin cloud in every band more effectively; at the same time, the multiple-input discriminator network simultaneously supervises the thin cloud removal effect in the bands of different resolutions and feeds the result back to the generator network, further improving the generator's thin cloud removal capability across bands of different resolutions.
Drawings
Fig. 1 is a schematic diagram of the generator network structure of the present invention.
Fig. 2 is a schematic diagram of the discriminator network structure of the present invention.
Detailed Description
The method of the invention was implemented on a computer with the following configuration: an 11th Gen Intel(R) Core(TM) i9-11900KF processor at 3.50 GHz (16 logical processors), an Nvidia GeForce RTX 3080 Ti graphics processor, 64 GB of memory, and the Windows 11 operating system. The generative adversarial network fusing multispectral features for thin cloud removal from remote sensing images was implemented on the TensorFlow 2.4 deep learning framework. The invention provides a method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features, which specifically comprises the following steps:
S01: acquire multiple pairs of cloudy and cloud-free multispectral remote sensing images; slice the bands of each resolution with sliding windows and strides scaled in the same ratio as the resolutions; and group the different-resolution bands of the same area together, so that the slices of different resolutions within a group share the same index.
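The slicing scheme above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the patent; the 64-pixel base window, the stride, and the function names are assumptions chosen for the example.

```python
import numpy as np

def slice_band(band, window, stride):
    """Cut a single band into square slices with a sliding window."""
    tiles = []
    h, w = band.shape
    for r in range(0, h - window + 1, stride):
        for c in range(0, w - window + 1, stride):
            tiles.append(band[r:r + window, c:c + window])
    return tiles

def slice_multires(bands, base_window=64, base_stride=64):
    """Slice bands of different resolutions so that slice i of every band
    covers the same ground area. `bands` maps a resolution scale factor
    (1 = finest) to a 2-D array; window and stride shrink with the factor."""
    groups = {}
    for scale, band in bands.items():
        groups[scale] = slice_band(band, base_window // scale, base_stride // scale)
    return groups

# A 256x256 fine band paired with a 128x128 coarse band of the same scene
bands = {1: np.zeros((256, 256)), 2: np.zeros((128, 128))}
groups = slice_multires(bands)
# Both resolutions yield the same number of co-registered slices
assert len(groups[1]) == len(groups[2]) == 16
```

Because window and stride shrink by the same factor as the resolution, slice i of every band always covers the same ground footprint, which is what lets slices share an index within a group.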
S02: divide the slices at the resolution of the visible bands into cloudy slices and cloud-free slices; assign the slices at the other resolutions to the cloudy or cloud-free set according to their slice indices; and split the data set 4:1 into a training set and a test set.
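The 4:1 split keyed on slice index can be sketched as follows. The helper name, the seed, and the use of a shuffled index list are assumptions for illustration; splitting by index is what keeps all bands of one scene in the same subset.

```python
import random

def split_4_to_1(pair_indices, seed=0):
    """Split paired slice indices 4:1 into train/test sets. Because the
    cloudy and cloud-free slices of every resolution share the same index,
    splitting by index keeps all bands of a scene in the same subset."""
    rng = random.Random(seed)
    idx = list(pair_indices)
    rng.shuffle(idx)
    cut = len(idx) * 4 // 5  # 4:1 ratio
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_4_to_1(range(100))
assert len(train_idx) == 80 and len(test_idx) == 20
assert set(train_idx).isdisjoint(test_idx)
```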
S03: construct a generator network with multi-resolution inputs and outputs. The generator network consists of multiple input/output branches and a cascaded feature-fusion channel. The input branches handle bands of different resolutions, and each branch first processes its input with a convolutional layer that preserves resolution. Every branch except the lowest-resolution branch then uses a further convolutional layer to resample, reducing its resolution to match the input of the next branch, and is concatenated with that branch's output along the channel dimension, until the lowest resolution has been processed. The cascaded feature-fusion channel consists of several convolution, deconvolution, and skip-connection convolution modules, which fully fuse the features of all resolutions. Each output branch uses deconvolution to raise the features back to the resolution of the corresponding input branch, then a convolution to produce an output with the specified number of channels, as shown in Fig. 1.
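The branch-and-fuse structure described above can be checked with a small shape-propagation sketch in plain Python, without a deep learning framework: each value is a (height, width, channels) tuple, a strided `conv` models the resampling convolution, `deconv` models the upsampling output head, and `concat` models the channel-level fusion. All layer sizes and channel counts here are assumptions for illustration, not the patent's actual configuration.

```python
def conv(shape, out_ch, stride=1):
    """Model a convolution: spatial size divided by stride, channels set."""
    h, w, _ = shape
    return (h // stride, w // stride, out_ch)

def deconv(shape, out_ch, stride=2):
    """Model a deconvolution: spatial size multiplied by stride."""
    h, w, _ = shape
    return (h * stride, w * stride, out_ch)

def concat(a, b):
    """Model channel-level concatenation of two equal-sized feature maps."""
    assert a[:2] == b[:2], "channel concat requires equal spatial size"
    return (a[0], a[1], a[2] + b[2])

# Two input branches: a 64x64 high-res band slice and a 32x32 low-res slice
hi = conv((64, 64, 4), 32)        # branch conv, resolution unchanged
lo = conv((32, 32, 2), 32)
hi_down = conv(hi, 32, stride=2)  # resample high-res branch to low-res size
fused = concat(hi_down, lo)       # fuse at the channel level
fused = conv(fused, 64)           # cascaded fusion convolutions
hi_out = deconv(fused, 4)         # deconv back to high-res, 4 output channels
lo_out = conv(fused, 2)           # low-res output head
assert hi_out == (64, 64, 4) and lo_out == (32, 32, 2)
```

The assertions confirm the key invariant of the design: every output branch returns to the exact resolution and channel count of its input branch.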
S04: construct a discriminator network with multi-resolution inputs. The discriminator network consists of multiple input branches and a feature-extraction channel of successive convolutions. The input branches separately process the cloud-removed bands at each resolution, each branch consisting of one convolutional layer. The convolution of the first branch reduces its resolution to match the input of the second branch, while the convolutions of the later branches leave resolution unchanged. The output of the first branch is concatenated with the output of the second branch along the channel dimension and fed jointly to a convolutional layer that reduces the resolution to that of the next branch. Once all branches have been processed, the result is fed to a feature-extraction channel of three convolutional layers, which finally outputs a judgment of whether the input images at all resolutions contain cloud, as shown in Fig. 2.
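The discriminator's progressive downsample-and-concatenate flow can be sketched the same way. This is again a shape-only illustration with assumed sizes and channel counts, not the patent's configuration.

```python
def conv(shape, out_ch, stride=1):
    """Model a convolution: spatial size divided by stride, channels set."""
    h, w, _ = shape
    return (h // stride, w // stride, out_ch)

def concat(a, b):
    """Model channel-level concatenation of two equal-sized feature maps."""
    assert a[:2] == b[:2], "channel concat requires equal spatial size"
    return (a[0], a[1], a[2] + b[2])

def disc_forward(shapes):
    """Propagate shapes through the multi-branch discriminator: each branch
    is one convolution; the accumulated features are strided down to the
    next branch's size and concatenated with it, then three convolutions
    extract the features for the cloud / no-cloud judgment."""
    feat = None
    for shape in shapes:              # ordered from high to low resolution
        branch = conv(shape, 32)      # per-branch convolutional layer
        if feat is None:
            feat = branch
        else:
            feat = conv(feat, 32, stride=feat[0] // branch[0])
            feat = concat(feat, branch)
    for _ in range(3):                # shared feature-extraction convolutions
        feat = conv(feat, 64, stride=2)
    return feat

# Cloud-removed slices at two resolutions enter their own branches
assert disc_forward([(64, 64, 4), (32, 32, 2)]) == (4, 4, 64)
```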
S05: train the generator network and the discriminator network with the training set prepared in step S02. When the losses of both networks have converged to a small value, training is complete, and the resulting generator network can remove thin cloud from multispectral remote sensing images.
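The patent does not specify its loss functions, so the following is a conventional GAN-style sketch: binary cross-entropy adversarial terms averaged over the per-resolution discriminator scores, plus a pix2pix-style L1 reconstruction term. The function names, the L1 weight of 100, and the per-branch averaging are all illustrative assumptions.

```python
import math

def bce(p, target):
    """Binary cross-entropy of one predicted probability against 0 or 1."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def d_loss(d_real, d_fake):
    """Discriminator loss: cloud-free slices should score 1 (real) and
    de-clouded generator outputs 0 (fake), averaged over the scores of
    the per-resolution branches."""
    pairs = list(zip(d_real, d_fake))
    return sum(bce(r, 1.0) + bce(f, 0.0) for r, f in pairs) / len(pairs)

def g_loss(d_fake, l1_per_band, adv_weight=1.0, l1_weight=100.0):
    """Generator loss: fool the discriminator at every resolution and stay
    close to the cloud-free reference via a per-band L1 term."""
    adv = sum(bce(f, 1.0) for f in d_fake) / len(d_fake)
    rec = sum(l1_per_band) / len(l1_per_band)
    return adv_weight * adv + l1_weight * rec

# A discriminator that separates real from fake well incurs a low d_loss
assert d_loss([0.9, 0.9], [0.1, 0.1]) < d_loss([0.5, 0.5], [0.5, 0.5])
```

Training alternates minimizing `d_loss` over the discriminator's parameters and `g_loss` over the generator's; convergence of both to small, stable values is the stopping criterion named in this step.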
S06: feed the test set prepared in step S02 to the generator network trained in step S05 to test its thin cloud removal effect.

Claims (7)

1. A method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features, characterized by comprising the following steps:
s01: acquiring multiple pairs of cloudy and cloud-free multispectral remote sensing images; slicing the bands of each resolution with sliding windows and strides scaled in the same ratio as the resolutions; and grouping the different-resolution bands of the same region together, wherein the slices of different resolutions within a group share the same index;
s02: dividing the slices at the resolution of the visible bands into cloudy slices and cloud-free slices, then assigning the slices at the other resolutions to the cloudy or cloud-free set according to their slice indices, and splitting the data set 4:1 into a training set and a test set;
s03: constructing a generator network with multi-resolution inputs and outputs; the generator network comprises multiple input/output branches and a cascaded feature-fusion channel; the input branches handle bands of different resolutions, and each branch first processes its input with a convolutional layer that preserves resolution; every branch except the lowest-resolution branch then uses a further convolutional layer to resample, reducing its resolution to match the input of the next branch, and is concatenated with that branch's output along the channel dimension, until the lowest resolution has been processed; the cascaded feature-fusion channel comprises several convolution, deconvolution, and skip-connection convolution modules for fully fusing the features of all resolutions; each output branch uses deconvolution to raise the features back to the resolution of the corresponding input branch, and then a convolution to produce an output with the specified number of channels;
s04: constructing a discriminator network with multi-resolution inputs;
s05: training the generator network and the discriminator network with the training set prepared in step S02; when the losses of both networks have converged and no longer decrease, training is complete, and the resulting generator network can be used to remove thin cloud from multispectral remote sensing images;
s06: feeding the test set prepared in step S02 to the generator network trained in step S05 to test its thin cloud removal effect.
2. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 1, characterized in that in step S02, when the images are sliced, windows of different sizes are used for the different bands according to the resolution ratio, so as to obtain image sets of different resolutions that correspond exactly to the same region.
3. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 1, characterized in that in step S03, the generator network with multi-resolution inputs and outputs processes the high- and medium-resolution bands with convolutions, fuses them with the low-resolution band features, extracts and fuses features at different levels with several cascaded convolution-deconvolution modules, restores the high- and medium-resolution features with deconvolution, and finally outputs a thin cloud removal result with the specified number of channels using a convolution.
4. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 1, characterized in that in step S04, the discriminator network comprises multiple input branches and a feature-extraction channel of successive convolutions; the input branches separately process the cloud-removed bands at each resolution, each branch consisting of one convolutional layer; the convolution of the first branch reduces its resolution to match the input of the second branch, while the convolutions of the later branches leave resolution unchanged; the output of the first branch is concatenated with the output of the second branch along the channel dimension and fed jointly to a convolutional layer that reduces the resolution to that of the next branch; once all branches have been processed, the result is fed to a feature-extraction channel of three convolutional layers, which finally outputs a judgment of whether the input images at all resolutions contain cloud.
5. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 4, characterized in that in step S04, the discriminator network with multi-resolution inputs can simultaneously judge whether cloud remains in the different-resolution bands of the same region, thereby supervising and improving the thin cloud removal effect in each resolution band.
6. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 1, characterized in that in step S05, the generative adversarial network is trained with thin cloud removal training data sets at the different resolutions, so that the multiple-input multiple-output generator network acquires thin cloud removal capability in bands of every resolution.
7. The method for thin cloud removal from remote sensing images using a generative adversarial network fusing multispectral features according to claim 1, characterized in that in step S06, the trained generator network is tested with thin cloud removal test data sets at the different resolutions, measuring the multiple-input multiple-output generator network's thin cloud removal capability in bands of different resolutions, so that the network's training parameters can be adjusted and optimized.
CN202111570455.3A 2021-12-21 2021-12-21 Generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images Active CN114331882B (en)

Priority Applications (1)

Application Number: CN202111570455.3A (CN114331882B) · Priority Date: 2021-12-21 · Filing Date: 2021-12-21 · Title: Generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images


Publications (2)

Publication Number: CN114331882A (en) · Publication Date: 2022-04-12
Publication Number: CN114331882B (en) · Publication Date: 2023-03-28

Family

ID=81054941

Family Applications (1)

Application Number: CN202111570455.3A · Priority Date: 2021-12-21 · Filing Date: 2021-12-21 · Status: Active · Title: Generative adversarial network method fusing multispectral features for thin cloud removal from remote sensing images

Country Status (1)

Country Link
CN (1) CN114331882B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294392B (en) * 2022-08-09 2023-05-09 安徽理工大学 Visible light remote sensing image cloud removal method and system based on network model generation

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN110570353B (en) * 2019-08-27 2023-05-12 天津大学 Super-resolution reconstruction method for generating single image of countermeasure network by dense connection
CN113361466B (en) * 2021-06-30 2024-03-12 江南大学 Multispectral target detection method based on multi-mode cross guidance learning
CN113724149B (en) * 2021-07-20 2023-09-12 北京航空航天大学 Weak-supervision visible light remote sensing image thin cloud removing method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant