CN117495723B: Unpaired data remote sensing image thin cloud removal method based on sub-band processing
- Publication number: CN117495723B (application CN202311839668.0A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/90 — Determination of colour characteristics (G06T7/00 Image analysis)
- G06T2207/10024 — Color image (Image acquisition modality)
- G06T2207/10032 — Satellite or aerial image; Remote sensing (Image acquisition modality)
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform (Special algorithmic details)
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a method for removing thin clouds from unpaired remote sensing images based on sub-band processing, belonging to the technical field of image enhancement. The visible light bands and the infrared bands of a multispectral image are fed separately into a cloud removal network, which generates an independent cloud distribution map and thickness coefficient according to the characteristics of each band group, achieving a finer cloud removal effect when training on unpaired multispectral images. A spatial pooling pyramid module learns the difference between cloud-covered regions and the true surface information, so that the surface information of thin-cloud-covered areas is better recovered. In addition, to improve the color fidelity and structural similarity of the de-clouded image, the invention constructs a loss function that includes color loss and contrastive loss terms, making the color and structure of the de-clouded image closer to reality. The proposed method achieves a better cloud removal effect, with higher peak signal-to-noise ratio and structural similarity.
Description
Technical Field
The invention discloses an unpaired data remote sensing image thin cloud removal method based on sub-band processing, and belongs to the technical field of image enhancement.
Background
Owing to the limitations of remote sensing imaging principles, a large number of remote sensing images suffer severe quality degradation from cloud cover, so cloud removal has become one of the key problems in remote sensing image processing for making fuller use of these images across application fields. Current remote sensing cloud removal methods can be roughly divided, by how heavily the cloud layer covers the surface information, into thin cloud removal methods and thick cloud removal methods. Existing thin cloud removal methods fall broadly into spectrum-analysis-based methods, statistics-based methods, and deep learning methods. Thick cloud removal, because thick cloud coverage causes severe loss of surface information, mostly requires supplementary information, and can be divided by the type of that information into spatially supplemented, temporally supplemented, multi-sensor fusion, and hybrid methods. For thin cloud removal, existing deep-learning-based methods mainly target RGB images and multispectral images. Multispectral images, with their richer band information, provide richer surface information and are more widely applied than RGB images in many fields. With the development of deep learning in recent years, more and more deep learning methods have been applied to the thin cloud removal task, but most of them feed the multispectral image into the cloud removal network as a whole. The visible and infrared bands have different characteristics, however: the infrared bands have longer wavelengths and are less contaminated by thin clouds than the visible bands.
Processing the physical properties of the cloud layer per sub-band therefore achieves more accurate cloud removal and offers a clear advantage over treating all bands uniformly.
Disclosure of Invention
The invention aims to provide an unpaired-data remote sensing image thin cloud removal method based on sub-band processing, to address the poor thin cloud removal performance of existing methods.
The unpaired data remote sensing image thin cloud removing method based on the sub-band processing comprises the following steps:
S1, feeding the visible light bands and the infrared bands of a multispectral image separately into a cloud removal network, and processing each band group to obtain its cloud distribution map, thickness coefficient, and de-clouded image;
S2, in the forward cycle, merging the de-clouded images of the two band groups output by the cloud removal network into the total de-clouded image of the forward cycle, combining the cloud distribution map, thickness coefficient, and de-clouded image of each band group into that group's simulated cloud image, and merging the simulated cloud images of the two band groups into the total simulated cloud image of the forward cycle;
S3, in the reverse cycle, combining the cloud distribution map, thickness coefficient, and clear cloud-free image of each band group into a simulated cloud image, merging the simulated cloud images of the two band groups into the total simulated cloud image of the reverse cycle, processing the visible and infrared band groups separately through the cloud removal network to obtain their cloud distribution maps, thickness coefficients, and de-clouded images in the reverse cycle, and merging the de-clouded images of the two band groups in the reverse cycle into the total de-clouded image of the reverse cycle;
S4, selecting unpaired images, and training the cloud removal network with a loss function to which color loss and contrastive loss terms are added.
The cloud removal network in S1 comprises six pyramid encoding modules and six decoding modules; the encoding modules combine a spatial pooling pyramid module with convolution, focusing feature extraction on the ground-object information of thin-cloud-occluded regions.
The network that generates the cloud distribution map comprises five encoding modules and five decoding modules, and the network that generates the thickness coefficient connects the output of its five encoding modules to a global average pooling layer and a fully connected layer.
S2 comprises the following steps:
combining the de-clouded images of the two band groups to generate the total de-clouded image of the forward cycle:

J_f = concat(J_f^vis, J_f^ir);

where J_f^vis and J_f^ir are the de-clouded images obtained for the visible and infrared bands in the forward cycle, and concat denotes channel-wise concatenation;
combining the cloud distribution map, thickness coefficient, and de-clouded image of each band group to form the simulated cloud image of that group:

I'_f^vis = J_f^vis + t^vis ⊙ M^vis;  I'_f^ir = J_f^ir + t^ir ⊙ M^ir;

where I'_f^vis and I'_f^ir denote the simulated cloud images of the visible and infrared bands, t^vis and t^ir their thickness coefficients, M^vis and M^ir their cloud distribution maps, and ⊙ the superposition of the thickness coefficient with the cloud distribution map;
combining the simulated cloud images of the two band groups to generate the total simulated cloud image of the forward cycle:

I'_f = concat(I'_f^vis, I'_f^ir).
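The forward-cycle composition above can be sketched with NumPy arrays. The array shapes, the scalar thickness coefficients, and the additive superposition `J + t * M` are illustrative assumptions consistent with the symbol descriptions; they are a sketch, not the patent's exact implementation:

```python
import numpy as np

def simulate_cloud(j_band, t, m):
    """Re-apply a cloud layer: simulated cloudy = de-clouded + thickness * map."""
    return j_band + t * m

h, w = 4, 4
rng = np.random.default_rng(0)
j_vis = rng.random((3, h, w))   # de-clouded visible bands
j_ir = rng.random((3, h, w))    # de-clouded infrared bands
m_vis = rng.random((1, h, w))   # visible cloud distribution map
m_ir = rng.random((1, h, w))    # infrared cloud distribution map

t_vis, t_ir = 0.8, 0.3          # thickness coefficients (infrared: thinner cloud)
i_vis = simulate_cloud(j_vis, t_vis, m_vis)
i_ir = simulate_cloud(j_ir, t_ir, m_ir)
i_total = np.concatenate([i_vis, i_ir], axis=0)  # total simulated cloud image
```

The single-channel maps broadcast across each band group's three channels, and `concat` is a plain channel-axis concatenation.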
S3 comprises the following steps:
combining the cloud distribution maps, thickness coefficients, and clear cloud-free images of the two band groups to form simulated cloud images:

I'_b^vis = C^vis + t_b^vis ⊙ M_b^vis;  I'_b^ir = C^ir + t_b^ir ⊙ M_b^ir;

where I'_b^vis and I'_b^ir denote the simulated cloud images of the visible and infrared bands in the reverse cycle, C^vis and C^ir the clear cloud-free images of the two bands, t_b^vis and t_b^ir their thickness coefficients, and M_b^vis and M_b^ir their cloud distribution maps in the reverse cycle;
combining the simulated cloud images of the two band groups to form the total simulated cloud image I'_b of the reverse cycle:

I'_b = concat(I'_b^vis, I'_b^ir);
merging to generate the total de-clouded image J_b of the reverse cycle:

J_b = concat(J_b^vis, J_b^ir);

where J_b^vis and J_b^ir denote the de-clouded images of the visible and infrared bands in the reverse cycle.
The loss function L_total in S4 comprises:

L_total = L_adv + λ1·L_color + λ2·L_cons + λ3·L_per + λ4·L_cyc + λ5·L_idt;

where L_adv, L_color, L_cons, L_per, L_cyc, and L_idt denote the adversarial loss, color loss, contrastive learning loss, cycle-perception consistency loss, cycle consistency loss, and identity mapping loss respectively, and λ1, λ2, λ3, λ4, λ5 are five balance parameters.
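The weighted sum is direct to express in code; the default weight values below are placeholders, not values from the patent:

```python
def total_loss(l_adv, l_color, l_cons, l_per, l_cyc, l_idt,
               lambdas=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """L_total = L_adv + l1*L_color + l2*L_cons + l3*L_per + l4*L_cyc + l5*L_idt."""
    l1, l2, l3, l4, l5 = lambdas
    return l_adv + l1 * l_color + l2 * l_cons + l3 * l_per + l4 * l_cyc + l5 * l_idt
```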
The contrastive learning loss L_cons is built from pairwise L1-norm distances between the simulated cloud, de-clouded, and clear cloud-free images of the two band groups, where D(I, J) denotes the L1-norm distance between images I and J.
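The explicit form of L_cons does not survive the translation. Below is a generic ratio-form contrastive term, common in unpaired de-hazing work, shown only to illustrate how an L1 distance D(I, J) is typically used to pull a result toward a positive sample and away from a negative one; it is an assumption, not the patent's exact formula:

```python
import numpy as np

def l1_distance(a, b):
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def contrastive_term(anchor, positive, negative, eps=1e-8):
    """Ratio form: small when the anchor is close to the positive sample
    and far from the negative sample (assumed illustrative form)."""
    return l1_distance(anchor, positive) / (l1_distance(anchor, negative) + eps)
```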
Compared with the prior art, the invention has the following beneficial effects: the proposed method achieves a better cloud removal effect, with higher peak signal-to-noise ratio and structural similarity. Experiments also verify that the spatial pooling pyramid structure and the contrastive loss term each improve the cloud removal effect. The method shows excellent performance and application potential for removing thin clouds from multispectral images, and provides better inputs for downstream remote sensing image analysis.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the present invention will be clearly and completely described below, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Remote sensing image cloud removal tasks in the prior art can be divided into thick cloud removal and thin cloud removal according to the thickness of the cloud layer. Although there is no strict boundary between thick and thin clouds, images whose surface information is severely lost through cloud occlusion can be regarded as contaminated by thick clouds, while images that still retain some surface information are regarded as contaminated by thin clouds. Because thin-cloud-contaminated images retain part of the surface structure and color information, thin cloud removal focuses on finely recovering that structure and color; for thick-cloud-contaminated images, where the surface information of an area is completely lost, thick cloud removal focuses on reconstructing structure and color characteristics as close as possible to the original surface. Given these different emphases, the two tasks use different methods. Images contaminated by thick clouds usually require supplementary information such as multi-temporal images or auxiliary sensor data (e.g., synthetic aperture radar) to remove the clouds effectively; this information can be used to compare images at different times, provide other band data, or help determine cloud thickness and position. In contrast, images contaminated by thin clouds can be handled with image-processing techniques, such as deep-learning-based feature extraction, to preserve surface information to the greatest extent.
With the rapid development of deep learning, its strong feature extraction and restoration capability has yielded excellent performance on complex tasks in many fields. In recent years deep learning has been widely applied to remote sensing cloud removal, with better results than traditional methods, and as more capable models and structures have been proposed, more outstanding methods have emerged in the field. Methods based on plain convolutional neural networks form the basic framework of deep-learning cloud removal; subsequent attention mechanisms, staged strategies, and the like have further optimized the results, giving less color distortion, less visible repair traces, and clearer repaired regions. In addition, some cloud removal methods based on generative adversarial networks (GANs) have been developed, mainly exploiting GAN properties to reduce the need for large numbers of cloudy/cloud-free image pairs and thus the difficulty of data acquisition. Although these methods have achieved notable results, there is still room for improvement.
Remote sensing is important for Earth surface observation, but cloud cover greatly reduces the quality and usability of remote sensing images, so the cloud removal task has real practical significance. Existing thin cloud removal methods still fall short in detail and color recovery; there is a lack of deep learning cloud removal methods tailored to the characteristics of the different band types of multispectral images; and existing methods cannot fully combine the advantages of multi-temporal information and convolutional neural networks to recover surface information.
In the invention, the pyramid pooling module has the following structure: a feature map extracted by a convolution layer is input and pooled at four scales, 1×1, 2×2, 3×3 and 6×6, producing feature maps at several scales; each pooled map is then upsampled back to the original feature map size, and the maps are finally concatenated along the channel dimension into a composite feature map. Through this multi-scale pooling, the pyramid pooling module improves the network's perception of both the global structure and the local details of the image, so that the network understands the image more comprehensively at the global and local levels and can attend effectively to thin-cloud-occluded areas. In the encoding module, the pooled result is passed through a 1×1 convolution that reduces the channel count to 1/4 of the original, and a convolution module with stride 2 then completes the downsampling.
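The multi-scale pooling step can be sketched in NumPy. This is a minimal illustration assuming channel-first (C, H, W) arrays with H and W divisible by each pooling scale and nearest-neighbour upsampling; the 1×1 channel-reduction convolution and the stride-2 downsampling convolution mentioned above are omitted:

```python
import numpy as np

def adaptive_avg_pool(x, k):
    """Average-pool a (C, H, W) map to (C, k, k); assumes H and W divisible by k."""
    c, h, w = x.shape
    return x.reshape(c, k, h // k, k, w // k).mean(axis=(2, 4))

def upsample_nearest(x, h, w):
    """Nearest-neighbour upsample a (C, kh, kw) map back to (C, h, w)."""
    _, kh, kw = x.shape
    return x.repeat(h // kh, axis=1).repeat(w // kw, axis=2)

def pyramid_pool(x, scales=(1, 2, 3, 6)):
    """Pool at several scales, upsample each result back to the input size,
    and concatenate everything along the channel dimension."""
    _, h, w = x.shape
    feats = [x]
    for k in scales:
        feats.append(upsample_nearest(adaptive_avg_pool(x, k), h, w))
    return np.concatenate(feats, axis=0)
```

With four input channels and the four scales above, the output has 4 × 5 = 20 channels, and the scale-1 branch is a per-channel global average broadcast over the whole map.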
In implementation, because the visible and infrared bands of a multispectral image cover different wavelength ranges and are disturbed differently by cloud layers, they are input into the network model separately. This produces cloud distribution maps and thickness coefficients for each band group, making the subsequent cloud removal finer. Meanwhile, borrowing the idea of the cycle-consistent generative adversarial network (CycleGAN), the method takes unpaired images as input, reducing the need for labor-intensive cloudy/cloud-free image pairs. Within the cloud removal network, the spatial pooling pyramid module lets the network focus on restoring the surface information of thin-cloud-covered areas, and a loss function with added color loss and contrastive loss terms yields finer structure and color in the de-clouded result. The discriminator consists mainly of five convolution layers, with a 70×70 receptive field and a 30×30 output, and is used to distinguish real cloud-free images from generated cloud-free images.
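The stated 70×70 receptive field and 30×30 output of a five-layer convolutional discriminator can be checked arithmetically. The kernel size of 4, the stride pattern (2, 2, 2, 1, 1), and the 256×256 input are assumptions taken from the standard 70×70 PatchGAN design, since the text states only the layer count, receptive field, and output size:

```python
def receptive_field(layers):
    """Receptive field of stacked convs given (kernel, stride) pairs,
    folded back-to-front: rf = rf * stride + (kernel - stride)."""
    rf = 1
    for k, s in reversed(layers):
        rf = rf * s + (k - s)
    return rf

def output_size(n, layers, pad=1):
    """Spatial output size for an n-wide input with per-layer padding `pad`."""
    for k, s in layers:
        n = (n + 2 * pad - k) // s + 1
    return n

# Assumed standard 70x70 PatchGAN stack: kernel 4, strides 2, 2, 2, 1, 1.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

Under these assumptions the stack indeed gives a 70×70 receptive field and, for a 256×256 input, a 30×30 output map.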
The invention was verified experimentally on Landsat-8 images. In the experiments, bands 2, 3 and 4 were grouped as the visible band group, and bands 5, 6 and 7 as the infrared band group. Observation of the outputs of the cloud distribution map network shows that the visible band group is affected by cloud and haze more than the infrared band group: the cloud coverage it captures is wider and the cloud layer thicker than those captured from the infrared band group.
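Splitting a stacked Landsat-8 crop into the two band groups is then a simple index selection. The band ordering in the array below is an assumption for illustration:

```python
import numpy as np

# Assumed band order in the stacked array: B2..B7 along axis 0.
band_index = {name: i for i, name in enumerate(["B2", "B3", "B4", "B5", "B6", "B7"])}
image = np.random.default_rng(1).random((6, 8, 8))  # a toy 6-band crop

visible = image[[band_index[b] for b in ("B2", "B3", "B4")]]   # blue, green, red
infrared = image[[band_index[b] for b in ("B5", "B6", "B7")]]  # NIR, SWIR1, SWIR2
```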
The above embodiments are only for illustrating the technical aspects of the present invention, not for limiting the same, and although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may be modified or some or all of the technical features may be replaced with other technical solutions, which do not depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (3)
1. An unpaired-data remote sensing image thin cloud removal method based on sub-band processing, characterized by comprising the following steps:
S1, feeding the visible light bands and the infrared bands of a multispectral image separately into a cloud removal network, and processing each band group to obtain its cloud distribution map, thickness coefficient, and de-clouded image;
S2, in the forward cycle, merging the de-clouded images of the two band groups output by the cloud removal network into the total de-clouded image of the forward cycle, combining the cloud distribution map, thickness coefficient, and de-clouded image of each band group into that group's simulated cloud image, and merging the simulated cloud images of the two band groups into the total simulated cloud image of the forward cycle;
S3, in the reverse cycle, combining the cloud distribution map, thickness coefficient, and clear cloud-free image of each band group into a simulated cloud image, merging the simulated cloud images of the two band groups into the total simulated cloud image of the reverse cycle, processing the visible and infrared band groups separately through the cloud removal network to obtain their cloud distribution maps, thickness coefficients, and de-clouded images in the reverse cycle, and merging the de-clouded images of the two band groups in the reverse cycle into the total de-clouded image of the reverse cycle;
S4, selecting unpaired images, and training the cloud removal network with a loss function to which color loss and contrastive loss terms are added;
the cloud removal network in S1 comprises six pyramid encoding modules and six decoding modules; the encoding modules combine a spatial pooling pyramid module with convolution, focusing feature extraction on the ground-object information of thin-cloud-occluded regions;
the network that generates the cloud distribution map comprises five encoding modules and five decoding modules, and the network that generates the thickness coefficient connects the output of its five encoding modules to a global average pooling layer and a fully connected layer;
the loss function L_total in S4 comprises:

L_total = L_adv + λ1·L_color + λ2·L_cons + λ3·L_per + λ4·L_cyc + λ5·L_idt;

where L_adv, L_color, L_cons, L_per, L_cyc, and L_idt denote the adversarial loss, color loss, contrastive learning loss, cycle-perception consistency loss, cycle consistency loss, and identity mapping loss respectively, and λ1, λ2, λ3, λ4, λ5 are five balance parameters;
the contrastive learning loss is built from pairwise L1-norm distances between images, where D(I, J) denotes the L1-norm distance between I and J, I'_f^vis and I'_f^ir denote the simulated cloud images of the visible and infrared bands, J_f^vis and J_f^ir the de-clouded images obtained for the visible and infrared bands in the forward cycle, I'_b^vis and I'_b^ir the simulated cloud images of the visible and infrared bands in the reverse cycle, C^vis and C^ir the clear cloud-free images of the visible and infrared bands in the reverse cycle, and J_b^vis and J_b^ir the de-clouded images of the visible and infrared bands in the reverse cycle.
2. The unpaired data remote sensing image thin cloud removal method based on sub-band processing according to claim 1, wherein S2 comprises:
combining the cloud-removed images of the two bands to generate the total cloud-removed image of the forward cycle:

I_total = concat(I_vis, I_ir);

where concat denotes channel-wise concatenation, and I_vis and I_ir denote the cloud-removed images of the visible and infrared bands;
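Assuming each band image is a (C, H, W) NumPy array, the channel-wise concatenation can be sketched as:

```python
import numpy as np

def concat(band_a, band_b):
    """Channel-wise concatenation of two (C, H, W) band images."""
    return np.concatenate([band_a, band_b], axis=0)
```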
combining the cloud distribution map, thickness coefficient, and cloud-removed image of each band to form the simulated cloud images of the two bands comprises:

J_vis = t_vis ⊙ M_vis + I_vis; J_ir = t_ir ⊙ M_ir + I_ir;

where t_vis and t_ir denote the thickness coefficients of the visible and infrared bands, respectively; M_vis and M_ir denote the cloud distribution maps of the visible and infrared bands, respectively; and ⊙ denotes the superposition operation applied to the thickness coefficient and the cloud distribution map;
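The per-band cloud simulation superposes the thickness-weighted cloud distribution map onto a base image (the cloud-removed image in the forward cycle, the clear cloud-free image in the reverse cycle). A sketch assuming the superposition operation is element-wise multiplication followed by addition:

```python
import numpy as np

def simulate_cloud(thickness, cloud_map, base_img):
    """J = t ⊙ M + I: superpose a cloud layer (thickness coefficient
    times cloud distribution map) onto a base image."""
    return thickness * cloud_map + base_img
```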
combining the simulated cloud images of the two bands to generate the total simulated cloud image of the forward cycle:

J_total = concat(J_vis, J_ir).
3. The unpaired data remote sensing image thin cloud removal method based on sub-band processing according to claim 2, wherein S3 comprises:
combining the cloud distribution maps, thickness coefficients, and clear cloud-free images of the two bands to form the simulated cloud images comprises:

J'_vis = t'_vis ⊙ M'_vis + C'_vis; J'_ir = t'_ir ⊙ M'_ir + C'_ir;

where t'_vis and t'_ir denote the thickness coefficients of the visible and infrared bands in the reverse cycle; M'_vis and M'_ir denote the cloud distribution maps of the visible and infrared bands in the reverse cycle; C'_vis and C'_ir denote the clear cloud-free images of the visible and infrared bands in the reverse cycle; and ⊙ denotes the superposition operation;
combining the simulated cloud images of the two bands to form the total simulated cloud image of the reverse cycle comprises:

J'_total = concat(J'_vis, J'_ir);
merging to generate the total cloud-removed image of the reverse cycle comprises:

I'_total = concat(I'_vis, I'_ir).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311839668.0A CN117495723B (en) | 2023-12-29 | 2023-12-29 | Unpaired data remote sensing image thin cloud removal method based on sub-band processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117495723A CN117495723A (en) | 2024-02-02 |
CN117495723B true CN117495723B (en) | 2024-03-19 |
Family
ID=89683249
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112529788A (en) * | 2020-11-13 | 2021-03-19 | 北京航空航天大学 | Multispectral remote sensing image thin cloud removing method based on thin cloud thickness map estimation |
CN112766229A (en) * | 2021-02-08 | 2021-05-07 | 南京林业大学 | Human face point cloud image intelligent identification system and method based on attention mechanism |
CN113724149A (en) * | 2021-07-20 | 2021-11-30 | 北京航空航天大学 | Weak supervision visible light remote sensing image thin cloud removing method |
CN114549891A (en) * | 2022-01-06 | 2022-05-27 | 中国人民解放军国防科技大学 | Foundation cloud picture cloud identification method based on contrast self-supervision learning |
CN116778354A (en) * | 2023-08-08 | 2023-09-19 | 南京信息工程大学 | Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method |
CN116993598A (en) * | 2023-05-15 | 2023-11-03 | 西北工业大学 | Remote sensing image cloud removing method based on synthetic aperture radar and visible light fusion |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11017507B2 (en) * | 2016-12-20 | 2021-05-25 | Nec Corporation | Image processing device for detection and correction of cloud cover, image processing method and storage medium |
US11164291B2 (en) * | 2020-01-14 | 2021-11-02 | International Business Machines Corporation | Under water image color correction |
Non-Patent Citations (4)
Title |
---|
Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets;Kenji Enomoto等;《2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)》;20170824;第1533-1541页 * |
Semi-supervised thin cloud removal with mutually beneficial guides;Zunxiao Xu;《ISPRS Journal of Photogrammetry and Remote Sensing》;20221031;第192卷;第327-343页 * |
Underwater image color correction and enhancement based on improved CycleGAN;Li Qingzhong et al.;《Acta Automatica Sinica》;20230430;Vol. 49(No. 4);pp. 820-829 *
Research on cloud removal methods for multispectral remote sensing image data;Shen Yang;《Information Science and Technology Series》;20170215(No. 02);pp. I140-1629 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||