CN114240796A - Remote sensing image cloud and fog removing method and device based on GAN and storage medium - Google Patents

Remote sensing image cloud and fog removing method and device based on GAN and storage medium

Info

Publication number
CN114240796A
Authority
CN
China
Prior art keywords
remote sensing
cloud
fog
sensing image
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111578218.1A
Other languages
Chinese (zh)
Other versions
CN114240796B (en)
Inventor
罗清彩
孙善宝
蒋梦梦
张晖
张鑫
于�玲
于晓艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd filed Critical Shandong Inspur Science Research Institute Co Ltd
Priority to CN202111578218.1A priority Critical patent/CN114240796B/en
Publication of CN114240796A publication Critical patent/CN114240796A/en
Priority to PCT/CN2022/105319 priority patent/WO2023115915A1/en
Application granted granted Critical
Publication of CN114240796B publication Critical patent/CN114240796B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application discloses a GAN-based remote sensing image cloud and fog removal method, device, and storage medium. The method comprises: dividing acquired cloud- and fog-affected remote sensing data into visibility levels; training, in order from high to low visibility, on the training set of each visibility level, with the discriminator's model parameters fixed, and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image; inputting real, clear remote sensing images and the generated cloud- and fog-removed images into the discriminator so that it learns to distinguish the two; alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, to generate a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.

Description

Remote sensing image cloud and fog removing method and device based on GAN and storage medium
Technical Field
The application relates to the technical field of remote sensing, and in particular to a GAN-based remote sensing image cloud and fog removal method, device, and storage medium.
Background
The generative adversarial network (GAN) is one of the most important approaches of recent years to unsupervised learning over complex distributions. A GAN consists of a generative network (the generator) and a discriminative network (the discriminator); through mutual, game-like adversarial learning, the two neural networks are trained to sample from a complex probability distribution and produce high-quality output. GAN technology is now widely applied in many fields.
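For reference, the canonical objective that formalizes this adversarial game (the textbook GAN formulation, not a loss specific to this application) is:

$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where the discriminator D learns to score real samples x highly and generated samples G(z) low, while the generator G learns to make its outputs indistinguishable from real data.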
In recent years, remote sensing technology has been applied ever more widely. Multispectral and panchromatic images captured by satellites are fused into remote sensing images of high spatial and spectral resolution, which outperform other technical means for acquiring basic geographic data, resource information, and emergency disaster data, and are widely used in the national economy and in military fields. However, remote sensing imaging is highly susceptible to interference from cloud and fog: remote sensing information for occluded areas is lost or biased, which greatly reduces the accuracy of remote sensing data and the efficiency of remote sensing applications.
Cloud and fog can be mitigated by methods based on the spectral features of remote sensing images, such as the Haze Optimized Transformation (HOT) and the Background Suppressed Haze Thickness Index (BSHTI), but these methods do not give satisfactory results.
Disclosure of Invention
The application provides a GAN-based remote sensing image cloud and fog removal method, addressing the technical problem that collected remote sensing images are inaccurate due to cloud and fog occlusion.
A remote sensing image cloud and fog removal method based on a generative adversarial network (GAN), the GAN comprising a generator and a discriminator, the method comprising:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
In an embodiment of the application, training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level, fixing the model parameters of the discriminator during training, and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image specifically comprises: dividing the visibility levels into levels L1 to Ln, with visibility decreasing from L1 to Ln; and acquiring, in order from L1 to Ln, the cloud- and fog-affected training set corresponding to each visibility level, fixing the model parameters of the discriminator, and inputting the training set RSData-TD for each visibility level into the generator for training to generate the cloud- and fog-removed remote sensing image RSImg-TD corresponding to that level.
In an embodiment of the present application, after generating the cloud- and fog-removed remote sensing image, the method further comprises: generating (RSData-TD, RSImg-TD) data pairs from the training set RSData-TD and the cloud- and fog-removed remote sensing image RSImg-TD; generating (RSData-TD, RSReal-TD) data pairs from the training set RSData-TD and the real, clear remote sensing image RSReal-TD; and determining that a (RSData-TD, RSImg-TD) data pair input into the discriminator yields a negative output value, while a (RSData-TD, RSReal-TD) data pair yields a positive output value.
In one embodiment of the present application, the method further comprises: updating the network parameters of the generator by gradient descent and continuing training until the discriminator can no longer distinguish (RSData-TD, RSImg-TD) data pairs from (RSData-TD, RSReal-TD) data pairs.
In an embodiment of the application, inputting the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that it can distinguish the two, specifically comprises: inputting the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, fixing the network parameters of the generator, training the discriminator, and obtaining the error between the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image according to a loss function; and back-propagating the error and updating the network parameters of the discriminator D, so that D outputs a negative low score for (RSData-TD, RSImg-TD) data pairs and a positive high score for (RSData-TD, RSReal-TD) data pairs, enabling the discriminator to effectively distinguish (RSData-TD, RSReal-TD) pairs from (RSData-TD, RSImg-TD) pairs.
In an embodiment of the present application, before acquiring the cloud- and fog-affected remote sensing data, the method further comprises: collecting remote sensing data and annotating it to delineate cloud and fog regions, thickness, and visibility levels; performing model training on the remote sensing data and the annotations to generate a cloud and fog region detection model; and inputting the output of the cloud and fog region detection model together with cloud-free remote sensing data into a cloud and fog coverage model for training, so that cloud-free remote sensing data passed through the cloud and fog coverage model yields cloud- and fog-covered remote sensing data.
In one embodiment of the present application, the method further comprises: judging the visibility level of the cloud and fog region according to the cloud and fog region detection model; selecting the cloud and fog removal GAN model corresponding to that visibility level; generating a cloud- and fog-removed remote sensing image according to the cloud and fog removal GAN model; and cropping the cloud and fog region and filling the identified region from the cloud- and fog-removed remote sensing image to generate the final remote sensing image.
In one embodiment of the present application, the method further comprises: continuously acquiring remote sensing data and optimizing the cloud and fog region detection model and the cloud and fog coverage model to generate a more accurate data set for training the cloud and fog removal GAN model; subdividing the visibility levels according to feedback from the remote sensing image application system and continuously optimizing the cloud and fog removal GAN model to generate more reasonable and accurate cloud- and fog-removed remote sensing images; and adjusting the algorithm of the remote sensing image application system according to the generated cloud- and fog-removed remote sensing images, further optimizing the business system based on remote sensing image analysis.
A GAN-based remote sensing image cloud and fog removal device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
A non-volatile storage medium storing computer-executable instructions configured to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
The application provides a GAN-based remote sensing image cloud and fog removal method, device, and storage medium, with at least the following beneficial effects. Using a GAN and deep learning, a cloud and fog removal GAN model for remote sensing data is constructed. The model fully considers the characteristics of remote sensing images, effectively exploits the correlation among the multiple bands of remote sensing data, and combines the characteristics of different cloud and fog thicknesses to generate remote sensing images with a more accurate cloud and fog removal effect. Compared with traditional cloud and fog elimination techniques, the GAN can better discover the deep connections between cloud and fog occlusion and ground features, generating more reasonable and accurate remote sensing images and eliminating the remote sensing information bias introduced by occluded areas. A cloud and fog target detection model identifies the specific cloud and fog region and determines its thickness and visibility level; this both shrinks the region that must be regenerated, preserving the accuracy of unoccluded areas, and yields several targeted models for different visibility levels, so that a more suitable model can be selected for each cloud and fog condition and a better elimination effect obtained. Training proceeds through models of gradually decreasing visibility, with each level initialized from the previous level's network parameters, and the generator and discriminator in each model are trained alternately, so convergence is reached faster and training efficiency improves. In addition, joint training against the docked application system produces a more accurate and reasonable personalized model that meets the actual business requirements of the remote sensing application; feedback data are continuously collected to optimize the model, further improving its accuracy, while the actual business application system can itself be optimized, forming an overall optimal business system based on remote sensing image analysis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a schematic diagram of the steps of a GAN-based remote sensing image cloud and fog removal method according to an embodiment of the present application;
Fig. 2 is a training diagram of the cloud and fog removal GAN model provided in an embodiment of the present application;
Fig. 3 is a structural diagram of a device for the GAN-based remote sensing image cloud and fog removal method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in detail and completely with reference to the following specific embodiments. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In one embodiment of the application, a remote sensing image cloud and fog removal model based on a generative adversarial network is designed that fully considers the correlation among the multiple bands of remote sensing data and combines the characteristics of different cloud and fog thicknesses. Basic models for different levels are formed by alternately training the generator and discriminator in the model, and interactive retraining against a practical remote sensing image application system yields a more accurate and reasonable personalized model that meets the actual business requirements of the application. By combining generative adversarial network technology with the spectral characteristics of remote sensing images, cloud and fog regions can be effectively identified; the cloud and fog removal model then generates remote sensing images with the cloud and fog removed, eliminating the remote sensing information bias in occluded regions and improving the application efficiency of remote sensing data. A detailed description follows.
Fig. 1 shows the steps of the GAN-based remote sensing image cloud and fog removal method provided in an embodiment of the present application, which may include the following steps:
s101: and acquiring remote sensing data with cloud and fog, and dividing the visibility level of the remote sensing data with cloud and fog.
In an embodiment of the present application, to train the generative adversarial network, a training set is first constructed; the GAN is then trained on this set, and finally joint training with the docked application system generates remote sensing images for different business services, as shown in Fig. 2.
In an embodiment of the application, remote sensing data are collected before the cloud- and fog-affected data are obtained. The remote sensing data RS-Data are multi-channel data collected by multispectral sensing, and the remote sensing image is formed by combining the visible-band portion with a panchromatic image. The remote sensing data are annotated to delineate cloud and fog regions, thickness, and visibility levels, including data for the same area under different weather conditions. Model training on the remote sensing data and the annotations produces a cloud and fog region detection model: CL-Det performs target detection on the cloud- and fog-covered regions of the remote sensing data, identifies the specific cloud and fog region, and determines the thickness and visibility level of the cloud and fog.
The output of the cloud and fog region detection model and cloud-free remote sensing data are then input into a cloud and fog coverage model for training, so that cloud-free remote sensing data passed through the coverage model yields cloud- and fog-covered data. The cloud and fog coverage model CL-Cov adds cloud and fog cover to cloud-free remote sensing data based on a specified cloud region and thickness level.
The cloud and fog region detection model CL-Det is used to judge the accuracy of the cloudy remote sensing data generated by the coverage model CL-Cov, and CL-Cov is optimized and adjusted accordingly; the training set TD is then constructed from the annotated data and the generated data, as sketched below.
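As a rough illustration of this data pipeline, the sketch below shows how the training set TD might be assembled from the CL-Cov and CL-Det models named above; the method names (apply, predict), the returned fields, and the consistency check are illustrative assumptions, not the patent's specified implementation.

```python
# Hedged sketch of training-set construction; CL-Cov/CL-Det interfaces assumed.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Sample:
    hazy: np.ndarray    # multi-band remote sensing data with cloud/fog cover
    clear: np.ndarray   # corresponding cloud-free data
    level: int          # visibility level: 1 (= L1, highest) .. n (= Ln)

def build_training_set(clear_images, cl_cov, cl_det, n_levels: int) -> List[Sample]:
    """Add synthetic cloud cover at each visibility level and keep only samples
    whose detected level matches the applied one, so CL-Det serves as the
    accuracy check on CL-Cov output described above."""
    samples = []
    for img in clear_images:
        for level in range(1, n_levels + 1):
            hazy = cl_cov.apply(img, level=level)   # assumed interface
            detected = cl_det.predict(hazy)         # assumed interface
            if detected.level == level:
                samples.append(Sample(hazy=hazy, clear=img, level=level))
    return samples
```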
S102: according to the training sets of the cloud- and fog-affected remote sensing data divided by visibility level, train on the training set of each visibility level in order from high to low visibility; during training, fix the model parameters of the discriminator and input the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image.
In one embodiment of the present application, the visibility levels are divided into L1 to Ln, with visibility decreasing from L1 to Ln, and the cloud- and fog-affected training set corresponding to each visibility level is acquired in order from L1 to Ln.
The core of the basic remote sensing image cloud and fog removal model RS-M-TD(L1–Ln) is a generative adversarial network comprising a generator G and a discriminator D; according to the different cloud and fog thicknesses and visibilities, it forms remote sensing image cloud and fog removal GAN models at multiple visibility levels.
The model parameters of the discriminator are fixed, and the training set RSData-TD for each visibility level is input into generator G for training, generating the cloud- and fog-removed remote sensing image RSImg-TD corresponding to that level. The core of the generator G of the cloud and fog removal GAN model is a convolutional neural network (CNN) that takes cloudy remote sensing data as input and generates a clear remote sensing image with the cloud and fog removed.
In one embodiment of the application, after the cloud- and fog-removed remote sensing image is generated, (RSData-TD, RSImg-TD) data pairs are generated from the training set RSData-TD and the cloud- and fog-removed image RSImg-TD, and (RSData-TD, RSReal-TD) data pairs are generated from the training set RSData-TD and the real, clear remote sensing image RSReal-TD. A (RSData-TD, RSImg-TD) pair input into the discriminator should yield a negative value, and a (RSData-TD, RSReal-TD) pair a positive value. By continuously distinguishing real remote sensing images from the generated de-clouded ones, the GAN model is continuously optimized.
In one embodiment of the application, the network parameters of the generator are updated by gradient descent and training continues until the discriminator can no longer distinguish (RSData-TD, RSImg-TD) data pairs from (RSData-TD, RSReal-TD) data pairs, as in the sketch below.
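A minimal PyTorch-style sketch of this generator update follows, assuming a conditional discriminator that scores an (input, image) pair concatenated along the channel axis; the least-squares loss pushing generated pairs toward the positive score is an assumption chosen to match the negative/positive scoring described here.

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, rs_data, opt_g):
    """One generator update with the discriminator's model parameters fixed.
    G and D are torch.nn.Module; rs_data is a (B, C, H, W) batch of cloudy
    remote sensing data."""
    for p in D.parameters():
        p.requires_grad_(False)                     # fix discriminator parameters
    rs_img = G(rs_data)                             # de-clouded output RSImg-TD
    score = D(torch.cat([rs_data, rs_img], dim=1))  # score the generated pair
    loss_g = F.mse_loss(score, torch.ones_like(score))  # push toward "real" (+1)
    opt_g.zero_grad()
    loss_g.backward()                               # gradient descent on G only
    opt_g.step()
    for p in D.parameters():
        p.requires_grad_(True)
    return loss_g.item()
```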
S103: input the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and train and update the parameters of the discriminator so that it can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image.
In an embodiment of the application, the core of the discriminator D of the cloud and fog removal GAN model is a binary classifier that distinguishes real, clear remote sensing images from the de-clouded images generated by generator G: given the cloudy remote sensing image and the de-clouded image, it outputs a discrimination value that indicates whether the image is a real remote sensing image or one generated by G.
The real, clear remote sensing image and the generated cloud- and fog-removed image are input into the discriminator, i.e., the (RSData-TD, RSImg-TD) and (RSData-TD, RSReal-TD) data pairs are input; the network parameters of the generator are fixed, the discriminator is trained, and the error between the real, clear image and the generated de-clouded image is obtained from the loss function. The error is back-propagated and the network parameters of discriminator D are updated, so that a (RSData-TD, RSImg-TD) pair passed through D outputs a negative low score (say, -1) and a (RSData-TD, RSReal-TD) pair a positive high score (say, +1), enabling the discriminator to effectively distinguish the two kinds of pairs.
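The corresponding discriminator update, with the generator's network parameters fixed and targets of +1 for real pairs and -1 for generated pairs as just described, might look as follows; the least-squares loss is again an assumption.

```python
import torch
import torch.nn.functional as F

def discriminator_step(G, D, rs_data, rs_real, opt_d):
    """One discriminator update. rs_data is the cloudy input and rs_real the
    real, clear remote sensing image RSReal-TD."""
    with torch.no_grad():
        rs_img = G(rs_data)                         # generator frozen
    score_fake = D(torch.cat([rs_data, rs_img], dim=1))
    score_real = D(torch.cat([rs_data, rs_real], dim=1))
    loss_d = (F.mse_loss(score_real, torch.ones_like(score_real))      # -> +1
              + F.mse_loss(score_fake, -torch.ones_like(score_fake)))  # -> -1
    opt_d.zero_grad()
    loss_d.backward()                               # back-propagate the error to D
    opt_d.step()
    return loss_d.item()
```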
S104: alternately train the generator and the discriminator to generate the cloud and fog removal GAN model corresponding to each visibility level.
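Reusing the two step functions sketched above, the per-level alternating training, ordered from highest to lowest visibility and warm-starting each level from the previous level's weights as the beneficial-effects section describes, could be organized like this; the optimizer settings and epoch count are illustrative assumptions.

```python
import copy
import torch

def train_all_levels(G, D, loaders_by_level, epochs=100, lr=2e-4):
    """Train one cloud and fog removal GAN per visibility level L1..Ln.
    loaders_by_level maps level -> iterable of (rs_data, rs_real) batches."""
    models = {}
    for level in sorted(loaders_by_level):          # L1 first, Ln last
        opt_g = torch.optim.Adam(G.parameters(), lr=lr)
        opt_d = torch.optim.Adam(D.parameters(), lr=lr)
        for _ in range(epochs):
            for rs_data, rs_real in loaders_by_level[level]:
                discriminator_step(G, D, rs_data, rs_real, opt_d)
                generator_step(G, D, rs_data, opt_g)
        # snapshot RS-M-TD(level); the next level starts from these weights
        models[level] = (copy.deepcopy(G), copy.deepcopy(D))
    return models
```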
S105: interact the cloud and fog removal GAN model with the remote sensing image application system, update the generator parameters after feedback is obtained, and generate a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
In an embodiment of the application, the visibility level of a cloud and fog region is judged with the cloud and fog region detection model; the cloud and fog removal GAN model of the corresponding level is selected according to that visibility level; a cloud- and fog-removed remote sensing image is generated with the selected model; and the cloud and fog region is cropped and the identified region filled from the cloud- and fog-removed image to generate the final remote sensing image.
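An illustrative sketch of this inference path, assuming CL-Det returns a boolean region mask plus a visibility level and that images are channel-first arrays; splicing the generated pixels into only the detected region follows the description above.

```python
import numpy as np
import torch

def remove_cloud(rs_data, cl_det, models_by_level):
    """rs_data: (C, H, W) float numpy array. Detect the cloud/fog region and
    its visibility level, run the matching level's generator, and fill only
    the occluded pixels. cl_det's interface is an assumption."""
    det = cl_det.predict(rs_data)                   # assumed: .mask (H, W), .level
    G, _ = models_by_level[det.level]               # level-specific GAN
    with torch.no_grad():
        gen = G(torch.from_numpy(rs_data[None]).float())[0].numpy()
    mask = det.mask.astype(bool)
    out = rs_data.copy()
    out[:, mask] = gen[:, mask]                     # fill the detected region only
    return out
```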
In one embodiment of the present application, remote sensing data are continuously collected, and the cloud and fog region detection model and the cloud and fog coverage model are optimized to generate a more accurate data set for training the cloud and fog removal GAN model.
The cloud and fog removal GAN model provides initial values to the joint application-system training module CUST-M, which generates de-clouded remote sensing images that fit the business and supplies them to the remote sensing image application system. Based on the application system's feedback, the visibility levels are subdivided and the cloud and fog removal GAN model is continuously optimized to generate more reasonable and accurate de-clouded images. The generated images are also fed back to the discriminator of the basic cloud and fog removal GAN model within CUST-M, which in turn feeds its results back to the generator G; the algorithm of the remote sensing image application system is adjusted accordingly, further optimizing the business system based on remote sensing image analysis.
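At a high level, this feedback loop could be approximated as a light fine-tune of the pretrained pair on feedback data; CUST-M's actual data format, the small learning rate, and the epoch count here are illustrative assumptions, since the text describes the interaction only abstractly.

```python
import torch

def personalize(G, D, feedback_loader, epochs=5, lr=1e-5):
    """Fine-tune the pretrained generator and discriminator on (rs_data,
    rs_real) pairs derived from application-system feedback to form the
    personalized cloud and fog removal GAN model."""
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)  # small lr: stay close to
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)  # the pretrained weights
    for _ in range(epochs):
        for rs_data, rs_real in feedback_loader:
            discriminator_step(G, D, rs_data, rs_real, opt_d)
            generator_step(G, D, rs_data, opt_g)
    return G, D
```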
Based on the same inventive concept, an embodiment of the present application further provides a corresponding GAN-based remote sensing image cloud and fog removal device, as shown in Fig. 3.
This embodiment provides a GAN-based remote sensing image cloud and fog removal device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with the remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
Based on the same idea, some embodiments of the present application further provide media corresponding to the above method.
Some embodiments of the present application provide a GAN-based remote sensing image cloud and fog removal storage medium storing computer-executable instructions configured to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with the remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
The embodiments in the present application are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device and medium embodiments are substantially similar to the method embodiment, their description is relatively brief, and reference may be made to the description of the method embodiment for the relevant points.
The device and the medium provided by the embodiments of the application correspond one-to-one with the method, and therefore share the beneficial technical effects of the corresponding method.
It is also noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or device. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of additional like elements in the process, method, article, or device that includes the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A remote sensing image cloud and fog removal method based on a generative adversarial network (GAN), wherein the GAN comprises a generator and a discriminator, the method comprising:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
2. The method according to claim 1, wherein training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level, fixing the model parameters of the discriminator during training, and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image specifically comprises:
dividing the visibility levels into levels L1 to Ln, with visibility decreasing from L1 to Ln; and
acquiring, in order from L1 to Ln, the cloud- and fog-affected training set corresponding to each visibility level, fixing the model parameters of the discriminator, and inputting the training set RSData-TD for each visibility level into the generator for training to generate the cloud- and fog-removed remote sensing image RSImg-TD corresponding to that level.
3. The method according to claim 2, wherein after generating the cloud- and fog-removed remote sensing image, the method further comprises:
generating (RSData-TD, RSImg-TD) data pairs from the training set RSData-TD and the cloud- and fog-removed remote sensing image RSImg-TD;
generating (RSData-TD, RSReal-TD) data pairs from the training set RSData-TD and the real, clear remote sensing image RSReal-TD; and
determining that a (RSData-TD, RSImg-TD) data pair input into the discriminator yields a negative output value, and that a (RSData-TD, RSReal-TD) data pair yields a positive output value.
4. The method according to claim 3, further comprising:
updating the network parameters of the generator by gradient descent and continuing training until the discriminator cannot distinguish (RSData-TD, RSImg-TD) data pairs from (RSData-TD, RSReal-TD) data pairs.
5. The method according to claim 1, wherein inputting the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that it can distinguish the two, specifically comprises:
inputting the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, fixing the network parameters of the generator, training the discriminator, and obtaining the error between the real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image according to a loss function; and
back-propagating the error and updating the network parameters of the discriminator D, so that D outputs a negative low score for (RSData-TD, RSImg-TD) data pairs and a positive high score for (RSData-TD, RSReal-TD) data pairs, enabling the discriminator to effectively distinguish (RSData-TD, RSReal-TD) pairs from (RSData-TD, RSImg-TD) pairs.
6. The method according to claim 1, wherein before acquiring the cloud- and fog-affected remote sensing data, the method further comprises:
collecting remote sensing data and annotating it to delineate cloud and fog regions, thickness, and visibility levels;
performing model training on the remote sensing data and the annotations to generate a cloud and fog region detection model; and
inputting the output of the cloud and fog region detection model together with cloud-free remote sensing data into a cloud and fog coverage model for training, so that cloud-free remote sensing data passed through the cloud and fog coverage model yields cloud- and fog-covered remote sensing data.
7. The method according to claim 6, further comprising:
judging the visibility level of the cloud and fog region according to the cloud and fog region detection model;
selecting the corresponding cloud and fog removal GAN model according to the visibility level;
generating a cloud- and fog-removed remote sensing image according to the cloud and fog removal GAN model; and
cropping the cloud and fog region and filling the identified region from the cloud- and fog-removed remote sensing image to generate the final remote sensing image.
8. The method according to claim 6, further comprising:
continuously acquiring remote sensing data and optimizing the cloud and fog region detection model and the cloud and fog coverage model to generate a more accurate data set for training the cloud and fog removal GAN model;
subdividing the visibility levels according to feedback from the remote sensing image application system and continuously optimizing the cloud and fog removal GAN model to generate more reasonable and accurate cloud- and fog-removed remote sensing images; and
adjusting the algorithm of the remote sensing image application system according to the generated cloud- and fog-removed remote sensing images, further optimizing the business system based on remote sensing image analysis.
9. A GAN-based remote sensing image cloud and fog removal device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
10. A non-volatile storage medium storing computer-executable instructions, the computer-executable instructions configured to:
acquiring cloud- and fog-affected remote sensing data, and dividing the cloud- and fog-affected remote sensing data into visibility levels;
training, in order from high to low visibility, on the training sets of the cloud- and fog-affected remote sensing data divided by visibility level; during training, fixing the model parameters of the discriminator and inputting the training set corresponding to each visibility level into the generator to generate a cloud- and fog-removed remote sensing image;
inputting a real, clear remote sensing image and the generated cloud- and fog-removed remote sensing image into the discriminator, and training and updating the parameters of the discriminator so that the discriminator can distinguish the real, clear remote sensing image from the cloud- and fog-removed remote sensing image;
alternately training the generator and the discriminator to generate a cloud and fog removal GAN model corresponding to each visibility level; and
interacting the cloud and fog removal GAN model with a remote sensing image application system, updating the generator parameters after feedback is obtained, and generating a personalized cloud and fog removal GAN model corresponding to the remote sensing image application system.
CN202111578218.1A 2021-12-22 2021-12-22 Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN Active CN114240796B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111578218.1A CN114240796B (en) 2021-12-22 2021-12-22 Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN
PCT/CN2022/105319 WO2023115915A1 (en) 2021-12-22 2022-07-13 Gan-based remote sensing image cloud removal method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111578218.1A CN114240796B (en) 2021-12-22 2021-12-22 Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN

Publications (2)

Publication Number Publication Date
CN114240796A (en) 2022-03-25
CN114240796B CN114240796B (en) 2024-05-31

Family

ID=80761094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111578218.1A Active CN114240796B (en) 2021-12-22 2021-12-22 Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN

Country Status (2)

Country Link
CN (1) CN114240796B (en)
WO (1) WO2023115915A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115915A1 (en) * 2021-12-22 2023-06-29 山东浪潮科学研究院有限公司 Gan-based remote sensing image cloud removal method and device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252785B (en) * 2023-11-16 2024-03-12 安徽省测绘档案资料馆(安徽省基础测绘信息中心) Cloud removing method based on combination of multisource SAR and optical image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493303A (en) * 2018-05-30 2019-03-19 湘潭大学 A kind of image defogging method based on generation confrontation network
CN111667431A (en) * 2020-06-09 2020-09-15 云南电网有限责任公司电力科学研究院 Method and device for manufacturing cloud and fog removing training set based on image conversion
CN113450261A (en) * 2020-03-25 2021-09-28 江苏翼视智能科技有限公司 Single image defogging method based on condition generation countermeasure network
WO2021248938A1 (en) * 2020-06-10 2021-12-16 南京邮电大学 Image defogging method based on generative adversarial network fused with feature pyramid

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552714B2 (en) * 2018-03-16 2020-02-04 Ebay Inc. Generating a digital image using a generative adversarial network
CN109191400A (en) * 2018-08-30 2019-01-11 中国科学院遥感与数字地球研究所 A method of network, which is generated, using confrontation type removes thin cloud in remote sensing image
CN110322419B (en) * 2019-07-11 2022-10-21 广东工业大学 Remote sensing image defogging method and system
CN111383192B (en) * 2020-02-18 2022-10-18 清华大学 Visible light remote sensing image defogging method fusing SAR
CN113724149B (en) * 2021-07-20 2023-09-12 北京航空航天大学 Weak-supervision visible light remote sensing image thin cloud removing method
CN113744159B (en) * 2021-09-09 2023-10-24 青海大学 Defogging method and device for remote sensing image and electronic equipment
CN114240796B (en) * 2021-12-22 2024-05-31 山东浪潮科学研究院有限公司 Remote sensing image cloud and fog removing method, equipment and storage medium based on GAN

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493303A (en) * 2018-05-30 2019-03-19 湘潭大学 A kind of image defogging method based on generation confrontation network
CN113450261A (en) * 2020-03-25 2021-09-28 江苏翼视智能科技有限公司 Single image defogging method based on condition generation countermeasure network
CN111667431A (en) * 2020-06-09 2020-09-15 云南电网有限责任公司电力科学研究院 Method and device for manufacturing cloud and fog removing training set based on image conversion
WO2021248938A1 (en) * 2020-06-10 2021-12-16 南京邮电大学 Image defogging method based on generative adversarial network fused with feature pyramid

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115915A1 (en) * 2021-12-22 2023-06-29 山东浪潮科学研究院有限公司 Gan-based remote sensing image cloud removal method and device, and storage medium

Also Published As

Publication number Publication date
WO2023115915A1 (en) 2023-06-29
CN114240796B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
CN109977921B (en) Method for detecting hidden danger of power transmission line
CN112380921A (en) Road detection method based on Internet of vehicles
CN114240796A (en) Remote sensing image cloud and fog removing method and device based on GAN and storage medium
CN112837315B (en) Deep learning-based transmission line insulator defect detection method
CN110969166A (en) Small target identification method and system in inspection scene
CN103679674A (en) Method and system for splicing images of unmanned aircrafts in real time
CN111581966A (en) Context feature fusion aspect level emotion classification method and device
CN109598238A (en) Information processing method and device, storage medium and electronic equipment
CN112364699A (en) Remote sensing image segmentation method, device and medium based on weighted loss fusion network
CN109919252A (en) The method for generating classifier using a small number of mark images
CN113569672A (en) Lightweight target detection and fault identification method, device and system
CN112668375B (en) Tourist distribution analysis system and method in scenic spot
CN114758337A (en) Semantic instance reconstruction method, device, equipment and medium
CN110147837B (en) Method, system and equipment for detecting dense target in any direction based on feature focusing
Hegde et al. Uncertainty-aware mean teacher for source-free unsupervised domain adaptive 3d object detection
CN105376563A (en) No-reference three-dimensional image quality evaluation method based on binocular fusion feature similarity
Liao et al. Fusion of infrared-visible images in UE-IoT for fault point detection based on GAN
CN104881684A (en) Stereo image quality objective evaluate method
CN115331012B (en) Joint generation type image instance segmentation method and system based on zero sample learning
CN112288700A (en) Rail defect detection method
CN111402156B (en) Restoration method and device for smear image, storage medium and terminal equipment
Gao et al. Road extraction using a dual attention dilated-linknet based on satellite images and floating vehicle trajectory data
Alshehhi et al. Detection of Martian dust storms using mask regional convolutional neural networks
Ali et al. A novel transfer learning approach to detect the location of transformers in distribution network
CN114494893B (en) Remote sensing image feature extraction method based on semantic reuse context feature pyramid

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant