WO2023115915A1 - GAN-based remote sensing image cloud and fog removal method and device, and storage medium - Google Patents

GAN-based remote sensing image cloud and fog removal method and device, and storage medium

Info

Publication number
WO2023115915A1
Authority
WO
WIPO (PCT)
Prior art keywords
remote sensing
cloud
sensing image
discriminator
fog
Prior art date
Application number
PCT/CN2022/105319
Other languages
English (en)
Chinese (zh)
Inventor
罗清彩
孙善宝
蒋梦梦
张晖
张鑫
于�玲
于晓艳
Original Assignee
山东浪潮科学研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山东浪潮科学研究院有限公司
Publication of WO2023115915A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133: Distances to prototypes
    • G06F18/24137: Distances to cluster centroids
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The present application relates to the field of remote sensing technology, and in particular to a GAN-based remote sensing image cloud and fog removal method, device, and storage medium.
  • Generative adversarial networks (GANs) have in recent years become one of the most important methods for unsupervised learning on complex distributions.
  • A GAN is composed of a generator network (Generator) and a discriminator network (Discriminator). Through mutual, game-like adversarial learning the two networks sample from complex probability distributions and produce high-quality outputs, and this adversarial process completes the training of both neural networks. A minimal sketch of the alternating training loop is given below.
  • GAN technology has accordingly been widely applied in many fields.
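  • The sketch shows one generic conditional-GAN training step (PyTorch); the paired-input discriminator interface and all names are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

def gan_train_step(G, D, cloudy, clear, opt_g, opt_d):
    """One alternating update on a batch of (cloudy, clear) image pairs."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator step: real (cloudy, clear) pairs are pushed toward 1,
    # generated (cloudy, G(cloudy)) pairs toward 0.
    with torch.no_grad():
        fake = G(cloudy)
    d_real = D(cloudy, clear)
    d_fake = D(cloudy, fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: D acts as a fixed critic while G tries to make
    # its outputs score as "real"; opt_g updates only G's parameters.
    d_out = D(cloudy, G(cloudy))
    g_loss = bce(d_out, torch.ones_like(d_out))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```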
  • In recent years, remote sensing technology has been applied ever more widely.
  • Multispectral images and panchromatic images captured by satellites are fused to form remote sensing images with higher spatial and spectral resolution, which can be used to obtain basic geographic data, resource information, and emergency disaster data.
  • Remote sensing offers advantages over other technical means and has been widely used in the national economy and in military fields.
  • However, remote sensing images are easily disturbed by clouds and fog during imaging; the remote sensing information of areas covered by clouds and fog is lost or distorted, which greatly affects the accuracy of remote sensing data and reduces the efficiency of remote sensing applications.
  • Methods based on the spectral characteristics of remote sensing images, such as the haze optimized transformation (HOT) and the background suppressed haze thickness index (BSHTI), can eliminate clouds and fog, but often cannot give satisfactory results.
  • HOT: haze optimized transformation.
  • BSHTI: background suppressed haze thickness index.
  • This application provides a GAN-based method for removing clouds and fog from remote sensing images, which solves the technical problem that collected remote sensing images are inaccurate because of occlusion by clouds and fog.
  • A remote sensing image cloud and fog removal method based on a generative adversarial network (GAN), the GAN including a generator and a discriminator, the method including:
  • dividing the training sets of the cloudy remote sensing data according to visibility level, training the training set of each visibility level sequentially in order of visibility from high to low, fixing the model parameters of the discriminator during training, and inputting the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images;
  • alternately training the generator and the discriminator to generate a de-clouding GAN model corresponding to each visibility level;
  • In some embodiments, training the training sets of the visibility levels sequentially in order of visibility from high to low, fixing the model parameters of the discriminator, inputting the training set corresponding to each visibility level into the generator, and generating the de-clouded remote sensing images specifically includes: dividing the visibility levels into levels L1 to Ln, with visibility decreasing from L1 to Ln; obtaining, in turn, the cloudy remote sensing data training set corresponding to each of the levels L1 to Ln; fixing the model parameters of the discriminator; and inputting the training set RSData-TD of each visibility level into the generator for training, generating the de-clouded remote sensing image RSImg-TD corresponding to each visibility level. A sketch of this level-by-level schedule follows.
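  • A minimal sketch of the schedule, assuming a `train_level_fn` that performs the alternating GAN training on one level's data (all names are illustrative):

```python
import copy

def train_by_visibility(G, D, level_datasets, train_level_fn):
    """level_datasets: per-level training sets RSData-TD ordered L1..Ln
    (visibility decreasing). Each level starts from the previous level's
    weights; a snapshot of (G, D) is kept as that level's model."""
    models = []
    for dataset in level_datasets:
        G, D = train_level_fn(G, D, dataset)  # alternating GAN training, one level
        models.append((copy.deepcopy(G), copy.deepcopy(D)))
    return models
```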
  • In some embodiments, the method further includes: generating (RSData-TD, RSImg-TD) data pairs from the training set RSData-TD and the de-clouded remote sensing images RSImg-TD; generating (RSData-TD, RSReal-TD) data pairs from the training set RSData-TD and the real, clear remote sensing images RSReal-TD; and determining that a (RSData-TD, RSImg-TD) data pair, once input to the discriminator, should output a negative value, while a (RSData-TD, RSReal-TD) data pair should output a positive value.
  • In some embodiments, the method further includes: updating the network parameters of the generator by gradient descent for training, and outputting (RSData-TD, RSImg-TD) data pairs until the discriminator cannot distinguish a (RSData-TD, RSImg-TD) data pair from a (RSData-TD, RSReal-TD) data pair.
  • In some embodiments, inputting the real, clear remote sensing image and the generated de-clouded remote sensing image into the discriminator and training to update the discriminator's parameters, so that the discriminator can distinguish the real, clear remote sensing image from the de-clouded remote sensing image, specifically includes: inputting the real, clear remote sensing image and the generated de-clouded remote sensing image into the discriminator; fixing the network parameters of the generator; training the discriminator and obtaining the error between the real, clear remote sensing image and the generated de-clouded remote sensing image according to the loss function; and backpropagating the error to update the network parameters of the discriminator D, so that after passing through discriminator D a (RSData-TD, RSImg-TD) data pair outputs a negative, low score and a (RSData-TD, RSReal-TD) data pair outputs a positive, high score, enabling the discriminator to effectively distinguish (RSData-TD, RSReal-TD) data pairs from (RSData-TD, RSImg-TD) data pairs. A sketch of this discriminator update follows.
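  • A sketch of the update under the same assumed interfaces: the generator is held fixed, real pairs are pushed toward a high score and generated pairs toward a low score:

```python
import torch
import torch.nn as nn

def update_discriminator(G, D, rsdata, rsreal, opt_d):
    """One discriminator update with the generator held fixed."""
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():                 # generator parameters stay fixed:
        rsimg = G(rsdata)                 # no gradient flows back into G
    d_real = D(rsdata, rsreal)            # (RSData-TD, RSReal-TD) -> high score
    d_fake = D(rsdata, rsimg)             # (RSData-TD, RSImg-TD) -> low score
    loss = bce(d_real, torch.ones_like(d_real)) + \
           bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss.backward()                       # error backpropagated into D only
    opt_d.step()                          # update D's network parameters
    return loss.item()
```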
  • In some embodiments, before the cloudy remote sensing data is obtained, the method further includes: collecting remote sensing data; labeling the remote sensing data and dividing it by cloud area, thickness, and visibility level; performing model training according to the remote sensing data and the labeling results to generate a cloud area detection model; and inputting the output of the cloud area detection model together with cloud-free remote sensing data into a cloud coverage model for training, so that the cloud-free remote sensing data, after passing through the cloud coverage model, yields cloudy remote sensing data.
  • In some embodiments, the method further includes: judging the visibility level of the cloud area according to the cloud area detection model; selecting the de-clouding GAN model of the corresponding level according to the visibility level; generating the de-clouded remote sensing image with that model; and extracting the cloud area, filling the identified cloud area according to the de-clouded remote sensing image, and generating the final remote sensing image.
  • In some embodiments, the method further includes: continuously collecting remote sensing data and optimizing the cloud area detection model and the cloud coverage model to generate a more accurate data set for training the de-clouding GAN model; subdividing the visibility levels according to feedback from the remote sensing image application system and continuously optimizing the de-clouding GAN model to generate more reasonable and accurate de-clouded remote sensing images; and adjusting the algorithms of the remote sensing image application system according to the generated de-clouded remote sensing images, further optimizing the business system based on remote sensing image analysis.
  • A GAN-based remote sensing image cloud and fog removal device, including:
  • the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to:
  • divide the training sets of the cloudy remote sensing data according to visibility level, train the training set of each visibility level sequentially in order of visibility from high to low, fix the model parameters of the discriminator during training, and input the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images;
  • alternately train the generator and the discriminator to generate a de-clouding GAN model corresponding to each visibility level;
  • A non-volatile storage medium storing computer-executable instructions, wherein the computer-executable instructions are set to:
  • divide the training sets of the cloudy remote sensing data according to visibility level, train the training set of each visibility level sequentially in order of visibility from high to low, fix the model parameters of the discriminator during training, and input the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images;
  • alternately train the generator and the discriminator to generate a de-clouding GAN model corresponding to each visibility level;
  • This application provides a method, device, and storage medium for removing clouds and fog from remote sensing images based on GAN, at least including the following beneficial effects: using GAN network and deep learning technology, a GAN model for removing clouds and fog from remote sensing data is constructed, and the model fully takes into account the remote sensing images.
  • GAN network and deep learning technology a GAN model for removing clouds and fog from remote sensing data is constructed, and the model fully takes into account the remote sensing images.
  • Using a GAN makes it possible to better discover the deep connections between cloud and fog occlusion and ground facilities, to generate more reasonable and accurate remote sensing images, and to eliminate the remote sensing information deviation caused by cloud- and fog-occluded areas.
  • The detection model identifies the specific cloud area and determines the cloud thickness and visibility level. On the one hand, this reduces the area in which de-fogged content must be generated and preserves the accuracy of cloud-free areas; on the other hand, it forms targeted models of multiple types for the different visibility levels, so that a more suitable model can be selected for each cloud and fog condition and clouds and fog are removed from the remote sensing image more effectively. Models are selected for training in order of gradually decreasing cloud and fog visibility, with the model of the previous level used as the initial network parameters, so the generator and discriminator converge faster and training efficiency improves.
  • Joint training with the connected application system forms a more accurate and reasonable personalized model that meets the actual business needs of remote sensing image applications; feedback data is continuously collected to optimize the model, further improving its accuracy, and the actual business application system is optimized, yielding an overall better business system based on remote sensing image analysis.
  • Fig. 1 is a schematic diagram of the steps of a GAN-based remote sensing image cloud and fog removal method provided by an embodiment of the present application;
  • Fig. 2 is a training diagram of the de-clouding GAN model provided by an embodiment of the present application;
  • Fig. 3 is a composition diagram of a device for the GAN-based remote sensing image cloud and fog removal method provided by an embodiment of the present application.
  • In the embodiments of the present application, a remote sensing image cloud and fog removal model based on generative adversarial networks is designed: the generator and the discriminator in the model are trained alternately to form basic models for the different levels, and the remote sensing image application system is connected to realize interactive retraining, forming a more accurate and reasonable personalized model that meets the actual business needs of remote sensing image applications.
  • Combining generative adversarial network technology with the spectral characteristics of remote sensing images, cloud and fog areas can be identified effectively; generative adversarial network technology is used to design remote sensing image de-clouding models and generate de-clouded remote sensing images, eliminating the remote sensing information deviation in cloud- and fog-occluded areas and improving the efficiency of remote sensing data applications. A detailed description is given below.
  • Fig. 1 is a schematic diagram of the steps of the GAN-based remote sensing image cloud and fog removal method provided by an embodiment of the present application, which may include the following steps:
  • S101: Acquire remote sensing data with clouds and fog, and classify the visibility levels of the cloudy remote sensing data.
  • In order to train the GAN, it is necessary first to generate a training set, then to train on that set with the GAN, and then to connect the application system for joint training, generating remote sensing images for different businesses, as shown in Fig. 2.
  • In some embodiments, before the cloudy remote sensing data is obtained, remote sensing data is collected. The remote sensing data RS-Data is multi-channel data formed from multispectral sensing acquisition; its visible-light channels are combined with panchromatic images to form remote sensing images.
  • The remote sensing data is labeled, dividing it by cloud area, thickness, and visibility level; remote sensing data of the same area under different weather conditions is also labeled. Model training is then performed according to the remote sensing data and the labeling results to generate a cloud area detection model.
  • The cloud area detection model CL-Det is responsible for target detection of cloud areas in the remote sensing data, identifying the specific cloud area and determining the cloud thickness and visibility level.
  • The output of the cloud area detection model and the cloud-free remote sensing data are input into the cloud coverage model for training, so that cloud-free remote sensing data passed through the cloud coverage model yields cloudy remote sensing data.
  • The cloud coverage model CL-Cov takes remote sensing data without cloud cover and, according to the set cloud area and thickness level, adds cloud coverage to it.
  • The cloud area detection model CL-Det is used to judge the accuracy of the cloudy remote sensing data generated by the cloud coverage model CL-Cov, and CL-Cov is optimized and adjusted accordingly; the training set TD is constructed from the labeled data and the generated data. A sketch of this data-preparation pipeline follows.
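  • A sketch of the pipeline, assuming callable CL-Det and CL-Cov models with the illustrative interfaces shown (the patent specifies only the roles of the two models):

```python
def build_training_set(cloud_free_data, cl_det, cl_cov, levels):
    """Build the training set TD from cloud-free data: CL-Cov adds synthetic
    cloud cover at a requested area/thickness level, and CL-Det re-detects the
    generated sample so inconsistent samples can be dropped."""
    td = []
    for clear in cloud_free_data:
        for level in levels:
            cloudy = cl_cov(clear, level)           # add cloud/fog cover
            detected = cl_det(cloudy)               # area, thickness, level
            if detected.visibility_level == level:  # keep consistent samples
                td.append((cloudy, clear, level))
    return td
```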
  • S102: Divide the training sets of the cloudy remote sensing data according to visibility level, train the training set of each visibility level sequentially in order of visibility from high to low, fix the model parameters of the discriminator, and input the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images.
  • The visibility levels are divided into levels L1 to Ln, with visibility decreasing from L1 to Ln, and the cloudy remote sensing data training sets corresponding to the visibility levels L1 to Ln are acquired in turn.
  • The core of RS-M-TD (L1-Ln) is the GAN, which includes two parts, the generator G and the discriminator D, and forms remote sensing image de-clouding GAN models for multiple visibility levels according to the different cloud thicknesses and visibilities.
  • The model parameters of the discriminator are fixed, the training set RSData-TD of each visibility level is input into the generator G for training, and the de-clouded remote sensing image RSImg-TD corresponding to each visibility level is generated.
  • The core of the generator G of the de-clouding GAN model is a convolutional neural network (CNN), which takes remote sensing data under cloud and fog conditions as input and generates a clear, de-clouded remote sensing image. A minimal generator sketch follows.
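  • A minimal encoder-decoder sketch of such a generator; the patent states only that the core of G is a CNN, so the depth, channel count, and activations below are assumptions:

```python
import torch.nn as nn

class DeCloudGenerator(nn.Module):
    """Minimal encoder-decoder CNN generator: cloudy remote sensing data in,
    de-clouded image of the same shape out."""
    def __init__(self, channels=4):  # channel count of RS-Data is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, cloudy):
        return self.net(cloudy)
```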
  • (RSData-TD, RSImg-TD) data pairs are generated from the training set RSData-TD and the de-clouded remote sensing images RSImg-TD; (RSData-TD, RSReal-TD) data pairs are generated from the training set RSData-TD and the real, clear remote sensing images RSReal-TD. A (RSData-TD, RSImg-TD) data pair input to the discriminator should output a negative value, and a (RSData-TD, RSReal-TD) data pair should output a positive value.
  • The network parameters of the generator are updated by gradient descent for training, and (RSData-TD, RSImg-TD) data pairs are output until the discriminator cannot distinguish a (RSData-TD, RSImg-TD) data pair from a (RSData-TD, RSReal-TD) data pair.
  • S103: Input the real, clear remote sensing image and the generated de-clouded remote sensing image into the discriminator, and train and update the discriminator's parameters so that the discriminator can distinguish the real, clear remote sensing image from the de-clouded remote sensing image.
  • The core of the de-clouding GAN model's discriminator D is a binary classifier, used to distinguish the real, clear remote sensing image from the de-clouded remote sensing image generated by the generator G: given the remote sensing data and an image, it outputs a discriminant value that effectively distinguishes whether the image is a real remote sensing image or one generated by the generator G. A minimal discriminator sketch follows.
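  • A minimal sketch of such a pair-scoring binary classifier; the architecture below is an assumption:

```python
import torch
import torch.nn as nn

class DeCloudDiscriminator(nn.Module):
    """Minimal binary classifier that scores a (remote sensing data, image)
    pair: high for a real clear image, low for a generated one."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),  # single logit: real vs. generated
        )

    def forward(self, data, image):
        return self.net(torch.cat([data, image], dim=1))  # pair scored jointly
```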
  • S104: Alternately train the generator and the discriminator to generate a de-clouding GAN model corresponding to each visibility level.
  • S105: Have the de-clouding GAN model interact with the remote sensing image application system, update the parameters of the generator after receiving feedback, and generate a personalized de-clouding GAN model corresponding to the remote sensing image application system.
  • According to the cloud area detection model, the visibility level of the cloud area is judged; the de-clouding GAN model of the corresponding level is selected according to the visibility level; the de-clouded remote sensing image is generated with that model; and the cloud area is extracted, the identified cloud area is filled according to the de-clouded remote sensing image, and the final remote sensing image is generated. A sketch of this application flow follows.
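  • A sketch of the flow, assuming a tensor input and a detection result that exposes a boolean cloud mask and a visibility level (all interfaces are illustrative):

```python
import torch

def remove_clouds(rs_data, cl_det, generators_by_level):
    """rs_data: (C, H, W) tensor. cl_det is assumed to return an object with a
    boolean (H, W) `mask` of the cloud area and a `visibility_level`."""
    region = cl_det(rs_data)
    G = generators_by_level[region.visibility_level]    # per-level model
    with torch.no_grad():
        declouded = G(rs_data.unsqueeze(0)).squeeze(0)  # add/remove batch dim
    result = rs_data.clone()                            # cloud-free areas stay as-is
    result[:, region.mask] = declouded[:, region.mask]  # fill only the cloud area
    return result
```

  • Filling only the masked region keeps the measured data of cloud-free areas untouched, which is the accuracy guarantee the description above attributes to limiting generation to the detected cloud area.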
  • Remote sensing data is continuously collected, and the cloud area detection model and the cloud coverage model are optimized, generating a more accurate data set with which to train the de-clouding GAN model.
  • An initial value is provided for the joint training module CUST-M connected to the application system, so that CUST-M can generate de-clouded remote sensing images that fit the business and provide them to the remote sensing image application system. Then, according to feedback from the remote sensing image application system, the visibility levels are refined and the de-clouding GAN model is continuously optimized to generate more reasonable and accurate de-clouded remote sensing images. The generated de-clouded remote sensing images are fed back to the discriminator of the basic remote sensing image de-clouding GAN model in CUST-M, which in turn feeds the results back to the generator G of that basic model; the algorithms of the remote sensing image application system are adjusted, further optimizing the business system based on remote sensing image analysis.
  • An embodiment of the present application also provides a corresponding GAN-based remote sensing image cloud and fog removal device, as shown in Fig. 3.
  • This embodiment provides a GAN-based remote sensing image cloud removal device, including:
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
  • divide the training sets of the cloudy remote sensing data according to visibility level, train the training set of each visibility level sequentially in order of visibility from high to low, fix the model parameters of the discriminator, and input the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images;
  • Some embodiments of the present application also provide a storage medium corresponding to the above method.
  • Some embodiments of the present application provide a GAN-based remote sensing image cloud and fog removal storage medium, which stores computer-executable instructions, the computer-executable instructions being set to:
  • divide the training sets of the cloudy remote sensing data according to visibility level, train the training set of each visibility level sequentially in order of visibility from high to low, fix the model parameters of the discriminator, and input the training set corresponding to each visibility level into the generator to generate de-clouded remote sensing images;
  • Each embodiment in the present application is described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.
  • For the device and medium embodiments in particular, the description is relatively simple, and for the relevant parts reference may be made to the descriptions of the method embodiments.
  • The media provided in the embodiments of the present application correspond to the methods one to one; therefore, the media also have beneficial technical effects similar to those of their corresponding methods. Since the beneficial technical effects of the methods have been described in detail above, they will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in the present application are a GAN-based remote sensing image cloud removal method and device, and a storage medium. The method comprises: dividing visibility levels corresponding to acquired cloudy remote sensing data; dividing training sets corresponding to the cloudy remote sensing data according to the visibility levels; training the training set of each visibility level in turn, in order of visibility from high to low, fixing the model parameters of a discriminator, and inputting the training set corresponding to each visibility level into a generator so as to generate a de-clouded remote sensing image; inputting a real, clear remote sensing image and the generated de-clouded remote sensing image into the discriminator, so that the discriminator can distinguish the real, clear remote sensing image from the de-clouded remote sensing image; alternately training the generator and the discriminator so as to generate a cloud-removal GAN model corresponding to each visibility level; and having the cloud-removal GAN model interact with a remote sensing image application system so as to obtain feedback, updating the parameters of the generator, and generating a personalized cloud-removal GAN model corresponding to the remote sensing image application system.
PCT/CN2022/105319 2021-12-22 2022-07-13 GAN-based remote sensing image cloud and fog removal method and device, and storage medium WO2023115915A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111578218.1A CN114240796B (zh) 2021-12-22 2021-12-22 A GAN-based remote sensing image cloud and fog removal method, device, and storage medium
CN202111578218.1 2021-12-22

Publications (1)

Publication Number Publication Date
WO2023115915A1 (fr)

Family

ID=80761094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105319 WO2023115915A1 (fr) 2021-12-22 2022-07-13 GAN-based remote sensing image cloud and fog removal method and device, and storage medium

Country Status (2)

Country Link
CN (1) CN114240796B (fr)
WO (1) WO2023115915A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240796B (zh) * 2021-12-22 2024-05-31 山东浪潮科学研究院有限公司 A GAN-based remote sensing image cloud and fog removal method, device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493303B (zh) * 2018-05-30 2021-08-17 湘潭大学 An image dehazing method based on a generative adversarial network
CN113450261A (zh) * 2020-03-25 2021-09-28 江苏翼视智能科技有限公司 A single-image dehazing method based on a conditional generative adversarial network
CN111667431B (zh) * 2020-06-09 2023-04-14 云南电网有限责任公司电力科学研究院 A method and device for producing a cloud and fog removal training set based on image conversion
CN111738942A (zh) * 2020-06-10 2020-10-02 南京邮电大学 A generative adversarial network image dehazing method fusing a feature pyramid

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190286950A1 (en) * 2018-03-16 2019-09-19 Ebay Inc. Generating a digital image using a generative adversarial network
CN109191400A (zh) * 2018-08-30 2019-01-11 中国科学院遥感与数字地球研究所 A method for removing thin clouds from remote sensing images using a generative adversarial network
CN110322419A (zh) * 2019-07-11 2019-10-11 广东工业大学 A remote sensing image dehazing method and system
CN111383192A (zh) * 2020-02-18 2020-07-07 清华大学 A SAR-fused visible-light remote sensing image dehazing method
CN113724149A (zh) * 2021-07-20 2021-11-30 北京航空航天大学 A weakly supervised thin-cloud removal method for visible-light remote sensing images
CN113744159A (zh) * 2021-09-09 2021-12-03 青海大学 A remote sensing image dehazing method and device, and electronic equipment
CN114240796A (zh) * 2021-12-22 2022-03-25 山东浪潮科学研究院有限公司 A GAN-based remote sensing image cloud and fog removal method, device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252785A (zh) * 2023-11-16 2023-12-19 安徽省测绘档案资料馆(安徽省基础测绘信息中心) A cloud removal method based on joint multi-source SAR and optical imagery
CN117252785B (zh) * 2023-11-16 2024-03-12 安徽省测绘档案资料馆(安徽省基础测绘信息中心) A cloud removal method based on joint multi-source SAR and optical imagery

Also Published As

Publication number Publication date
CN114240796B (zh) 2024-05-31
CN114240796A (zh) 2022-03-25

Similar Documents

Publication Publication Date Title
WO2023115915A1 (fr) GAN-based remote sensing image cloud and fog removal method and device, and storage medium
CN110163110B (zh) A person re-identification method based on transfer learning and deep feature fusion
Sameen et al. Classification of very high resolution aerial photos using spectral‐spatial convolutional neural networks
Yang et al. St3d++: Denoised self-training for unsupervised domain adaptation on 3d object detection
US11593610B2 (en) Airport noise classification method and system
CN110379020B (zh) A laser point cloud colorization method and device based on a generative adversarial network
CN110555390A (zh) Pedestrian re-identification method, device, and medium based on semi-supervised training
US11928957B2 (en) Audiovisual secondary haptic signal reconstruction method based on cloud-edge collaboration
Jaus et al. Panoramic panoptic segmentation: Towards complete surrounding understanding via unsupervised contrastive learning
CN109919252A (zh) Method for generating a classifier using a small number of labeled images
CN114758337B (zh) A semantic instance reconstruction method, device, equipment, and medium
CN114612835A (zh) A UAV target detection model based on the YOLOv5 network
CN111310821A (zh) Multi-view feature fusion method, system, computer device, and storage medium
KR20200075940A (ko) Real-time data set expansion and generation system, real-time data set expansion and generation method, and computer-readable recording medium storing a program for executing the same
WO2023097944A1 (fr) Bronchoscope position determination method and apparatus, system, device, and medium
CN115331012A (zh) Joint generative image instance segmentation method and system based on zero-shot learning
Choi Traffic map prediction using UNet based deep convolutional neural network
CN115019163A (zh) Urban element identification method based on multi-source big data
CN116362318B (zh) Pure-vision 3D object detection method and system based on adaptive depth correction
KR102014288B1 (ko) Artificial intelligence-based development pressure prediction method using a drone
CN115909255B (zh) Image generation and image segmentation methods, apparatuses, devices, vehicle-mounted terminal, and medium
Liu et al. A novel deep transfer learning method for sar and optical fusion imagery semantic segmentation
CN113505834A (zh) Method for training a detection model, determining image update information, and updating a high-precision map
Kalitsios et al. Enhancing power line segmentation for uav inspection utilizing synthetic data
JP2023528530A (ja) Training apparatus, control method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909246

Country of ref document: EP

Kind code of ref document: A1