CN114973009A - Cloud removing method and device suitable for global remote sensing image and computer equipment - Google Patents

Cloud removing method and device suitable for global remote sensing image and computer equipment Download PDF

Info

Publication number
CN114973009A
CN114973009A (Application No. CN202210514667.8A)
Authority
CN
China
Prior art keywords
cloud
remote sensing
sensing image
image data
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210514667.8A
Other languages
Chinese (zh)
Inventor
陶益康
黄建华
宋杰
胡辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruhr Technology Co Ltd
Original Assignee
Hangzhou Ruhr Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruhr Technology Co Ltd filed Critical Hangzhou Ruhr Technology Co Ltd
Priority to CN202210514667.8A priority Critical patent/CN114973009A/en
Publication of CN114973009A publication Critical patent/CN114973009A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/13 Satellite images
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 Combinations of networks
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V 10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a cloud removing method, a cloud removing device and computer equipment suitable for global remote sensing images. The method comprises the following steps: training and generating a local cloud removal model based on remote sensing images of a local area; acquiring remote sensing image data of the area to be cloud-removed, and processing it to generate first image data; equally dividing the area to be cloud-removed based on a homogeneity area parameter to obtain a set of sub-regions; generating a training data set for each sub-region based on the first image data according to the model structure; and fine-tuning the local cloud removal model based on the training data set of each sub-region to generate a cloud removal model adapted to each sub-region. By implementing the method provided by the embodiment of the invention, clouds can be removed from global remote sensing images, the method adapts to the environments of different regions, and the accuracy of cloud removal from global remote sensing images is improved.

Description

Cloud removing method and device suitable for global remote sensing image and computer equipment
Technical Field
The invention relates to the technical field of image processing, and in particular to a cloud removing method, device and computer equipment suitable for global remote sensing images.
Background
With the development of remote sensing technology, high-resolution remote sensing images are being applied ever more widely. Remote sensing images are, however, highly susceptible to the weather environment, and cloud cover is one of the most common influences. Removing clouds from remote sensing images is the basis for their accurate interpretation, and the cloud removal effect directly determines the usability of the images. Cloud removal from remote sensing images is therefore of great significance.
A large number of technical methods have been developed for the cloud removal problem, among which deep learning models perform particularly well: for example, generating imagery for cloud-covered regions by combining multi-source data with a generative adversarial network (GAN), or recovering cloud-covered regions with a residual network (ResNet). These methods mainly focus on continuously optimizing the network structure on a fixed data set in order to improve accuracy.
However, existing deep-learning cloud removal models are usually trained on a data set from a single spatial area. Combined with the imaging errors of remote sensing instruments and the complexity of the earth's surface, models developed with existing methods are only suitable for local cloud removal and cannot work effectively across different areas, or across areas whose ground-object types differ significantly.
Therefore, it is necessary to design a new method suitable for global cloud removal from remote sensing images; this is a problem to be solved in the art.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a cloud removing method, device and computer equipment suitable for global remote sensing images.
To achieve this purpose, the invention adopts the following technical scheme: the cloud removing method suitable for global remote sensing images comprises the following steps:
training and generating a local cloud removal model based on remote sensing images of a local area;
acquiring remote sensing image data of the area to be cloud-removed, and processing the remote sensing image data of the area to be cloud-removed to generate first image data;
equally dividing the area to be cloud-removed based on a homogeneity area parameter to obtain a set of sub-regions;
generating a training data set for each sub-region based on the first image data according to the model structure;
and fine-tuning the local cloud removal model based on the training data set of each sub-region to generate a cloud removal model adapted to each sub-region.
Further, the acquiring of remote sensing image data of the area to be cloud-removed and the processing of that data to generate first image data include:
acquiring remote sensing image data of the area to be cloud-removed, and identifying the cloud-containing remote sensing images in the remote sensing image data of the area to be cloud-removed;
masking the cloud-containing regions in the cloud-containing remote sensing images to obtain second image data;
generating a first cloud-free remote sensing image based on the second image data;
and forming the first image data from the remote sensing image data of the area to be cloud-removed and the first cloud-free remote sensing image.
Further, the homogeneity area parameter is set such that the regional environment within each divided area is relatively consistent and its remote sensing images show no significant difference in hue or radiometric measurement.
Further, the first cloud-free remote sensing image is generated from the second image data by a convolutional neural network.
The invention also provides a cloud removing device suitable for global remote sensing images, which comprises:
a local model training unit, configured to train and generate a local cloud removal model based on remote sensing images of a local area;
a preprocessing unit, configured to acquire remote sensing image data of the area to be cloud-removed and process it to generate first image data;
a dividing unit, configured to equally divide the area to be cloud-removed based on a homogeneity area parameter to obtain a set of sub-regions;
a training data unit, configured to generate a training data set for each sub-region based on the first image data according to the model structure;
and an adjusting unit, configured to fine-tune the local cloud removal model based on the training data set of each sub-region to generate a cloud removal model adapted to each sub-region.
Further, the preprocessing unit includes:
an identification unit, configured to acquire remote sensing image data of the area to be cloud-removed and identify the cloud-containing remote sensing images in the remote sensing image data of the area to be cloud-removed;
a masking unit, configured to mask the cloud-containing regions in the cloud-containing remote sensing images to obtain second image data;
a generating unit, configured to generate a first cloud-free remote sensing image based on the second image data;
and an integration unit, configured to combine the remote sensing image data of the area to be cloud-removed and the first cloud-free remote sensing image into the first image data.
Further, the homogeneity area parameter is set such that the regional environment within each divided area is relatively consistent and its remote sensing images show no significant difference in hue or radiometric measurement.
Further, the first cloud-free remote sensing image is generated from the second image data by a convolutional neural network.
The invention also provides computer equipment, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the above cloud removing method suitable for global remote sensing images.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above cloud removing method suitable for global remote sensing images.
Compared with the prior art, the invention has the following beneficial effects: the area to be cloud-removed is divided equally by the homogeneity area parameter, a training data set is generated for each sub-region, and a cloud removal model adapted to each sub-region is generated, so that clouds can be removed from global remote sensing images, the method adapts to the environments of different regions, and the cloud removal accuracy for global remote sensing images is improved. Meanwhile, since the local cloud removal model is trained in advance on remote sensing images of a local area and only parameter fine-tuning is needed to produce the cloud removal model for each sub-region, the efficiency of generating the sub-region models is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a global remote sensing image cloud removing method according to an embodiment of the present invention;
fig. 2 is a schematic sub-flow diagram of a global remote sensing image cloud removing method according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a global remote sensing image cloud removing device according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of the preprocessing unit of the global remote sensing image cloud removing device according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Fig. 1 is a schematic flow chart of a cloud removing method for a global remote sensing image according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps S110 to S150.
S110, training and generating a local cloud removing model based on the remote sensing image of the local area;
In the present embodiment, a local cloud removal model is first generated based on remote sensing data of a local area. A local area here refers to a single, spatially uniform area; the deep learning model is trained with remote sensing data of this small-range area to obtain a cloud removal model suited to the local area.
Specifically, the remote sensing images need to be acquired first. A remote sensing image is multi-channel data collected by multispectral sensing, formed by combining the visible-band data with a panchromatic (full-color) image. When collecting remote sensing images of the local area, remote sensing data of the same area under different weather conditions are collected and annotated. The annotated data are further processed, for example by cloud detection, to train the cloud removal model until a local cloud removal model of adequate accuracy is obtained.
The local cloud removal model may be a GAN model, a ResNet model, or another existing local cloud removal model; the invention is not limited in this respect.
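As one concrete (though not patent-specified) way to realize the combination of visible-band data with a full-color image mentioned above, the Brovey transform scales each multispectral band by the ratio of the panchromatic intensity to the mean of the bands. The sketch below is an illustrative numpy implementation; the function name and all array sizes are assumptions for the example.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Fuse multispectral bands with a panchromatic image (Brovey transform).

    ms:  (H, W, bands) multispectral image, already resampled to pan's grid.
    pan: (H, W) panchromatic (full-color) image.
    Each band is scaled by pan / per-pixel band intensity, so the output
    keeps the spectral ratios of ms but the spatial detail of pan.
    """
    intensity = ms.mean(axis=2, keepdims=True)     # per-pixel band average
    return ms * (pan[..., None] / (intensity + eps))

# Tiny toy inputs: flat bands at 0.2 fused with a flat pan image at 0.4.
ms = np.full((2, 2, 3), 0.2)
pan = np.full((2, 2), 0.4)
sharp = brovey_pansharpen(ms, pan)
```

With flat inputs the ratio pan/intensity is 2.0, so every output band becomes 0.4, matching the panchromatic brightness while preserving the (equal) band ratios.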
S120, obtaining remote sensing image data of an area to be cloud removed, and processing the remote sensing image data of the area to be cloud removed to generate first image data;
the invention aims to generate a model suitable for global remote sensing image cloud removal so as to overcome the influence of factors such as earth surface complexity and the like. For the local cloud removing model, the remote sensing image acquired by the local small area has a good cloud removing effect, but for other areas with large ground object type difference, the effect is poor. Therefore, when the areas with large ground object type difference are subjected to cloud removal, the remote sensing image data of the areas to be cloud removed are collected firstly. And processing the acquired remote sensing image data of the to-be-cloud-removed area to obtain first image data.
In an embodiment, referring to fig. 2, step S120 may include steps S121 to S124.
S121, obtaining remote sensing image data of an area to be cloud removed, and identifying a cloud-containing remote sensing image in the remote sensing image data of the area to be cloud removed;
The acquired remote sensing image data of the area to be cloud-removed contains both cloud-containing and cloud-free remote sensing images. The cloud-containing remote sensing images are therefore identified by running cloud detection on the remote sensing image data of the area to be cloud-removed; cloud detection methods such as Fmask or Tmask may be adopted, and the choice is not limited here.
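Fmask and Tmask are full multi-test algorithms; purely as a hedged illustration of the idea, the sketch below flags pixels that are both bright and spectrally flat ("white"), the simplest cloud cue. The thresholds, band count and array shapes are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_cloud_mask(image, brightness_thresh=0.6, whiteness_thresh=0.1):
    """Flag pixels as cloud when they are bright and spectrally flat.

    image: float array of shape (H, W, bands), reflectance scaled to [0, 1].
    Returns a boolean (H, W) mask, True where cloud is suspected.
    """
    mean = image.mean(axis=2)                       # per-pixel brightness
    # Whiteness: mean absolute deviation of the bands from their average;
    # clouds reflect roughly equally in all visible bands, so it is small.
    whiteness = np.abs(image - mean[..., None]).mean(axis=2)
    return (mean > brightness_thresh) & (whiteness < whiteness_thresh)

# A 4x4 toy scene with a bright, spectrally flat 2x2 "cloud" patch.
scene = np.full((4, 4, 3), 0.2)
scene[:2, :2, :] = 0.9
mask = detect_cloud_mask(scene)
```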
S122, masking a cloud-containing region in the cloud-containing remote sensing image to obtain second image data;
In this embodiment, the cloud-containing regions of each identified cloud-containing remote sensing image need to be masked. Masking means using a selected image, graphic or object to block out the cloud-containing region of the cloud-containing remote sensing image being processed. After the cloud-containing region is masked, the second image data corresponding to the cloud-containing remote sensing image is generated.
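The masking step S122 can be sketched as follows, assuming the image is a numpy array and the cloud mask comes from the detection step; the zero fill value and array shapes are illustrative choices.

```python
import numpy as np

def mask_cloud_region(image, cloud_mask, fill_value=0.0):
    """Blank out the cloud-covered pixels of a multi-band image.

    image: (H, W, bands) array; cloud_mask: boolean (H, W) array.
    Returns the masked image (the "second image data") together with the
    mask, which downstream generation needs to know which pixels to restore.
    """
    masked = image.copy()
    masked[cloud_mask] = fill_value      # broadcasts across all bands
    return masked, cloud_mask

img = np.full((3, 3, 4), 0.5)
cloud = np.zeros((3, 3), dtype=bool)
cloud[0, :2] = True                      # two cloud pixels in the top row
second, m = mask_cloud_region(img, cloud)
```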
S123, generating a first cloud-free remote sensing image based on the second image data;
Specifically, a corresponding first cloud-free remote sensing image is generated from the masked second image data by a neural network. The generation network may be a convolutional neural network (CNN), which takes the second image data as input and outputs the cloud-free remote sensing image.
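The patent generates the first cloud-free image with a CNN. As a minimal stand-in that keeps the pipeline runnable without a trained network, the sketch below fills masked pixels with each band's mean over the clear pixels; this is only a placeholder for the CNN generator, not the patented method.

```python
import numpy as np

def naive_inpaint(masked_image, cloud_mask):
    """Fill masked (cloud) pixels with each band's mean over clear pixels.

    A deliberately naive substitute for the CNN generation network: it
    shows the interface (second image data in, cloud-free image out)
    rather than the learned reconstruction.
    """
    out = masked_image.copy()
    clear = ~cloud_mask
    for b in range(out.shape[2]):
        band = out[..., b]               # view into `out`
        band[cloud_mask] = band[clear].mean()
    return out

img = np.full((4, 4, 2), 0.3)
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
masked = img.copy()
masked[mask] = 0.0                       # second image data
restored = naive_inpaint(masked, mask)   # first cloud-free image (stand-in)
```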
And S124, forming first image data by the remote sensing image data of the to-be-cloud-removed area and the first non-cloud remote sensing image.
As described above, remote sensing image data of the area to be cloud-removed in which no cloud is identified requires no further processing. Once the first cloud-free remote sensing images corresponding to the cloud-containing remote sensing images are obtained, they are integrated with the original remote sensing image data of the area to be cloud-removed, including both the cloud-containing and the cloud-free remote sensing images, to jointly form the first image data.
S130, equally dividing the region to be subjected to cloud removal based on the homogeneity area parameter to obtain a sub-region set;
Since the cloud removal effect on remote sensing images is closely related to the regional environment, and the regional environments within the global area to be cloud-removed may differ enormously, the area to be cloud-removed is divided equally into several sub-regions such that the environment within each sub-region is relatively consistent and its remote sensing images show no significant difference in hue or radiometric measurement. To this end, a homogeneity area parameter is set, and the area to be cloud-removed is divided by this parameter into several sub-regions of equal area. The homogeneity area parameter may be set empirically and is not limited here.
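Assuming the homogeneity area parameter is expressed as a cell side length in degrees (an illustrative choice; the patent does not fix its units), the equal division of S130 can be sketched as:

```python
def divide_into_subregions(bounds, cell_deg):
    """Split a geographic bounding box into equal square sub-regions.

    bounds:   (min_lon, min_lat, max_lon, max_lat) in degrees.
    cell_deg: side length of each sub-region, playing the role of the
              homogeneity area parameter (set empirically).
    Returns a list of (min_lon, min_lat, max_lon, max_lat) cells; border
    cells are clipped to the bounding box.
    """
    min_lon, min_lat, max_lon, max_lat = bounds
    cells = []
    lat = min_lat
    while lat < max_lat:
        lon = min_lon
        while lon < max_lon:
            cells.append((lon, lat,
                          min(lon + cell_deg, max_lon),
                          min(lat + cell_deg, max_lat)))
            lon += cell_deg
        lat += cell_deg
    return cells

# A 10x10 degree area with a 5-degree parameter yields 4 equal sub-regions.
cells = divide_into_subregions((0.0, 0.0, 10.0, 10.0), 5.0)
```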
S140, generating a training data set for each sub-region based on the first image data according to a model structure;
In order to generate a cloud removal model suitable for each sub-region, the invention first generates a training data set corresponding to each sub-region. Specifically, the first image data is divided according to the extent of each sub-region, cutting the first image data into a number of small-sized images.
In addition, since the training data required by different deep learning models are not exactly the same, the invention further preprocesses the small-sized images of each sub-region according to the structure of the deep learning model, so as to fit different model structures.
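Cutting one sub-region's first image data into model-sized training patches, as S140 describes, might look like the following; the 128-pixel patch size is an illustrative assumption tied to a hypothetical model input, not a value from the patent.

```python
import numpy as np

def crop_to_patches(image, patch):
    """Cut a sub-region's image into fixed-size square training patches.

    Incomplete border tiles are dropped; in practice the patch size is
    dictated by the deep learning model's input structure.
    """
    h, w = image.shape[:2]
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

# A 257x512 image yields a 2x4 grid of 128-pixel patches (borders dropped).
tiles = crop_to_patches(np.zeros((257, 512, 3)), 128)
```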
S150, fine adjustment is carried out on the local cloud removing model based on the training data set of each sub-area, and a cloud removing model suitable for each sub-area is generated.
In this embodiment, the training data set of each sub-region is input into the previously generated local cloud removal model, which is further trained to improve its cloud removal effect on remote sensing images of the corresponding sub-region; the resulting model, adapted to that sub-region, meets the cloud removal requirements of the global remote sensing image.
Taking the GAN model as an example, a GAN consists of a generator and a discriminator. First, the cloud-containing remote sensing images in each sub-region's training data set are input into the generator to generate second cloud-free remote sensing images. Each second cloud-free remote sensing image is then paired with a real cloud-free remote sensing image and input into the discriminator for judgment. The error between the second cloud-free remote sensing image and the real cloud-free remote sensing image is obtained through a loss function and back-propagated to update the discriminator's network parameters. The generator is in turn trained on the discriminator's judgment of the second cloud-free remote sensing images, pushing it to produce higher-quality samples. Finally, the generator and discriminator are trained alternately until the generator can produce realistic second cloud-free remote sensing images while the discriminator can still distinguish real inputs from generated ones.
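The alternating generator/discriminator updates described above can be illustrated with a deliberately tiny 1-D toy GAN, where "images" are scalars and the real cloud-free data cluster around 3.0. Every size, learning rate and distribution here is an illustrative choice; only the alternating update scheme mirrors the text, not the patent's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

real = rng.normal(3.0, 0.1, size=1000)    # real cloud-free samples
a, b = 0.1, 0.0                           # generator g(z) = a*z + b
w, v = 0.1, 0.0                           # discriminator D(x) = sigmoid(w*x + v)
lr = 0.02

for step in range(200):
    z = rng.normal(size=64)
    x_real = rng.choice(real, size=64)
    x_fake = a * z + b                    # "second cloud-free" samples

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. the back-propagated error updates the discriminator parameters.
    d_real = sigmoid(w * x_real + v)
    d_fake = sigmoid(w * x_fake + v)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake).mean()
    v += lr * ((1 - d_real) - d_fake).mean()

    # Generator step: ascend log D(fake) (non-saturating loss), so the
    # discriminator's feedback pushes the fakes toward the real data.
    d_fake = sigmoid(w * x_fake + v)
    a += lr * ((1 - d_fake) * w * z).mean()
    b += lr * ((1 - d_fake) * w).mean()

# After alternating training, generated samples have drifted from 0 toward 3.
fake_mean = (a * rng.normal(size=1000) + b).mean()
```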
Because the local cloud removal model has been generated in advance, it only needs to be fine-tuned with the training data set of each sub-region, which improves the training efficiency of the global cloud removal model.
According to the cloud removing method suitable for global remote sensing images provided above, the area to be cloud-removed is divided equally by the homogeneity area parameter, a training data set is then generated for each sub-region, and a cloud removal model adapted to each sub-region is generated, so that clouds are removed from global remote sensing images, the method adapts to the environments of different regions, and the cloud removal accuracy for global remote sensing images is improved. Meanwhile, since the local cloud removal model is trained in advance on remote sensing images of a local area and only parameter fine-tuning is needed to produce the cloud removal model for each sub-region, the efficiency of generating the sub-region models is improved.
Fig. 3 is a schematic block diagram of a global remote sensing image cloud removing device according to an embodiment of the present invention. As shown in fig. 3, the present invention also provides a cloud removing device suitable for global remote sensing images, corresponding to the above cloud removing method suitable for global remote sensing images. The global remote sensing image cloud removing device comprises a unit for executing the global remote sensing image cloud removing method, and the device can be configured in a server. Specifically, referring to fig. 3, the cloud removing device for global remote sensing images includes a local model training unit 301, a preprocessing unit 302, a dividing unit 303, a training data unit 304, and an adjusting unit 305.
The local model training unit is configured to train and generate a local cloud removal model based on remote sensing images of a local area.
In the present embodiment, a local cloud removal model is first generated based on remote sensing data of a local area. A local area here refers to a single, spatially uniform area; the deep learning model is trained with remote sensing data of this small-range area to obtain a cloud removal model suited to the local area.
Specifically, the remote sensing images need to be acquired first. A remote sensing image is multi-channel data collected by multispectral sensing, formed by combining the visible-band data with a panchromatic (full-color) image. When collecting remote sensing images of the local area, remote sensing data of the same area under different weather conditions are collected and annotated. The annotated data are further processed, for example by cloud detection, to train the cloud removal model until a local cloud removal model of adequate accuracy is obtained.
The local cloud removal model may be a GAN model, a ResNet model, or another existing local cloud removal model; the invention is not limited in this respect.
The preprocessing unit is configured to acquire remote sensing image data of the area to be cloud-removed and process it to generate first image data.
The invention aims to generate a model suitable for cloud removal from global remote sensing images, so as to overcome the influence of factors such as the complexity of the earth's surface. A local cloud removal model achieves a good cloud removal effect on imagery acquired from the small local area it was trained on, but performs poorly in other areas whose ground-object types differ greatly. Therefore, when removing clouds from such areas, the remote sensing image data of the area to be cloud-removed is collected first, and the acquired data is then processed to obtain the first image data.
In an embodiment, referring to fig. 4, the preprocessing unit may include an identification unit 401, a masking unit 402, a generation unit 403, and an integration unit 404.
The identification unit is used for acquiring remote sensing image data of an area to be cloud removed and identifying a cloud-containing remote sensing image in the remote sensing image data of the area to be cloud removed;
The acquired remote sensing image data of the area to be cloud-removed contains both cloud-containing and cloud-free remote sensing images. The cloud-containing remote sensing images are therefore identified by running cloud detection on the remote sensing image data of the area to be cloud-removed; cloud detection methods such as Fmask or Tmask may be adopted, and the choice is not limited here.
The mask unit is used for masking a cloud-containing region in the cloud-containing remote sensing image to obtain second image data;
In this embodiment, the cloud-containing region of each identified cloud-containing remote sensing image needs to be masked. Masking shields the cloud-containing region of the processed image with a selected image, graphic, or object. After the cloud-containing region is masked, second image data corresponding to the cloud-containing remote sensing image is generated.
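A minimal sketch of the masking step, assuming the mask is a boolean array aligned with the image and that masked pixels are shielded with a sentinel fill value (the fill value is an assumption; any placeholder the downstream generation network expects would do):

```python
import numpy as np

def mask_cloud_region(image, cloud_mask, fill_value=0.0):
    # Shield the cloud-containing region: pixels flagged by
    # `cloud_mask` are replaced with `fill_value`, yielding the
    # second image data. The original image is left untouched.
    masked = np.asarray(image, dtype=float).copy()
    masked[cloud_mask] = fill_value
    return masked

image = np.full((3, 3), 0.5)
cloud_mask = np.zeros((3, 3), dtype=bool)
cloud_mask[1, 1] = True                  # one cloudy pixel
second = mask_cloud_region(image, cloud_mask)
```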
The generating unit is used for generating a first cloud-free remote sensing image based on the second image data;
Specifically, for the masked second image data, a corresponding first cloud-free remote sensing image is generated through a neural network. The generation network may be a convolutional neural network (CNN), which takes the second image data as input and outputs the cloud-free remote sensing image.
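The patent specifies a neural generation network (e.g. a CNN) for this step. As a runnable stand-in only, the sketch below fills each masked pixel by iteratively averaging its unmasked 4-neighbours, a simple diffusion inpainting that mimics what the generator learns to do:

```python
import numpy as np

def inpaint_masked(masked, cloud_mask, passes=50):
    # Diffusion inpainting: repeatedly replace each masked pixel with
    # the mean of its in-bounds 4-neighbours. A toy stand-in for the
    # CNN generator, not the patent's actual network.
    img = np.asarray(masked, dtype=float).copy()
    h, w = img.shape
    for _ in range(passes):
        for y in range(h):
            for x in range(w):
                if cloud_mask[y, x]:
                    vals = [img[ny, nx]
                            for ny, nx in ((y - 1, x), (y + 1, x),
                                           (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    img[y, x] = sum(vals) / len(vals)
    return img

masked = np.full((3, 3), 0.2)
masked[1, 1] = 0.0                       # masked (cloud) pixel
cloud_mask = np.zeros((3, 3), dtype=bool)
cloud_mask[1, 1] = True
restored = inpaint_masked(masked, cloud_mask)
```

With uniform surroundings the masked pixel converges to the neighbourhood value; a trained generator would instead synthesize plausible texture.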
And the integration unit is used for combining the remote sensing image data of the to-be-cloud-removed area and the first non-cloud remote sensing image into first image data.
As described above, remote sensing images of the area to be cloud-removed in which no cloud is recognized require no further processing. Once the first cloud-free remote sensing image corresponding to each cloud-containing image is obtained, it is integrated with the original remote sensing image data of the area to be cloud-removed (including both the cloud-containing and the cloud-free images) to form the first image data.
The dividing unit is used for equally dividing the region to be cloud-removed based on the homogeneity area parameter to obtain a sub-region set;
Because the cloud removing effect on a remote sensing image is closely related to the regional environment, and the environments contained in a global area to be cloud-removed may differ greatly, the area to be cloud-removed is divided equally into a number of sub-areas, so that the environment within each sub-area is relatively consistent and its remote sensing images show no obvious difference in hue or radiometric measurement. The division is controlled by a homogeneity area parameter, which may be set empirically and is not limited herein. Using the set homogeneity area parameter, the area to be cloud-removed is divided into a plurality of sub-areas of equal area.
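The equal division can be sketched as a grid over the region's bounding box, with the homogeneity area parameter as the target cell area. The grid layout and rounding policy are assumptions; the description only requires sub-areas of equal area:

```python
import math

def split_equal_subregions(xmin, ymin, xmax, ymax, target_area):
    # Divide the bounding box into rows x cols equal-area cells whose
    # individual area approximates `target_area` (the homogeneity
    # area parameter, set empirically).
    width, height = xmax - xmin, ymax - ymin
    n = max(1, round(width * height / target_area))
    cols = max(1, round(math.sqrt(n * width / height)))
    rows = max(1, math.ceil(n / cols))
    dx, dy = width / cols, height / rows
    return [(xmin + c * dx, ymin + r * dy,
             xmin + (c + 1) * dx, ymin + (r + 1) * dy)
            for r in range(rows) for c in range(cols)]

# A 10x10 region with a homogeneity area parameter of 25
# splits into a 2x2 grid of sub-areas, each of area 25.
cells = split_equal_subregions(0.0, 0.0, 10.0, 10.0, 25.0)
```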
A training data unit for generating a training data set for each sub-region based on the first image data according to a model structure;
In order to generate a cloud removing model suitable for each sub-region, the invention first generates a training data set corresponding to each sub-region. Specifically, the first image data is divided according to the range of each sub-region and cut into a plurality of small-sized images.
In addition, the training data required by different deep learning models is not exactly the same, so the invention further preprocesses the small-sized images of each sub-region according to the structure of the deep learning model, so as to adapt to different model structures.
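Cutting the first image data of one sub-region into small fixed-size training samples can be sketched as plain tiling. The tile size is model-dependent and the value here is illustrative; edge remainders are simply dropped:

```python
import numpy as np

def cut_tiles(image, tile_size):
    # Cut an image into non-overlapping tile_size x tile_size tiles;
    # incomplete edge tiles are discarded for simplicity.
    h, w = image.shape[:2]
    return [image[y:y + tile_size, x:x + tile_size]
            for y in range(0, h - tile_size + 1, tile_size)
            for x in range(0, w - tile_size + 1, tile_size)]

tiles = cut_tiles(np.arange(24).reshape(4, 6), 2)  # 2 rows x 3 cols of tiles
```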
And the adjusting unit is used for finely adjusting the local cloud removing model based on the training data set of each sub-region to generate a cloud removing model suitable for each sub-region.
In this embodiment, the training data set of each sub-region is input into the pre-generated local cloud removing model, which is further trained to improve its cloud removing effect on the remote sensing images of the corresponding sub-region, thereby generating a cloud removing model adapted to each sub-region and meeting the cloud removing requirement of global remote sensing images.
Taking the GAN model as an example, the GAN model includes a generator and a discriminator. First, the cloud-containing remote sensing images in the training data set of each sub-region are input into the generator to generate a second cloud-free remote sensing image. The second cloud-free remote sensing image and the real cloud-free remote sensing image are then input into the discriminator as a sample pair to be distinguished. The error between the second cloud-free remote sensing image and the real cloud-free remote sensing image is obtained through a loss function and back-propagated to update the network parameters of the discriminator. In turn, the discriminator's judgment of the second cloud-free remote sensing image is used to train the generator, prompting it to improve the quality of the generated samples. Finally, the generator and the discriminator are trained alternately, so that the generator produces realistic second cloud-free remote sensing images while the discriminator can still distinguish real inputs from generated ones.
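The alternating scheme can be illustrated on a toy one-dimensional problem. This is only a sketch of the generator/discriminator alternation, not the patent's image-to-image GAN: real "cloud-free" samples cluster around 2.0, the generator is a line g(z) = a*z + b, the discriminator a single logistic unit, and all gradients are written out by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0    # generator g(z) = a*z + b
w, c = 0.1, 0.0    # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(500):
    x_real = 2.0 + 0.1 * rng.standard_normal()   # real sample
    z = rng.standard_normal()
    x_fake = a * z + b                           # generated sample

    # Discriminator step: real labelled 1, fake labelled 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator step: push D towards labelling the fake as real.
    grad_s = sigmoid(w * x_fake + c) - 1.0
    a -= lr * grad_s * w * z
    b -= lr * grad_s * w
```

Since z has zero mean, the generator's output mean is b, which the alternation drives toward the real-data mean; in the patent's setting the same alternation fine-tunes the pre-trained generator and discriminator on each sub-region's tiles.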
Because the local cloud removing model is generated in advance, it only needs to be fine-tuned with the training data set of each sub-region, which improves the training efficiency of the global cloud removing model.
The above-mentioned global remote sensing image cloud removing device 300 can be implemented in the form of a computer program, and the computer program can be run on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, wherein the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 5, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 comprises program instructions that, when executed, cause the processor 502 to perform a method for cloud removal of global remote sensing images.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be enabled to execute a method suitable for cloud removal of global remote sensing images.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with the present application and does not constitute a limitation on the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps:
training and generating a local cloud removing model based on the remote sensing image of the local area; acquiring remote sensing image data of a to-be-cloud-removed area, and processing the remote sensing image data of the to-be-cloud-removed area to generate first image data; equally dividing the region to be cloud-removed based on the homogeneity area parameter to obtain a sub-region set; generating a training data set for each subregion based on the first image data according to a model structure; and fine-tuning the local cloud removing model based on the training data set of each sub-region to generate a cloud removing model suitable for each sub-region.
It should be understood that, in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU), and the processor 502 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
training and generating a local cloud removing model based on the remote sensing image of the local area; acquiring remote sensing image data of a to-be-cloud-removed area, and processing the remote sensing image data of the to-be-cloud-removed area to generate first image data; equally dividing the region to be cloud-removed based on the homogeneity area parameter to obtain a sub-region set; generating a training data set for each subregion based on the first image data according to a model structure; and fine-tuning the local cloud removing model based on the training data set of each subregion to generate a cloud removing model suitable for each subregion.
The storage medium may be a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium capable of storing a computer program.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be reordered, combined, and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided, and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A cloud removing method suitable for global remote sensing images, characterized by comprising the following steps:
training and generating a local cloud removing model based on the local area remote sensing image;
acquiring remote sensing image data of a to-be-cloud-removed area, and processing the remote sensing image data of the to-be-cloud-removed area to generate first image data;
equally dividing the region to be cloud-removed based on the homogeneity area parameter to obtain a sub-region set;
generating a training data set for each subregion based on the first image data according to a model structure;
and fine-tuning the local cloud removing model based on the training data set of each subregion to generate a cloud removing model suitable for each subregion.
2. The global remote sensing image cloud removing method according to claim 1, wherein the acquiring remote sensing image data of the area to be cloud-removed and processing the remote sensing image data of the area to be cloud-removed to generate first image data comprises:
acquiring remote sensing image data of a to-be-cloud-removed area, and identifying cloud-containing remote sensing images in the remote sensing image data of the to-be-cloud-removed area;
masking a cloud-containing region in the cloud-containing remote sensing image to obtain second image data;
generating a first cloud-free remote sensing image based on the second image data;
and forming first image data by the remote sensing image data of the to-be-cloud-removed area and the first non-cloud remote sensing image.
3. The global remote sensing image cloud removing method according to claim 1, wherein the homogeneity area parameter is set such that the regional environment within each sub-region is relatively consistent and the remote sensing images show no significant difference in hue or radiometric measurement.
4. The global remote sensing image cloud removing method according to claim 2, wherein a first cloud-free remote sensing image is generated based on the second image data through a convolutional neural network.
5. A cloud removing device suitable for global remote sensing images, characterized by comprising:
the local model training unit is used for generating a local cloud removing model based on local region remote sensing image training;
the device comprises a preprocessing unit, a data processing unit and a data processing unit, wherein the preprocessing unit is used for acquiring remote sensing image data of an area to be cloud removed and processing the remote sensing image data of the area to be cloud removed to generate first image data;
the dividing unit is used for equally dividing the region to be cloud-removed based on the homogeneity area parameter to obtain a sub-region set;
the training data unit is used for generating a training data set for each sub-region based on the first image data according to a model structure;
and the adjusting unit is used for fine-tuning the local cloud removing model based on the training data set of each sub-region to generate a cloud removing model suitable for each sub-region.
6. The global remote sensing image cloud removing device according to claim 5, wherein the preprocessing unit includes:
the identification unit is used for acquiring remote sensing image data of a to-be-cloud-removed area and identifying a cloud-containing remote sensing image in the remote sensing image data of the to-be-cloud-removed area;
the mask unit is used for masking the cloud-containing region in the cloud-containing remote sensing image to obtain second image data;
the generating unit is used for generating a first cloud-free remote sensing image based on the second image data;
and the integration unit is used for combining the remote sensing image data of the to-be-cloud-removed area and the first non-cloud remote sensing image into first image data.
7. The global remote sensing image cloud removing device according to claim 5, wherein the homogeneity area parameter is set such that the regional environment within each sub-region is relatively consistent and the remote sensing images show no significant difference in hue or radiometric measurement.
8. The global remote sensing image cloud removing device according to claim 6, wherein a first cloud-free remote sensing image is generated based on the second image data through a convolutional neural network.
9. A computer device, characterized in that the computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1 to 4.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 4.
CN202210514667.8A 2022-05-12 2022-05-12 Cloud removing method and device suitable for global remote sensing image and computer equipment Pending CN114973009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210514667.8A CN114973009A (en) 2022-05-12 2022-05-12 Cloud removing method and device suitable for global remote sensing image and computer equipment


Publications (1)

Publication Number Publication Date
CN114973009A true CN114973009A (en) 2022-08-30

Family

ID=82981895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210514667.8A Pending CN114973009A (en) 2022-05-12 2022-05-12 Cloud removing method and device suitable for global remote sensing image and computer equipment

Country Status (1)

Country Link
CN (1) CN114973009A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118172291A (en) * 2024-05-14 2024-06-11 浙江国遥地理信息技术有限公司 Image cloud removing method and device for remote sensing image and electronic equipment
CN118172291B (en) * 2024-05-14 2024-08-13 浙江国遥地理信息技术有限公司 Image cloud removing method and device for remote sensing image and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination