CN114677590A - Remote sensing image processing method, device, equipment and medium - Google Patents

Remote sensing image processing method, device, equipment and medium

Info

Publication number
CN114677590A
CN114677590A
Authority
CN
China
Prior art keywords
cloud
image
updated
matrix
mask matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210305227.1A
Other languages
Chinese (zh)
Inventor
张晓娟
邱一晋
张佳颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210305227.1A priority Critical patent/CN114677590A/en
Publication of CN114677590A publication Critical patent/CN114677590A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a remote sensing image processing method which can be applied to the field of artificial intelligence. The remote sensing image processing model comprises an image updating module and a matrix updating module, and the method comprises the following steps: acquiring a cloud image to be cloud-removed and a cloud layer mask matrix of the cloud image; inputting the cloud image and the cloud layer mask matrix into the image updating module, eliminating the cloud layer in a preset area of the cloud image based on the cloud layer mask matrix, and outputting an updated cloud image; inputting the updated cloud image into the matrix updating module, updating the cloud layer mask matrix, and outputting the updated cloud layer mask matrix; and obtaining the cloud-layer-eliminated image according to the updated cloud image and the updated cloud layer mask matrix. The present disclosure also provides a remote sensing image processing apparatus, a device, a storage medium, and a program product.

Description

Remote sensing image processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a method and an apparatus for processing a remote sensing image, an electronic device, a computer-readable storage medium, and a computer program product.
Background
At present, remote sensing images with high spatial resolution and high spectral resolution can be obtained through remote sensing technology. However, remote sensing images are highly susceptible to climatic factors: cloud occlusion is one of the main causes of ground-feature information loss, and sparse or completely opaque cloud cover degrades the ground information captured in a remote sensing image to different degrees. Among cloud occlusions, thick cloud is generally opaque; thick cloud in a remote sensing image is difficult to remove, and the information it blocks is completely lost.
In the prior art, algorithms for removing thick cloud from remote sensing images include traditional cloud removal methods and deep-learning-based thick-cloud removal methods for high-resolution remote sensing images. Traditional methods, such as homomorphic filtering and multispectral information methods, are effective for thin cloud or small cloud-covered areas, but have clear limitations for thick cloud. For the deep-learning-based methods, when standard convolution traverses the whole input cloudy remote sensing image, the output is computed from both valid pixels and covered pixels; because the covered pixels are interference information, the output image exhibits visual artifacts such as distortion, blurring, and color shift. In addition, when a mask (MASK) is used to block out the interference information, the network ignores the valid information generated at the cloud boundary as the number of network layers increases, which in turn affects the final loss.
In short, for remote sensing images covered by thick cloud, the prior art can neither eliminate the various image distortions that appear while recovering the information covered by the thick cloud, nor accurately and effectively recover the covered surface information.
Disclosure of Invention
In view of the above problems, the present disclosure provides a remote sensing image processing method, apparatus, device, medium, and program product that eliminate the cloud layer in a remote sensing image, remove the various artifacts that may occur when a U-Net network is used as the reconstruction network, and recover the covered surface information.
According to a first aspect of the present disclosure, there is provided a method for processing a remote sensing image, the remote sensing image processing model including an image updating module and a matrix updating module, the method including: acquiring a cloud image to be cloud-removed and a cloud layer mask matrix of the cloud image; inputting the cloud image and the cloud layer mask matrix into an image updating module, eliminating a cloud layer in a preset area in the cloud image based on the cloud layer mask matrix, and outputting an updated cloud image; inputting the updated cloud image into a matrix updating module, updating the cloud layer mask matrix, and outputting the updated cloud layer mask matrix; and obtaining the image with the cloud layer removed according to the updated image with the cloud and the updated cloud layer mask matrix.
According to an embodiment of the present disclosure, wherein inputting the cloud image and the cloud layer mask matrix into the image updating module, eliminating a cloud layer of a preset region in the cloud image based on the cloud layer mask matrix, and outputting the updated cloud image includes: determining first characteristic information according to the cloud image and the cloud layer mask matrix; adjusting the weight of a convolution kernel for carrying out feature extraction on the cloud image based on the first feature information, and determining second feature information; and adjusting the information quantity of the second characteristic information by using the scaling factor, and outputting the updated cloud image.
According to an embodiment of the present disclosure, the inputting the updated cloud image into the matrix updating module, updating the cloud layer mask matrix, and outputting the updated cloud layer mask matrix includes: determining a first region to be convolved in the cloud layer mask matrix; determining a second area corresponding to the first area in the updated cloud image; under the condition that the second area has a part which is not covered by the cloud layer, updating a numerical value corresponding to the central position of the mask convolution kernel to be 1 to obtain a first area mask convolution kernel; convolving the first area by using a first area mask convolution kernel to update the first area of the cloud layer mask matrix; and under the condition that all areas of the cloud layer mask matrix complete convolution operation, outputting the updated cloud layer mask matrix.
According to an embodiment of the present disclosure, obtaining the cloud layer eliminated image according to the updated cloud image and the updated cloud layer mask matrix includes: performing E-time feature extraction on the updated cloud image and the updated cloud layer mask matrix by using an image updating module until all information covered by a cloud layer in the cloud image is recovered, and outputting and eliminating the cloud layer image, wherein the cloud image for recovering preset area information is generated after each feature extraction, and E is an integer greater than or equal to 2; and after the cloud image for restoring the preset area information is obtained and before next feature extraction is carried out, updating the updated cloud image by using a matrix updating module to obtain a cloud layer mask matrix corresponding to the updated cloud image.
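The alternation described above — repeated feature extraction by the image updating module interleaved with mask updates by the matrix updating module — can be sketched as a simple loop. Here `image_update` and `mask_update` are hypothetical stand-ins for the two trained modules (their internals are not specified by this sketch), and the stopping test mirrors the condition that all cloud-covered information has been recovered, i.e. the mask is all ones:

```python
import numpy as np

def remove_cloud(image, mask, image_update, mask_update, max_iters=16):
    """Alternately apply the image updating module and the matrix updating
    module: each pass recovers a ring of covered pixels at the cloud
    boundary, then re-marks those pixels as valid in the mask. Iteration
    stops once the mask is all ones, i.e. every covered pixel is recovered."""
    for _ in range(max_iters):
        if mask.all():  # all information covered by the cloud layer recovered
            break
        image = image_update(image, mask)
        mask = mask_update(image, mask)
    return image, mask
```

The `max_iters` cap corresponds to the integer E in the text; in practice the loop usually exits early once the mask saturates.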
According to the embodiment of the disclosure, the cloud image comprises remote sensing images with N wave bands, and the input of the remote sensing image processing model is the remote sensing images with the N wave bands and a cloud layer mask matrix, wherein N is an integer greater than or equal to 2.
According to the embodiment of the disclosure, the training sample set comprises a plurality of data pairs, each data pair comprises a cloud training image, a non-cloud training image and a cloud layer mask training matrix, and the determining of the remote sensing image processing model comprises the following steps: rotating the cloud training image and the cloud layer mask training matrix according to a preset angle to obtain expanded cloud training data and a corresponding expanded cloud layer mask training matrix; and taking the extended cloud training data and the corresponding extended cloud layer mask training matrix as the input of the remote sensing image processing model to be trained to obtain the remote sensing image processing model.
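The rotation-based expansion of training pairs described above might look like the following sketch. The right-angle default angles are an assumption (the patent only mentions a "preset angle"), and `np.rot90` is used so that the cloud training image and its cloud layer mask training matrix stay aligned:

```python
import numpy as np

def augment_by_rotation(cloud_image, cloud_mask, angles=(90, 180, 270)):
    """Expand one (cloud training image, cloud layer mask training matrix)
    pair by rotating both members by each preset angle, keeping them
    aligned. Returns the original pair plus one pair per angle."""
    pairs = [(cloud_image, cloud_mask)]
    for angle in angles:
        k = angle // 90  # number of 90-degree rotations
        pairs.append((np.rot90(cloud_image, k), np.rot90(cloud_mask, k)))
    return pairs
```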
According to an embodiment of the present disclosure, the obtaining of the cloud image and the cloud layer mask matrix of the cloud image to be cloud-removed includes: based on the registered cloud image, segmenting the registered cloud image according to a preset step length to obtain a cloud image to be subjected to cloud removal, wherein the cloud image to be subjected to cloud removal comprises a plurality of image blocks with the preset step length; and determining a cloud layer mask matrix corresponding to the cloud image according to the cloud image to be subjected to cloud removal, wherein the cloud area in the cloud layer mask matrix is marked as 0, and other areas are marked as 1.
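A minimal sketch of this segmentation and mask construction follows. The input `cloud_map` (a boolean array flagging cloud pixels, e.g. from a separate cloud-detection product) is an assumption, since the patent does not specify how cloud pixels are identified:

```python
import numpy as np

def tile_and_mask(image, cloud_map, step):
    """Cut a registered cloud image into step-by-step blocks and build the
    corresponding binary cloud layer mask for each block: cloud regions
    are marked 0, all other regions are marked 1."""
    height, width = cloud_map.shape
    blocks = []
    for top in range(0, height - step + 1, step):
        for left in range(0, width - step + 1, step):
            img_block = image[top:top + step, left:left + step]
            cloud_block = cloud_map[top:top + step, left:left + step]
            mask_block = np.where(cloud_block, 0, 1)  # cloud -> 0, clear -> 1
            blocks.append((img_block, mask_block))
    return blocks
```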
According to a second aspect of the present disclosure, there is provided a remote sensing image processing apparatus, the remote sensing image processing model including an image updating module and a matrix updating module, the apparatus comprising: the device comprises an acquisition module, a cloud layer mask matrix generation module and a cloud layer mask matrix generation module, wherein the acquisition module is used for acquiring a cloud image to be cloud-removed and the cloud layer mask matrix of the cloud image; the first output module is used for inputting the cloud image and the cloud layer mask matrix into the image updating module, eliminating a cloud layer in a preset area in the cloud image based on the cloud layer mask matrix and outputting an updated cloud image; the second output module is used for inputting the updated cloud image into the matrix updating module, updating the cloud layer mask matrix and outputting the updated cloud layer mask matrix; and the elimination module is used for obtaining an eliminated cloud layer image according to the updated cloud image and the updated cloud layer mask matrix.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; a memory for storing one or more instructions, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of remote sensing image processing described above.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the above-described remote sensing image processing method.
According to a fifth aspect of the disclosure, a computer program product is provided, comprising computer executable instructions for implementing the remote sensing image processing method described above when executed.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture of a method of remote sensing image processing according to an embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method of processing remote sensing images according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of outputting an updated cloudy image according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow diagram of a method of outputting an updated cloud mask matrix according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of a method of obtaining an eliminated cloud image according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a remote sensing image processing model according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a flow chart of a method of determining a remote sensing image processing model according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a flow chart of a method of obtaining a cloudy image and a cloud layer mask matrix to be cloud removed according to an embodiment of the present disclosure;
fig. 9 schematically shows a block diagram of the configuration of a remote sensing image processing apparatus according to an embodiment of the present disclosure; and
FIG. 10 schematically shows a block diagram of an electronic device suitable for a method of remote sensing image processing according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
The embodiment of the disclosure provides a training method of a remote sensing image processing model, which comprises the following steps: determining a training sample set based on original cloud layer image data, wherein the training sample set comprises a plurality of data pairs, and each data pair comprises a cloud image, a non-cloud image and a cloud layer mask matrix; determining an updated cloud image according to the cloud image and the cloud layer mask matrix; updating the cloud layer mask matrix based on the updated cloud image to obtain an updated cloud layer mask matrix; obtaining an eliminated cloud layer image based on the updated cloud image and the updated cloud layer mask matrix; and adjusting parameters of the remote sensing image processing model according to the cloud layer image and the cloud-free image, so as to obtain the trained remote sensing image processing model.
Fig. 1 schematically shows a system architecture of a remote sensing image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as an image processing type application, a video processing type application, an image editing type application, and the like (for example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the remote sensing image processing method provided by the embodiments of the present disclosure may generally be executed by the server 105. Accordingly, the remote sensing image processing apparatus provided by the embodiments of the present disclosure may generally be disposed in the server 105. The remote sensing image processing method provided by the embodiments of the present disclosure may also be executed by a server or server cluster that is different from the server 105 and can communicate with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the remote sensing image processing apparatus provided by the embodiments of the present disclosure may also be disposed in a server or server cluster that is different from the server 105 and can communicate with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The remote sensing image processing method according to the disclosed embodiments will be described in detail with reference to fig. 2 to 7 based on the system architecture described in fig. 1.
FIG. 2 schematically shows a flow chart of a method of processing remote sensing images according to an embodiment of the disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, a cloud image to be cloud-removed and a cloud layer mask matrix of the cloud image are acquired.
According to the embodiment of the disclosure, since the remote sensing image photographed by the remote sensing satellite may be affected by the cloud layer, there may be a portion covered by the cloud layer on the photographed remote sensing image. After the cloud image of the cloud to be removed is obtained, the cloud image of the cloud to be removed is processed to obtain corresponding cloud layer data in the cloud image, and a cloud layer mask matrix is determined according to the corresponding cloud layer data.
According to the embodiment of the disclosure, the cloud image comprises a covered part covered by the cloud layer and an uncovered part uncovered by the cloud layer, the cloud layer mask matrix corresponding to the cloud image can represent initial cloud layer data in the cloud image, and partial information of a connection part of the covered part and the uncovered part of the cloud layer can be recovered by using the cloud layer mask matrix.
In operation S202, the cloud image and the cloud layer mask matrix are input to the image updating module, the cloud layer in the preset region in the cloud image is eliminated based on the cloud layer mask matrix, and the updated cloud image is output.
According to the embodiment of the disclosure, after the cloud image and the cloud layer mask matrix are input into the image updating module, the image updating module can shrink the cloud layer in the cloud image from the outside inwards according to the input cloud layer mask matrix and output the updated cloud image. Specifically, the junction between the cloud-covered part and the uncovered part contains some recoverable hidden information; when the cloud layer mask matrix is used to perform feature extraction on the cloud image, the area of the cloud-covered part of the cloud image can be reduced and the information of the covered part recovered. Because the junction between the covered and uncovered parts is irregular, the region recovered in each pass is only a partial region of the junction, which causes the cloud layer to shrink progressively from the outside inwards.
In operation S203, the updated cloud image is input to the matrix update module, the cloud mask matrix is updated, and the updated cloud mask matrix is output.
According to the embodiment of the disclosure, since the cloud layer covering area in the updated cloud image is changed, the cloud layer information included in the cloud layer mask matrix at this time is different from the cloud layer included in the updated cloud image, the updated cloud image needs to be input into the matrix updating module, the cloud layer mask matrix is updated by using the updated cloud image, and the updated cloud layer mask matrix is output. The updated cloud mask matrix corresponds to cloud information contained in the updated cloud image.
According to the embodiment of the disclosure, the matrix updating module can store the cloud mask matrix updated each time so as to realize the next updating of the cloud mask matrix by using the input updated cloud image.
In operation S204, an eliminated cloud image is obtained according to the updated cloud image and the updated cloud mask matrix.
According to the embodiment of the disclosure, the updated cloud image recovers only part of the information covered by the cloud layer rather than all of it, so the updated cloud image and the updated cloud mask matrix need to undergo feature extraction repeatedly until all information covered by the cloud layer in the cloud image is recovered and the cloud-layer-eliminated image is obtained.
According to the embodiment of the disclosure, the remote sensing image processing model is a model which is trained in advance by using remote sensing image data included in the open source database.
According to the method, the cloud image and the cloud layer mask matrix are subjected to feature extraction, so that the influence of interference information on the output result of the remote sensing image processing model is reduced, various visual artifacts are eliminated, and the reconstruction accuracy of the network is improved; by automatically updating the cloud layer mask matrix, the influence of loss caused by non-updating of the cloud layer mask matrix is avoided, the processing accuracy of the remote sensing image processing model is improved, and thick clouds in the remote sensing image are removed.
According to the embodiment of the disclosure, the remote sensing image processing method can be applied to the field of remote sensing image processing, accurate elimination of cloud layer images is determined, and accuracy of remote sensing image processing is improved. In addition, the remote sensing image processing method can also be applied to the financial field, is beneficial to financial institutions and enterprises to realize accurate asset assessment, and reduces asset assessment loss caused by image processing distortion.
For example, for agriculture, when the vegetation greening degree is evaluated, the remote sensing image processing method disclosed by the invention can be used for obtaining an accurate image for eliminating cloud layers, so that the greening degree of vegetation is accurately evaluated; for financial evaluation, when an asset organization evaluates an entity construction project, the asset organization can determine the accurate progress of the entity construction project according to the accurate image for eliminating the cloud layer, and then evaluate the risk condition of the entity project.
For example, building a needs to loan bank B due to the shortage of funds during construction, and bank B needs to evaluate the information of the location, construction progress, and the like of building a. In evaluating the building a, a plurality of remote sensing images of the building a are used.
Because of overhead cloud cover, part of the information of building A is blocked by the cloud layer; the resulting cloudy image cannot display the specific information of the covered part of building A, and bank B cannot accurately evaluate building A from the cloudy remote sensing image. When a remote sensing image covered by thick cloud is processed with existing cloud-removal techniques, the image distortions that appear while recovering the information covered by the thick cloud cannot be eliminated, and the covered surface information cannot be accurately and effectively recovered. This biases bank B's evaluation of the project and leads to a loss of funds.
The cloud layer image elimination method based on the remote sensing image processing can help banks or other asset organizations to obtain accurate information of the cloud layer image elimination, facilitates accurate asset evaluation of the banks or other asset organizations, and reduces risk of capital loss.
Fig. 3 schematically illustrates a flow chart of a method of outputting an updated cloudy image according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S303.
In operation S301, first feature information is determined according to the cloud image and the cloud layer mask matrix.
According to the embodiment of the disclosure, the cloud image and the cloud layer mask matrix input into the remote sensing image processing model are multiplied element-wise (dot-multiplied) to obtain a first feature information matrix for subsequent feature extraction.
In operation S302, weights of convolution kernels used for feature extraction of the cloud image are adjusted based on the first feature information, and second feature information is determined.
According to the embodiment of the disclosure, the convolution kernel is used to perform feature extraction on the first feature information; however, because the amount of valid information differs from window to window, the weight of the convolution kernel is adjusted when processing the first feature information, thereby obtaining the second feature information.
In operation S303, the information amount of the second feature information is adjusted by the scaling factor, and the updated cloudy image is output.
According to the embodiment of the disclosure, since the first characteristic information is obtained according to the cloud image and the cloud layer mask matrix, with the update of the cloud layer mask matrix, known elements in a corresponding region of the convolution kernel are increased, and the input quantity of different numbers of effective values can be adjusted through the scaling factor to obtain the updated cloud image.
According to the embodiment of the disclosure, the feature extraction performed on the input cloud image and the cloud layer mask matrix can satisfy the following formula:

X' = Wᵀ(X ⊙ M) · r + b, if sum(M) > 0; X' = 0, otherwise

where W represents the weight of the convolution kernel and b represents the corresponding bias; X represents the input of the current convolution sliding window; X' represents the output of the current input after feature extraction; M represents the corresponding binary cloud layer mask matrix; ⊙ denotes element-wise multiplication; and r = sum(1)/sum(M) represents the scaling factor, i.e. the ratio of the window size to the number of valid (uncovered) elements in the window.
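As a sketch, the masked-convolution formula with scaling factor can be implemented for a single sliding window as follows (NumPy; mask convention 1 = valid pixel, 0 = cloud-covered). A real implementation would apply this over all windows and channels rather than one patch:

```python
import numpy as np

def partial_conv_window(X, M, W, b=0.0):
    """Masked convolution of one sliding-window patch X with binary cloud
    mask patch M: covered pixels are zeroed out, and the result is rescaled
    by r = sum(1)/sum(M) to compensate for the varying number of valid
    inputs. Returns 0 when the window contains no valid pixel."""
    valid = M.sum()
    if valid == 0:
        return 0.0  # no valid information in this window
    r = M.size / valid  # scaling factor sum(1)/sum(M)
    return float((W * (X * M)).sum() * r + b)
```

Note that when the window is fully valid (r = 1) this reduces to a standard convolution, which matches the formula above.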
Fig. 4 schematically illustrates a flow chart of a method of outputting an updated cloud mask matrix according to an embodiment of the present disclosure.
As shown in fig. 4, the method includes operations S401 to S405.
In operation S401, a first region to be convolved in the cloud mask matrix is determined.
According to the embodiment of the disclosure, after the feature extraction is performed on the cloud image and the cloud layer mask matrix, the cloud layer mask matrix needs to be updated, and the updated cloud layer information is matched into the cloud layer mask matrix. Each update reduces the area of the cloud covered portion, so it is necessary to determine the first region of the cloud mask matrix to be convolved that is not updated. In practical application, information cannot be extracted by convolving the part, which does not contain cloud layer information, in the cloud layer mask matrix, so that the first region to be convolved generally refers to the part containing the cloud layer information.
In operation S402, a second region corresponding to the first region is determined in the updated cloudy image.
According to the embodiment of the disclosure, the cloud layer information contained in the updated cloud image is also updated correspondingly, and after the first region to be convolved in the cloud mask matrix is determined, the second region corresponding to the first region is determined in the updated cloud image, so that the updated cloud layer information is determined according to the updated cloud image.
In operation S403, in the case that there is a portion of the second region that is not covered by the cloud layer, the value corresponding to the center position of the mask convolution kernel is updated to 1, so as to obtain a first region mask convolution kernel.
According to the embodiment of the disclosure, the existence of the part of the second region which is not covered by the cloud layer indicates that the current second region has the uncovered ground surface information, and at this time, the value corresponding to the central position of the mask convolution kernel for updating the cloud layer mask matrix is updated to be 1, so that the first region mask convolution kernel is obtained.
In operation S404, the first region is convolved with the first region mask convolution kernel to update the first region of the cloud mask matrix.
According to the embodiment of the disclosure, since the central position of the first-area mask convolution kernel is 1, when the first area of the cloud layer mask matrix is convolved with the first-area mask convolution kernel, the first area of the cloud layer mask matrix is updated into a matrix containing the earth surface information. The cloud layer mask matrix is transmitted forward as part of the network; provided the updated information is applied at each step, the cloud layer mask matrix finally becomes an all-ones matrix, and the completion of the missing information is thereby accomplished.
In operation S405, in case that all regions of the cloud mask matrix complete the convolution operation, the updated cloud mask matrix is output.
According to the embodiment of the disclosure, after all the first areas to be convolved of the cloud layer mask matrix are updated according to the updated cloud image, the cloud layer mask matrix containing the updated cloud layer information can be obtained.
According to the embodiment of the disclosure, updating the cloud layer mask matrix can satisfy the formula:

$$M' = \begin{cases} 1, & \operatorname{sum}(M) > 0 \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

wherein M represents the corresponding binary cloud layer mask matrix within the current sliding window, and M' represents the output after convolution of the current input.
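The mask-update rule of formula (2) can be sketched as a sliding-window pass over the binary mask; the 3 × 3 window and the zero padding are assumptions made for illustration:

```python
import numpy as np

def update_mask(M, k=3):
    """Slide a k x k window over binary mask M; a position becomes 1
    when its window contains at least one valid pixel (sum(M) > 0),
    mirroring formula (2).  Zero padding keeps the output the same size."""
    pad = k // 2
    Mp = np.pad(M, pad)
    out = np.zeros_like(M)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            out[i, j] = 1.0 if Mp[i:i + k, j:j + k].sum() > 0 else 0.0
    return out

M = np.zeros((5, 5))
M[:, 0] = 1.0                  # only the left column is valid
M1 = update_mask(M)            # the cloud-covered region shrinks by one column
```

Repeated application makes the valid region grow outward one window-radius at a time, which is exactly why the mask eventually becomes all ones.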
Fig. 5 schematically shows a flowchart of a method of obtaining an eliminated cloud image according to an embodiment of the present disclosure.
As shown in fig. 5, the method includes operations S501 to S502.
In operation S501, the image update module performs E rounds of feature extraction on the updated cloud image and the updated cloud layer mask matrix until all information covered by the cloud layer in the cloud image is recovered, and outputs the cloud-removed image.
According to the embodiment of the disclosure, since each round of feature extraction on the updated cloud image and the updated cloud layer mask matrix can recover only a limited amount of information, each round produces a cloud image in which one further preset region of information has been recovered. The image update module therefore performs E rounds of feature extraction, where E is an integer greater than or equal to 2, until all information covered by the cloud layer in the cloud image is recovered, and then outputs the cloud-removed image. For example, when E is 5, five rounds of feature extraction are performed on the input cloud image and cloud layer mask matrix before the cloud-removed image is output.
In operation S502, after the cloud image with the current preset area information recovered is obtained and before the next feature extraction is performed, the matrix update module updates the cloud layer mask matrix to obtain the cloud layer mask matrix corresponding to the updated cloud image.
According to the embodiment of the disclosure, the updated cloud image is the cloud image with the current preset area information restored; the information recovered in this round must be reflected in the mask before the next round. The cloud layer mask matrix is therefore updated according to the updated cloud image, and the updated mask is used in the next round to update the cloud image again. That is, the cloud image and the cloud layer mask matrix are updated alternately.
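The alternation described above can be sketched as a loop that stops once the mask is all ones. Here a simple neighbour-averaging fill stands in for the partial-convolution image update module; it is not the network of the disclosure, and serves purely to illustrate the alternating image/mask updates:

```python
import numpy as np

def restore_step(img, M):
    """One round of the alternation: fill each still-masked pixel that
    borders a valid pixel with the mean of its valid 3x3 neighbours
    (a stand-in for the image update module), then mark it valid in
    the mask (the matrix update module)."""
    pad_i, pad_m = np.pad(img, 1), np.pad(M, 1)
    img2, M2 = img.copy(), M.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if M[i, j] == 0:
                w_m = pad_m[i:i + 3, j:j + 3]
                if w_m.sum() > 0:                       # some valid neighbour
                    w_i = pad_i[i:i + 3, j:j + 3]
                    img2[i, j] = (w_i * w_m).sum() / w_m.sum()
                    M2[i, j] = 1.0
    return img2, M2

img = np.ones((7, 7)); M = np.ones((7, 7))
img[2:5, 2:5] = 0.0; M[2:5, 2:5] = 0.0     # a 3x3 cloud-covered hole
E = 0
while not M.all():                          # alternate until mask is all ones
    img, M = restore_step(img, M)
    E += 1                                  # two rounds close this hole
```

The hole's rim is recovered in the first round and its centre in the second, showing why E grows with the size of the cloud-covered region.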
According to the embodiment of the disclosure, the training set for training the remote sensing image processing model comes from the Landsat8 remote sensing satellite; each cloud image comprises remote sensing images of N bands, so the input of the remote sensing image processing model comprises the remote sensing images of the N bands together with the cloud layer mask matrix, where N is an integer greater than or equal to 2. For example, if the cloud image comprises a remote sensing image with 9 bands, the input of the remote sensing image processing model is the 9-band remote sensing image plus the cloud layer mask matrix.
FIG. 6 schematically illustrates a remote sensing image processing model according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, as shown in fig. 6, a Python + PyTorch framework is used to build a U-Net network based on adaptive partial convolution. The network takes N + 1 input channels: the first N channels are the remote sensing image bands, and the (N + 1)-th channel is the corresponding cloud layer mask matrix. The U-Net has five layers; every convolution operation in the encoder is performed by an adaptive partial convolution layer, and upsampling in the decoding stage uses deconvolution. So that the decoder can fuse and refine features together with the encoder, the feature data extracted by the corresponding encoder layer, together with the cloud layer mask matrix, is passed to the decoder through skip connections.
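Forming the (N + 1)-channel network input from the N band images and the mask can be sketched as follows; N = 9 and the 64 × 64 spatial size follow the example above, while the channel-first layout and stacking order are assumptions:

```python
import numpy as np

# Stack the N spectral bands with the binary cloud layer mask to form
# the (N + 1)-channel input described for the U-Net (channel-first
# layout, as is conventional in PyTorch).
N, H, W = 9, 64, 64
bands = np.random.rand(N, H, W).astype(np.float32)   # N remote-sensing bands
mask = np.ones((1, H, W), dtype=np.float32)          # binary cloud layer mask
net_input = np.concatenate([bands, mask], axis=0)    # shape (N + 1, H, W)
```

A batch dimension would be prepended before feeding this to the network.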
FIG. 7 schematically shows a flow chart of a method of determining a remote sensing image processing model according to an embodiment of the disclosure.
As shown in fig. 7, the method includes operations S701 to S702.
In operation S701, the cloud training image and the cloud layer mask training matrix are rotated according to a preset angle, so as to obtain extended cloud training data and a corresponding extended cloud layer mask training matrix.
According to an embodiment of the present disclosure, a training sample set includes a plurality of data pairs, each data pair including a cloud training image, a non-cloud training image, and a cloud mask training matrix.
According to an embodiment of the present disclosure, the set of training samples is obtained from the open source data set RICE. The open source data set RICE comprises a thin cloud data set RICE1 and a thick cloud data set RICE2, wherein the thick cloud data set RICE2 is from a Landsat8 remote sensing satellite. The original cloud layer image data obtained from the open source database comprises 450 pairs of images, and each pair of images comprises cloud image data, non-cloud image data and cloud layer mask data.
According to the embodiment of the disclosure, cloud image data, cloud-free image data and cloud mask data of original cloud layer image data are processed to obtain corresponding cloud training images, cloud-free training images and cloud mask training matrixes in a training sample set. For example, cloud image data and cloud-free image data are subjected to data expansion and registration processing; and carrying out binarization processing on the cloud layer mask data to obtain a binary cloud layer mask matrix.
According to an embodiment of the disclosure, a remote sensing image processing model employs a U-Net network, comprising an encoder and a decoder, each convolutional layer of the encoder is connected to the next convolutional layer of the encoder, and is also connected to the corresponding convolutional layer of the decoder.
According to the embodiment of the disclosure, the cloud training image and the cloud layer mask training matrix are rotated by a preset angle using python's OpenCV library to obtain the expanded cloud training data and the corresponding expanded cloud layer mask training matrix. Specifically, the preset angles include 0 degrees, 90 degrees and 180 degrees; for each cloud training image and cloud layer mask training matrix pair, a rotation angle is chosen at random from these values, and the image and its mask always receive the same rotation so that they remain aligned.
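The random-rotation augmentation can be sketched as follows; `np.rot90` stands in for the OpenCV rotation call, and the seed and sample data are illustrative:

```python
import numpy as np

def augment(img, mask, rng):
    """Rotate an image/mask pair by a random angle from {0, 90, 180}
    degrees.  The pair must receive the same rotation so that the
    cloud layer mask stays registered with the image."""
    k = rng.choice([0, 1, 2])          # number of quarter-turns
    return np.rot90(img, k), np.rot90(mask, k)

rng = np.random.default_rng(0)
img = np.arange(16.).reshape(4, 4)     # toy 4x4 "image"
mask = (img > 7).astype(float)         # toy binary mask derived from it
img_r, mask_r = augment(img, mask, rng)
```

Because the same k is applied to both arrays, any per-pixel relation between image and mask is preserved under the rotation.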
In operation S702, the extended cloud training data and the corresponding extended cloud layer mask training matrix are used as input of the remote sensing image processing model to be trained, so as to obtain the remote sensing image processing model.
According to the embodiment of the disclosure, rotating the cloud training images and the cloud layer mask training matrices expands the number of samples in the training sample set; the expanded cloud training data and the corresponding expanded cloud layer mask training matrices are then used as the input of the remote sensing image processing model to be trained.
According to the embodiment of the disclosure, a loss function of the remote sensing image processing model to be trained is obtained according to the cloud layer elimination image and the corresponding cloud-free training image obtained in the training process. And adjusting parameters of the remote sensing image processing model to be trained according to the loss function, and obtaining the trained remote sensing image processing model under the condition of meeting preset conditions.
According to the embodiment of the disclosure, the initial parameters of the network for training the remote sensing image processing model follow a normal distribution with mean 0 and standard deviation 0.05, and ReLU is the activation function between layers. The Adam algorithm is used for optimization, with an initial learning rate of 0.0001; training runs for 500,000 iterations in total, and the learning rate is multiplied by 0.8 every 20,000 iterations. The loss function of the remote sensing image processing model comprises the loss of the effective area, the loss of the cloud-occluded area, and the regular constraint term loss; specifically, the loss function satisfies:
L = L_valid + p·L_hole + q·L_tv (3)
where L is the loss function of the entire network, L_valid is the loss function of the effective area, L_hole is the loss function of the cloud-occluded region, and L_tv is the regular constraint term loss function used to keep the picture smooth; p is the training parameter of the cloud-occluded area loss function, and q is the training parameter of the regular constraint term loss function.
According to an embodiment of the present disclosure, after training, p is 5 and q is 0.1 in the loss function.
The loss function of the effective area satisfies:

$$L_{valid} = \frac{1}{N}\sum_{k=1}^{N} \left\| M \odot \left( f(W, b, x_k) - y_k \right) \right\|_1 \tag{4}$$

where M represents the corresponding binary cloud layer mask matrix, f(W, b, x_k) represents the output of the model for the k-th input when the weight of the convolution kernel is W and the bias is b, y_k represents the theoretical result corresponding to the k-th input, and N represents that there are N inputs in total.
The loss function of the cloud-occluded area satisfies:

$$L_{hole} = \frac{1}{N}\sum_{k=1}^{N} \left\| (1 - M) \odot \left( f(W, b, x_k) - y_k \right) \right\|_1 \tag{5}$$

where M represents the corresponding binary cloud layer mask matrix, f(W, b, x_k) represents the output of the model for the k-th input when the weight of the convolution kernel is W and the bias is b, y_k represents the theoretical result corresponding to the k-th input, and N represents that there are N inputs in total.
The loss function of the regular constraint term satisfies:

$$L_{tv} = \sum_{i,j} \left( \left\| x_{i,j+1} - x_{i,j} \right\|_1 + \left\| x_{i+1,j} - x_{i,j} \right\|_1 \right) \tag{6}$$

wherein x_{i,j} denotes the pixel at the i-th position in the horizontal direction and the j-th position in the vertical direction of the output image.
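The combined loss of formulas (3)–(6) can be sketched for a single image pair; the mean absolute error form of the valid and hole terms is an assumption consistent with the formulas, and p = 5, q = 0.1 follow the trained values above:

```python
import numpy as np

def total_loss(pred, target, M, p=5.0, q=0.1):
    """L = L_valid + p*L_hole + q*L_tv for one image pair.

    M is the binary cloud layer mask (1 = valid, 0 = cloud), so the
    valid term penalizes errors on clear pixels and the hole term
    penalizes errors on cloud-covered pixels; the TV term penalizes
    abrupt changes between neighbouring output pixels."""
    l_valid = np.abs(M * (pred - target)).mean()
    l_hole = np.abs((1 - M) * (pred - target)).mean()
    l_tv = (np.abs(np.diff(pred, axis=0)).sum()
            + np.abs(np.diff(pred, axis=1)).sum())
    return l_valid + p * l_hole + q * l_tv

pred = np.zeros((2, 2))                 # toy network output
target = np.array([[0., 1.],
                   [1., 0.]])           # toy cloud-free ground truth
M = np.array([[1., 0.],
              [0., 1.]])               # toy mask: diagonal pixels valid
loss = total_loss(pred, target, M)      # 0 + 5 * 0.5 + 0 = 2.5
```

The weighting p > 1 reflects that errors inside the cloud-covered hole matter most, since that is precisely the region being reconstructed.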
Fig. 8 schematically illustrates a flow chart of a method of obtaining a clouded image and a cloud layer mask matrix of an object to be clouded according to an embodiment of the disclosure.
As shown in fig. 8, the method includes operations S801 to S802.
In operation S801, based on the registered cloud image, the registered cloud image is segmented according to a preset step size to obtain a cloud image to be cloud-removed, where the cloud image to be cloud-removed includes a plurality of image blocks with the preset step size.
According to the embodiment of the disclosure, after the original cloud layer image data is acquired, the cloud image and the cloud-free image are registered by using professional remote sensing image software. And after the registration, segmenting the registered cloud image according to a fixed step length, and segmenting the image into an image block set with a fixed size to obtain the cloud image to be subjected to cloud removal.
In operation S802, a cloud layer mask matrix corresponding to a cloud image is determined according to the cloud image to be cloud-removed, where a cloud area in the cloud layer mask matrix is marked as 0 and other areas are marked as 1.
According to the embodiment of the disclosure, the cloud layer mask is subjected to binarization processing according to the cloud image to be subjected to cloud removal, the position corresponding to the cloud area is marked as 0, and other parts which are not covered by the cloud layer are marked as 1, so that a binarization cloud layer mask matrix only containing 0 and 1 is obtained.
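Operations S801–S802 can be sketched as follows; the 64-pixel step, the binarization threshold, and the use of non-overlapping blocks are illustrative assumptions:

```python
import numpy as np

def tile_and_mask(img, cloud, step=64, thresh=0.5):
    """Cut a registered cloud image into step x step blocks (S801) and
    binarize the cloud layer (S802): cloud pixels are marked 0 and
    cloud-free pixels are marked 1.  thresh is an assumed threshold."""
    H, W = img.shape[:2]
    blocks = [img[i:i + step, j:j + step]
              for i in range(0, H - step + 1, step)
              for j in range(0, W - step + 1, step)]
    mask = np.where(cloud > thresh, 0.0, 1.0)   # cloud area -> 0, rest -> 1
    return blocks, mask

img = np.random.rand(128, 128)                  # toy registered cloud image
cloud = np.zeros((128, 128))
cloud[:32, :32] = 1.0                           # cloud in the top-left corner
blocks, mask = tile_and_mask(img, cloud)        # 4 blocks of 64 x 64
```

In practice each image block would be paired with the corresponding block of the binarized mask before being fed to the model.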
According to an embodiment of the present disclosure, after the cloud image and the cloud-free image are registered, the registered images are divided into a training set, a verification set, and a test set at a ratio of 70%, 20% and 10%, respectively. The training sample set comprises the training set and the verification set and is used for training the remote sensing image processing model, while the test set is used for testing the trained remote sensing image processing model.
Fig. 9 schematically shows a block diagram of the configuration of the remote sensing image processing apparatus according to the embodiment of the present disclosure.
As shown in fig. 9, the remote sensing image processing apparatus 900 according to this embodiment includes an obtaining module 901, a first output module 902, a second output module 903, and an eliminating module 904.
An obtaining module 901, configured to obtain a cloud image to be cloud removed and a cloud layer mask matrix of the cloud image. In an embodiment, the obtaining module 901 may be configured to perform the operation S201 described above, which is not described herein again.
The first output module 902 is configured to input the cloud image and the cloud layer mask matrix into the image updating module, eliminate a cloud layer in a preset area in the cloud image based on the cloud layer mask matrix, and output an updated cloud image. In an embodiment, the first output module 902 may be configured to perform the operation S202 described above, which is not described herein again.
A second output module 903, configured to input the updated cloud image into the matrix update module, update the cloud mask matrix, and output the updated cloud mask matrix. In an embodiment, the second output module 903 may be configured to perform the operation S203 described above, which is not described herein again.
And an elimination module 904, configured to obtain an eliminated cloud layer image according to the updated cloud image and the updated cloud layer mask matrix. In an embodiment, the elimination module 904 may be configured to perform the operation S204 described above, which is not described herein again.
According to an embodiment of the present disclosure, the first output module 902 includes a first determining unit, a second determining unit, and a third determining unit.
The first determining unit is used for determining first characteristic information according to the cloud image and the cloud layer mask matrix. In an embodiment, the first determining unit may be configured to perform the operation S301 described above, which is not described herein again.
The second determining unit is used for adjusting the weight of a convolution kernel used for carrying out feature extraction on the cloud image based on the first feature information and determining second feature information. In an embodiment, the second determining unit may be configured to perform the operation S302 described above, which is not described herein again.
The third determining unit is used for adjusting the information quantity of the second characteristic information by using the scaling factor and outputting the updated cloud image. In an embodiment, the third determining unit may be configured to perform the operation S303 described above, which is not described herein again.
According to an embodiment of the present disclosure, the second output module 903 includes a first matrix determination unit, a second matrix determination unit, a third matrix determination unit, a fourth matrix determination unit, and a fifth matrix determination unit.
The first matrix determination unit is used for determining a first area to be convolved in the cloud layer mask matrix. In an embodiment, the first matrix determining unit may be configured to perform the operation S401 described above, which is not described herein again.
The second matrix determination unit is used for determining a second area corresponding to the first area in the updated cloud image. In an embodiment, the second matrix determining unit may be configured to perform the operation S402 described above, which is not described herein again.
The third matrix determining unit is used for updating a numerical value corresponding to the central position of the mask convolution kernel to be 1 under the condition that the second area has a part which is not covered by the cloud layer, so that the first area mask convolution kernel is obtained. In an embodiment, the third matrix determining unit may be configured to perform operation S403 described above, which is not described herein again.
The fourth matrix determining unit is used for convolving the first area by using the first area mask convolution kernel so as to update the first area of the cloud layer mask matrix. In an embodiment, the fourth matrix determining unit may be configured to perform the operation S404 described above, which is not described herein again.
And the fifth matrix determining unit is used for outputting the updated cloud layer mask matrix under the condition that all areas of the cloud layer mask matrix complete convolution operation. In an embodiment, the fifth matrix determining unit may be configured to perform operation S405 described above, which is not described herein again.
According to an embodiment of the present disclosure, the elimination module 904 includes a feature extraction unit and an update unit.
The feature extraction unit is used for performing E-time feature extraction on the updated cloud image and the updated cloud layer mask matrix by using the image updating module until all information covered by the cloud layer in the cloud image is recovered, outputting the image with the cloud layer eliminated, wherein the cloud image recovering the preset area information is generated after each feature extraction, and E is an integer greater than or equal to 2. In an embodiment, the feature extraction unit may be configured to perform the operation S501 described above, which is not described herein again.
The updating unit is used for updating the updated cloud image by using the matrix updating module after the cloud image for restoring the preset area information is obtained and before next feature extraction is carried out, so as to obtain the cloud layer mask matrix corresponding to the updated cloud image. In an embodiment, the updating unit may be configured to perform the operation S502 described above, which is not described herein again.
According to the embodiment of the disclosure, the remote sensing image processing device further comprises a preprocessing module, and the preprocessing module comprises a first preprocessing unit and a second preprocessing unit.
The first preprocessing unit is used for rotating the cloud training image and the cloud layer mask training matrix according to a preset angle to obtain expanded cloud training data and a corresponding expanded cloud layer mask training matrix. In an embodiment, the first preprocessing unit may be configured to perform the operation S701 described above, which is not described herein again.
The second preprocessing unit is used for taking the extended cloud training data and the corresponding extended cloud layer mask training matrix as the input of the remote sensing image processing model to be trained to obtain the remote sensing image processing model. In an embodiment, the second preprocessing unit may be configured to perform the operation S702 described above, which is not described herein again.
According to an embodiment of the present disclosure, the acquisition module 901 includes a registration unit and a marking unit.
The registration unit is used for segmenting the registered cloud image according to a preset step length based on the registered cloud image to obtain a cloud image to be subjected to cloud removal, and the cloud image to be subjected to cloud removal comprises a plurality of image blocks with the preset step length. In an embodiment, the registration unit may be configured to perform operation S801 described above, which is not described herein again.
The marking unit is used for determining a cloud layer mask matrix corresponding to the cloud image according to the cloud image to be subjected to cloud removal, wherein a cloud area in the cloud layer mask matrix is marked as 0, and other areas are marked as 1, so that the cloud layer mask matrix is obtained. In an embodiment, the marking unit may be configured to perform the operation S802 described above, which is not described herein again.
FIG. 10 schematically illustrates a block diagram of an electronic device adapted to implement the remote sensing image processing method according to an embodiment of the disclosure.
As shown in fig. 10, an electronic device 1000 according to an embodiment of the present disclosure includes a processor 1001 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003. The processor 1001 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), among others. The processor 1001 may also include onboard memory for caching purposes. The processor 1001 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the present disclosure.
In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are stored. The processor 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. The processor 1001 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1002 and/or the RAM 1003. Note that the programs may also be stored in one or more memories other than the ROM 1002 and the RAM 1003. The processor 1001 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 1000 may also include an input/output (I/O) interface 1005, the input/output (I/O) interface 1005 also being connected to bus 1004, according to an embodiment of the present disclosure. The electronic device 1000 may also include one or more of the following components connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement a method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 1002 and/or the RAM 1003 described above and/or one or more memories other than the ROM 1002 and the RAM 1003.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to realize the training method of the remote sensing image processing model provided by the embodiment of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 1001. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication part 1009, and/or installed from the removable medium 1011. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments of the present disclosure and/or the claims may be made without departing from the spirit and teachings of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. A remote sensing image processing method, wherein a remote sensing image processing model comprises an image updating module and a matrix updating module, the method comprising:
acquiring a cloudy image to be de-clouded and a cloud layer mask matrix of the cloudy image;
inputting the cloudy image and the cloud layer mask matrix into the image updating module, eliminating a cloud layer in a preset region of the cloudy image based on the cloud layer mask matrix, and outputting an updated cloudy image;
inputting the updated cloudy image into the matrix updating module, updating the cloud layer mask matrix, and outputting an updated cloud layer mask matrix; and
obtaining an image with the cloud layer eliminated according to the updated cloudy image and the updated cloud layer mask matrix.
2. The method of claim 1, wherein the inputting the cloud image and the cloud mask matrix into an image update module, eliminating cloud layers of a preset region in the cloud image based on the cloud mask matrix, and outputting the updated cloud image comprises:
determining first feature information according to the cloudy image and the cloud layer mask matrix;
adjusting, based on the first feature information, weights of a convolution kernel that performs feature extraction on the cloudy image, and determining second feature information; and
adjusting the information quantity of the second feature information using a scaling factor, and outputting an updated cloudy image.
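Claim 2 reads like a partial-convolution layer: convolution is restricted to valid (non-cloud) pixels and a scaling factor compensates for the fraction of the window that the cloud masks out. A minimal NumPy sketch under that assumption — the function name, single-channel setup, and single kernel are illustrative, not taken from the patent:

```python
import numpy as np

def partial_conv(image, mask, kernel, bias=0.0):
    """Partial-convolution-style update: convolve only over valid
    (mask == 1) pixels and rescale by the fraction of valid weights."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + kh, j:j + kw]
            if m.sum() > 0:
                # scaling factor of claim 2: boost windows with few valid pixels
                scale = kernel.size / m.sum()
                out[i, j] = (image[i:i + kh, j:j + kw] * m * kernel).sum() * scale + bias
    return out
```

Because of the rescaling, a window that loses one pixel to cloud still produces the same response as a fully valid window over a constant image, which is the intuition behind "adjusting the information quantity".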
3. The method of claim 2, wherein the inputting the updated cloud image into a matrix update module, updating the cloud mask matrix, and outputting the updated cloud mask matrix comprises:
determining a first region to be convolved in the cloud layer mask matrix;
determining a second region corresponding to the first region in the updated cloudy image;
when the second region contains a part not covered by the cloud layer, updating the value at the central position of a mask convolution kernel to 1 to obtain a first region mask convolution kernel;
convolving the first region with the first region mask convolution kernel to update the first region of the cloud layer mask matrix; and
when the convolution operation has been completed for all regions of the cloud layer mask matrix, outputting the updated cloud layer mask matrix.
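The mask update of claim 3 amounts to growing the valid region of the mask inward: whenever the window around a cloud pixel contains any non-cloud part, the corresponding centre value becomes 1. A small sketch of that behaviour, assuming a 3×3 window (the window size and function name are illustrative):

```python
import numpy as np

def update_cloud_mask(mask, k=3):
    """For each k x k first region whose counterpart in the updated image
    contains non-cloud pixels, set the centre of the mask convolution
    kernel to 1, growing the valid (1) region of the mask inward."""
    pad = k // 2
    # pad with 0 so windows at the border see no phantom valid pixels
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    updated = mask.copy()
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            window = padded[i:i + k, j:j + k]
            if (window == 1).any():  # part not covered by the cloud layer
                updated[i, j] = 1
    return updated
```

Repeated application shrinks the cloud region by one ring of pixels per pass, which is why the patent iterates the mask update alongside the image update.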
4. The method of claim 1, wherein the deriving an eliminated cloud image from the updated cloudy image and the updated cloud mask matrix comprises:
performing feature extraction E times on the updated cloudy image and the updated cloud layer mask matrix using the image updating module until all information covered by the cloud layer in the cloudy image is recovered, and outputting the image with the cloud layer eliminated, wherein a cloudy image with preset region information recovered is generated after each feature extraction, and E is an integer greater than or equal to 2; and
after each cloudy image with preset region information recovered is obtained and before the next feature extraction, updating, by the matrix updating module, the cloud layer mask matrix corresponding to the updated cloudy image.
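The iteration of claim 4 can be sketched as alternating image updates and mask updates until the mask is fully valid. The two step functions below are crude stand-ins (3×3 neighbourhood averaging and dilation) for the patent's learned modules, for illustration only:

```python
import numpy as np

def inpaint_step(image, mask):
    """Stand-in image updating module: fill each cloud pixel (mask == 0)
    with the mean of valid pixels in its 3x3 neighbourhood, if any."""
    out = image.copy()
    h, w = image.shape
    for i, j in zip(*np.where(mask == 0)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        valid = mask[i0:i1, j0:j1] == 1
        if valid.any():
            out[i, j] = image[i0:i1, j0:j1][valid].mean()
    return out

def mask_step(mask):
    """Stand-in matrix updating module: a cloud pixel becomes valid once
    its 3x3 neighbourhood contains a valid pixel."""
    out = mask.copy()
    h, w = mask.shape
    for i, j in zip(*np.where(mask == 0)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        if (mask[i0:i1, j0:j1] == 1).any():
            out[i, j] = 1
    return out

def remove_clouds(image, mask, max_e=16):
    """Alternate feature-extraction passes with mask updates until all
    cloud-covered information is recovered (mask all 1) or max_e passes."""
    for _ in range(max_e):
        if mask.all():
            break
        image = inpaint_step(image, mask)
        mask = mask_step(mask)
    return image, mask
```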
5. The method of claim 1, wherein the cloudy image comprises a remote sensing image of N bands, and the inputs of the remote sensing image processing model are the N-band remote sensing image and the cloud layer mask matrix, wherein N is an integer greater than or equal to 2.
6. The method of claim 1, wherein the remote sensing image processing model is determined using a training sample set comprising a plurality of data pairs, each data pair comprising a cloudy training image, a cloud-free training image, and a cloud layer mask training matrix, and determining the remote sensing image processing model comprises:
rotating the cloudy training image and the cloud layer mask training matrix by a preset angle to obtain extended cloudy training data and a corresponding extended cloud layer mask training matrix; and
using the extended cloudy training data and the corresponding extended cloud layer mask training matrix as inputs of a remote sensing image processing model to be trained, to obtain the remote sensing image processing model.
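The augmentation of claim 6 can be sketched with `np.rot90`, assuming preset angles that are multiples of 90 degrees (arbitrary angles would need interpolation and re-binarization of the mask; the function name and angle choice are illustrative):

```python
import numpy as np

def augment_by_rotation(cloudy_image, cloud_mask, angles=(90, 180, 270)):
    """Rotate the cloudy training image and its cloud layer mask training
    matrix by preset angles to obtain extended training data pairs.
    The image and mask are always rotated together so they stay aligned."""
    pairs = [(cloudy_image, cloud_mask)]
    for angle in angles:
        k = angle // 90  # number of 90-degree counter-clockwise turns
        pairs.append((np.rot90(cloudy_image, k), np.rot90(cloud_mask, k)))
    return pairs
```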
7. The method of claim 1, wherein the acquiring a cloudy image to be de-clouded and a cloud layer mask matrix of the cloudy image comprises:
segmenting the registered cloudy image according to a preset step size to obtain the cloudy image to be de-clouded, wherein the cloudy image to be de-clouded comprises a plurality of image blocks of the preset step size; and
determining the cloud layer mask matrix corresponding to the cloudy image according to the cloudy image to be de-clouded, wherein cloud areas in the cloud layer mask matrix are marked 0 and all other areas are marked 1.
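The segmentation of claim 7 can be sketched as non-overlapping tiling plus derivation of a 0/1 mask per block. Here `cloud_map` is an assumed per-pixel boolean cloud flag from an upstream cloud detector; the patent does not specify how cloud pixels are identified:

```python
import numpy as np

def tile_and_mask(registered_image, cloud_map, step):
    """Segment the registered cloudy image into blocks of the preset step
    size and derive each block's cloud layer mask matrix: cloud areas are
    marked 0, all other areas 1."""
    h, w = registered_image.shape[:2]
    tiles = []
    for top in range(0, h - step + 1, step):
        for left in range(0, w - step + 1, step):
            block = registered_image[top:top + step, left:left + step]
            cloud = cloud_map[top:top + step, left:left + step]
            mask = np.where(cloud, 0, 1)  # cloud -> 0, clear -> 1
            tiles.append((block, mask))
    return tiles
```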
8. A remote sensing image processing apparatus, wherein a remote sensing image processing model comprises an image updating module and a matrix updating module, the apparatus comprising:
an acquisition module configured to acquire a cloudy image to be de-clouded and a cloud layer mask matrix of the cloudy image;
a first output module configured to input the cloudy image and the cloud layer mask matrix into the image updating module, eliminate a cloud layer in a preset region of the cloudy image based on the cloud layer mask matrix, and output an updated cloudy image;
a second output module configured to input the updated cloudy image into the matrix updating module, update the cloud layer mask matrix, and output an updated cloud layer mask matrix; and
an elimination module configured to obtain an image with the cloud layer eliminated according to the updated cloudy image and the updated cloud layer mask matrix.
9. An electronic device, comprising:
one or more processors;
a memory storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
11. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 7 when executed.
CN202210305227.1A 2022-03-25 2022-03-25 Remote sensing image processing method, device, equipment and medium Pending CN114677590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210305227.1A CN114677590A (en) 2022-03-25 2022-03-25 Remote sensing image processing method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN114677590A true CN114677590A (en) 2022-06-28

Family

ID=82076456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210305227.1A Pending CN114677590A (en) 2022-03-25 2022-03-25 Remote sensing image processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114677590A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876817A (en) * 2023-12-25 2024-04-12 北京化工大学 Method for generating countermeasure sample

Similar Documents

Publication Publication Date Title
US8547389B2 (en) Capturing image structure detail from a first image and color from a second image
US8340415B2 (en) Generation of multi-resolution image pyramids
CN108229525B (en) Neural network training and image processing method and device, electronic equipment and storage medium
US8867858B2 (en) Method and system for generating an output image of increased pixel resolution from an input image
CN109753971B (en) Correction method and device for distorted text lines, character recognition method and device
CN110991430B (en) Ground feature identification and coverage rate calculation method and system based on remote sensing image
Fishbain et al. Real-time stabilization of long range observation system turbulent video
CN112800915A (en) Building change detection method, building change detection device, electronic device, and storage medium
CN112101309A (en) Ground object target identification method and device based on deep learning segmentation network
CN111797571B (en) Landslide susceptibility evaluation method, landslide susceptibility evaluation device, landslide susceptibility evaluation equipment and storage medium
CN113689372B (en) Image processing method, apparatus, storage medium, and program product
CN114677590A (en) Remote sensing image processing method, device, equipment and medium
CN115512222A (en) Method for evaluating damage of ground objects in disaster scene of offline training and online learning
Li et al. GTMNet: a vision transformer with guided transmission map for single remote sensing image dehazing
CN116309612B (en) Semiconductor silicon wafer detection method, device and medium based on frequency decoupling supervision
CN115760641A (en) Remote sensing image cloud and fog removing method and device based on multi-scale feature attention network
CN115760578A (en) Image processing method and device, electronic equipment and storage medium
Zhijian et al. Infrared image super-resolution method based on dual-branch deep neural network
CN115880517A (en) Model training method and device and related equipment
CN112164006A (en) Image color homogenizing method and device, electronic equipment and storage medium
CN117495713B (en) Remote sensing blurred image restoration method and system
CN116402725B (en) Oblique strip removing method, device, equipment and medium
CN116091367B (en) Blind deblurring method, device, equipment and medium for optical remote sensing image
Su et al. Restoration of turbulence-degraded images using the modified convolutional neural network
CN116402693B (en) Municipal engineering image processing method and device based on remote sensing technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination