CN114298945B - Optical remote sensing image thick cloud removing method based on virtual image construction - Google Patents

Optical remote sensing image thick cloud removing method based on virtual image construction

Info

Publication number
CN114298945B
CN114298945B (application CN202210005125.8A)
Authority
CN
China
Prior art keywords
cloud
image
area
remote sensing
time
Prior art date
Legal status
Active
Application number
CN202210005125.8A
Other languages
Chinese (zh)
Other versions
CN114298945A (en)
Inventor
柯樱海
王展鹏
吕明苑
朱丽娟
Current Assignee
Capital Normal University
Original Assignee
Capital Normal University
Priority date
Filing date
Publication date
Application filed by Capital Normal University filed Critical Capital Normal University
Priority to CN202210005125.8A priority Critical patent/CN114298945B/en
Publication of CN114298945A publication Critical patent/CN114298945A/en
Application granted granted Critical
Publication of CN114298945B publication Critical patent/CN114298945B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a thick cloud removal method for optical satellite remote sensing images based on virtual image construction. First, for an image covered by thick cloud, cloud pixels are removed using cloud mask data; the removed cloud pixels serve as the target pixels to be recovered, and connected target pixels form a cloud area to be recovered. Each cloud area in the image is then de-clouded separately. A time-series weighted spectral distance is constructed, and for each target pixel similar pixels are searched in the inner buffer. Weights are calculated from the time-series weighted spectral distance and the spatial distance between each similar pixel and the target pixel, the residuals of the similar pixels are allocated to the target pixel by weighted linear allocation to obtain the target pixel's residual, and in turn the residual image of the cloud area. Finally, the virtual image and the residual image of the cloud area are combined to obtain the cloud-free image of the cloud area. The method effectively solves the problem that optical remote sensing images cannot provide surface information under cloud contamination.

Description

Optical remote sensing image thick cloud removing method based on virtual image construction
Technical Field
The invention belongs to the field of remote sensing image processing, and particularly relates to an optical remote sensing image thick cloud removing method based on virtual image construction.
Background
Satellite remote sensing is an important data source for large-scale, long-time-series Earth observation and plays an irreplaceable role in land-use change analysis, crop management, environmental monitoring and other applications. However, in practice optical satellites are inevitably affected by cloud contamination, which greatly reduces data availability. Thick cloud cover poses challenges to the timeliness and accuracy of monitoring, whether for long-term monitoring of land-surface geophysical variables or for short-term, sudden disasters. Therefore, reconstructing the missing values caused by thick cloud is very important for the application of satellite data.
Previously proposed thick cloud removal methods mainly reconstruct the cloud region with the help of existing auxiliary information. According to the type of auxiliary information, they can be grouped into three categories: methods based only on spatial information, methods based on multiple data sources, and methods based on both temporal and spatial information. However, existing methods mainly use a single-scene cloud-free reference image, whose accuracy depends on the similarity between the reference image and the cloudy image, while methods using multi-temporal or long-time-series images often carry redundant temporal information. In addition, algorithms based on a single reference image or on time-series images determine the similar pixels of a cloud pixel with measures such as spectral distance or spectral angle, but the degree of similarity of the selected pixels is still hard to guarantee. Existing methods also need to eliminate accuracy differences caused by human factors in practice, and require high computational efficiency to support long-time-series, large-scale applications.
To address these problems, the invention provides a thick cloud removal method for optical remote sensing images based on virtual image construction, which is verified on a Landsat data set. Using time-series optical remote sensing images, the method can automatically select the optimal multi-temporal data for combined cloud removal; at the same time, a similar-pixel selection strategy that jointly considers spectrum and time is proposed, reducing similar-pixel selection errors caused by temporal change. The method maintains high computational efficiency while achieving better cloud removal performance, and can support practical long-time-series, large-scale applications of remote sensing images.
Disclosure of Invention
1. Technical problem to be solved
The invention aims to provide a thick cloud removal method for optical remote sensing images based on a constructed virtual image, so as to solve the problem identified in the background art:
that optical remote sensing images cannot provide surface information under cloud contamination.
2. Technical scheme
An optical remote sensing image thick cloud removing method based on virtual image construction is characterized by comprising the following steps:
s1, acquiring a time sequence optical remote sensing image of the target research area;
s2, masking the target cloud pollution remote sensing image through cloud mask data;
s3, respectively constructing two layers of buffer areas aiming at each cloud area of the target image, and processing the buffer areas one by one;
s4, performing linear fitting based on the information of the cloud area buffer area and the information of the corresponding area in the time sequence image, adaptively selecting the number of images and obtaining linear fitting parameters, and constructing a virtual image of the cloud area and a residual image in the inner layer buffer area;
s5, aiming at each target pixel in the cloud area, selecting m similar pixels with the minimum spectral distance by using the time sequence weighted spectral distance;
s6, calculating interpolation distribution weight according to the time sequence weighted spectrum distance and the space distance of the m similar pixels and the target pixel;
s7, based on the interpolation distribution weight, distributing the residual error weighted sum of m similar image elements in the buffer area to the target image element in the cloud area to obtain a residual error image in the cloud area;
s8, summing the virtual image of the cloud area and the residual image of the corresponding area to obtain a cloud-free image of the cloud area;
and S9, repeating the operations of S4 to S8 until the reconstruction of each cloud area is completed, and merging to construct the final cloud-free image.
Preferably, the time-series optical remote sensing images in S1 include multispectral optical satellite data such as Landsat series satellite data and Sentinel-2 satellite data; the invention selects the surface reflectance product of Landsat imagery as the data prepared for cloud removal reconstruction.
Preferably, in S2 the time-series optical remote sensing images are masked with cloud mask data. The cloud mask data can be obtained in several ways: for example, Landsat series data contain a quality assessment (QA) band, and when a pixel is identified by the QA band as cloud or cloud shadow, its value is set to null, i.e. the part to be de-clouded; alternatively, cloud pixels can be extracted from the original optical remote sensing image by other cloud detection methods.
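As an illustration of this masking step, the following is a minimal Python sketch (not part of the patent), assuming the surface reflectance stack is a NumPy array and the cloud/shadow flags come from the Landsat quality band; the QA bit positions differ between Landsat collections, so they are treated here as caller-supplied assumptions.

```python
# Minimal sketch of S2 (cloud masking). Assumes `image` is a float surface-reflectance
# stack of shape (bands, rows, cols) and `qa_band` is the Landsat pixel-QA band.
import numpy as np

def mask_clouds(image, qa_band, cloud_bits=(3, 4)):
    """Set pixels flagged as cloud or cloud shadow to NaN (the null value to be reconstructed).

    cloud_bits: QA bit positions treated as cloud / cloud shadow (assumed values;
    they differ between Landsat Collection 1 and Collection 2 products).
    """
    cloud_mask = np.zeros(qa_band.shape, dtype=bool)
    for bit in cloud_bits:
        cloud_mask |= ((qa_band >> bit) & 1).astype(bool)   # pixel flagged by this QA bit
    masked = image.astype(np.float64).copy()
    masked[:, cloud_mask] = np.nan                          # null value = part needing cloud removal
    return masked, cloud_mask
```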
Preferably, in S3, each cloud area refers to an independent patch formed by connected cloud pixels, and different patches are processed separately without mutual interference.
Preferably, the two-layer buffer in S3 means that an inner buffer and an outer buffer are constructed for each cloud area. The inner buffer is constructed with the principle of a dilation algorithm, for example using a sliding window: when cloud pixels are present in the window, all pixels in the window (except the cloud pixels) join the inner buffer. The outer buffer is constructed in the same way, with both the inner buffer and the cloud area treated as cloud pixels.
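A possible realisation of this two-layer buffer construction is sketched below in Python, under the assumption that the "sliding window" corresponds to a square structuring element for binary dilation; the window size and the use of scipy.ndimage are illustrative choices rather than the patent's implementation.

```python
# Sketch of S3: label each connected cloud patch and build its inner / outer buffer
# by binary dilation; the dilation footprint plays the role of the sliding window.
import numpy as np
from scipy import ndimage

def build_buffers(cloud_mask, window=16):
    """Yield (cloud_region, inner_buffer, outer_buffer) masks for each connected cloud area."""
    structure = np.ones((window, window), dtype=bool)       # sliding-window footprint (assumed size)
    labels, n_regions = ndimage.label(cloud_mask)
    for k in range(1, n_regions + 1):
        region = labels == k                                 # one independent cloud patch
        dilated = ndimage.binary_dilation(region, structure=structure)
        inner = dilated & ~region                            # clear pixels ringing the cloud
        # outer buffer: treat cloud area + inner buffer as cloud and dilate once more
        dilated2 = ndimage.binary_dilation(dilated, structure=structure)
        outer = dilated2 & ~dilated
        yield region, inner, outer
```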
Preferably, the step of adaptively selecting the number of images and obtaining linear fitting parameters in S4 includes the following specific steps:
1) First, the image in the time series closest in time to the cloud-contaminated image is input together with the cloud-contaminated image, and a linear regression is solved on the inner buffer outside the cloud area to obtain the linear regression parameters, with the specific formula:
L_p = a_1·L_1 + b + γ
where L_p is the cloud-contaminated image, i.e. the image to be reconstructed, L_1 is the image at time t_1, the time in the series closest to that of the cloud-contaminated image, γ is the residual image in the inner buffer of the linear regression formula, a_1 is the coefficient corresponding to the t_1 image, and b is the constant term.
2) Based on the parameters a_1(n) and b(n) and the t_1 image, the virtual image L_v is constructed:
L_v = a_1·L_1 + b
3) Based on the virtual image L_v and the cloud-contaminated image, the root mean square error (RMSE) between the cloud-contaminated image and the virtual image is calculated for each band in the outer buffer and summed over bands as the accuracy evaluation index sRMSE (the sum of RMSE), calculated as:
sRMSE = Σ_{n=1}^{N} √( Σ_{(x,y)∈outer buffer} ( L_p(x,y,n) − L_v(x,y,n) )² / num )
where num is the number of pixels in the outer buffer and (x, y, n) denotes the value of the nth band of pixel (x, y);
4) A further image is added from the time series, namely the image on the other side of the cloud-contaminated image's acquisition time relative to t_1, and steps 1-3 are repeated. When the previous sRMSE is smaller than the current one, the linear regression parameters and virtual image of the previous iteration are output; otherwise the loop continues, i.e. a new image is added and the calculation repeated, images being added symmetrically before and after the target image's acquisition time. If the image at some time has cloud contamination inside the cloud area, the subsequent loop step is skipped and the next image is input. When multiple images are input, the formulas for the linear regression and the virtual image construction become, respectively:
L_p = Σ_{t=1}^{T} a_t·L_t + b + γ
L_v = Σ_{t=1}^{T} a_t·L_t + b
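For illustration, the adaptive selection of steps 1)-4) can be sketched in Python as follows; the sketch assumes the candidate reference images are already ordered by temporal distance to the cloudy image (alternating before and after it) and screened so that their own cloud region is clear, and all function and variable names are hypothetical.

```python
# Sketch of S4: fit L_p ≈ Σ_t a_t L_t + b per band on the inner buffer, score the fit
# with sRMSE on the outer buffer, and stop adding reference images once sRMSE worsens.
import numpy as np

def fit_virtual_image(cloudy, references, inner, outer):
    """cloudy and each reference: float arrays (bands, rows, cols); inner/outer: boolean masks.
    Returns the virtual image L_v, the inner-buffer residual γ, and the coefficients a_t(n), b(n)."""
    bands = cloudy.shape[0]
    best = None
    for T in range(1, len(references) + 1):
        refs = references[:T]
        virtual = np.full_like(cloudy, np.nan)
        coefs = np.zeros((bands, T + 1))                     # [a_1 .. a_T, b] per band
        for n in range(bands):
            X = np.column_stack([r[n][inner] for r in refs] + [np.ones(int(inner.sum()))])
            y = cloudy[n][inner]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            coefs[n] = coef
            stack = np.stack([r[n] for r in refs] + [np.ones_like(cloudy[n])])
            virtual[n] = np.tensordot(coef, stack, axes=1)   # L_v = Σ a_t L_t + b
        # sRMSE: per-band RMSE on the outer buffer, summed over bands
        srmse = sum(np.sqrt(np.nanmean((cloudy[n][outer] - virtual[n][outer]) ** 2))
                    for n in range(bands))
        if best is not None and best[0] <= srmse:            # adding another image stopped helping
            break
        residual = np.where(inner, cloudy - virtual, np.nan) # γ, defined on the inner buffer
        best = (srmse, virtual, residual, coefs)
    return best[1], best[2], best[3]
```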
preferably, the time-weighted Spectral Distance (TWSD) in S5 is calculated by the following formula:
TWSD_ij = Σ_{t=1}^{T} Σ_{n=1}^{N} a_t(n)·| L_t(x_i,y_i,n) − L_t(x_j,y_j,n) |
where (x_i, y_i) is the position of the target pixel in the cloud area, (x_j, y_j) is the position of a pixel in the inner buffer, TWSD_ij is the time-series weighted spectral distance between the ith and jth pixels, a_t(n) is determined from the parameters obtained in the first step, and N is the total number of bands.
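A Python sketch of this similar-pixel search follows. Because the TWSD formula above is reconstructed from its verbal description, the sketch likewise assumes TWSD is the coefficient-weighted sum of absolute per-band spectral differences across the reference dates; the function signature and candidate layout are illustrative, not the patent's implementation.

```python
# Sketch of S5: compute TWSD between one target (cloud) pixel and every inner-buffer
# candidate, then keep the m candidates with the smallest distance.
import numpy as np

def select_similar_pixels(references, coefs, target_rc, candidate_rc, m=20):
    """references : list of reference images, each (bands, rows, cols)
    coefs        : (bands, T+1) regression coefficients; last column is b and is ignored
    target_rc    : (row, col) of the target pixel inside the cloud area
    candidate_rc : (K, 2) array of inner-buffer pixel coordinates
    Returns indices of the m smallest-TWSD candidates and their TWSD values."""
    ti, tj = int(target_rc[0]), int(target_rc[1])
    twsd = np.zeros(len(candidate_rc))
    T = coefs.shape[1] - 1                                   # number of reference images in the fit
    for t in range(T):
        ref = references[t]
        diff = np.abs(ref[:, ti, tj][:, None] -
                      ref[:, candidate_rc[:, 0], candidate_rc[:, 1]])   # (bands, K)
        twsd += np.sum(coefs[:, t][:, None] * diff, axis=0)             # weight by a_t(n)
    order = np.argsort(twsd)[:m]
    return order, twsd[order]
```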
Preferably, the calculation method of the spatial distance and the interpolation allocation weight in S6 is as follows:
Dist_ij = √( (x_i − x_j)² + (y_i − y_j)² )
where Dist_ij is the spatial distance between the ith target pixel and the jth similar pixel; the method computes it only for the m similar pixels with the smallest spectral distance to each target pixel. The weight of each similar pixel with respect to the target pixel is then calculated from the time-series weighted spectral distance and the spatial distance as follows:
CD_ij = TWSD_ij × Dist_ij
D_ij = 1 / CD_ij
so that the weight w_ij of the jth similar pixel for the ith target pixel is
w_ij = D_ij / Σ_{j=1}^{m} D_ij
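A sketch of this weighting in Python, under the same assumption stated above for the weight formulas (combined cost = TWSD × spatial distance, weight = its normalised inverse, a common choice in neighbourhood-similar-pixel interpolation):

```python
# Sketch of S6: Euclidean spatial distance plus TWSD → normalised interpolation weights.
import numpy as np

def interpolation_weights(target_rc, similar_rc, twsd, eps=1e-12):
    """target_rc: (row, col) of the target pixel; similar_rc: (m, 2) coordinates of the
    similar pixels; twsd: (m,) their time-series weighted spectral distances."""
    dist = np.sqrt((similar_rc[:, 0] - target_rc[0]) ** 2 +
                   (similar_rc[:, 1] - target_rc[1]) ** 2)   # Dist_ij
    inv = 1.0 / (twsd * dist + eps)                          # closer in space and spectrum → larger
    return inv / inv.sum()                                   # w_ij, summing to 1
```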
Preferably, in S7, based on the interpolation allocation weights w_ij, the residuals of the m similar pixels in the buffer are weighted, summed and allocated to the target pixel in the cloud area; the residual image inside the cloud area is calculated as:
γ(x_i, y_i, n) = Σ_{j=1}^{m} w_ij·γ(x_j, y_j, n)
where γ(x_i, y_i, n) is the residual of the target pixel (x_i, y_i) in the cloud area, calculated from the weights w_ij obtained in S6 and the inner-buffer residual image γ(x_j, y_j, n) obtained in S4.
Preferably, in S8, the de-clouded image of the cloud area is obtained by summing the virtual image obtained in S4 and the residual image obtained in S7, giving the final reconstruction result:
L_p = L_v + γ
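Steps S7-S8 can then be written per target pixel: the residual is the weighted sum of the similar pixels' residuals, and the reconstructed value is L_v + γ. The helper names and array layouts follow the hypothetical sketches above.

```python
# Sketch of S7-S8: allocate the weighted residual to the target pixel and add it to
# the virtual image value, giving the cloud-free value of that pixel in every band.
import numpy as np

def reconstruct_pixel(virtual, residual_inner, target_rc, similar_rc, weights):
    ti, tj = int(target_rc[0]), int(target_rc[1])
    # γ(x_i, y_i, n) = Σ_j w_ij · γ(x_j, y_j, n), evaluated for all bands at once
    gamma = np.sum(weights[None, :] *
                   residual_inner[:, similar_rc[:, 0], similar_rc[:, 1]], axis=1)
    return virtual[:, ti, tj] + gamma                        # L_p = L_v + γ for this pixel
```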
3. advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) The method constructs a virtual image and selects similar pixels with the time-series weighted spectral distance to reconstruct the cloud-contaminated area of the optical remote sensing image. It can automatically select the optimal image combination, effectively avoiding the accuracy differences caused by manually selecting a reference image; compared with conventional spectral distance measures, it better accounts for temporal factors when selecting similar pixels and interpolating, which is closer to the actual situation.
(2) The method reconstructs the cloud areas of remote sensing images quickly while maintaining high accuracy, supports practical long-time-series, large-scale applications of optical remote sensing images, and effectively improves the usability of remote sensing data.
Drawings
Fig. 1 is a schematic flow chart of an optical remote sensing image thick cloud removal method based on virtual image construction according to the present invention;
FIG. 2 is a high resolution image thick cloud occlusion diagram according to an embodiment of the present invention;
fig. 3 is a diagram illustrating a result of removing thick clouds from a high-resolution image according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the attached drawings and examples for facilitating understanding and implementation of the present invention by those of ordinary skill in the art, and it is to be understood that the implementation examples described herein are only for illustrating and explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Example 1:
referring to fig. 1, the invention provides a method for removing thick cloud of an optical remote sensing image based on a virtual image, which specifically comprises the following steps:
s1, acquiring time sequence Landsat surface reflectivity product images of the target research area for data preparation;
s2, masking the time series Landsat remote sensing images through quality detection wave bands contained in the data, and setting the value of the pixel as a null value when the pixel is identified to be cloud or cloud shadow by the quality detection wave band data, namely, a part needing cloud removal;
s3, selecting the target cloud-contaminated image to be processed and constructing an inner buffer and an outer buffer for each cloud area. The inner buffer is built with the principle of a dilation algorithm, for example using a sliding window: when cloud pixels are present in the window, all pixels in the window (except the cloud pixels) join the inner buffer; the outer buffer is built in the same way, with the inner buffer and the cloud area both treated as cloud pixels;
s4, for a single cloud area, first inputting the image in the time series closest in time to the cloud-contaminated image together with the cloud-contaminated image, and solving a linear regression on the inner buffer outside the cloud area to obtain the linear regression parameters, with the specific formula:
L_p = a_1·L_1 + b + γ
where L_p is the cloud-contaminated image, i.e. the image to be reconstructed, L_1 is the image at time t_1, the time in the series closest to that of the cloud-contaminated image, γ is the residual image in the inner buffer of the linear regression formula, a_1 is the coefficient corresponding to the t_1 image, and b is the constant term. Then, based on the parameters a_1 and b and the t_1 image, the virtual image L_v is constructed:
L_v = a_1·L_1 + b
Using the virtual image L_v and the cloud-contaminated image, the root mean square error (RMSE) between them is calculated for each band in the outer buffer and summed over bands as the accuracy evaluation index sRMSE (the sum of RMSE):
sRMSE = Σ_{n=1}^{N} √( Σ_{(x,y)∈outer buffer} ( L_p(x,y,n) − L_v(x,y,n) )² / num )
where num is the number of pixels in the outer buffer and (x, y, n) denotes the value of the nth band of pixel (x, y);
Finally, a further image is added from the time series, namely the image on the other side of the cloud-contaminated image's acquisition time relative to t_1, and the steps of S4 are repeated. When the previous sRMSE is smaller than the current one, the linear regression parameters and virtual image of the previous iteration are output; otherwise the loop continues, i.e. a new image is added and the calculation repeated, images being added symmetrically before and after the target image's acquisition time. If the image at some time has cloud contamination inside the cloud area, the subsequent loop step is skipped and the next image is input. When multiple images are input, the formulas for the linear regression and the virtual image construction become, respectively:
L_p = Σ_{t=1}^{T} a_t·L_t + b + γ
L_v = Σ_{t=1}^{T} a_t·L_t + b
s5, for each target pixel in the cloud area, using the time-series weighted spectral distance TWSD:
TWSD_ij = Σ_{t=1}^{T} Σ_{n=1}^{N} a_t(n)·| L_t(x_i,y_i,n) − L_t(x_j,y_j,n) |
where (x_i, y_i) is the position of the target pixel in the cloud area, (x_j, y_j) is the position of a pixel in the inner buffer, TWSD_ij is the time-series weighted spectral distance between the ith and jth pixels, a_t(n) is determined from the parameters obtained in the first step, and N is the total number of bands; the m pixels with the smallest TWSD are selected as similar pixels;
s6, calculating the spatial distance between each of the m selected similar pixels and the target pixel:
Dist_ij = √( (x_i − x_j)² + (y_i − y_j)² )
where Dist_ij is the spatial distance between the ith target pixel and the jth similar pixel. The weight of each similar pixel with respect to the target pixel is then calculated from the time-series weighted spectral distance and the spatial distance as follows:
CD_ij = TWSD_ij × Dist_ij
D_ij = 1 / CD_ij
so that the weight w_ij of the jth similar pixel for the ith target pixel is
w_ij = D_ij / Σ_{j=1}^{m} D_ij
S7, based on the interpolation allocation weights w_ij, the residuals of the m similar pixels in the buffer are weighted, summed and allocated to the target pixel in the cloud area to obtain the residual image inside the cloud area:
γ(x_i, y_i, n) = Σ_{j=1}^{m} w_ij·γ(x_j, y_j, n)
where γ(x_i, y_i, n) is the residual of the target pixel (x_i, y_i) in the cloud area, calculated from the weights w_ij obtained in S6 and the inner-buffer residual image γ(x_j, y_j, n) obtained in S4.
S8, summing the virtual image obtained in S4 and the residual image obtained in S7 to obtain the de-clouded image of the cloud area, i.e. the final reconstruction result:
L_p = L_v + γ
and S9, repeating the operations of S4 to S8 until the reconstruction of each cloud area is completed, and merging to construct the final cloud-free image.
The method can automatically select optimal multi-temporal data to carry out combination cloud removal by utilizing time series optical remote sensing images; meanwhile, a similar pixel selection strategy comprehensively considering spectrum and time is provided, so that similar pixel selection errors caused by time change are reduced; the method has the advantages that high calculation efficiency can be kept, cloud removing performance is better, and practical application of the optical remote sensing image on a long-time sequence large scale can be met.
Example 2:
Referring to fig. 2-3, this embodiment differs from embodiment 1 in that:
the yellow river delta was selected as the study area in this example. The method mainly writes and calculates codes on MATLAB software, and further supplements and proves the feasibility of the method by applying the method provided by the invention to practical cases.
Step one, obtaining Landsat 8 surface reflectance products covering the Yellow River Delta region in 2019.
Step two, obtaining the cloud and cloud-shadow distribution of each image from the quality assessment band of the Landsat 8 data, and masking with ENVI software so that the cloud areas are set to null values.
Step three, selecting the Landsat 8 image of 18 July 2019 (fig. 2) as the target data, writing the code in MATLAB, and first extracting each cloud area and its corresponding buffers.
Step four, constructing the linear regression equation of each cloud area in the MATLAB code, building the buffers with a 16 × 16 sliding window, and adaptively obtaining the optimal virtual image and the buffer residual image according to the accuracy result on the outer buffer.
Step five, calculating the time-series weighted spectral distance (TWSD) between each cloud pixel and the buffer pixels using the coefficients of the different images from the linear regression, and selecting for each cloud pixel the 20 pixels with the smallest spectral distance as similar pixels.
Step six, calculating the spatial distance between each similar pixel and the target pixel, and combining it with the time-series weighted spectral distance to calculate the weight of each similar pixel with respect to the target pixel.
Step seven, redistributing the residual values of the inner buffer according to these weights to obtain the residual image inside the cloud area.
Step eight, processing each cloud area through steps four to seven, and adding the virtual image and the residual image to obtain the final cloud-free image; fig. 3 shows the cloud removal result for the Landsat 8 image of 18 July 2019.
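For illustration, steps three to eight of this embodiment can be tied together as below. The original implementation is written in MATLAB; this Python outline only mirrors the flow using the hypothetical helpers sketched earlier, with this embodiment's parameters (16 × 16 sliding window, 20 similar pixels).

```python
# End-to-end sketch for one cloud-contaminated scene: per-region buffers → virtual image
# and residual → similar pixels → weights → pixel-by-pixel reconstruction.
import numpy as np

def remove_thick_clouds(cloudy, references, cloud_mask, window=16, m=20):
    result = cloudy.copy()
    for region, inner, outer in build_buffers(cloud_mask, window=window):
        virtual, residual, coefs = fit_virtual_image(cloudy, references, inner, outer)
        inner_rc = np.argwhere(inner)
        for target_rc in np.argwhere(region):                # every cloud pixel in this patch
            idx, twsd = select_similar_pixels(references, coefs, target_rc, inner_rc, m=m)
            w = interpolation_weights(target_rc, inner_rc[idx], twsd)
            result[:, target_rc[0], target_rc[1]] = reconstruct_pixel(
                virtual, residual, target_rc, inner_rc[idx], w)
    return result                                            # final cloud-free image
```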
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. An optical remote sensing image thick cloud removing method based on virtual image construction is characterized by comprising the following steps:
s1, acquiring a time sequence optical remote sensing image of the target research area;
s2, masking the target cloud pollution remote sensing image through cloud mask data;
s3, constructing two buffers for each cloud area in the cloud-contaminated image and processing them one by one, each cloud area being an independent patch formed by connected cloud pixels, different patches being processed separately without mutual interference, the two-layer buffer meaning that an inner buffer and an outer buffer are constructed for each cloud area;
s4, performing linear fitting based on the information of the cloud area buffer area and the information of the corresponding area in the time sequence image, adaptively selecting the number of images and obtaining linear fitting parameters, and constructing a virtual image of the cloud area and a residual image in the inner layer buffer area;
s5, aiming at each target pixel in the cloud area, selecting m similar pixels with the minimum spectral distance by using the time sequence weighted spectral distance;
s6, calculating interpolation distribution weight according to the time sequence weighted spectrum distance and the space distance of the m similar pixels and the target pixel;
s7, based on the interpolation distribution weight, distributing the residual error weighted sum of m similar image elements in the buffer area to the target image element in the cloud area to obtain a residual error image in the cloud area;
s8, summing the virtual image of the cloud area and the residual image of the corresponding area to obtain a cloud-free image of the cloud area;
s9, repeating the operations from S4 to S8 until each cloud area is reconstructed, and combining to construct a final cloud-free image;
in the step S4, the number of images is adaptively selected and linear fitting parameters are obtained, and the specific steps of constructing the virtual image and the residual image of the inner buffer area are as follows:
1) First, the image in the time series closest in time to the cloud-contaminated image is input together with the cloud-contaminated image, and a linear regression is solved on the inner buffer outside the cloud area to obtain the linear regression parameters, with the specific formula:
L_p = a_1·L_1 + b + γ
where L_p is the cloud-contaminated image, i.e. the image to be reconstructed, L_1 is the image at time t_1, the time in the series closest to that of the cloud-contaminated image, γ is the residual image in the inner buffer of the linear regression formula, a_1 is the coefficient corresponding to the nth band of the t_1 image, and b is the constant term for the nth band;
2) Based on the parameters a_1 and b and the t_1 image, the virtual image L_v is constructed:
L_v = a_1·L_1 + b
3) Based on the virtual image L_v and the cloud-contaminated image, the root mean square error between them is calculated for each band in the outer buffer and summed as the accuracy evaluation index sRMSE, calculated as:
sRMSE = Σ_{n=1}^{N} √( Σ_{(x,y)∈outer buffer} ( L_p(x,y,n) − L_v(x,y,n) )² / num )
where num is the number of pixels in the outer buffer and (x, y, n) denotes the nth band of pixel (x, y);
4) A further image is added from the time series, namely the image on the other side of the cloud-contaminated image's acquisition time relative to t_1, and steps 1-3 are repeated; when the previous sRMSE is smaller than the current one, the linear regression parameters and virtual image of the previous iteration are output, otherwise the loop continues, i.e. a new image is added and the calculation repeated, images being added symmetrically before and after the target image's acquisition time; if, during image input, the image at some time has cloud contamination inside the cloud area, the subsequent loop is skipped and the next image is input, so that the optimal result is output automatically; when multiple images are input, the formulas for the linear regression and the virtual image construction are respectively:
L_p = Σ_{t=1}^{T} a_t·L_t + b + γ
L_v = Σ_{t=1}^{T} a_t·L_t + b
2. The optical remote sensing image thick cloud removing method based on virtual image construction according to claim 1, characterized in that: the time-series optical remote sensing images in S1 comprise Landsat series satellite data and Sentinel-2 satellite data.
3. The method for removing the thick cloud of the optical remote sensing image based on the constructed virtual image as claimed in claim 1, wherein: masking the time series optical remote sensing image through cloud mask data in the S2; the Landsat series cloud mask data comprise quality detection wave bands, and when the pixel is identified to be cloud or cloud shadow by the quality detection wave band data, the value of the pixel is set to be a null value, namely, the part needing cloud removal.
4. The method for removing the thick cloud of the optical remote sensing image based on the constructed virtual image as claimed in claim 1, wherein: the time-series weighted spectral distance in S5 is calculated by the following formula:
TWSD_ij = Σ_{t=1}^{T} Σ_{n=1}^{N} a_t(n)·| L_t(x_i,y_i,n) − L_t(x_j,y_j,n) |
where (x_i, y_i) is the position of the target pixel in the cloud area, (x_j, y_j) is the position of a pixel in the inner buffer, TWSD_ij is the time-series weighted spectral distance between the ith and jth pixels, a_t is determined from the parameters obtained in the first step, and N is the total number of bands.
5. The optical remote sensing image thick cloud removing method based on the constructed virtual image according to claim 4, characterized in that: the spatial distance and interpolation allocation weight in S6 are calculated as follows:
Dist_ij = √( (x_i − x_j)² + (y_i − y_j)² )
where Dist_ij is the spatial distance between the ith target pixel and the jth similar pixel; it is computed only for the m similar pixels with the smallest spectral distance to each target pixel, and the weight of each similar pixel with respect to the target pixel is calculated from the time-series weighted spectral distance and the spatial distance as follows:
CD_ij = TWSD_ij × Dist_ij
D_ij = 1 / CD_ij
so that the weight w_ij of the jth similar pixel for the ith target pixel is
w_ij = D_ij / Σ_{j=1}^{m} D_ij
6. The optical remote sensing image thick cloud removing method based on the constructed virtual image according to claim 1, characterized in that: in S7, based on the interpolation allocation weights w_ij, the residuals of the m similar pixels in the buffer are weighted, summed and allocated to the target pixel in the cloud area, and the residual image inside the cloud area is calculated as:
γ(x_i, y_i, n) = Σ_{j=1}^{m} w_ij·γ(x_j, y_j, n)
where γ(x_i, y_i, n) is the residual of the target pixel (x_i, y_i) in the cloud area, calculated from the weights w_ij obtained in S6 and the inner-buffer residual image γ(x_j, y_j, n) obtained in S4.
7. The optical remote sensing image thick cloud removing method based on the constructed virtual image according to claim 1, characterized in that: in S8, the de-clouded image of the cloud area is obtained by summing the virtual image obtained in S4 and the residual image obtained in S7, giving the final reconstruction result:
L_p = L_v + γ.
CN202210005125.8A 2022-01-05 2022-01-05 Optical remote sensing image thick cloud removing method based on virtual image construction Active CN114298945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005125.8A CN114298945B (en) 2022-01-05 2022-01-05 Optical remote sensing image thick cloud removing method based on virtual image construction

Publications (2)

Publication Number Publication Date
CN114298945A CN114298945A (en) 2022-04-08
CN114298945B true CN114298945B (en) 2022-07-05

Family

ID=80976198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005125.8A Active CN114298945B (en) 2022-01-05 2022-01-05 Optical remote sensing image thick cloud removing method based on virtual image construction

Country Status (1)

Country Link
CN (1) CN114298945B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063663B (en) * 2018-08-10 2021-08-03 武汉大学 Thick cloud detection and removal method for time sequence remote sensing image from coarse to fine

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800058A (en) * 2012-07-06 2012-11-28 哈尔滨工程大学 Remote sensing image cloud removal method based on sparse representation
CN109801253A (en) * 2017-11-13 2019-05-24 中国林业科学研究院资源信息研究所 An adaptive cloud detection method for high-resolution optical remote sensing images
CN108765329A (en) * 2018-05-21 2018-11-06 北京师范大学 A thick cloud removal method and system for remote sensing images
CN110335208A (en) * 2019-06-10 2019-10-15 武汉大学 A thick cloud removal method for high-resolution remote sensing images based on gradual correction
CN111899194A (en) * 2020-07-30 2020-11-06 青海省地理空间和自然资源大数据中心 Method for removing cloud and cloud shadow in remote sensing image
CN113837956A (en) * 2021-08-18 2021-12-24 西安理工大学 Method for detecting clouds with unpaired supervision and removing thick cloud over large areas

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Chenxi Duan, et al. Thick Cloud Removal of Remote Sensing Images Using Temporal Smoothness and Sparsity Regularized Tensor Optimization. Remote Sensing, 2020-10-20. Full text. *
Chen Menglu, et al. Thick cloud and cloud shadow removal method based on Landsat 8 imagery. Beijing Surveying and Mapping, vol. 33, no. 4, 2019-04-30. Full text. *
Jiang Sili, et al. Thick cloud removal of remote sensing images based on multi-reference image information fusion. Remote Sensing for Natural Resources, 2021-12-06. Full text. *
Jin Luqi. Thick cloud removal of remote sensing images based on multi-reference image information fusion. China Master's Theses Full-text Database (Engineering Science and Technology II), 2021-02-15. Full text. *

Also Published As

Publication number Publication date
CN114298945A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
Le Moigne et al. Refining image segmentation by integration of edge and region data
CN113112533B (en) SAR-multispectral-hyperspectral integrated fusion method based on multiresolution analysis
CN114972748B (en) Infrared semantic segmentation method capable of explaining edge attention and gray scale quantization network
CN110097498A (en) More air strips image mosaics and localization method based on unmanned aerial vehicle flight path constraint
Zhao et al. Seeing through clouds in satellite images
Sun et al. F3-Net: Multi-View Scene Matching for Drone-Based Geo-Localization
Long et al. Dual self-attention Swin transformer for hyperspectral image super-resolution
CN115293968A (en) Super-light-weight high-efficiency single-image super-resolution method
Li et al. A pseudo-siamese deep convolutional neural network for spatiotemporal satellite image fusion
Liu et al. Thick cloud removal under land cover changes using multisource satellite imagery and a spatiotemporal attention network
CN112598590B (en) Optical remote sensing time series image reconstruction method and system based on deep learning
Guo et al. A flexible object-level processing strategy to enhance the weight function-based spatiotemporal fusion method
CN108171651B (en) Image alignment method based on multi-model geometric fitting and layered homography transformation
CN114092804A (en) Method and device for identifying remote sensing image
CN114298945B (en) Optical remote sensing image thick cloud removing method based on virtual image construction
Cresson et al. Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images
CN116739920A (en) Double-decoupling mutual correction multi-temporal remote sensing image missing information reconstruction method and system
CN116758388A (en) Remote sensing image space-time fusion method and device based on multi-scale model and residual error
CN115689941A (en) SAR image compensation method for cross-domain generation countermeasure and computer readable medium
CN116245757A (en) Multi-scene universal remote sensing image cloud restoration method and system for multi-mode data
Qadeer et al. Spatio-temporal crop classification on volumetric data
CN112558068B (en) Multi-baseline InSAR phase estimation method and system
CN108830793A (en) A kind of high-resolution remote sensing image radiation method for reconstructing
Dong et al. GC-UNet: an improved UNet model for mangrove segmentation using Landsat8

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant