CN113408398B - Remote sensing image cloud detection method based on channel attention and probability up-sampling - Google Patents


Info

Publication number
CN113408398B
CN113408398B (application CN202110663934.3A)
Authority
CN
China
Prior art keywords
module
remote sensing
layer
sampling
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110663934.3A
Other languages
Chinese (zh)
Other versions
CN113408398A (en)
Inventor
张静
王雨晨
王慧
吴俊�
李云松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110663934.3A priority Critical patent/CN113408398B/en
Publication of CN113408398A publication Critical patent/CN113408398A/en
Application granted
Publication of CN113408398B publication Critical patent/CN113408398B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a remote sensing image cloud detection method based on channel attention and probability up-sampling, comprising the following steps: acquiring a training sample set and a test sample set; constructing a remote sensing image cloud detection network based on channel attention and probability up-sampling; iteratively training the remote sensing image cloud detection network; and acquiring the cloud detection result of a remote sensing image. The method uses a channel attention module to extract spatial texture information from shallow features and splice it onto deep features; meanwhile, a probability up-sampling module makes feature edge information more continuous. This solves the problem of inaccurate detection in thin-cloud and cloud-boundary areas and improves cloud detection accuracy.

Description

Remote sensing image cloud detection method based on channel attention and probability up-sampling
Technical Field
The invention belongs to the technical field of image processing and relates to a remote sensing image cloud detection method, in particular to a deep-learning remote sensing image cloud detection method based on an attention mechanism and probability up-sampling, which can be used for cloud classification and cloud removal in remote sensing images.
Background
A remote sensing image generally refers to a film or photograph recording the electromagnetic radiation of various ground features. It offers better spatial resolution and more detail than an ordinary image, and has therefore been widely applied in many fields, such as military affairs, agricultural monitoring, hydrology, urban planning and management, and environmental protection. However, remote sensing images still present problems to be solved, and among them the inaccuracy of transmitted images caused by cloud occlusion is particularly prominent. Global cloud data provided by the International Satellite Cloud Climatology Project (ISCCP) show that over 60% of the world's area is often covered by clouds. In satellite images acquired by remote sensing satellites, the presence of clouds makes it difficult to obtain an accurate view of the underlying surface. This affects the application of remote sensing images in fields such as target identification and agricultural monitoring, and to a great extent hinders the further development of remote sensing. Therefore, detecting cloud occlusion has important significance for improving the quality of remote sensing images.
Traditionally, cloud detection research has mainly used multiband threshold and texture analysis methods. The multiband threshold method generally exploits the differences between clouds and ground features in different bands; for example, in the near-infrared channel the high reflectance and low temperature of clouds are used to distinguish them from ground features. Texture analysis usually converts a cloud image into different colour spaces to extract texture features, thereby separating clouds from ground objects. These conventional methods usually require considerable time for parameter tuning and threshold selection, and their detection accuracy is low. Moreover, in specific areas such as thin-cloud regions or cloud-boundary regions, which are highly similar to ground objects, both the multiband threshold method and texture analysis find it difficult to separate cloud from ground effectively.
In recent years, deep convolutional neural networks have achieved great success in computer vision. Compared with traditional cloud detection methods, existing convolutional-neural-network cloud detection algorithms perform much better, but detection remains poor in some key areas, such as thin clouds and cloud-boundary regions, where features are not concentrated or clouds are highly similar to ground objects, making them hard for an algorithm to distinguish. For example, patent application CN110598600A, "A remote sensing image cloud detection method based on UNET neural network", discloses a convolutional-neural-network algorithm for remote sensing image cloud detection that uses an encoder-decoder network with down-sampling to extract deep cloud features and fuses shallow encoder features into the decoder through skip connections, improving detection accuracy and the universality of the algorithm. However, that algorithm pays little attention to thin-cloud and cloud-boundary areas, so its detection results are not very accurate.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, provides a remote sensing image cloud detection method based on channel attention and probability upsampling, and aims to solve the technical problem of low cloud detection precision in the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
(1) Acquiring a training sample set and a testing sample set:
Acquire K labelled remote sensing images containing cloud areas to form a data set P = {(P_1, L_1), (P_2, L_2), …, (P_k, L_k), …, (P_K, L_K)}, and combine M remote sensing images randomly selected from P with their labels to form a training sample set P_a = {(P_a^m, L_a^m)}, m = 1, …, M; the remaining N remote sensing images and their labels form a test sample set P_b = {(P_b^n, L_b^n)}, n = 1, …, N; where K ≥ 10000, P_k denotes the k-th remote sensing image, L_k denotes the label of P_k, P_a^m denotes the m-th training image, L_a^m denotes the label of P_a^m, P_b^n denotes the n-th test image, L_b^n denotes the label of P_b^n, and K = M + N;
(2) Constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) Constructing a remote sensing image cloud detection network H comprising an encoding-decoding module and a probability up-sampling module, wherein the encoding-decoding module comprises a two-dimensional convolution layer and four down-sampling modules cascaded with it, and the output of the fourth down-sampling module is cascaded with four up-sampling modules and one two-dimensional convolution layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is loaded between the first down-sampling module and the fourth up-sampling module, between the second down-sampling module and the third up-sampling module, between the third down-sampling module and the second up-sampling module, and between the fourth down-sampling module and the first up-sampling module; each down-sampling module comprises convolution layers, a batch normalization layer, a ReLU activation function and a max pooling layer; each up-sampling module comprises convolution layers, a batch normalization layer, a ReLU activation function and an up-sampling layer; the channel attention module comprises a sequentially cascaded channel splicing layer, fully connected layer and sigmoid activation function, with a two-dimensional convolution layer and multi-path three-dimensional dilated convolution layers connected in parallel at the input of the channel splicing layer, and a global max pooling layer and a global average pooling layer connected in parallel loaded between the channel splicing layer and the fully connected layer; the probability up-sampling module comprises a max pooling layer and an up-sampling layer;
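The spatial sizes produced by the four 2 × 2 poolings and four 2 × up-samplings of the encoder-decoder above can be traced with a minimal sketch; the 256 × 256 input size is an illustrative assumption, since the patent does not fix the input resolution.

```python
# Trace the feature-map spatial size through the encoder-decoder of step (2a).
# The 256x256 input is an illustrative assumption, not specified in the patent.
def trace_sizes(h, w, n_down=4, n_up=4):
    sizes = [(h, w)]
    for _ in range(n_down):      # each down-sampling module ends in 2x2 max pooling
        h, w = h // 2, w // 2
        sizes.append((h, w))
    for _ in range(n_up):        # each up-sampling module ends in a 2x up-sampling layer
        h, w = h * 2, w * 2
        sizes.append((h, w))
    return sizes

sizes = trace_sizes(256, 256)
# After the 4 poolings the map is 16x16; the 4 up-samplings restore 256x256.
```

Because the down- and up-sampling windows are symmetric (2 × 2 on both sides), every decoder stage has an encoder stage of matching size, which is what lets the channel attention modules splice the two together.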
(2b) Defining a loss function L of the remote sensing image cloud detection network H:

L = -(1/M) · Σ_{m=1}^{M} [ y^(m) · log y'^(m) + (1 - y^(m)) · log(1 - y'^(m)) ]

where y^(m) is the label L_a^m corresponding to the input m-th training image P_a^m, and y'^(m) is the prediction of network H for the m-th training image;
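A minimal NumPy sketch of the loss function L, assuming it is the standard binary cross-entropy over the M training predictions (the extracted formula survives only as an image placeholder, so this form is a reconstruction):

```python
import numpy as np

# Binary cross-entropy over M predictions, assumed to be the loss L of step (2b).
def bce_loss(y, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred)))

y = np.array([1.0, 0.0, 1.0])       # labels y^(m)
y_hat = np.array([0.9, 0.1, 0.8])   # network predictions y'^(m)
loss = bce_loss(y, y_hat)
```

The clipping constant `eps` is an implementation detail added here for numerical safety, not part of the patent.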
(3) Carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration number t and the maximum iteration number T, T ≥ 30; denote the current cloud detection network by H_t, and let t = 1, H_t = H;
(3b) Take the training sample set P_a as input to the remote sensing image cloud detection network H_t and propagate it forward to obtain the prediction result set of H_t, {y'^(1), y'^(2), …, y'^(M)}, where y'^(m) denotes the prediction result of the m-th training image P_a^m;
(3c) Use the back-propagation algorithm to compute, through the loss function L, the classification error θ between the prediction result set and the label set L_a of the training images; then reduce θ by stochastic gradient descent, updating the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layers, obtaining the remote sensing image cloud detection network H_t after the t-th iteration;
(3d) Judge whether t = T holds; if so, the trained remote sensing image cloud detection network H* is obtained; otherwise, let t = t + 1 and return to step (3b);
(4) Acquiring a cloud detection result of the remote sensing image:
Take the test sample set P_b of remote sensing images as input to the trained remote sensing image cloud detection network H* for prediction, obtaining the corresponding result set of remote sensing image predictions.
Compared with the prior art, the invention has the following advantages:
1. The remote sensing image cloud detection network constructed by the invention contains channel attention modules loaded between the down-sampling and up-sampling modules, and a probability up-sampling module loaded at the output of the last up-sampling module. During training and detection, the attention module obtains image features of different receptive fields using convolution kernels of different sizes, acquires long-distance information along the channel dimension through multi-path three-dimensional dilated convolution layers, and skip-connects the guided encoder information to the decoder. This fuses the shallow cloud detail information of the encoder with the deep cloud semantic information of the decoder, lets the network attend more to cloud edge information and long-distance information, and effectively improves cloud detection accuracy compared with the prior art.
2. The probability up-sampling module constructed by the invention first down-samples, then up-samples, and then performs point-wise multiplication with the input features. This preserves the proportions of feature magnitudes between pixels, avoids the loss of spatial information caused by features shrinking during down-sampling, optimises the detail information of cloud-boundary and thin-cloud regions, and effectively improves cloud detection accuracy.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
FIG. 2 is a schematic diagram of a remote sensing image cloud detection network constructed by the invention.
Fig. 3 is a schematic structural diagram of a downsampling module constructed by the present invention.
Fig. 4 is a schematic structural diagram of an upsampling module constructed by the present invention.
FIG. 5 is a schematic diagram of a channel attention module constructed in accordance with the present invention.
Fig. 6 is a schematic structural diagram of a probability upsampling module constructed by the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Referring to fig. 1, the present invention includes the steps of:
Step 1) Obtain a training sample set and a test sample set:
52272 labelled remote sensing images containing cloud areas are obtained to form the data set P = {(P_1, L_1), (P_2, L_2), …, (P_k, L_k), …, (P_52272, L_52272)}. Gaofen-1 remote sensing satellite images and their manually annotated labels are selected as the total sample set. Each label has the same size as its Gaofen-1 image and is a binary map: if a pixel is ordinary ground surface, the pixel value at the corresponding label position is 0; if the pixel is cloud, the label pixel value is 255. The Gaofen-1 satellite combines high resolution with large swath width: its 2 m high-resolution mode achieves an imaging swath wider than 60 km, and its 16 m resolution mode achieves a swath wider than 800 km, meeting combined requirements for multiple temporal resolutions, multiple spectral resolutions and multi-source remote sensing data. A Gaofen-1 remote sensing image has four channels: R, G, B and near infrared. The sample set is divided into a training set and a test set: 41624 remote sensing images and their labels are randomly selected to form the training sample set P_a = {(P_a^m, L_a^m)}, m = 1, …, 41624; the remaining 10648 remote sensing images and their labels form the test sample set P_b = {(P_b^n, L_b^n)}, n = 1, …, 10648; where P_k denotes the k-th remote sensing image, L_k the label of P_k, P_a^m the m-th training image, L_a^m the label of P_a^m, P_b^n the n-th test image, and L_b^n the label of P_b^n;
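The 41624/10648 split of step 1 can be sketched as follows; indices stand in for the (image, label) pairs, and the fixed seed is an illustrative choice, since the patent only says the selection is random:

```python
import random

# Sketch of the train/test split of step 1: 52272 labelled images,
# 41624 randomly selected for training and the remaining 10648 for testing.
def split_dataset(num_total=52272, num_train=41624, seed=0):
    rng = random.Random(seed)            # seed is an illustrative choice
    indices = list(range(num_total))
    rng.shuffle(indices)
    return indices[:num_train], indices[num_train:]

train_idx, test_idx = split_dataset()
```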
step 2), constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) Constructing a remote sensing image cloud detection network H comprising an encoding-decoding module and a probability up-sampling module, wherein the encoding-decoding module comprises a two-dimensional convolution layer and four down-sampling modules cascaded with it, and the output of the fourth down-sampling module is cascaded with four up-sampling modules and one two-dimensional convolution layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is loaded between the first down-sampling module and the fourth up-sampling module, between the second down-sampling module and the third up-sampling module, between the third down-sampling module and the second up-sampling module, and between the fourth down-sampling module and the first up-sampling module;
the structure of the down-sampling module is shown in fig. 3, and comprises two convolution layers, a batch normalization layer, a Relu activation function and a maximum pooling layer which are sequentially cascaded, wherein the convolution layers extract features, the size of a convolution kernel is 3 x 3, the moving step length of the convolution kernel is 1, the batch normalization layer reduces the coupling between network layers and accelerates the learning of the network, the problems of gradient explosion and gradient disappearance are avoided due to the addition of the Relu activation function, the pooling window size of the maximum pooling layer is 2 x 2, the maximum pooling layer can reduce the deviation of an estimated mean value caused by parameter errors of the convolution layers, and more texture information is reserved;
the structure of the up-sampling module is shown in fig. 4, the up-sampling module comprises two convolution layers, a batch normalization layer, a Relu activation function and an up-sampling layer which are sequentially laminated, wherein the convolution layers, the batch normalization layer, the Relu activation function and the down-sampling layer are consistent, the window size of the up-sampling layer is also 2 x 2 and is symmetrical with the down-sampling module, and in the convolution neural network, the symmetrical structure can conveniently fuse the characteristics of two symmetrical ends;
the structure of the channel attention module is shown in fig. 5, the channel attention module firstly fits input information by one parallel two-way convolution and three-way expansion convolution, wherein the size of a two-dimensional convolution kernel is 3 multiplied by 3, the size of a three-dimensional convolution kernel is 3 multiplied by 3, the expansion rate is sequentially 2,5 and 7, and the expansion convolution usually expands the receptive field while not increasing the parameters; the multi-path convolution with different sizes can obtain information of receptive fields with different sizes, because the information is three-dimensional expansion convolution and also obtains information of channel dimensions, the characteristics of 4 paths of convolution layers are added and then are respectively input into a global maximum pooling layer and a global average pooling layer to carry out spatial information aggregation to generate context information with the size of 1 multiplied by C, wherein C is the number of channels for inputting the characteristics, the generated context information is added to pass through a full connection layer and then is subjected to Sigmoid activation to normalize the characteristic size to be between 0 and 1, an attention characteristic diagram is multiplied by the characteristics needing to strengthen the spatial position information to obtain a characteristic diagram strengthened by shallow layer characteristic spatial information, the characteristic diagram is added into the characteristics with the same size of deep layers to strengthen the detail information of the characteristic;
the structure of the probability upsampling module is as shown in fig. 6, firstly, maximum pooling downsampling is performed, then, upsampling is performed, and then, multiplication is performed with input features to obtain a final output result.
(2b) Defining a loss function L of the remote sensing image cloud detection network H:

L = -(1/M) · Σ_{m=1}^{M} [ y^(m) · log y'^(m) + (1 - y^(m)) · log(1 - y'^(m)) ]

where y^(m) is the label L_a^m corresponding to the input m-th training image P_a^m, and y'^(m) is the prediction of network H for the m-th training image;
step 3) carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration number t, set the maximum iteration number T = 30, denote the current cloud detection network by H_t, and let t = 1, H_t = H;
(3b) Take the training sample set P_a as input to the remote sensing image cloud detection network H_t and propagate it forward: the cascaded down-sampling modules obtain rich detail information, the detail information of each down-sampling module is spliced through a channel attention module onto the up-sampling module with the same feature size, and the probability up-sampling module loaded after the last up-sampling module optimises the result, producing the prediction result set {y'^(1), y'^(2), …, y'^(M)}, where y'^(m) denotes the prediction result of the m-th training image P_a^m;
(3c) Use the back-propagation algorithm to compute, through the loss function L, the classification error θ between the prediction result set and the label set L_a of the training images; then reduce θ by stochastic gradient descent, updating the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layers by

ω_{t+1} = ω_t - η · ∂θ/∂ω_t

υ_{t+1} = υ_t - η · ∂θ/∂υ_t

where η is the learning step, η = 0.01, ω_{t+1} and υ_{t+1} are the updated values of ω_t and υ_t, and ∂ denotes the partial derivative operation.
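One update with the formulas above can be sketched as follows; the gradient is supplied directly rather than computed by back-propagation:

```python
import numpy as np

# One stochastic-gradient-descent update of step (3c):
#   w_{t+1} = w_t - eta * d(theta)/d(w_t), with learning step eta = 0.01.
# The gradient is passed in directly; in the network it would come from
# back-propagating the loss L.
def sgd_step(param, grad, eta=0.01):
    return param - eta * grad

w = np.array([0.5, -0.3])     # stands for omega_t (or upsilon_t)
g = np.array([2.0, -1.0])     # stands for the partial derivative of theta
w_next = sgd_step(w, g)       # -> [0.48, -0.29]
```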
(3d) Judge whether t = T holds; if so, the trained remote sensing image cloud detection network H* is obtained; otherwise, let t = t + 1 and return to step (3b);
step 4), obtaining a cloud detection result of the remote sensing image:
Take the test sample set P_b of remote sensing images as input to the trained remote sensing image cloud detection network H* for prediction, obtaining the result set of remote sensing image cloud detection, in which each element is the detection result of the corresponding test image P_b^n. If a pixel is detected as a cloud pixel, the pixel at the corresponding position of the cloud detection result is 255; otherwise it is 0.
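The 0/255 encoding of the final detection result can be sketched as follows; the 0.5 decision threshold is an illustrative assumption, as the patent states only the encoding:

```python
import numpy as np

# Sketch of the final decision of step 4: a pixel predicted as cloud becomes
# 255 in the detection result, otherwise 0. The 0.5 threshold is an
# illustrative assumption.
def to_cloud_mask(pred, threshold=0.5):
    return np.where(pred >= threshold, 255, 0).astype(np.uint8)

pred = np.array([[0.9, 0.2],
                 [0.6, 0.4]])
mask = to_cloud_mask(pred)   # -> [[255, 0], [255, 0]]
```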

Claims (3)

1. A remote sensing image cloud detection method based on channel attention and probability up-sampling is characterized by comprising the following steps:
(1) Acquiring a training sample set and a testing sample set:
Acquire K labelled remote sensing images containing cloud areas to form a data set P = {(P_1, L_1), (P_2, L_2), …, (P_k, L_k), …, (P_K, L_K)}, and combine M remote sensing images randomly selected from P with their labels to form a training sample set P_a = {(P_a^m, L_a^m)}, m = 1, …, M; the remaining N remote sensing images and their labels form a test sample set P_b = {(P_b^n, L_b^n)}, n = 1, …, N; where K ≥ 10000, P_k denotes the k-th remote sensing image, L_k denotes the label of P_k, P_a^m denotes the m-th training image, L_a^m denotes the label of P_a^m, P_b^n denotes the n-th test image, L_b^n denotes the label of P_b^n, and K = M + N;
(2) Constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) Constructing a remote sensing image cloud detection network H comprising an encoding-decoding module and a probability up-sampling module, wherein the encoding-decoding module comprises a two-dimensional convolution layer and four down-sampling modules cascaded with it, and the output of the fourth down-sampling module is cascaded with the four up-sampling modules and the two-dimensional convolution layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is loaded between the first down-sampling module and the fourth up-sampling module, between the second down-sampling module and the third up-sampling module, between the third down-sampling module and the second up-sampling module, and between the fourth down-sampling module and the first up-sampling module; each down-sampling module comprises convolution layers, a batch normalization layer, a ReLU activation function and a max pooling layer; each up-sampling module comprises convolution layers, a batch normalization layer, a ReLU activation function and an up-sampling layer; the channel attention module comprises a sequentially cascaded channel splicing layer, fully connected layer and sigmoid activation function, with a two-dimensional convolution layer and multi-path three-dimensional dilated convolution layers connected in parallel at the input of the channel splicing layer, and a global max pooling layer and a global average pooling layer connected in parallel loaded between the channel splicing layer and the fully connected layer; the probability up-sampling module comprises a max pooling layer and an up-sampling layer;
(2b) Defining a loss function L of the remote sensing image cloud detection network H:

L = -(1/M) · Σ_{m=1}^{M} [ y^(m) · log y'^(m) + (1 - y^(m)) · log(1 - y'^(m)) ]

where y^(m) denotes the label L_a^m corresponding to the input m-th training image P_a^m, and y'^(m) denotes the prediction result for P_a^m;
(3) Carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration number t and the maximum iteration number T, T ≥ 30; denote the current cloud detection network by H_t, and let t = 1, H_t = H;
(3b) Take the training sample set P_a as input to the remote sensing image cloud detection network H_t and propagate it forward to obtain the prediction result set of H_t, {y'^(1), y'^(2), …, y'^(M)}, where y'^(m) denotes the prediction result of the m-th training image P_a^m;
(3c) Use the back-propagation algorithm to compute, through the loss function L, the classification error θ between the prediction result set and the label set L_a of the training images; then reduce θ by stochastic gradient descent, updating the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layers, obtaining the remote sensing image cloud detection network H_t after the t-th iteration;
(3d) Judge whether t = T holds; if so, the trained remote sensing image cloud detection network H* is obtained; otherwise, let t = t + 1 and return to step (3b);
(4) Obtaining a cloud detection result of the remote sensing image:
Take the test sample set P_b of remote sensing images as input to the trained remote sensing image cloud detection network H* for prediction, obtaining the set of remote sensing image prediction results corresponding to P_b.
2. The remote sensing image cloud detection method based on channel attention and probability up-sampling according to claim 1, wherein the remote sensing image cloud detection network H in step (2a) satisfies:
the specific connection mode of the coding and decoding module is as follows: a first two-dimensional convolutional layer → a first downsampling module → a second downsampling module → a third downsampling module → a fourth downsampling module → a first upsampling module → a second upsampling module → a third upsampling module → a fourth upsampling module → a probabilistic upsampling module → a second two-dimensional convolutional layer;
the convolution kernel size of the first two-dimensional convolution layer is 3×3, the stride is 1, and the number of output channels is 32;
each of the four down-sampling modules comprises two 2-dimensional convolution layers, the convolution kernels of which are both 3×3 with a stride of 1; the pooling windows of the maximum pooling layers contained in the four down-sampling modules are all 2×2; the numbers of output channels of the first, second, third and fourth down-sampling modules are 64, 128, 256 and 512 respectively;
each of the four up-sampling modules comprises two 2-dimensional convolution layers, the convolution kernels of which are both 3×3 with a stride of 1; the sampling windows of the four up-sampling modules are all 2×2; the numbers of output channels of the first, second, third and fourth up-sampling modules are 512, 256, 128 and 64 respectively;
the convolution kernel size of the second two-dimensional convolution layer is 3×3, the stride is 1, and the number of output channels is 1;
the channel attention modules are loaded as follows: the first down-sampling module → the first channel attention module → the fourth up-sampling module; the second down-sampling module → the second channel attention module → the third up-sampling module; the third down-sampling module → the third channel attention module → the second up-sampling module; the fourth down-sampling module → the fourth channel attention module → the first up-sampling module;
the two-dimensional convolution layer connected in parallel at the input end of the channel splicing layer has a 3×3 convolution kernel; three parallel dilated convolution layers are adopted, the convolution kernels of the three dilated convolution layers are all 3×3, and their dilation rates are set to 2, 5 and 7 respectively.
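The dimensions in claim 2 can be sanity-checked with a short sketch. The first function traces (channels, height, width) through the codec path, assuming each 2×2 pooling window halves the spatial size and each 2×2 sampling window doubles it (the 256×256 input size is a hypothetical example, and the probabilistic upsampling module is omitted as its behavior is not dimensioned in the claim); the second computes the standard effective extent of a dilated 3×3 kernel for the claimed dilation rates.

```python
def trace_shapes(h, w):
    """Trace (channels, height, width) through the codec path:
    conv(32) -> 4 down-sampling modules -> 4 up-sampling modules -> conv(1)."""
    shapes = [(32, h, w)]                     # first 3x3 conv, 32 channels
    for c in (64, 128, 256, 512):             # down-sampling: 2x2 max pooling
        h, w = h // 2, w // 2
        shapes.append((c, h, w))
    for c in (512, 256, 128, 64):             # up-sampling: 2x2 sampling window
        h, w = h * 2, w * 2
        shapes.append((c, h, w))
    shapes.append((1, h, w))                  # second 3x3 conv, 1 channel
    return shapes

def effective_kernel_extent(k, d):
    """A k x k kernel with dilation rate d spans d*(k-1)+1 pixels per axis."""
    return d * (k - 1) + 1

shapes = trace_shapes(256, 256)               # hypothetical input size
extents = [effective_kernel_extent(3, d) for d in (2, 5, 7)]
```

With a 256×256 input, the bottleneck after the fourth down-sampling module is 512×16×16, and the decoder restores the full 256×256 resolution before the final 1-channel convolution; the three dilated 3×3 kernels cover 5, 11 and 15 pixels respectively.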
3. The method for detecting clouds in remote sensing images based on channel attention and probability upsampling as recited in claim 1, wherein in step (3c) the convolution kernel weight parameter ω_t of H_t and the connection parameter υ_t of the fully connected layer are updated so as to reduce the classification error θ, the update formulas being respectively:
ω_{t+1} = ω_t − η · ∂θ/∂ω_t
υ_{t+1} = υ_t − η · ∂θ/∂υ_t
wherein η represents the learning step, 0.001 ≤ η ≤ 0.02, ω_{t+1} and υ_{t+1} represent the update results of ω_t and υ_t respectively, and ∂ represents the partial derivative operation.
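The two update formulas are the standard stochastic gradient descent rule. As a minimal numeric sketch, applying the rule to a hypothetical quadratic error θ(ω) = (ω − 3)², with η chosen inside the claimed range [0.001, 0.02]:

```python
def sgd_update(omega, grad, eta=0.02):
    """omega_{t+1} = omega_t - eta * d(theta)/d(omega_t)."""
    return omega - eta * grad

omega = 0.0
for _ in range(500):
    grad = 2 * (omega - 3)        # derivative of theta(omega) = (omega - 3)^2
    omega = sgd_update(omega, grad, eta=0.02)
```

Repeated application drives ω toward the minimizer ω = 3; the υ update follows the identical rule with ∂θ/∂υ_t in place of ∂θ/∂ω_t.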
CN202110663934.3A 2021-06-16 2021-06-16 Remote sensing image cloud detection method based on channel attention and probability up-sampling Active CN113408398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663934.3A CN113408398B (en) 2021-06-16 2021-06-16 Remote sensing image cloud detection method based on channel attention and probability up-sampling


Publications (2)

Publication Number Publication Date
CN113408398A CN113408398A (en) 2021-09-17
CN113408398B true CN113408398B (en) 2023-04-07

Family

ID=77684063


Country Status (1)

Country Link
CN (1) CN113408398B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677306B (en) * 2022-03-29 2022-11-15 中国矿业大学 Context aggregation image rain removing method based on edge information guidance
CN115019174B (en) * 2022-06-10 2023-06-16 西安电子科技大学 Up-sampling remote sensing image target recognition method based on pixel recombination and attention
CN116823664B (en) * 2023-06-30 2024-03-01 中国地质大学(武汉) Remote sensing image cloud removal method and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN111612066A (en) * 2020-05-21 2020-09-01 成都理工大学 Remote sensing image classification method based on depth fusion convolutional neural network
CN111738124A (en) * 2020-06-15 2020-10-02 西安电子科技大学 Remote sensing image cloud detection method based on Gabor transformation and attention

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN107066995A (en) * 2017-05-25 2017-08-18 中国矿业大学 A kind of remote sensing images Bridges Detection based on convolutional neural networks
WO2021013334A1 (en) * 2019-07-22 2021-01-28 Toyota Motor Europe Depth maps prediction system and training method for such a system
CN110728224B (en) * 2019-10-08 2022-03-11 西安电子科技大学 Remote sensing image classification method based on attention mechanism depth Contourlet network
CN111915592B (en) * 2020-08-04 2023-08-22 西安电子科技大学 Remote sensing image cloud detection method based on deep learning



Similar Documents

Publication Publication Date Title
CN110232394B (en) Multi-scale image semantic segmentation method
CN113408398B (en) Remote sensing image cloud detection method based on channel attention and probability up-sampling
CN111797676B (en) High-resolution remote sensing image target on-orbit lightweight rapid detection method
CN110188765B (en) Image semantic segmentation model generation method, device, equipment and storage medium
CN111860612B (en) Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN110009010B (en) Wide-width optical remote sensing target detection method based on interest area redetection
CN111126359B (en) High-definition image small target detection method based on self-encoder and YOLO algorithm
CN112862774B (en) Accurate segmentation method for remote sensing image building
CN110930409B (en) Salt body semantic segmentation method and semantic segmentation system based on deep learning
CN111814685A (en) Hyperspectral image classification method based on double-branch convolution self-encoder
CN113554032B (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
CN111738114B (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN112464745B (en) Feature identification and classification method and device based on semantic segmentation
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN111860823A (en) Neural network training method, neural network training device, neural network image processing method, neural network image processing device, neural network image processing equipment and storage medium
CN111027508B (en) Remote sensing image coverage change detection method based on deep neural network
CN113962281A (en) Unmanned aerial vehicle target tracking method based on Siamese-RFB
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN115527096A (en) Small target detection method based on improved YOLOv5
CN116580322A (en) Unmanned aerial vehicle infrared small target detection method under ground background
CN114693577A (en) Infrared polarization image fusion method based on Transformer
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant