CN113408398A - Remote sensing image cloud detection method based on channel attention and probability up-sampling - Google Patents
- Publication number
- CN113408398A (application number CN202110663934.3A)
- Authority
- CN
- China
- Prior art keywords
- module
- remote sensing
- layer
- sampling
- sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a remote sensing image cloud detection method based on channel attention and probability up-sampling, comprising the following steps: acquiring a training sample set and a test sample set; constructing a remote sensing image cloud detection network based on channel attention and probability up-sampling; performing iterative training on the network; and acquiring the cloud detection result of the remote sensing image. The method uses a channel attention module to extract the spatial texture information of shallow features and splices it onto deep features; meanwhile, a probability up-sampling module makes feature edge information more continuous, which solves the problem of inaccurate detection in thin cloud regions and cloud boundary regions and improves cloud detection accuracy.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to a remote sensing image cloud detection method, in particular to a deep-learning remote sensing image cloud detection method based on an attention mechanism and probability up-sampling, which can be used for cloud classification and cloud removal in remote sensing images.
Background
A remote sensing image generally refers to a film or photograph recording the electromagnetic waves of various ground features; it has better spatial resolution and more detail information than an ordinary image, so remote sensing images have been widely applied in many fields, such as military affairs, agricultural monitoring, hydrology, urban planning and management, and environmental protection. However, remote sensing images still have problems to be solved, and among them, image degradation caused by cloud occlusion is particularly prominent. Global cloud data provided by the International Satellite Cloud Climatology Project (ISCCP) show that over 60% of the Earth's surface is often covered by clouds. In satellite images acquired by remote sensing satellites, it is difficult to recover an accurate underlying surface because of the presence of clouds. This affects the application of remote sensing images in fields such as target recognition and agricultural monitoring, and to a great extent hinders the further development of the remote sensing industry. Therefore, detecting cloud occlusion has important significance for improving the quality of remote sensing images.
Traditionally, cloud detection research methods mainly include multiband thresholding and texture analysis. The multiband thresholding method generally uses the difference between clouds and ground features in different bands to distinguish them; for example, in the near-infrared channel, the high reflectance and low temperature of clouds are used to separate clouds from ground features. Texture analysis usually converts the cloud image into different color spaces to extract texture features, thereby achieving effective separation of clouds and ground objects. These conventional methods usually require a lot of time to tune parameters and select thresholds, and their detection accuracy is low. Meanwhile, in specific areas, such as thin cloud regions or cloud boundary regions, the high similarity between clouds and ground objects makes it difficult for multiband thresholding and texture analysis to separate them effectively.
In recent years, deep convolutional neural networks have achieved great success in the field of computer vision. Compared with traditional cloud detection methods, the performance of existing convolutional neural network cloud detection algorithms is greatly improved, but detection remains poor in some key areas, such as thin clouds and cloud boundary regions. In these areas the features are not concentrated, or the similarity between clouds and ground objects is high, so the algorithms find it difficult to distinguish them effectively. For example, the patent application with publication number CN110598600A, entitled "A remote sensing image cloud detection method based on UNET neural network", discloses a convolutional neural network algorithm for remote sensing image cloud detection, which extracts deep cloud features by down-sampling in an encoding-decoding network and fuses the shallow features of the encoding end with the decoding end through skip connections, implementing an efficient cloud detection method that improves detection accuracy and enhances the universality of the algorithm. However, that algorithm pays little attention to thin cloud regions and cloud boundary regions, so its detection results are not very accurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a remote sensing image cloud detection method based on channel attention and probability up-sampling, so as to solve the technical problem of low cloud detection accuracy in the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
(1) acquiring a training sample set and a testing sample set:
Acquire from a data set K labeled remote sensing images containing cloud areas, P = {(P_1, L_1), (P_2, L_2), …, (P_k, L_k), …, (P_K, L_K)}; combine M remote sensing images randomly selected from P with their labels to form a training sample set P_a = {(P_a^m, L_a^m) | 1 ≤ m ≤ M}, and form a test sample set P_b = {(P_b^n, L_b^n) | 1 ≤ n ≤ N} from the remaining N remote sensing images and their labels, where K ≥ 10000, P_k denotes the k-th remote sensing image, L_k denotes the label of P_k, P_a^m denotes the m-th training image, L_a^m denotes the label of P_a^m, P_b^n denotes the n-th test image, L_b^n denotes the label of P_b^n, and K = M + N;
(2) constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) constructing a remote sensing image cloud detection network H comprising an encoding and decoding module and a probability up-sampling module, wherein the encoding and decoding module comprises a two-dimensional convolutional layer and four down-sampling modules cascaded with the two-dimensional convolutional layer, and the output end of a fourth down-sampling module is cascaded with four up-sampling modules and one two-dimensional convolutional layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is respectively loaded between the first downsampling module and the fourth upsampling module, between the second downsampling module and the third upsampling module, between the third downsampling module and the second upsampling module and between the fourth downsampling module and the first upsampling module; the down-sampling module comprises a plurality of convolution layers, a batch normalization layer, a Relu activation function and a maximum pooling layer; the up-sampling module comprises a plurality of convolution layers, a batch normalization layer, a Relu activation function and an up-sampling layer; the channel attention module comprises a channel splicing layer, a full connection layer and a sigmoid activation function which are sequentially cascaded, the input end of the channel splicing layer is connected with a two-dimensional convolution and a multi-path three-dimensional expansion convolution in parallel, and a global maximum pooling layer and a global average pooling layer which are connected in parallel are loaded between the channel splicing layer and the full connection layer; the probability upsampling module comprises a maximum pooling layer and an upsampling layer;
(2b) defining a loss function of the remote sensing image cloud detection network H:
where y^(m) is the label corresponding to the input m-th training image P_a^m, and y′^(m) is the prediction of the network H for the m-th training image;
(3) carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration counter t and the maximum number of iterations T, where T ≥ 30; let the current cloud detection network be H_t, and set t = 1, H_t = H;
(3b) Take the training sample set P_a as the input of the remote sensing image cloud detection network H and carry out forward propagation to obtain the set of prediction result images of H, in which the m-th element is the prediction result of the m-th training image P_a^m;
(3c) Use the loss function L to compute the classification error θ between the set of prediction results and the label set L_a of the training images; then, using a back-propagation algorithm with stochastic gradient descent to reduce the classification error θ, update the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layer, obtaining the remote sensing image cloud detection network H_t after the t-th iteration;
(3d) Judge whether t = T holds; if so, obtain the trained remote sensing image cloud detection network H*; otherwise, let t = t + 1 and return to step (3b);
(4) obtaining a cloud detection result of the remote sensing image:
Take the remote sensing image test set P_b as the input of the trained remote sensing image cloud detection network H* and carry out prediction to obtain the set of remote sensing image prediction results.
Compared with the prior art, the invention has the following advantages:
1. The remote sensing image cloud detection network constructed by the invention comprises a channel attention module loaded between the down-sampling and up-sampling modules and a probability up-sampling module loaded at the output end of the last up-sampling module. During training and detection, the attention module uses convolution kernels of different sizes to extract image features over receptive fields of different sizes, and adopts multi-path three-dimensional dilated convolution layers to capture long-range information along the channel dimension. The information of the encoding end is guided and then skip-connected to the decoding end, fusing the shallow cloud detail information of the encoder with the deep cloud semantic information of the decoder, so that the network pays more attention to cloud edge information and long-range information. Compared with the prior art, the accuracy of cloud detection is effectively improved.
2. The probability up-sampling module constructed by the invention first performs down-sampling, then up-samples and performs point-wise multiplication with the input features. This preserves the relative feature magnitudes between pixels, avoids the loss of spatial information caused by shrinking the features during down-sampling, optimizes the detail information of cloud boundary regions and thin cloud regions, and effectively improves the accuracy of cloud detection.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
FIG. 2 is a schematic diagram of a remote sensing image cloud detection network constructed by the invention.
Fig. 3 is a schematic structural diagram of a downsampling module constructed by the present invention.
Fig. 4 is a schematic structural diagram of an upsampling module constructed by the present invention.
FIG. 5 is a schematic diagram of a channel attention module constructed in accordance with the present invention.
Fig. 6 is a schematic structural diagram of a probability upsampling module constructed by the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
Referring to fig. 1, the present invention includes the steps of:
step 1) obtaining a training sample set and a testing sample set:
obtaining 52272 remote sensing images P { (P) with labels and containing cloud areas from a data set1,L1),(P2,L2),…,(Pk,Lk),...,(P52272,L52272)}. And selecting the high-grade first remote sensing satellite image and the manually marked label thereof as a total sample set. The size of the label corresponding to the image with the high mark one number is the same as that of the label corresponding to the image, the label is a binary image, if the pixel is a common ground feature, the pixel value of the corresponding position of the label is 0, if the pixel is a cloud,the tag pixel value is 255. The high-resolution first-number remote sensing satellite simultaneously realizes high resolution and large width, 2m high resolution realizes the imaging width larger than 60km, 16m resolution realizes the imaging width larger than 800km, and the high-resolution first-number remote sensing satellite meets the comprehensive requirements of various time resolutions, various spectral resolutions and multi-source remote sensing data. The remote sensing image with the high first number has four channels, namely R, G, B and a near infrared channel. Dividing a training set and a test set of a sample set, and randomly selecting 41624 remote sensing images and corresponding scene remote sensing image labels to form a training data setThe rest 10648 remote sensing images and labels thereof form a test sample setPkRepresenting the kth remote sensing image, LkRepresents PkThe label of (a) is used,the m-th training image is represented,to representThe label of (a) is used,the nth test image is represented and,to representThe label of (1);
step 2), constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) constructing a remote sensing image cloud detection network H of an encoding and decoding module and a probability up-sampling module, wherein the encoding and decoding module comprises a two-dimensional convolution layer and four down-sampling modules cascaded with the two-dimensional convolution layer, and the output end of a fourth down-sampling module is cascaded with four up-sampling modules and one two-dimensional convolution layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is respectively loaded between the first downsampling module and the fourth upsampling module, between the second downsampling module and the third upsampling module, between the third downsampling module and the second upsampling module and between the fourth downsampling module and the first upsampling module;
The structure of the down-sampling module is shown in fig. 3. It comprises two convolution layers, a batch normalization layer, a ReLU activation function, and a max pooling layer cascaded in sequence. The convolution layers extract features with a kernel size of 3 × 3 and a stride of 1; the batch normalization layer reduces the coupling between network layers and accelerates learning; the ReLU activation function avoids the problems of gradient explosion and gradient vanishing; the max pooling layer has a pooling window size of 2 × 2, reduces the deviation of the estimated mean caused by convolution layer parameter errors, and retains more texture information;
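The 2 × 2 max pooling at the tail of the down-sampling module can be sketched in NumPy as follows; this is a simplification for illustration only, and the convolution, batch normalization, and ReLU stages are omitted:

```python
import numpy as np

def max_pool_2x2(x):
    # x: (H, W) feature map with even H and W; 2x2 window, stride 2
    h, w = x.shape
    # group pixels into 2x2 blocks and take the maximum of each block
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```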
The structure of the up-sampling module is shown in fig. 4. It comprises two convolution layers, a batch normalization layer, a ReLU activation function, and an up-sampling layer cascaded in sequence, where the convolution layers, batch normalization layer, and ReLU activation function are consistent with those of the down-sampling module. The window size of the up-sampling layer is also 2 × 2, symmetric with the down-sampling module; in a convolutional neural network, this symmetric structure makes it convenient to fuse the features of the two symmetric ends;
The structure of the channel attention module is shown in fig. 5. The module first fits the input information with one two-dimensional convolution and three dilated convolutions in parallel, where the two-dimensional convolution kernel size is 3 × 3, the three-dimensional convolution kernel size is 3 × 3 × 3, and the dilation rates are 2, 5, and 7 in sequence; dilated convolution enlarges the receptive field without increasing the number of parameters. Multi-path convolutions of different sizes obtain information from receptive fields of different sizes, and because three-dimensional dilated convolutions are used, channel-dimension information is obtained as well. The features of the four convolution paths are added and then input respectively into a global max pooling layer and a global average pooling layer for spatial information aggregation, generating context information of size 1 × 1 × C, where C is the number of channels of the input features. The two context vectors are added, passed through a fully connected layer, and then through a sigmoid activation to normalize the values to between 0 and 1. The resulting attention feature map is multiplied with the features whose spatial position information needs strengthening, yielding a feature map reinforced by shallow-feature spatial information, which is added to the deep features of the same size to strengthen their detail information;
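The pooling-and-gating tail of the channel attention module can be sketched in NumPy as below. The fully connected weights w and bias b are hypothetical stand-ins, and the parallel dilated convolutions that precede this stage are omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention_gate(x, w, b):
    # x: (C, H, W) summed features from the four convolution paths
    # w: (C, C) fully connected weights, b: (C,) bias (both illustrative)
    gmp = x.max(axis=(1, 2))             # global max pooling   -> (C,)
    gap = x.mean(axis=(1, 2))            # global average pooling -> (C,)
    att = sigmoid(w @ (gmp + gap) + b)   # FC layer + sigmoid, values in (0, 1)
    return x * att[:, None, None]        # re-weight each channel of the input
```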
The structure of the probability up-sampling module is shown in fig. 6: max pooling down-sampling is performed first, then up-sampling, and the result is multiplied point-wise with the input features to obtain the final output.
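Under the description above, the probability up-sampling module reduces to max pooling, nearest-neighbour up-sampling, and a point-wise product with the input. A minimal NumPy sketch, assuming 2 × 2 windows to match the symmetric layers elsewhere in the network:

```python
import numpy as np

def prob_upsample(x):
    # x: (H, W) feature map with even H and W
    h, w = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))  # 2x2 max pooling
    up = pooled.repeat(2, axis=0).repeat(2, axis=1)            # 2x nearest-neighbour upsampling
    return up * x                                              # point-wise product with the input
```

Because the up-sampled local maximum scales every pixel of its 2 × 2 block by the same factor, the relative magnitudes within each block are preserved, which is the property the advantages section attributes to this module.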
(2b) Defining a loss function of the remote sensing image cloud detection network H:
where y^(m) is the label corresponding to the input m-th training image P_a^m, and y′^(m) is the prediction of the network H for the m-th training image;
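The loss formula itself is rendered as an image in the source and is not reproduced here. For a binary cloud/ground mask, a standard choice consistent with the y^(m)/y′^(m) description is per-pixel binary cross-entropy, sketched below as an assumption rather than as the patent's exact loss:

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    # y_true: 0/1 ground-truth mask; y_pred: predicted cloud probabilities in [0, 1]
    # eps clipping avoids log(0); this is an assumed loss, not quoted from the patent
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))
```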
step 3) carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration counter t and set the maximum number of iterations T = 30; let the current cloud detection network be H_t, and set t = 1, H_t = H;
(3b) Take the training sample set P_a as the input of the remote sensing image cloud detection network H and carry out forward propagation. The cascaded down-sampling layers obtain rich detail information; the detail information of each down-sampling layer is added, through a channel attention module, to the up-sampling layer with the same feature size; and the probability up-sampling module loaded after the last up-sampling layer optimizes the output, yielding the set of prediction result images, in which the m-th element is the prediction result of the m-th training image P_a^m;
(3c) Use the loss function L to compute the classification error θ between the set of prediction results and the label set L_a of the training images; then adopt stochastic gradient descent to reduce the classification error θ and update the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layer, with the update formulas respectively:
ω_(t+1) = ω_t − η · ∂θ/∂ω_t
υ_(t+1) = υ_t − η · ∂θ/∂υ_t
where η represents the learning step size, η = 0.01, ω_(t+1) and υ_(t+1) represent the updated results of ω_t and υ_t respectively, and ∂ represents the partial derivative operation.
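The update formulas above correspond to a plain stochastic gradient descent step; a one-line sketch, where the parameter and gradient containers are illustrative:

```python
def sgd_step(params, grads, eta=0.01):
    # omega_{t+1} = omega_t - eta * d(theta)/d(omega_t), applied to every parameter;
    # eta = 0.01 matches the learning step size stated in the text
    return [p - eta * g for p, g in zip(params, grads)]
```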
(3d) Judge whether t = T holds; if so, obtain the trained remote sensing image cloud detection network H*; otherwise, let t = t + 1 and return to step (3b);
step 4), obtaining a cloud detection result of the remote sensing image:
Take the remote sensing image test set P_b as the input of the trained remote sensing image cloud detection network H* and perform prediction to obtain the set of remote sensing image cloud detection results. If a pixel is detected as a cloud pixel, the pixel at the corresponding position in the cloud detection result is 255; otherwise it is 0.
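Mapping the network output to the 0/255 mask convention described above can be sketched as follows; the 0.5 decision threshold is an assumption, since the patent does not state one:

```python
import numpy as np

def to_cloud_mask(pred, threshold=0.5):
    # pred: predicted cloud probabilities; returns a uint8 mask, 255 = cloud, 0 = ground
    return np.where(pred >= threshold, 255, 0).astype(np.uint8)
```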
Claims (3)
1. A remote sensing image cloud detection method based on channel attention and probability up-sampling is characterized by comprising the following steps:
(1) acquiring a training sample set and a testing sample set:
Acquire from a data set K labeled remote sensing images containing cloud areas, P = {(P_1, L_1), (P_2, L_2), …, (P_k, L_k), …, (P_K, L_K)}; combine M remote sensing images randomly selected from P with their labels to form a training sample set P_a = {(P_a^m, L_a^m) | 1 ≤ m ≤ M}, and form a test sample set P_b = {(P_b^n, L_b^n) | 1 ≤ n ≤ N} from the remaining N remote sensing images and their labels, where K ≥ 10000, P_k denotes the k-th remote sensing image, L_k denotes the label of P_k, P_a^m denotes the m-th training image, L_a^m denotes the label of P_a^m, P_b^n denotes the n-th test image, L_b^n denotes the label of P_b^n, and K = M + N;
(2) constructing a remote sensing image cloud detection network H based on channel attention and probability up-sampling:
(2a) constructing a remote sensing image cloud detection network H comprising an encoding and decoding module and a probability up-sampling module, wherein the encoding and decoding module comprises a two-dimensional convolutional layer and four down-sampling modules cascaded with the two-dimensional convolutional layer, and the output end of a fourth down-sampling module is cascaded with four up-sampling modules and one two-dimensional convolutional layer; the probability up-sampling module is loaded between the last up-sampling module and the two-dimensional convolution layer; a channel attention module is respectively loaded between the first downsampling module and the fourth upsampling module, between the second downsampling module and the third upsampling module, between the third downsampling module and the second upsampling module and between the fourth downsampling module and the first upsampling module; the down-sampling module comprises a plurality of convolution layers, a batch normalization layer, a Relu activation function and a maximum pooling layer; the up-sampling module comprises a plurality of convolution layers, a batch normalization layer, a Relu activation function and an up-sampling layer; the channel attention module comprises a channel splicing layer, a full-connection layer and a sigmoid activation function which are sequentially cascaded, wherein the input end of the channel splicing layer is connected with a two-dimensional convolution layer and a multi-path three-dimensional expansion convolution layer in parallel, and a global maximum pooling layer and a global average pooling layer which are connected in parallel are loaded between the channel splicing layer and the full-connection layer; the probability upsampling module comprises a maximum pooling layer and an upsampling layer;
(2b) defining a loss function L of the remote sensing image cloud detection network H:
where y^(m) represents the label corresponding to the input m-th training image P_a^m, and y′^(m) represents the prediction result for P_a^m;
(3) carrying out iterative training on the remote sensing image cloud detection network H:
(3a) Initialize the iteration counter t and the maximum number of iterations T, where T ≥ 30; let the current cloud detection network be H_t, and set t = 1, H_t = H;
(3b) Take the training sample set P_a as the input of the remote sensing image cloud detection network H_t and carry out forward propagation to obtain the set of prediction result images of H_t, in which the m-th element is the prediction result of the m-th training image P_a^m;
(3c) Use the loss function L to compute the classification error θ between the set of prediction results and the label set L_a of the training images; then reduce θ with stochastic gradient descent, updating the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layer, obtaining the remote sensing image cloud detection network H_t after the t-th iteration;
(3d) Judge whether t = T holds; if so, obtain the trained remote sensing image cloud detection network H*; otherwise, let t = t + 1 and return to step (3b);
(4) obtaining a cloud detection result of the remote sensing image:
2. The remote sensing image cloud detection method based on channel attention and probability up-sampling according to claim 1, wherein the remote sensing image cloud detection network H in step (2a) is configured as follows:
the specific connection mode of the coding and decoding module is as follows: a first two-dimensional convolutional layer → a first downsampling module → a second downsampling module → a third downsampling module → a fourth downsampling module → a first upsampling module → a second upsampling module → a third upsampling module → a fourth upsampling module → a probabilistic upsampling module → a second two-dimensional convolutional layer;
The convolution kernel size of the first two-dimensional convolution layer is 3 × 3, the stride is 1, and the number of output channels is 32;
Each of the four down-sampling modules comprises two 2-dimensional convolution layers with 3 × 3 convolution kernels and a stride of 1; the pooling window sizes of the max pooling layers in the four down-sampling modules are all 2 × 2; the numbers of output channels of the first, second, third and fourth down-sampling modules are 64, 128, 256 and 512 respectively;
Each of the four up-sampling modules comprises two 2-dimensional convolution layers with 3 × 3 convolution kernels and a stride of 1; the sampling window sizes of the four up-sampling modules are all 2 × 2; the numbers of output channels of the first, second, third and fourth up-sampling modules are 512, 256, 128 and 64 respectively;
The convolution kernel size of the second two-dimensional convolution layer is 3 × 3, the stride is 1, and the number of output channels is 1;
The channel attention modules are loaded as follows: first down-sampling module → first channel attention module → fourth up-sampling module; second down-sampling module → second channel attention module → third up-sampling module; third down-sampling module → third channel attention module → second up-sampling module; fourth down-sampling module → fourth channel attention module → first up-sampling module;
In the channel attention module, the two-dimensional convolution layer connected in parallel at the input end of the channel splicing layer has a 3 × 3 convolution kernel; three paths of three-dimensional dilated convolution layers are adopted, each with a 3 × 3 × 3 convolution kernel, and the dilation rates are set to 2, 5 and 7 respectively.
3. The remote sensing image cloud detection method based on channel attention and probability up-sampling according to claim 1, wherein in step (3c) the classification error θ is reduced to update the convolution kernel weight parameters ω_t of H_t and the connection parameters υ_t of the fully connected layer, the update formulas being respectively:
ω_(t+1) = ω_t − η · ∂θ/∂ω_t
υ_(t+1) = υ_t − η · ∂θ/∂υ_t
where η is the learning step size, η = 0.01, ω_(t+1) and υ_(t+1) are the updated results of ω_t and υ_t, and ∂ denotes the partial derivative operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110663934.3A CN113408398B (en) | 2021-06-16 | 2021-06-16 | Remote sensing image cloud detection method based on channel attention and probability up-sampling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113408398A true CN113408398A (en) | 2021-09-17 |
CN113408398B CN113408398B (en) | 2023-04-07 |
Family
ID=77684063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110663934.3A Active CN113408398B (en) | 2021-06-16 | 2021-06-16 | Remote sensing image cloud detection method based on channel attention and probability up-sampling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113408398B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018214195A1 (en) * | 2017-05-25 | 2018-11-29 | 中国矿业大学 | Remote sensing imaging bridge detection method based on convolutional neural network |
CN110728224A (en) * | 2019-10-08 | 2020-01-24 | 西安电子科技大学 | Remote sensing image classification method based on attention mechanism depth Contourlet network |
CN111612066A (en) * | 2020-05-21 | 2020-09-01 | 成都理工大学 | Remote sensing image classification method based on depth fusion convolutional neural network |
CN111738124A (en) * | 2020-06-15 | 2020-10-02 | 西安电子科技大学 | Remote sensing image cloud detection method based on Gabor transformation and attention |
CN111915592A (en) * | 2020-08-04 | 2020-11-10 | 西安电子科技大学 | Remote sensing image cloud detection method based on deep learning |
WO2021013334A1 (en) * | 2019-07-22 | 2021-01-28 | Toyota Motor Europe | Depth maps prediction system and training method for such a system |
Non-Patent Citations (2)
Title |
---|
JING ZHANG et al.: "A cloud detection method using convolutional neural network based on Gabor transform and attention mechanism with dark channel SubNet for remote sensing image", Remote Sensing * |
PEI Liang et al.: "Cloud detection in ZY-3 remote sensing images combining a fully convolutional neural network with a conditional random field", Laser & Optoelectronics Progress * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677306A (en) * | 2022-03-29 | 2022-06-28 | 中国矿业大学 | Context aggregation image rain removing method based on edge information guidance |
CN114677306B (en) * | 2022-03-29 | 2022-11-15 | 中国矿业大学 | Context aggregation image rain removing method based on edge information guidance |
CN115019174A (en) * | 2022-06-10 | 2022-09-06 | 西安电子科技大学 | Up-sampling remote sensing image target identification method based on pixel recombination and attention |
CN116823664A (en) * | 2023-06-30 | 2023-09-29 | 中国地质大学(武汉) | Remote sensing image cloud removal method and system |
CN116823664B (en) * | 2023-06-30 | 2024-03-01 | 中国地质大学(武汉) | Remote sensing image cloud removal method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110232394B (en) | Multi-scale image semantic segmentation method | |
CN111860612B (en) | Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method | |
CN110188765B (en) | Image semantic segmentation model generation method, device, equipment and storage medium | |
CN109190752B (en) | Image semantic segmentation method based on global features and local features of deep learning | |
CN111797676B (en) | High-resolution remote sensing image target on-orbit lightweight rapid detection method | |
CN110111366B (en) | End-to-end optical flow estimation method based on multistage loss | |
CN113408398B (en) | Remote sensing image cloud detection method based on channel attention and probability up-sampling | |
CN111915592B (en) | Remote sensing image cloud detection method based on deep learning | |
CN108520501B (en) | Video rain and snow removing method based on multi-scale convolution sparse coding | |
CN110009010B (en) | Wide-width optical remote sensing target detection method based on interest area redetection | |
CN105825200B (en) | Hyperspectral anomaly target detection method based on dictionary learning and structured sparse representation | |
CN111126359B (en) | High-definition image small target detection method based on self-encoder and YOLO algorithm | |
CN111310666B (en) | High-resolution image ground feature identification and segmentation method based on texture features | |
CN107977683B (en) | Joint SAR target recognition method based on convolution feature extraction and machine learning | |
CN111814685A (en) | Hyperspectral image classification method based on double-branch convolution self-encoder | |
CN112365091B (en) | Radar quantitative precipitation estimation method based on classification node map attention network | |
US20200034664A1 (en) | Network Architecture for Generating a Labeled Overhead Image | |
CN112464745A (en) | Ground feature identification and classification method and device based on semantic segmentation | |
CN113887472A (en) | Remote sensing image cloud detection method based on cascade color and texture feature attention | |
CN113962281A (en) | Unmanned aerial vehicle target tracking method based on Siamese-RFB | |
CN117237733A (en) | Breast cancer full-slice image classification method combining self-supervision and weak supervision learning | |
CN115908924A (en) | Multi-classifier-based small sample hyperspectral image semantic segmentation method and system | |
CN115049945A (en) | Method and device for extracting lodging area of wheat based on unmanned aerial vehicle image | |
CN114693577A (en) | Infrared polarization image fusion method based on Transformer | |
Shit et al. | An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||