CN113936204B - High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network - Google Patents

High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network

Info

Publication number
CN113936204B
CN113936204B (application CN202111386889.8A; publication CN113936204A)
Authority
CN
China
Prior art keywords
neural network
snow
deep
network model
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111386889.8A
Other languages
Chinese (zh)
Other versions
CN113936204A (en)
Inventor
汪左
涂征洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Normal University filed Critical Anhui Normal University
Priority to CN202111386889.8A priority Critical patent/CN113936204B/en
Publication of CN113936204A publication Critical patent/CN113936204A/en
Application granted granted Critical
Publication of CN113936204B publication Critical patent/CN113936204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and device for cloud and snow identification in high-resolution remote sensing images that fuses terrain data with a deep neural network. The method comprises the following steps: S1, inputting a high-resolution remote sensing image into a DeepLab v3+ semantic segmentation neural network model, and inputting the terrain data for the location covered by the image into a terrain feature extraction network model; S2, outputting the terrain features extracted by the terrain feature extraction network model to the DeepLab v3+ semantic segmentation neural network model and fusing them with the deep features of the DeepLab v3+ model; and S3, outputting the cloud and snow pixels of the high-resolution remote sensing image from the DeepLab v3+ semantic segmentation neural network model. By extracting terrain features with a dedicated terrain feature extraction network and introducing a channel attention module into the DeepLab v3+ semantic segmentation neural network model, the approach improves cloud and snow identification accuracy in mountainous areas, reduces misclassification between cloud and snow, and shortens model prediction time.

Description

High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a method and device for cloud and snow identification in high-resolution remote sensing images that fuses terrain data with a deep neural network.
Background
Snow cover, an important component of the cryosphere, is one of the most active natural elements on the Earth's surface and plays an important role in climate change research and in water resource utilization in arid and semi-arid regions. On the one hand, snow has a high albedo, its accumulation and ablation are always accompanied by changes in the energy balance, and it provides a sensitive feedback on climate change, giving it an extremely important role in the global and regional climate system. On the other hand, seasonal snow cover is one of the main sources of fresh water in the arid and semi-arid regions of China; snowmelt runoff accounts for more than 70% of river recharge in spring and therefore directly affects industrial and agricultural development and daily life in these regions. At the same time, spring snowmelt floods are among the typical disasters of arid and semi-arid regions. Remote sensing can acquire snow cover information over large areas and has clear advantages in alpine mountainous areas that are difficult to reach with ground-based observations. Research on snow cover identification from remote sensing images therefore has important scientific and practical significance.
For optical remote sensing images, clouds strongly affect ground object recognition: thick cloud completely blocks the land surface and causes information loss, while thin cloud mixes with the information of the underlying surface and reduces the accuracy of target recognition and information extraction. Cloud screening of satellite imagery commonly distinguishes cloud-free, partly cloudy and fully cloudy scenes, and in snow remote sensing clouds act as a persistent source of contamination that interferes with snow recognition and classification. Because cloud and snow share similar high-reflectance spectral characteristics in the visible bands, cloud and snow identification is one of the key tasks in snow mapping from remote sensing images, and classic cloud and snow identification algorithms have been developed for it. Most high-resolution remote sensing images offer high spatial resolution and rich detail and texture, but only a small number of bands, typically four optical bands, and they lack the shortwave infrared band on which classic cloud and snow identification algorithms rely. How to identify snow and cloud in high-resolution remote sensing images that lack a shortwave infrared band is therefore one of the important directions of snow remote sensing research.
For mountain snow, different terrain exposes the snowpack to different amounts of solar radiation, wind speed and direction, and air humidity, and snowmelt on upslope areas affects the slopes below, so the snow shows different distributions and properties across the terrain, whereas the distribution of cloud is far less ordered and is only weakly influenced by topography. Combining terrain data is therefore a reliable way to identify snow cover in mountainous areas, yet current deep neural network approaches to cloud and snow identification in high-resolution remote sensing images do not effectively incorporate terrain data. A method and device for cloud and snow identification in high-resolution remote sensing images that fuses terrain data with a deep neural network is therefore of significant value.
Disclosure of Invention
The invention provides a high-resolution remote sensing image cloud and snow identification method that fuses terrain data with a deep neural network. By fusing terrain features from a terrain feature extraction network model and introducing a channel attention module into the DeepLab v3+ semantic segmentation neural network model, it improves cloud and snow identification accuracy and reduces misclassification between cloud and snow in mountainous areas.
The invention is realized as a high-resolution remote sensing image cloud and snow identification device fusing terrain data and a deep neural network, the device comprising:
a terrain feature extraction network model based on a simplified residual neural network, and a DeepLab v3+ semantic segmentation neural network model into which an attention mechanism is introduced, wherein
the output end of the atrous spatial pyramid pooling (ASPP) layer of the DeepLab v3+ semantic segmentation neural network model is connected to a 1 × 1 convolution layer through a fusion layer Concat, and the output end of the terrain feature extraction network model is connected to the fusion layer Concat.
Further, the terrain feature extraction network model consists, in order, of a dilated 3 × 3 convolution layer, a batch normalization layer, a ReLU activation function, a dilated 3 × 3 convolution layer, a batch normalization layer, a ReLU activation function, and three dilated residual blocks.
Furthermore, a channel attention module is arranged at the output end of the fusion layer Concat.
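As an illustration of this wiring, the following Keras-style sketch (not taken from the patent; the tensor names, filter count and the `channel_attention` callable are assumptions) shows how the ASPP output and the terrain-branch output could be joined by the Concat fusion layer, passed through a channel attention module, and reduced by a 1 × 1 convolution:

```python
from tensorflow.keras import layers

def fuse_terrain_features(aspp_out, terrain_out, channel_attention, num_filters=256):
    """Join the terrain branch to the DeepLab v3+ encoder (illustrative sketch).

    aspp_out          : output tensor of the ASPP layer
    terrain_out       : output tensor of the terrain feature extraction network,
                        assumed to have the same spatial size as aspp_out
    channel_attention : callable implementing a channel attention module
    """
    fused = layers.Concatenate(name="fusion_concat")([aspp_out, terrain_out])
    attended = channel_attention(fused)            # attention at the Concat output
    return layers.Conv2D(num_filters, 1, padding="same", activation="relu",
                         name="post_fusion_1x1")(attended)
```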
The invention also provides a high-resolution remote sensing image cloud and snow identification method fusing terrain data and a deep neural network, which specifically comprises the following steps:
S1, inputting a high-resolution remote sensing image into the DeepLab v3+ semantic segmentation neural network model, and inputting the terrain data for the location covered by the image into the terrain feature extraction network model;
S2, outputting the terrain features extracted by the terrain feature extraction network model to the DeepLab v3+ semantic segmentation neural network model and fusing them with the deep features of the DeepLab v3+ model;
and S3, outputting the cloud and snow pixels of the high-resolution remote sensing image from the DeepLab v3+ semantic segmentation neural network model.
Further, after step S2, the method further includes:
weighting the fused deep features across channels with the channel attention module.
Further, before step S1, the method further includes:
constructing a sample data set and training the high-resolution remote sensing image cloud and snow identification device fusing terrain data and a deep neural network on the sample data, where constructing the sample data specifically comprises:
performing radiometric calibration, geometric correction and atmospheric correction on the high-resolution remote sensing image;
labeling cloud and snow pixels in the processed image, and clipping the areas where cloud and snow pixels are concentrated;
and performing sample enhancement on the clipped images to form the sample data set.
By extracting terrain features with the terrain feature extraction network model and fusing them into the network, and by introducing a channel attention module into the DeepLab v3+ semantic segmentation neural network model, the method improves cloud and snow identification accuracy in mountainous areas, reduces misclassification between cloud and snow, and shortens model prediction time.
Drawings
Fig. 1 is a schematic structural diagram of a high-resolution remote sensing image cloud and snow recognition model fusing topographic data and a deep neural network provided by an embodiment of the present invention;
fig. 2 is a flowchart of a cloud and snow identification method fused with terrain data according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a sample data set generation process provided in an embodiment of the present invention;
FIG. 4 shows the clipped high-resolution remote sensing image data according to an embodiment of the present invention;
fig. 5 is label attribute data of the clipped high-resolution remote sensing image data according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of a sample enhancement method according to an embodiment of the present invention;
FIG. 7 is the clipped terrain slope provided by an embodiment of the present invention;
FIG. 8 is the clipped terrain aspect (slope direction) provided by an embodiment of the present invention;
FIG. 9 is a comparison diagram of model snow and cloud recognition with different input data provided by an embodiment of the present invention;
FIG. 10 shows the loss function curves for different channel attention introduction schemes according to an embodiment of the present invention;
fig. 11 is a comparison diagram of snow and cloud recognition for different channel attention-introducing schemes provided by the embodiment of the invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be given in order to provide those skilled in the art with a more complete, accurate and thorough understanding of the inventive concept and technical solutions of the present invention.
1. High-resolution remote sensing image data
Taking the WFV data acquired by the Chinese Gaofen-1 (GF-1) high-resolution satellite (referred to as GF-1 WFV data) as an example, the GF-1 WFV data are processed as follows:
and carrying out radiometric calibration, geometric correction and atmospheric correction on the GF-1WFV remote sensing data. Radiometric calibration is intended to eliminate the errors of the satellite sensor itself, to determine the radiation values at the sensor entrance, and to give physical significance. The method mainly comprises the step of converting a digital quantization value (DN) or a sensor voltage in a downloaded remote sensing original image into an absolute radiance value (reflectivity) through a radiometric calibration formula. The formula for high-resolution one-number radiometric calibration is as follows:
L = Gain × DN + Bias (1)
where Gain and Bias are calibration coefficients whose values can be obtained from the China Centre for Resources Satellite Data and Application.
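As a simple illustration of formula (1), the following NumPy sketch applies the calibration band by band; the gain and bias values shown are placeholders, not the official GF-1 WFV coefficients:

```python
import numpy as np

def radiometric_calibration(dn, gain, bias):
    """Convert digital numbers (DN) to at-sensor radiance: L = Gain * DN + Bias."""
    return gain * dn.astype(np.float32) + bias

# Placeholder coefficients for one band; the real per-band values are published
# by the China Centre for Resources Satellite Data and Application.
dn_band = np.random.randint(0, 1024, size=(256, 256)).astype(np.uint16)
radiance = radiometric_calibration(dn_band, gain=0.2, bias=0.0)
```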
Geometric correction allows the terrain feature data to be overlaid more accurately. The ground control points used for geometric correction are locations that can be identified clearly, such as lake shorelines and distinct ridge lines.
Atmospheric correction removes the radiance errors caused by atmospheric and illumination effects and converts the radiometrically calibrated and geometrically corrected radiance into actual surface reflectance. The data are corrected at the pixel level with the FLAASH atmospheric correction, which is based on the MODTRAN 4+ radiative transfer model.
2. Topographic data
Mountain snow cover shows clear terrain-related texture at a certain scale, and changes in the land surface influence its distribution in different ways. The method therefore uses ASTER GDEM v2 data to compute terrain features as auxiliary evidence for snow identification, in order to improve the accuracy of snow cover identification in mountainous areas. The data set, jointly developed by Japan's METI and NASA, can be obtained from the Geospatial Data Cloud website and has a spatial resolution of 30 m. ASTER GDEM is computed from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data; it is high-resolution elevation data covering the global land surface, was released on June 29, 2009, and is widely used in global remote sensing research. Because the original v1 data contained anomalies in some local areas, the v2 release used an improved algorithm to reprocess 1,500,000 scenes of the v1 data. Official accuracy tests of ASTER GDEM v2 show that both its spatial and elevation accuracy are improved. The slope and aspect computed from ASTER GDEM v2 are shown in FIGS. 7 and 8.
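As an illustration of how slope and aspect can be derived from the GDEM grid, the sketch below uses NumPy finite differences with the 30 m cell size mentioned above; the function is an assumption for illustration, and aspect sign conventions differ between GIS packages:

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Compute slope (degrees) and aspect (degrees clockwise from north) from a DEM."""
    dz_dy, dz_dx = np.gradient(dem.astype(np.float32), cell_size)  # rows = y, cols = x
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0         # one common convention
    return slope, aspect
```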
3. Generation of sample data set
The generation process of the sample data set is shown in Fig. 3; the invention labels samples on high-resolution remote sensing images (GF-1 WFV images). First, 6 preprocessed images are clipped in areas where cloud and snow are prominent, as shown in Fig. 4. The clipped snow and cloud categories are vectorized, and a vector-to-raster tool produces label attribute data for 9 images in total, as shown in Fig. 5. The labels fall into three categories: snow, cloud and background. The proportion of pixels of each category in the label attribute data is given in Table 1. The data set is clearly unbalanced: the background class accounts for as much as 87.93% of the pixels, so the model's attention during training and optimization is biased toward the background.
TABLE 1 Proportion of label pixels per category in the cloud and snow image data set
[Table available only as an image in the original publication.]
Because manually extracted samples depend on the subjective judgment of the person doing the labeling, classification accuracy is limited and the number of samples is small. To address this, a sample enhancement method is used to suppress the model overfitting caused by too few samples and to improve the robustness of the model; the enhancement method is shown in Fig. 6. All clipped images and their label attribute data are randomly cropped to a uniform size, and the amount of sample data is enriched by applying rotation, gamma transformation, blurring and Gaussian noise selected with certain probabilities, so that the samples fed to the network share a uniform size while differing in texture and detail. This finally yields 10,000 samples of 256 × 256 pixels with 4 bands as the sample set for the cloud and snow identification deep neural network model.
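A sketch of the enhancement step described above — random cropping to a uniform patch size, followed by rotation, gamma transformation, blurring and Gaussian noise applied with fixed probabilities; the probabilities and parameter ranges are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment(image, label, crop=256, rng=np.random):
    """Randomly crop an (H, W, C) image and (H, W) label, then apply optional transforms."""
    h, w = label.shape
    y, x = rng.randint(0, h - crop + 1), rng.randint(0, w - crop + 1)
    img = image[y:y + crop, x:x + crop].astype(np.float32)
    lab = label[y:y + crop, x:x + crop]

    if rng.rand() < 0.5:                          # rotation by a multiple of 90 degrees
        k = rng.randint(1, 4)
        img, lab = np.rot90(img, k), np.rot90(lab, k)
    if rng.rand() < 0.3:                          # gamma transformation
        img = np.clip(img, 0.0, None) ** rng.uniform(0.7, 1.5)
    if rng.rand() < 0.3:                          # blurring (spatial axes only)
        img = gaussian_filter(img, sigma=(1.0, 1.0, 0.0))
    if rng.rand() < 0.3:                          # Gaussian noise
        img = img + rng.normal(0.0, 0.01 * (img.std() + 1e-6), img.shape)
    return img, lab
```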
4. High-resolution remote sensing image cloud and snow recognition model integrating terrain data and deep neural network
The coding region of the DeepLab v3+ network model uses an independent terrain feature extraction network model (a simplified ResNet) to extract terrain features, which are fused with the deep features of the DeepLab v3+ semantic segmentation neural network model. This keeps the terrain features and the spectral data from interfering with each other during feature extraction, improves identification accuracy and reduces misclassification between cloud and snow. The cloud and snow identification model fusing terrain data therefore consists of a terrain feature extraction network model and a DeepLab v3+ semantic segmentation neural network model: the output end of the atrous spatial pyramid pooling layer of the DeepLab v3+ model is connected to a 1 × 1 convolution layer through a fusion layer Concat, and the output end of the terrain feature extraction network model is connected to the fusion layer Concat. The structure of the cloud and snow identification model fusing terrain data is shown in Fig. 1.
The terrain feature extraction network first extracts preliminary features with two convolutional layers and then applies three connected dilated residual blocks; the convolution in each skip connection is replaced by a depthwise separable convolution, which keeps the model as simple as possible and reduces memory use while still capturing the terrain features. The terrain feature extraction network is fused into the DeepLab v3+ semantic segmentation neural network model by concatenating (Concatenate) its output terrain features with the deep semantic features of the DeepLab v3+ network, and its ability to extract terrain features is optimized continuously as the model is trained. The terrain feature extraction network therefore consists, in order, of a dilated 3 × 3 convolutional layer, a batch normalization layer, a ReLU activation function, a dilated 3 × 3 convolutional layer, a batch normalization layer, a ReLU activation function, and three dilated residual blocks.
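A Keras-style sketch of this terrain branch — two dilated 3 × 3 conv-BN-ReLU blocks followed by three dilated residual blocks whose skip paths use depthwise separable convolutions — is given below; the filter count, dilation rates and the absence of downsampling are assumptions for illustration:

```python
from tensorflow.keras import layers

def conv_bn_relu(x, filters, dilation=2):
    x = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def dilated_residual_block(x, filters, dilation=2):
    """Residual block with dilated convolutions; the skip connection uses a
    depthwise separable convolution to keep parameters and memory low."""
    skip = layers.SeparableConv2D(filters, 3, padding="same")(x)
    y = conv_bn_relu(x, filters, dilation)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(y)
    y = layers.BatchNormalization()(y)
    return layers.Activation("relu")(layers.Add()([y, skip]))

def terrain_branch(terrain_input, filters=64):
    """Terrain feature extraction network: 2 x (conv-BN-ReLU) + 3 dilated residual blocks."""
    x = conv_bn_relu(terrain_input, filters)
    x = conv_bn_relu(x, filters)
    for _ in range(3):
        x = dilated_residual_block(x, filters)
    return x
```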
The main characteristic of the DeepLab v3+ semantic segmentation neural network structure is that most convolutions in the network are replaced by atrous (dilated) convolutions and depthwise separable convolutions, which enlarge the receptive field without increasing the number of parameters to be computed and strengthen the model's ability to extract dense image features.
The backbone network of the coding region is an Xception network that uses atrous convolution. Xception was developed from Inception v3 and its structure resembles the residual connections of ResNet; based on the idea that cross-channel correlation and spatial correlation are better handled separately, it splits ordinary convolution into depthwise convolution and pointwise convolution, i.e. depthwise separable convolution. The depthwise convolution performs spatial convolution on each channel independently, and the pointwise convolution convolves only across the channel values of each pixel, which reduces the number of parameters, the amount of computation and the computational complexity while maintaining similar performance. Xception keeps the three traditional stages of entry flow, middle flow and exit flow, but replaces all max-pooling operations with strided depthwise separable convolutions. Finally, as in DeepLab v3, an atrous spatial pyramid pooling (ASPP) layer applies atrous convolution over four different receptive fields to extract contextual information from the remote sensing image at four different scales, achieving robust segmentation and improving the segmentation result.
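The ASPP idea referred to here can be sketched as parallel atrous convolutions with different dilation rates whose outputs are concatenated; the rates 6/12/18 follow the usual DeepLab configuration and, like the omission of the image-level pooling branch, are assumptions rather than details from the patent:

```python
from tensorflow.keras import layers

def aspp(x, filters=256, rates=(6, 12, 18)):
    """Atrous spatial pyramid pooling: sample context at several receptive-field sizes."""
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for rate in rates:
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=rate, activation="relu")(x))
    fused = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(fused)
```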
The decoding region of the DeepLab v3+ semantic segmentation neural network model borrows the skip-connection scheme of the fully convolutional network (FCN): low-level detail features of the coding region are reduced in dimension by convolution and fused with the deep features output by the coding region, a 1 × 1 convolution and bilinear interpolation upsampling restore the fused feature map to the size of the original image, and finally a Softmax activation function classifies each pixel.
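A sketch of this decoding path is shown below, assuming `encoder_out` is the fused encoder output at 1/16 of the input resolution and `low_level` is an early encoder feature map at 1/4 resolution; the 48-channel reduction and the two 4× bilinear upsampling steps follow the usual DeepLab v3+ layout and are assumptions, not figures from the patent:

```python
from tensorflow.keras import layers

def decoder(encoder_out, low_level, num_classes=3):
    """DeepLab v3+-style decoder: skip connection, 1x1 convolutions,
    bilinear upsampling, and per-pixel softmax classification (sketch)."""
    low = layers.Conv2D(48, 1, padding="same", activation="relu")(low_level)
    up = layers.UpSampling2D(size=(4, 4), interpolation="bilinear")(encoder_out)
    x = layers.Concatenate()([up, low])
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(num_classes, 1, padding="same")(x)
    x = layers.UpSampling2D(size=(4, 4), interpolation="bilinear")(x)
    return layers.Activation("softmax")(x)     # snow / cloud / background probabilities
```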
Fig. 2 is a flowchart of the high-resolution remote sensing image cloud and snow identification method fusing terrain data and a deep neural network, which specifically comprises the following steps:
S1, inputting a high-resolution remote sensing image into the DeepLab v3+ semantic segmentation neural network model, and inputting the terrain data corresponding to the remote sensing image into the terrain feature extraction network;
S2, outputting the terrain features extracted by the terrain feature extraction network to the DeepLab v3+ semantic segmentation neural network model and fusing them with the deep features of the DeepLab v3+ model through the Concatenate method;
and S3, outputting the cloud and snow pixels of the high-resolution remote sensing image from the DeepLab v3+ semantic segmentation neural network model.
In the embodiment of the invention, channel attention is used to weight the deep features of the neural network model, which reduces the amount of computation in the model. Spatial attention, by contrast, obtains two spatial attention feature maps by applying a maximum pooling layer and an average pooling layer along the channel direction, fuses them with the Concatenate method, applies a convolution followed by a Softmax function to obtain a spatial attention weight matrix of the same size as the feature map, and finally multiplies it with the original feature map to obtain a new attention-weighted feature map. Channel attention mainly captures the relationships between the different channels of the feature map and tells the model which features to focus on, so that fewer unnecessary features take part in the computation: it aggregates the two spatial dimensions by global maximum pooling and average pooling, obtains the attention weight of each channel with an MLP (fully connected layer + ReLU activation function + fully connected layer) and a Softmax function, and finally multiplies the weights with the original feature map. The attention most commonly used in semantic segmentation experiments simply connects channel attention and spatial attention in series to screen the spatial distribution and feature channels of the feature map, which lightens the burden of high-dimensional data, lets the network focus on the important parts of the input information, and better captures the mapping from input to output.
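A sketch of the channel attention module as just described — global max and average pooling over the spatial dimensions, a shared MLP, a softmax over channels (following the description above rather than the sigmoid used in CBAM), and rescaling of the input feature map; the reduction ratio is an assumption:

```python
from tensorflow.keras import layers
from tensorflow.keras import backend as K

def channel_attention(x, reduction=8):
    """Channel attention: per-channel weights derived from pooled spatial statistics."""
    channels = K.int_shape(x)[-1]
    # Shared MLP: fully connected layer + ReLU + fully connected layer.
    fc1 = layers.Dense(channels // reduction, activation="relu")
    fc2 = layers.Dense(channels)

    avg = fc2(fc1(layers.GlobalAveragePooling2D()(x)))
    mx = fc2(fc1(layers.GlobalMaxPooling2D()(x)))

    weights = layers.Activation("softmax")(layers.Add()([avg, mx]))
    weights = layers.Reshape((1, 1, channels))(weights)
    return layers.Multiply()([x, weights])     # reweight the original feature map
```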
By fusing the terrain features into the network and introducing a channel attention module into the DeepLab v3+ semantic segmentation neural network, the method improves cloud and snow identification accuracy in mountainous areas, reduces misclassification between cloud and snow, and shortens model prediction time.
Modeling and experiments were carried out according to the method of the invention to illustrate its identification performance.
Software environment: language, Python 3.7; deep learning framework, TensorFlow 1.14.0 and Keras 2.3.1; operating system, Windows; GPU computing library, CUDA 11.0.
Hardware environment: CPU, Intel(R) Core(TM) i7-9700F @ 3.0 GHz; GPU, NVIDIA GeForce RTX 2060 SUPER with 8 GB of video memory.
The DeepLab v3+ semantic segmentation neural network with the added terrain feature extraction network was trained and used for prediction; the classification results are shown in Fig. 9 and the classification accuracy in Table 2.
TABLE 2 Comparison of snow and cloud identification test accuracy for models with different input data
[Table available only as an image in the original publication.]
As can be seen from Fig. 9, model accuracy drops sharply once elevation data are added: the terrain effect on mountain snow cover acts mainly through slope and aspect rather than elevation, so elevation contributes little to cloud and snow identification, while its large values interfere with the processing of the GF-1 data. The first row shows that the thin-cloud boundary is much less noisy when slope or aspect data are used, because the terrain features match the underlying surface texture beneath thin cloud well and thus reduce its influence; with aspect data the cloud pixel accuracy reaches 0.956. The second and third rows show that, without terrain data, cloud over large snow-covered areas is easily misclassified as snow; after slope and aspect are added this misclassification improves markedly, although a small portion of snow is still identified as cloud, and the intersection-over-union (IoU) of both snow and cloud is higher than when only GF-1 data are used. With slope data the snow IoU reaches 0.819 and the snow pixel accuracy reaches 0.930, confirming the influence of terrain data on cloud and snow identification in mountainous areas.
The model was then modified with different channel attention introduction schemes and trained under the same environment and parameters, and the trained models were timed on prediction over a 6124 image. The model efficiency under the different attention schemes is shown in Table 3: the model without attention has the longest prediction time, while attention introduction scheme 1 has the longest training time and the largest stored model size. Fig. 10 plots the training-set and test-set loss values against the training batches.
TABLE 3 Comparison of model efficiency under different channel attention introduction schemes
[Table available only as an image in the original publication.]
Note: in channel attention scheme 1, the channel attention module is placed at the input end of the ASPP layer; in channel attention scheme 2, at the output end of the terrain feature extraction network; and in channel attention scheme 3, at the output end of the atrous spatial pyramid pooling (ASPP) layer.
As can be seen from Fig. 10, the test loss values of channel attention scheme 1 and channel attention scheme 2 fluctuate strongly and both tend to increase. Comparing the loss values of channel attention scheme 3 with the no-attention model (Fig. 10(a)), the training loss with channel attention is more stable, remaining around 0.10 after the 20th iteration and still showing a slight downward trend at 190 iterations. The trained models were then used to predict the same test sample set, a representative cloud and snow identification image was selected, as shown in Fig. 11, and the comparison results are given in Table 4.
TABLE 4 Comparison of model snow and cloud identification test accuracy under different channel attention schemes
[Table available only as an image in the original publication.]
As can be seen from Fig. 11 and Table 4, channel attention scheme 1 and channel attention scheme 2 reduce the per-class accuracy compared with the model without attention. The per-class pixel accuracy of channel attention scheme 3 is close to that of the no-attention model, and its cloud pixel accuracy is slightly improved, to 0.954, because it applies attention after the multi-scale features have been extracted, so the attention weighting acts on a more complete feature representation.
The invention has been described above with reference to the accompanying drawings. It is evident that its implementation is not limited to the specific embodiments described above; applying the inventive concept and technical solution to other situations without substantial modification remains within the scope of protection of the invention.

Claims (2)

1. A high-resolution remote sensing image cloud and snow identification method fusing terrain data and a deep neural network, characterized by specifically comprising the following steps:
S0, constructing a sample data set, and training a high-resolution remote sensing image cloud and snow identification model fusing terrain data and a deep neural network on the sample data; the high-resolution remote sensing image cloud and snow identification model fusing terrain data and a deep neural network consists of a terrain feature extraction network model based on a simplified residual neural network and a DeepLab v3+ semantic segmentation neural network model that introduces an attention mechanism, wherein
the backbone network of the coding region of the DeepLab v3+ semantic segmentation neural network model is an Xception network with atrous convolution; the decoding region adopts the skip-connection scheme of a fully convolutional network, low-level detail features of the coding region are reduced in dimension by convolution and fused with the deep features output by the coding region, a 1 × 1 convolution and bilinear interpolation upsampling restore the fused feature map to the size of the original image, and each pixel is classified with a Softmax activation function;
an independent terrain feature extraction network model is used to extract terrain features for the coding region of the DeepLab v3+ semantic segmentation neural network model;
the output end of the atrous spatial pyramid pooling (ASPP) layer of the coding region of the DeepLab v3+ semantic segmentation neural network model is connected to a 1 × 1 convolution layer through a fusion layer Concat, the output end of the terrain feature extraction network model is connected to the fusion layer Concat, and a channel attention module is arranged at the output end of the fusion layer Concat;
the terrain feature extraction network model consists, in order, of a dilated 3 × 3 convolutional layer, a batch normalization layer, a ReLU activation function, a dilated 3 × 3 convolutional layer, a batch normalization layer, a ReLU activation function, and three dilated residual blocks;
S1, inputting a high-resolution remote sensing image into the DeepLab v3+ semantic segmentation neural network model, and inputting the terrain data for the location covered by the image into the terrain feature extraction network model;
S2, outputting the terrain features extracted by the terrain feature extraction network model to the DeepLab v3+ semantic segmentation neural network model and fusing them with the deep features of the DeepLab v3+ model through the Concatenate method, and weighting the fused deep features across channels with the channel attention module;
and S3, outputting the cloud and snow pixels of the high-resolution remote sensing image from the DeepLab v3+ semantic segmentation neural network model.
2. The high-resolution remote sensing image cloud and snow identification method fusing terrain data and a deep neural network according to claim 1, wherein constructing the sample data specifically comprises:
performing radiometric calibration, geometric correction and atmospheric correction on the high-resolution remote sensing image;
labeling cloud and snow pixels in the processed image, and clipping the areas where cloud and snow pixels are concentrated;
and performing sample enhancement on the clipped images to form the sample data set.
CN202111386889.8A 2021-11-22 2021-11-22 High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network Active CN113936204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111386889.8A CN113936204B (en) 2021-11-22 2021-11-22 High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111386889.8A CN113936204B (en) 2021-11-22 2021-11-22 High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network

Publications (2)

Publication Number Publication Date
CN113936204A CN113936204A (en) 2022-01-14
CN113936204B true CN113936204B (en) 2023-04-07

Family

ID=79287294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111386889.8A Active CN113936204B (en) 2021-11-22 2021-11-22 High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network

Country Status (1)

Country Link
CN (1) CN113936204B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821333B (en) * 2022-05-16 2022-11-18 中国人民解放军61540部队 High-resolution remote sensing image road material identification method and device
CN115482463B (en) * 2022-09-01 2023-05-05 北京低碳清洁能源研究院 Land coverage identification method and system for generating countermeasure network mining area
CN115661655B (en) * 2022-11-03 2024-03-22 重庆市地理信息和遥感应用中心 Southwest mountain area cultivated land extraction method with hyperspectral and hyperspectral image depth feature fusion
CN116740569B (en) * 2023-06-15 2024-01-16 安徽理工大学 Deep learning-based snowfall area cloud detection system
CN117496162B (en) * 2024-01-03 2024-03-22 北京理工大学 Method, device and medium for removing thin cloud of infrared satellite remote sensing image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930817A (en) * 2016-05-05 2016-09-07 中国科学院寒区旱区环境与工程研究所 Road accumulated snow calamity monitoring and early warning method based on multisource remote sensing data
CN110197182A (en) * 2019-06-11 2019-09-03 中国电子科技集团公司第五十四研究所 Remote sensing image semantic segmentation method based on contextual information and attention mechanism
CN111127493A (en) * 2019-11-12 2020-05-08 中国矿业大学 Remote sensing image semantic segmentation method based on attention multi-scale feature fusion
CN111932567A (en) * 2020-07-30 2020-11-13 中国科学院空天信息创新研究院 Satellite image-based ice lake contour automatic extraction method
CN113658200A (en) * 2021-07-29 2021-11-16 东北大学 Edge perception image semantic segmentation method based on self-adaptive feature fusion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11461998B2 (en) * 2019-09-25 2022-10-04 Samsung Electronics Co., Ltd. System and method for boundary aware semantic segmentation
CN111223183A (en) * 2019-11-14 2020-06-02 中国地质环境监测院 Landslide terrain detection method based on deep neural network
CN111274865B (en) * 2019-12-14 2023-09-19 深圳先进技术研究院 Remote sensing image cloud detection method and device based on full convolution neural network
CN111079683B (en) * 2019-12-24 2023-12-12 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN112686903A (en) * 2020-12-07 2021-04-20 嘉兴职业技术学院 Improved high-resolution remote sensing image semantic segmentation model
CN112991351B (en) * 2021-02-23 2022-05-27 新华三大数据技术有限公司 Remote sensing image semantic segmentation method and device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930817A (en) * 2016-05-05 2016-09-07 中国科学院寒区旱区环境与工程研究所 Road accumulated snow calamity monitoring and early warning method based on multisource remote sensing data
CN110197182A (en) * 2019-06-11 2019-09-03 中国电子科技集团公司第五十四研究所 Remote sensing image semantic segmentation method based on contextual information and attention mechanism
CN111127493A (en) * 2019-11-12 2020-05-08 中国矿业大学 Remote sensing image semantic segmentation method based on attention multi-scale feature fusion
CN111932567A (en) * 2020-07-30 2020-11-13 中国科学院空天信息创新研究院 Satellite image-based ice lake contour automatic extraction method
CN113658200A (en) * 2021-07-29 2021-11-16 东北大学 Edge perception image semantic segmentation method based on self-adaptive feature fusion

Also Published As

Publication number Publication date
CN113936204A (en) 2022-01-14

Similar Documents

Publication Publication Date Title
CN113936204B (en) High-resolution remote sensing image cloud and snow identification method and device fusing terrain data and deep neural network
CN113850825B (en) Remote sensing image road segmentation method based on context information and multi-scale feature fusion
Lu et al. Multi-scale strip pooling feature aggregation network for cloud and cloud shadow segmentation
CN112668494A (en) Small sample change detection method based on multi-scale feature extraction
CN113449594B (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN113239830B (en) Remote sensing image cloud detection method based on full-scale feature fusion
CN112101363B (en) Full convolution semantic segmentation system and method based on cavity residual error and attention mechanism
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN113223042B (en) Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN117078943A (en) Remote sensing image road segmentation method integrating multi-scale features and double-attention mechanism
CN112001293A (en) Remote sensing image ground object classification method combining multi-scale information and coding and decoding network
CN115713537A (en) Optical remote sensing image cloud and fog segmentation method based on spectral guidance and depth attention
CN114821340A (en) Land utilization classification method and system
CN111738052A (en) Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning
CN115830596A (en) Remote sensing image semantic segmentation method based on fusion pyramid attention
CN116310339A (en) Remote sensing image segmentation method based on matrix decomposition enhanced global features
Byun et al. Deep Learning-Based Rainfall Prediction Using Cloud Image Analysis
CN117132884A (en) Crop remote sensing intelligent extraction method based on land parcel scale
Lv et al. Multi-scale attentive region adaptive aggregation learning for remote sensing scene classification
CN115330703A (en) Remote sensing image cloud and cloud shadow detection method based on context information fusion
Lv et al. A novel spatial–spectral extraction method for subpixel surface water
CN114694019A (en) Remote sensing image building migration extraction method based on anomaly detection
CN114821074A (en) Airborne LiDAR point cloud semantic segmentation method, electronic equipment and storage medium
CN112949771A (en) Hyperspectral remote sensing image classification method based on multi-depth multi-scale hierarchical attention fusion mechanism
Deng et al. A paddy field segmentation method combining attention mechanism and adaptive feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant