CN113591608A - High-resolution remote sensing image impervious surface extraction method based on deep learning - Google Patents
High-resolution remote sensing image impervious surface extraction method based on deep learning
- Publication number
- CN113591608A CN113591608A CN202110783897.XA CN202110783897A CN113591608A CN 113591608 A CN113591608 A CN 113591608A CN 202110783897 A CN202110783897 A CN 202110783897A CN 113591608 A CN113591608 A CN 113591608A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- impervious surface
- sensing image
- image
- resolution remote
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000605 extraction Methods 0.000 title claims abstract description 55
- 238000013135 deep learning Methods 0.000 title claims abstract description 30
- 238000012549 training Methods 0.000 claims abstract description 71
- 238000000034 method Methods 0.000 claims abstract description 51
- 238000005457 optimization Methods 0.000 claims abstract description 15
- 230000004927 fusion Effects 0.000 claims description 53
- 230000006870 function Effects 0.000 claims description 44
- 230000008569 process Effects 0.000 claims description 25
- 238000010606 normalization Methods 0.000 claims description 14
- 238000013528 artificial neural network Methods 0.000 claims description 13
- 238000012360 testing method Methods 0.000 claims description 11
- 230000006835 compression Effects 0.000 claims description 10
- 238000007906 compression Methods 0.000 claims description 10
- 238000013507 mapping Methods 0.000 claims description 7
- 238000005070 sampling Methods 0.000 claims description 6
- 238000012216 screening Methods 0.000 claims description 6
- 238000001514 detection method Methods 0.000 claims description 5
- 238000011176 pooling Methods 0.000 claims description 5
- 238000007781 pre-processing Methods 0.000 claims description 5
- 238000011084 recovery Methods 0.000 claims description 5
- 238000010586 diagram Methods 0.000 claims description 3
- 238000002372 labelling Methods 0.000 claims description 3
- 238000010276 construction Methods 0.000 abstract description 6
- 238000011160 research Methods 0.000 abstract description 6
- 238000004364 calculation method Methods 0.000 abstract description 4
- 238000013136 deep learning model Methods 0.000 abstract description 4
- 230000035945 sensitivity Effects 0.000 abstract description 2
- 230000004913 activation Effects 0.000 description 10
- 230000000694 effects Effects 0.000 description 10
- 210000002569 neuron Anatomy 0.000 description 7
- 230000002829 reductive effect Effects 0.000 description 7
- 238000012795 verification Methods 0.000 description 4
- XLYOFNOQVPJJNP-UHFFFAOYSA-N water Substances O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 description 4
- 230000009286 beneficial effect Effects 0.000 description 3
- 238000013461 design Methods 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 239000000463 material Substances 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 238000002759 z-score normalization Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000008034 disappearance Effects 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000036961 partial effect Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000001228 spectrum Methods 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000002238 attenuated effect Effects 0.000 description 1
- 238000011109 contamination Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000008021 deposition Effects 0.000 description 1
- 239000006185 dispersion Substances 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000010355 oscillation Effects 0.000 description 1
- 230000000149 penetrating effect Effects 0.000 description 1
- 238000004321 preservation Methods 0.000 description 1
- 238000000611 regression analysis Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 239000002689 soil Substances 0.000 description 1
- 239000002352 surface water Substances 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 230000003313 weakening effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a deep-learning-based method for extracting impervious surfaces from high-resolution remote sensing images. The method uses a deep learning network architecture to recast impervious surface extraction as a problem of constructing, training and optimizing a deep learning network model. It fully accounts for the characteristics of high-resolution remote sensing imagery and exploits the strong computing and feature extraction capabilities of deep learning models to extract remote sensing impervious surfaces accurately. To increase the network's sensitivity to small, detailed impervious surfaces, a feature pyramid is introduced to fuse multi-scale image information, which improves the generalization ability of the model and further raises the extraction accuracy of impervious surfaces in remote sensing images. The method achieves automatic feature extraction of impervious surfaces, and remote sensing impervious surface extraction research has very important practical application value for urban construction, the urban ecological environment and related work.
Description
Technical Field
The invention belongs to the field of information extraction from remote sensing image data, and particularly relates to a technical scheme for extracting impervious surfaces from high-resolution remote sensing images based on deep learning.
Background
In the broad sense, an impervious surface (IS) refers to any natural or man-made material that prevents surface water from infiltrating into the soil, thereby altering the flow, sediment deposition and pollution characteristics of storm runoff. In remote sensing images, an impervious surface is a surface through which water cannot rapidly penetrate into the ground, usually an artificial hard surface with impermeability such as a building roof, a parking lot or a road. Impervious surface is an important indicator for studying urban expansion and measuring the state of the urban ecological environment. Timely and accurate impervious surface information is essential for urban planning and for environment and resource management, and research on impervious surfaces is of great significance for urban ecological construction, dynamic urban monitoring and related work.
Remote sensing research on impervious surfaces has advanced considerably over the past decade, and innovative remote sensing techniques and methods for impervious surface information inversion have been proposed at home and abroad, such as spectral mixture analysis, index-based methods and regression analysis. However, owing to the limitations of spatial resolution and the high complexity of impervious surfaces, the accuracy of impervious surface extraction remains limited. Deep learning methods can make full use of computer processing power, offer strong feature representation and automation, and have achieved a series of breakthrough results in image classification, semantic segmentation, object detection and other fields. Deep learning can therefore cope with the large data volume and complex ground objects of high-resolution remote sensing imagery, and is of practical significance for urban impervious surface extraction research.
Disclosure of Invention
The invention aims to accurately extract impervious surface information from a high-resolution remote sensing image and provides a high-resolution remote sensing image impervious surface automatic extraction scheme based on deep learning.
In order to realize the purpose of the invention, the technical scheme is as follows:
a high-resolution remote sensing image impervious surface extraction method based on deep learning comprises the following steps:
S1: obtaining a high-resolution remote sensing image of a target area, and labeling the image to obtain an impervious surface label image; carrying out data preprocessing operations on the high-resolution remote sensing image and the corresponding impervious surface label image to finally obtain an impervious surface sample data set;
S2: constructing a multi-scale fusion network model for extracting features of the high-resolution remote sensing image, wherein the multi-scale fusion network model takes U-Net as a backbone network, introduces a feature pyramid into the U-Net network to fuse features of different scales in the upsampling process, and predicts the ground object category of each pixel of the input high-resolution remote sensing image based on the fused features, thereby detecting multi-scale impervious surface targets;
S3: taking the impervious surface sample data set as training data, and iteratively optimizing the parameters of the multi-scale fusion network model with a neural network optimizer by minimizing a loss function, so that the multi-scale fusion network model can accurately detect impervious surfaces in the high-resolution remote sensing image;
S4: inputting the target high-resolution remote sensing image into the trained multi-scale fusion network model, extracting the fused image features and predicting the ground object category pixel by pixel, so as to obtain the regions whose ground object category is impervious surface.
Based on the technical scheme, the steps are preferably realized in the following specific mode. The preferred implementation manners of each step can be combined correspondingly without conflict, and are not limited.
Preferably, in step S1, the impervious surface sample data set is constructed according to steps S11 to S12:
S11: marking the impervious surface areas in the high-resolution remote sensing image of the target area to generate an impervious surface label image;
S12: extracting image blocks from the high-resolution remote sensing image and the corresponding impervious surface label image with a random window of fixed size, taking the extracted high-resolution remote sensing image blocks and impervious surface label image blocks as samples, and constructing the impervious surface sample data set after sample screening, data enhancement and normalization operations.
Preferably, the impervious surface sample data set is divided in advance into a training set and a test set; the multi-scale fusion network model is trained with the training set and its accuracy is verified with the test set, and the model is used for impervious surface extraction from actual high-resolution remote sensing images once the accuracy requirement is met.
Further, the data enhancement operation includes multi-angle flipping, mirror mapping, linear stretching, and adding noise.
Further, the size of the random window is 256 × 256 pixels.
Preferably, in step S2, the multi-scale fusion network model includes a U-Net network and a feature pyramid network. The U-Net network consists of a left compression channel and a right expansion channel: the left compression channel downsamples the input high-resolution remote sensing image layer by layer and reduces the spatial dimension through pooling layers to extract remote sensing image features, while the right expansion channel upsamples layer by layer, gradually recovering image detail and spatial dimension, and is fused with the feature map of the corresponding level of the left compression channel through skip connections. The feature pyramid network, as a sub-network outside the backbone U-Net, obtains the feature map of every level of the right expansion channel during upsampling; the feature maps obtained at each level are restored to full size by bilinear interpolation, and the size-restored feature maps are then combined by channel concatenation into a fused feature map. Finally, the fused feature map is input into a softmax classifier to predict the probability that each pixel belongs to each ground object category; from the probability of each pixel in the high-resolution remote sensing image for each category, the regions belonging to the impervious surface category are determined.
Further, the number of feature channels of the concatenated feature map is adjusted with a 1 × 1 convolution kernel before softmax classification is performed.
Preferably, the Loss function is composed of a cross entropy Loss function and a Dice Loss.
Preferably, the multi-scale fusion network model is trained by using a back propagation and optimization algorithm, and the weight is continuously updated by using an Adam optimization algorithm in the training process, so that the loss function error is continuously reduced until the model tends to be stable.
Preferably, in S4, the target high-resolution remote sensing image is cut into blocks with overlapping edges, each image block is input into the trained multi-scale fusion network model to obtain an output classification block, and all classification blocks are stitched in order under an ignore-edge strategy to obtain the ground object classification result of the complete image, from which the regions whose ground object category is impervious surface can be extracted.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention uses the superior fitting and computing capacity of modern artificial intelligence methods to recast impervious surface extraction as a deep learning network model construction and training optimization problem, providing a new data-driven approach to remote sensing impervious surface extraction.
(2) The method fully considers the high spatial resolution of high-resolution remote sensing imagery and exploits the computing and feature extraction capability of deep learning: an improved multi-scale fusion network model is built on a U-Net model with an introduced feature pyramid network, which realizes multi-scale feature fusion of the remote sensing image, suits the extraction of multi-scale image information for impervious surfaces in high-resolution remote sensing images, and improves the extraction accuracy of small impervious surface targets.
(3) The invention actively learns the characteristics of the targets in the training data and extracts, analyses and applies spectral and spatial information from the input image data, avoiding the low efficiency and complicated operation of manual feature selection; it realizes automatic impervious surface extraction and finally obtains accurate impervious surface information. The method improves the accuracy evaluation in terms of Overall Accuracy (OA), Recall, F1-score, Mean Intersection over Union (MIoU) and Kappa Coefficient, is simple to operate and achieves an excellent extraction effect. The invention provides a demonstration and reference for accurate and automatic remote sensing impervious surface extraction research.
Drawings
FIG. 1 is a flow chart of a high-resolution remote sensing image impervious surface extraction method based on deep learning;
FIG. 2 is a schematic diagram of feature map fusion in the feature pyramid;
FIG. 3 is a comparison of the prediction results based on the deep learning model to the impervious surface label image.
Detailed Description
The invention will be further elucidated and described with reference to the drawings and the detailed description.
A high-resolution remote sensing image generally refers to a remote sensing image with meter-level or sub-meter-level resolution; the complexity of urban land use and the diversity of impervious surface materials make it challenging to extract impervious surfaces directly from such imagery. For the task of extracting urban impervious surfaces from high-resolution remote sensing images, existing remote sensing impervious surface extraction research struggles to reach high accuracy, efficiency and degree of automation. The idea of the invention is as follows: a deep neural network architecture from artificial intelligence is used to recast impervious surface extraction as a deep learning network model construction and training optimization problem. The remote sensing image and its impervious surface label image serve as the basic input units, global optimization and inter-class spatial relationship information are introduced as constraints, and a deep learning model is trained to extract impervious surfaces accurately. Within this technical route, the main problems to be solved are how to design the structure of the deep learning network model and how to train it so that impervious surfaces can be accurately extracted from complex remote sensing images. In the invention, an impervious surface extraction network is designed based on a U-Net model and a feature pyramid network, yielding an improved multi-scale fusion network model that realizes multi-scale feature fusion of remote sensing images and is suitable for impervious surface extraction from high-resolution remote sensing imagery; meanwhile, based on the proposed multi-scale fusion network model, the invention designs a network training framework, selects a model training strategy and configures the network hyper-parameters.
As shown in fig. 1, a preferred embodiment of the invention provides the flow of a deep-learning-based method for extracting impervious surfaces from high-resolution remote sensing images, comprising four steps, S1 to S4:
S1: obtaining a high-resolution remote sensing image of a target area, and labeling the image to obtain an impervious surface label image; data preprocessing operations are carried out on the high-resolution remote sensing image and the corresponding impervious surface label image to finally obtain an impervious surface sample data set.
S2: constructing a multi-scale fusion network model for extracting features of the high-resolution remote sensing image. The multi-scale fusion network model takes U-Net as the backbone network, introduces a feature pyramid into the U-Net network to fuse features of different scales in the upsampling process, and predicts the ground object category of each pixel of the input high-resolution remote sensing image based on the fused features, thereby detecting multi-scale impervious surface targets.
S3: taking the impervious surface sample data set as training data, and iteratively optimizing the parameters of the multi-scale fusion network model with a neural network optimizer by minimizing a loss function, so that the multi-scale fusion network model can accurately detect impervious surfaces in the high-resolution remote sensing image.
S4: inputting the target high-resolution remote sensing image into the trained multi-scale fusion network model, extracting the fused image features and predicting the ground object category pixel by pixel, so as to obtain the regions whose ground object category is impervious surface.
Steps S1 to S4 realize accurate extraction of impervious surfaces from high-resolution remote sensing images with the multi-scale fusion network model. The specific implementation of S1 to S4 in this embodiment, and its effects, are described in detail below.
A high-resolution remote sensing image is large, and feeding the whole image directly into the multi-scale fusion network model would place heavy pressure on computer memory because of the excessive data volume and could cause memory overflow. This problem is usually addressed by image down-sampling or image cropping. However, down-sampling loses the spatial detail of the image, which affects the final extraction accuracy. The image therefore needs to be cut into image blocks of appropriate size before being input into the network, so that small but meaningful features carried by a few adjacent pixels, such as the texture and contour features of impervious surfaces, can still be detected.
In this embodiment, regular-grid, sliding-window and random-window sample selection schemes may be considered. The regular-grid scheme cuts the image along a regular grid of fixed size. It guarantees that all regions are completely traversed, but the number of samples obtained is fixed and there are limitations in learning features between ground objects, which affects the model training result. The sliding-window scheme cuts images with a fixed-size window at a certain interval and in a certain order; it lets the network learn inter-object features more fully, but a suitable window size is difficult to set: too large a window easily ignores information, and too small a window easily causes information redundancy. The random-window scheme cuts out a series of samples at random with a fixed-size window, placing no restriction on the spatial position of the selected data; it is flexible and efficient, enhances sample randomness, and yields a larger number of training samples than regular-grid selection. The cut samples then undergo sample screening to remove unqualified samples, which can be done manually or with machine assistance.
In addition, with limited data, data enhancement improves the quantity and quality of the existing training data so that the network can be trained better and overfitting is avoided. For remote sensing images, data enhancement operations such as geometric transformation help the model learn the features between ground objects better, because remote sensing images acquired at different sensor angles show different image characteristics, such as different ground object distributions and forms. Remote sensing image enhancement improves image information and quality, makes image features more obvious, and strengthens the recognition of impervious surfaces in the image. Geometric transformation, color space enhancement, kernel filters and other image enhancement methods can therefore be introduced in this embodiment.
In addition, the pixel values of remote sensing image data usually lie in the interval [0, 255]. When the data distribution range is large, the process by which the deep learning model seeks the optimal solution becomes slow or may even fail to converge, so a data normalization operation is required to keep the data distribution within a certain range. Data normalization is an important preprocessing step before model training: it unifies the dimension of the data, transforms data of different orders of magnitude to the same order of magnitude, weakens the influence of large variable values on model convergence, and helps improve the convergence speed and classification accuracy of the model.
The commonly used normalization methods are min-max normalization and z-score normalization. Min-max normalization, also called dispersion normalization, maps data values into [0, 1] and is suited to data distributed within a limited range. The formula is:

x' = (x − x_min) / (x_max − x_min)

where x_max is the maximum value of the sample data and x_min is the minimum value of the sample data.
z-score normalization standardizes the data using the mean and standard deviation of the raw data and is applicable when there are no obvious boundaries. Data after z-score normalization follow the standard normal distribution, i.e. the mean is 0 and the standard deviation is 1. The formula is:

x' = (x − μ) / σ

where μ is the mean of the data and σ is the standard deviation of the data.
In this embodiment, min-max normalization is selected to preprocess the training and verification images, reducing the data range of each channel from the interval [0, 255] to the interval [0, 1].
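As an illustration only, a minimal NumPy sketch of this per-channel min-max preprocessing (the function name and the per-band handling are assumptions, not code from the patent):

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Scale each channel of an (H, W, C) image from its [min, max] range into [0, 1]."""
    image = image.astype(np.float32)
    out = np.zeros_like(image)
    for c in range(image.shape[-1]):
        band = image[..., c]
        x_min, x_max = band.min(), band.max()
        # Guard against constant bands to avoid division by zero.
        out[..., c] = (band - x_min) / (x_max - x_min) if x_max > x_min else 0.0
    return out
```

For 8-bit imagery whose bands span the full [0, 255] range, this is equivalent to dividing by 255.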
Based on the above considerations, in step S1 of this embodiment, the impervious surface sample data set may be constructed from the obtained high-resolution remote sensing image according to the following steps S11 to S12:
S11: marking the impervious surface areas in the whole high-resolution remote sensing image of the target area to generate an impervious surface label image;
S12: extracting image blocks from the high-resolution remote sensing image and the corresponding impervious surface label image with a random window of fixed size 256 × 256 pixels (randomly positioned, with no restriction on the window location), taking the extracted high-resolution remote sensing image blocks and impervious surface label image blocks as samples, and constructing the impervious surface sample data set after sample screening, data enhancement (including multi-angle flipping, mirror mapping, linear stretching and adding noise) and min-max normalization. A sketch of this sampling step is given below.
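For illustration, a minimal sketch of the random-window sample extraction with simple screening and geometric augmentation (helper names, the impervious-pixel threshold and the augmentation subset are assumptions; the label is assumed to be a 0/1 mask, and the normalization helper sketched earlier is reused):

```python
import numpy as np

def random_window_samples(image, label, window=256, n_samples=1000,
                          min_is_ratio=0.05, rng=None):
    """Cut random window×window image/label patches; drop patches with too few impervious pixels."""
    rng = rng or np.random.default_rng()
    h, w = label.shape
    samples = []
    while len(samples) < n_samples:          # assumes enough impervious area exists to fill the quota
        r = rng.integers(0, h - window)
        c = rng.integers(0, w - window)
        img_patch = image[r:r + window, c:c + window]
        lab_patch = label[r:r + window, c:c + window]
        if lab_patch.mean() < min_is_ratio:  # sample screening: skip nearly all-pervious patches
            continue
        # Simple geometric augmentation: random vertical / horizontal flips.
        if rng.random() < 0.5:
            img_patch, lab_patch = np.flipud(img_patch), np.flipud(lab_patch)
        if rng.random() < 0.5:
            img_patch, lab_patch = np.fliplr(img_patch), np.fliplr(lab_patch)
        samples.append((min_max_normalize(img_patch), lab_patch))
    return samples
```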
Considering the requirements of model training, the impervious surface sample data set can be divided in advance into a training set and a test set; the multi-scale fusion network model is trained with the training set and its accuracy is verified with the test set, and the model is used for impervious surface extraction from actual high-resolution remote sensing images once the accuracy requirement is met.
The multi-scale fusion network model actually comprises two parts, wherein the first part is a U-Net network which is used as a backbone network, and the second part is a feature pyramid network which is used as a functional sub-network and is used for performing additional fusion on part of output features of the backbone network and extracting shallow layer feature information.
The U-Net network belongs to the prior art; for convenience of description its structure is briefly introduced as follows. The U-Net network consists of a left compression channel and a right expansion channel; the two sides are symmetric and the overall shape is close to a "U". The compression channel downsamples the input high-resolution remote sensing image layer by layer, reducing the spatial dimension through pooling layers to extract remote sensing image features; each network level is composed of convolution layers and pooling layers of different types and numbers. The input image of the left compression channel is 256 × 256 with 3 channels, and after 4 levels of convolution and pooling a high-dimensional feature map of size 16 × 16 with 1024 channels is obtained. The high-dimensional feature map then enters the right expansion channel and is upsampled layer by layer, gradually recovering image detail and spatial dimension, and is fused with the feature map of the corresponding level of the left compression channel through skip connections. The feature pyramid network in the invention acts as a sub-network outside the backbone U-Net and is joined to U-Net through the right expansion channel of the U-Net network. The feature pyramid network obtains the feature map of every level of the right expansion channel during upsampling; the feature maps of different levels have different sizes and thus form a feature pyramid. With this network structure design the low-dimensional features are enhanced to a certain extent: the feature pyramid, as a function-enhancing sub-network, extracts shallow image information and fuses it into the feature maps extracted by the backbone network, avoiding an imbalance of feature information and thereby achieving a better extraction effect.
In addition, because the feature maps in the feature pyramid have different sizes, a size recovery operation is required. Most current methods pass the feature pyramid to the corresponding upsampling layers and then gradually restore the image size through convolution operations. However, because impervious surfaces in remote sensing images vary greatly in shape and size, repeated convolution operations can confuse impervious surface features with background ground object features, which is not conducive to accurate extraction. To address this, the invention restores the feature maps of the feature pyramid to the original size by bilinear interpolation rather than repeating the decoding process of the backbone network. Compared with deconvolution, this preserves the original features more completely and further avoids information loss. As shown in fig. 2, the feature maps of different levels are restored by bilinear interpolation to the same original size as the input image, and the size-restored feature maps are then fused by channel concatenation to form a fused feature map that carries multi-dimensional information for the subsequent classification mapping. Note that channel concatenation does not change the spatial size but increases the number of channels, so the number of channels of the concatenated map is adjusted with a 1 × 1 convolution kernel to obtain the final fused feature map. The fused feature map is input into a softmax classifier to predict the probability that each pixel belongs to each ground object category; from the probability of each pixel in the high-resolution remote sensing image for each category, the regions belonging to the impervious surface category are determined. The specific ground object categories can be adjusted to the actual situation, so the softmax classifier may be a multi-class or a binary classifier; however, since the invention extracts impervious surfaces, the softmax classification should include a class label for the impervious surface category.
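As an illustrative sketch only (the patent publishes no code; PyTorch, the channel widths and class names are assumptions), the pyramid fusion head on top of a U-Net decoder could be organized like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusionHead(nn.Module):
    """Fuse decoder feature maps of several scales into one map for per-pixel classification."""
    def __init__(self, decoder_channels=(512, 256, 128, 64), num_classes=2):
        super().__init__()
        # A 1 × 1 convolution adjusts the channel count of the concatenated feature map.
        self.fuse = nn.Conv2d(sum(decoder_channels), num_classes, kernel_size=1)

    def forward(self, decoder_feats, out_size):
        # Restore every pyramid level to the input size by bilinear interpolation
        # (instead of repeating the decoder's convolutions/deconvolutions).
        upsampled = [F.interpolate(f, size=out_size, mode="bilinear", align_corners=False)
                     for f in decoder_feats]
        fused = torch.cat(upsampled, dim=1)   # channel concatenation: same size, stacked channels
        logits = self.fuse(fused)
        return torch.softmax(logits, dim=1)   # per-pixel class probabilities
```

Here `decoder_feats` would be the feature maps taken from each level of the U-Net expansion channel, and `out_size` the 256 × 256 input size.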
For the multi-class case, suppose the image pixels are to be classified into C classes. For each pixel i (i = 1, 2, ..., N) in a sample image, where N is the total number of pixels, its true class label is denoted y_i, and the C-dimensional output feature vector obtained by forward propagation is denoted z_i. Finding the optimal model parameters can be translated into narrowing the gap between the predicted output values and the ground-truth labels. For multi-class problems, the softmax function is usually used to map the feature vector, converting the linear prediction values of all classes into probability values; the predicted probability that pixel i belongs to class c is computed as:

p_{i,c} = exp(z_{i,c}) / Σ_{k=1}^{C} exp(z_{i,k})
in this embodiment, if only the impervious surface needs to be extracted, two classifiers may be used. For the two-classifier, the final output is a 2-dimensional feature map representing the two classification probabilities of each pixel belonging to and not belonging to the impervious surface. And solving the dimension to which the maximum probability value belongs, namely the pixel class label by using the argmax function.
In the training process of the invention, the loss function and the neural network optimizer can be adjusted and optimized according to the actual situation. In this embodiment, the total Loss function may be preferably composed of a cross entropy Loss function and a Dice Loss function. And training the multi-scale fusion network model by using a back propagation and optimization algorithm, and continuously updating the weight by using an Adam optimization algorithm in the training process to continuously reduce the total loss function error until the model tends to be stable.
During model prediction, to prevent memory overflow, the image to be classified is usually cut into fixed-size image blocks that are predicted separately and then stitched back into a whole image. However, because the convolution operations pad the borders of each image block with 0, the prediction accuracy of border pixels is lower than that of central pixels, and the stitched classification image shows obvious seams. To obtain better prediction results, the invention can ignore edge predictions: a sliding window is used to obtain image blocks with a certain overlapping area, with overlapping pixels in both the horizontal and vertical directions of the sliding window; the overlapping border pixels are later discarded and the non-overlapping central part is retained. For each predicted image block only the classification result of the central non-overlapping area is kept and the inaccurate edge results are discarded; the blocks are then stitched in order, which avoids obvious seams and improves the image prediction effect.
Accordingly, step S4 is implemented as follows: the target high-resolution remote sensing image is cut into blocks with overlapping edges, each image block is input into the trained multi-scale fusion network model to obtain an output classification block, and all classification blocks are stitched in order under the ignore-edge strategy to obtain the ground object classification result of the complete image, from which the regions whose ground object category is impervious surface are extracted.
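A minimal sketch of this ignore-edge tiling prediction, assuming a per-tile `predict_fn` that returns a (tile, tile) class map; the tile size, overlap width and border handling are illustrative assumptions:

```python
import numpy as np

def predict_large_image(predict_fn, image, tile=256, overlap=32):
    """Predict a large image tile by tile, keeping only each tile's central region."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    step = tile - 2 * overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            t0, l0 = min(top, h - tile), min(left, w - tile)        # clamp to the image border
            pred = predict_fn(image[t0:t0 + tile, l0:l0 + tile])    # (tile, tile) class map
            # Discard the unreliable border band of each tile, keep the central part.
            out[t0 + overlap:t0 + tile - overlap, l0 + overlap:l0 + tile - overlap] = \
                pred[overlap:tile - overlap, overlap:tile - overlap]
    # Note: in this simplified sketch the outermost `overlap` pixels of the full image
    # are left unfilled and would need separate handling in practice.
    return out
```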
The following is based on the above embodiment method, and the effect is shown by applying it to a specific example. The specific process is as described above, and is not described again, and the specific parameter setting and implementation effect are mainly shown below.
Examples
The invention is described in detail below by taking a high-resolution remote sensing image of Hangzhou as an example, and the specific steps are as follows:
1) Google Earth high-resolution satellite image data of the main urban area of Hangzhou are adopted. Three high-resolution remote sensing images of 8320 × 8320 pixels within the main urban area of Hangzhou are selected as training samples, and one image of 8320 × 8320 pixels is selected as a test sample. The images contain various complex ground objects such as vegetation, roads, buildings, water bodies and bare land.
According to step S1, the selected high-resolution remote sensing images are labeled and divided into two categories, impervious surface and pervious surface, according to the ground truth, giving the corresponding impervious surface label data. A window of fixed size 256 × 256 pixels is used to cut out a series of samples at random; the random-window selection scheme, which does not restrict the spatial position of the selected data, is adopted to crop the remote sensing images and their corresponding label data, avoiding the memory pressure and overflow that would result from feeding them into the network directly. Data enhancement operations, namely multi-angle flipping, mirror mapping, linear stretching and adding noise, are applied to the cropped training samples to enhance robustness and reduce the sensitivity of the data. Images with too low a ratio of impervious surface pixels are screened out so that positive and negative samples in the data set are evenly distributed. The impervious surface data set finally obtained is randomly divided into a training set and a verification set at a ratio of 8:2.
2) A multi-scale fusion network model for high-resolution remote sensing image feature extraction is constructed according to step S2. The model takes U-Net as the backbone network, introduces a feature pyramid into the U-Net network to fuse features of different scales in the upsampling process, and predicts the ground object category of each pixel of the input high-resolution remote sensing image based on the fused features, thereby detecting multi-scale impervious surface targets; the specific model structure is as described above. In addition, a Batch Normalization (BN) layer is introduced in this example. Batch Normalization, placed as a network layer before the activation function, adjusts the data distribution of each input batch, enhances the nonlinear expression ability of the model and accelerates training. Meanwhile, in the network model the feature maps of higher levels are fused to enhance the low-dimensional information, and this fusion can amplify features and lead to overfitting. During training of a deep learning network, a portion of neurons can be temporarily dropped at random with probability p; the dropped neurons are temporarily treated as not being part of the network structure, but their weights are retained because they may take part in subsequent training. This improves the generalization ability of the network and prevents overfitting. Therefore, Dropout is introduced in this example to deactivate part of the neurons and prevent overfitting to a certain extent. A Dropout layer with a drop rate of 0.5 is added after the fourth-level convolution operations of the U-Net network, i.e. neurons are dropped with probability 0.5 in each training iteration. A sketch of such a building block is given below.
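As a sketch of how these pieces could fit together (the exact layer arrangement beyond what is described above is an assumption): a convolution block with Batch Normalization before the ELU activation, and optional Dropout, e.g. 0.5 at the fourth encoder level.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, dropout: float = 0.0) -> nn.Sequential:
    """Two 3×3 convolutions, each followed by Batch Normalization and an ELU activation."""
    layers = [
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),    # BN placed before the activation function
        nn.ELU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ELU(inplace=True),
    ]
    if dropout > 0:
        layers.append(nn.Dropout2d(dropout))   # e.g. dropout=0.5 after the fourth-level convolutions
    return nn.Sequential(*layers)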
3) A model training framework is constructed. Within the training framework, the model input is first built from the training data, a Loss function combining a cross entropy loss function and a Dice Loss function is defined to evaluate the error between the model predictions and the true values, and a neural network training optimizer is constructed to optimize the training parameters; training stops when the loss value reaches a certain threshold. The training framework of this example is described in detail below:
(1) Loss function
The Loss function is constructed by combining a cross entropy loss function and the Dice Loss function: the cross entropy evaluates the difference between the probability distribution of the current training and the true distribution, while the Dice Loss achieves a better training effect when the sample classes are imbalanced.
Wherein the cross entropy loss function is:

L_CE = -(1/n) Σ_x [ y ln(a) + (1 − y) ln(1 − a) ]

where x denotes the sample, y denotes the actual label, a denotes the predicted output, and n denotes the total number of samples.
The Dice Coefficient and Dice Loss are defined as:

Dice = 2|X ∩ Y| / (|X| + |Y|),  L_Dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

where |X ∩ Y| is the intersection of X and Y, and |X| and |Y| denote the number of elements of X and Y respectively; because the denominator counts the common elements of X and Y twice, the numerator is multiplied by 2 so that the value stays in the range [0, 1].
The total Loss function is the sum of the cross entropy Loss function and the Dice Loss.
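A minimal PyTorch sketch of this combined loss for the binary case, using the probability a and label y as defined above (the smoothing constant eps is an added assumption for numerical stability, not part of the patent's formulas):

```python
import torch

def combined_loss(pred_prob: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Total loss = cross entropy + Dice Loss, computed on probabilities in [0, 1]."""
    pred_prob = pred_prob.reshape(-1)
    target = target.reshape(-1).float()
    # Cross entropy: -(1/n) * sum( y*ln(a) + (1-y)*ln(1-a) )
    ce = -(target * torch.log(pred_prob + eps)
           + (1 - target) * torch.log(1 - pred_prob + eps)).mean()
    # Dice Loss: 1 - 2|X ∩ Y| / (|X| + |Y|)
    intersection = (pred_prob * target).sum()
    dice = (2 * intersection + eps) / (pred_prob.sum() + target.sum() + eps)
    return ce + (1 - dice)
```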
(2) Activation function
The activation function plays a crucial role in increasing the nonlinearity of the neural network model and improving the expression capability of the model. The activation function is responsible for mapping the input of the artificial neural network neuron to the output end, and the nonlinear mapping is completed by introducing nonlinear elements into the neural network, so that the complex problem can be solved better. Nonlinear activation functions play a significant role in the development of neural networks.
The classical U-Net network uses the Rectified Linear Unit (ReLU) as its activation function. ReLU is widely used because it is simple to compute, efficient and converges quickly. However, because its gradient is set to zero when the input is less than 0, neurons can be inhibited and their weights can no longer be updated, so the model fails to learn valid features. In this example, the Exponential Linear Unit (ELU) is used as the activation function instead. For x greater than 0 the ELU is linear and alleviates gradient vanishing; for x less than 0 it has a soft-saturation characteristic, which improves robustness to input changes and speeds up network convergence.
Assuming that the output of a node is x, its output f(x) after the ELU layer is given by the following formula; that is, for x less than zero the ELU activation output follows an exponential form:

f(x) = x,            if x > 0
f(x) = α(e^x − 1),   if x ≤ 0

where α is a positive constant.
the ELU activation function can enable the output mean value to approach 0, thereby improving the network convergence speed, effectively relieving the problem of gradient disappearance and having robustness to noise.
(3) Optimizer
The neural network optimizer is used for updating the neural network parameter variables in the model to approach or reach an optimal value, so that the loss function result is minimized. In this example, the network parameters are trained using an adaptive moment estimation optimizer (Adam).
(4) Optimization algorithm
In this example, a gradient update optimization strategy such as learning rate decay may be introduced. During model training, a relatively high learning rate is first maintained so that the model keeps a good convergence speed; the learning rate is then gradually decayed over the training iterations so that training can escape local optima, and when training converges near the optimum a small learning rate prevents back-and-forth oscillation and helps the model converge, which benefits model refinement. In this example, step decay is used to dynamically reduce the learning rate during training: after T_1, T_2, ..., T_m iterations the learning rate is reduced to β_1, β_2, ..., β_m times its original value, i.e. it is decreased by a fixed proportion after a certain number of training epochs.
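An illustrative sketch of the Adam optimizer with a step-decay schedule; the initial learning rate 0.01 and 100 epochs come from the hyper-parameter settings below, while the milestones, decay factor and training routine names are assumptions:

```python
import torch

num_epochs = 100   # training rounds as in this example's hyper-parameter settings
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# Step decay: multiply the learning rate by 0.1 after epochs 30 and 60 (illustrative T_1, T_2, β values).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)   # hypothetical training routine over the training set
    scheduler.step()                    # decay the learning rate on schedule
```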
(5) Early stopping strategy
The early-stopping strategy is one of the regularization methods used to prevent model overfitting during network training; it removes the need to set the number of epochs manually and helps the model obtain better generalization performance. It is simple, effective, and widely used in model training. During training, continued training may eventually reduce accuracy on the test set, and the early-stopping strategy stops training when the loss of the model on the training or verification set no longer decreases. In this example, the training process is controlled with an early-stopping strategy: when the error of the model on the verification set no longer decreases, training is stopped after 10 further epochs.
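A minimal sketch of this early-stopping logic with a patience of 10 epochs as stated; the helper routines, loader and file name are illustrative assumptions:

```python
import torch

best_loss, patience, wait = float("inf"), 10, 0
for epoch in range(max_epochs):
    train_one_epoch(model, optimizer)         # hypothetical training step
    val_loss = evaluate(model, val_loader)    # hypothetical validation routine
    if val_loss < best_loss:
        best_loss, wait = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")   # keep the best model so far
    else:
        wait += 1
        if wait >= patience:   # stop after 10 epochs without improvement
            break
```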
(6) Hyper-parameter settings
The hyper-parameters include the image block size patch_size, the batch size batch_size, the learning rate, the number of training epochs, and so on. Hyper-parameter settings for model training need to be determined comprehensively according to the data content, data volume and computer hardware resources. In this example, patch_size is set to 256 × 256 pixels, batch_size is set to 16, the initial learning rate is 0.01, and the number of training epochs is set to 100.
4) According to step S3, the training set is input into the multi-scale fusion network model with the constructed loss function, the parameter variables of the model are optimized by the constructed neural network optimizer, and iterative training proceeds continuously. Training finally terminates under the early-stopping strategy at epoch 97, reaching an optimal loss value of 0.091, and the best model is saved.
In order to verify the prediction effect of the optimal model, feature extraction is carried out on the images of the test set by using the model obtained by training, and pixel-by-pixel prediction is carried out by using the extracted image features to realize impervious surface extraction.
The method of the invention for extracting impervious surfaces from high-resolution remote sensing images achieves good results in the accuracy evaluation of Overall Accuracy (OA), Recall, F1-score, Mean Intersection over Union (MIoU) and Kappa Coefficient, as shown in Table 1 below.
TABLE 1
OA | Recall | F1-score | MIoU | Kappa
---|---|---|---|---
95.52% | 0.9351 | 0.9378 | 0.8712 | 0.91
To display the impervious surface extraction effect intuitively, the impervious surface extraction results on the test set samples are obtained with the deep-learning-based high-resolution remote sensing image impervious surface extraction method. Fig. 3 shows part of the test results, comparing representative impervious surface extraction images with the ground-truth impervious surface images; the underlying surfaces include different ground objects such as buildings, roads, vegetation and water bodies. It can be seen that the method clearly separates impervious surfaces such as buildings, roads and bridges from pervious surfaces such as water, farmland and vegetation, distinguishes bare land from impervious surfaces well, and extracts impervious surface information accurately. The method also extracts small impervious surface targets relatively accurately.
The overall accuracy of the method reaches 95.52%, the recall, F1-score and MIoU are 0.9351, 0.9378 and 0.8712 respectively, and the Kappa coefficient is 0.91. With such satisfactory accuracy indexes, the method can be applied to impervious surface extraction from actual high-resolution remote sensing images and has very important practical application value.
When the model is applied, according to step S4, the target high-resolution remote sensing image is cut into blocks following the ignore-edge prediction scheme, the blocks are input into the trained multi-scale fusion network model to extract the fused image features and predict the ground object category pixel by pixel, and the results are then stitched under the ignore-edge strategy to obtain the regions whose ground object category is impervious surface.
The above-described embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.
Claims (10)
1. A high-resolution remote sensing image impervious surface extraction method based on deep learning is characterized by comprising the following steps:
S1: obtaining a high-resolution remote sensing image of a target area, and labeling the image to obtain an impervious surface label image; carrying out data preprocessing operations on the high-resolution remote sensing image and the corresponding impervious surface label image to finally obtain an impervious surface sample data set;
S2: constructing a multi-scale fusion network model for extracting features of the high-resolution remote sensing image, wherein the multi-scale fusion network model takes U-Net as a backbone network, introduces a feature pyramid into the U-Net network to fuse features of different scales in the upsampling process, and predicts the ground object category of each pixel of the input high-resolution remote sensing image based on the fused features, thereby detecting multi-scale impervious surface targets;
S3: taking the impervious surface sample data set as training data, and iteratively optimizing the parameters of the multi-scale fusion network model with a neural network optimizer by minimizing a loss function, so that the multi-scale fusion network model can accurately detect impervious surfaces in the high-resolution remote sensing image;
S4: inputting the target high-resolution remote sensing image into the trained multi-scale fusion network model, extracting the fused image features and predicting the ground object category pixel by pixel, so as to obtain the regions whose ground object category is impervious surface.
2. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: in step S1, the impervious surface sample data set is constructed according to steps S11 to S12:
S11: marking the impervious surface areas in the high-resolution remote sensing image of the target area to generate an impervious surface label image;
S12: extracting image blocks from the high-resolution remote sensing image and the corresponding impervious surface label image with a random window of fixed size, taking the extracted high-resolution remote sensing image blocks and impervious surface label image blocks as samples, and constructing the impervious surface sample data set after sample screening, data enhancement and normalization operations.
3. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: the impervious surface sample data set is divided in advance into a training set and a test set; the multi-scale fusion network model is trained with the training set and its accuracy is verified with the test set, and the model is used for impervious surface extraction from actual high-resolution remote sensing images once the accuracy requirement is met.
4. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 2, characterized in that: the data enhancement operations include multi-angle flipping, mirror mapping, linear stretching, and adding noise.
5. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 2, characterized in that: the size of the random window is 256 × 256 pixels.
6. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: in step S2, the multi-scale fusion network model includes a U-Net network and a feature pyramid network, wherein the U-Net network consists of a left compression channel and a right expansion channel; the left compression channel downsamples the input high-resolution remote sensing image layer by layer and reduces the spatial dimension through pooling layers to extract remote sensing image features, and the right expansion channel upsamples layer by layer to gradually restore image details and spatial dimension and is fused with the feature map of the corresponding level of the left compression channel through skip connections; the feature pyramid network, as a sub-network outside the backbone U-Net, obtains the feature map of every level of the right expansion channel during upsampling, restores the feature maps obtained at each level to full size by bilinear interpolation, and then combines the size-restored feature maps by channel concatenation into a fused feature map; finally, the fused feature map is input into a softmax classifier to predict the probability that each pixel belongs to each ground object category, and from the probability of each pixel in the high-resolution remote sensing image for each category, the regions belonging to the impervious surface category are determined.
7. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 6, characterized in that: a 1 × 1 convolution kernel is used to adjust the number of feature channels of the channel-spliced feature map before softmax classification.
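For orientation only, a compact PyTorch sketch in the spirit of claims 6 and 7: a U-Net-style encoder/decoder, a pyramid branch that restores every decoder-level feature map to the input size by bilinear interpolation, channel-wise splicing, and a 1 × 1 convolution before classification. Layer widths, depth and the exact wiring are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class MultiScaleFusionNet(nn.Module):
    """U-Net backbone; decoder feature maps are resized back to the input size with
    bilinear interpolation, concatenated, compressed with a 1x1 convolution and
    classified per pixel (softmax is applied in the loss / at inference)."""
    def __init__(self, in_ch=4, n_classes=2, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        # left compression channel (encoder)
        self.enc = nn.ModuleList([conv_block(in_ch, chs[0])] +
                                 [conv_block(chs[i], chs[i + 1]) for i in range(3)])
        self.pool = nn.MaxPool2d(2)
        # right expansion channel (decoder) with skip connections
        self.up = nn.ModuleList([nn.ConvTranspose2d(chs[i + 1], chs[i], 2, stride=2)
                                 for i in reversed(range(3))])
        self.dec = nn.ModuleList([conv_block(chs[i] * 2, chs[i])
                                  for i in reversed(range(3))])
        # 1x1 convolution to adjust the channels of the fused multi-scale features
        self.fuse = nn.Conv2d(sum(chs[:3]), n_classes, kernel_size=1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = []
        for i, enc in enumerate(self.enc):
            x = enc(x if i == 0 else self.pool(x))
            feats.append(x)
        x, pyramid = feats[-1], []
        for up, dec, skip in zip(self.up, self.dec, reversed(feats[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))
            pyramid.append(x)                     # one feature map per decoder level
        # pyramid side branch: restore every decoder level to the input size
        restored = [F.interpolate(p, size=size, mode="bilinear", align_corners=False)
                    for p in pyramid]
        fused = torch.cat(restored, dim=1)        # channel-wise splicing
        return self.fuse(fused)                   # per-pixel class logits
```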
8. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: the loss function consists of a cross entropy loss and a Dice loss.
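A plausible form of the combined loss in claim 8; the equal weighting of the two terms and the foreground-only Dice formulation are assumptions:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss on the foreground (impervious surface) probability.
    `logits` is (N, C, H, W); `target` is (N, H, W) with class indices."""
    prob = F.softmax(logits, dim=1)[:, 1]                 # foreground probability
    target = target.float()
    inter = (prob * target).sum(dim=(1, 2))
    union = prob.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def combined_loss(logits, target, w_dice=1.0):
    """Cross entropy plus Dice; the weight `w_dice` is an illustrative assumption."""
    return F.cross_entropy(logits, target) + w_dice * dice_loss(logits, target)
```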
9. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: the multi-scale fusion network model is trained by back propagation, with the Adam optimization algorithm continuously updating the weights during training so that the loss function error keeps decreasing until the model converges.
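A minimal training loop consistent with claim 9, assuming the `combined_loss` sketch above and a standard PyTorch data loader; epoch count and learning rate are assumptions:

```python
import torch

def train(model, loader, epochs=100, lr=1e-3, device="cuda"):
    """Back-propagation training with the Adam optimizer."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        running = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device).long()
            optimizer.zero_grad()
            loss = combined_loss(model(images), labels)   # cross entropy + Dice
            loss.backward()                               # back propagation
            optimizer.step()                              # Adam weight update
            running += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running / len(loader):.4f}")
```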
10. The deep learning-based high-resolution remote sensing image impervious surface extraction method according to claim 1, characterized in that: in the step S4, the target high-resolution remote sensing image is cut into blocks with overlapping edges, each image block is input into the trained multi-scale fusion network model to obtain the output classified image block, and all classified image blocks are then stitched in order with an edge-ignoring strategy to obtain the ground feature classification result of the complete image, from which the regions classified as impervious surface are extracted.
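The overlapping tiling and edge-ignoring stitching of claim 10 could look roughly like the sketch below; the tile size, margin width and the `predict_tiled` helper are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn.functional as F

def predict_tiled(model, image, tile=256, margin=32, device="cuda"):
    """Overlapping tiling inference: adjacent tiles overlap by 2*margin pixels and a
    margin-wide border of every tile prediction is ignored when stitching, so that
    block-edge artefacts do not appear in the final map. Assumes `image` (bands, H, W)
    is larger than one tile."""
    model.to(device).eval()
    _, h, w = image.shape
    step = tile - 2 * margin
    out = np.zeros((h, w), dtype=np.uint8)
    with torch.no_grad():
        for y in range(0, h, step):
            for x in range(0, w, step):
                y0, x0 = min(y, h - tile), min(x, w - tile)    # clamp at the borders
                block = torch.from_numpy(image[:, y0:y0 + tile, x0:x0 + tile]).float()
                logits = model(block.unsqueeze(0).to(device))
                pred = F.softmax(logits, dim=1).argmax(dim=1)[0].cpu().numpy()
                # keep only the central region unless the tile touches the image edge
                ya = 0 if y0 == 0 else margin
                xa = 0 if x0 == 0 else margin
                yb = tile if y0 + tile == h else tile - margin
                xb = tile if x0 + tile == w else tile - margin
                out[y0 + ya:y0 + yb, x0 + xa:x0 + xb] = pred[ya:yb, xa:xb]
    return out
```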
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110783897.XA CN113591608A (en) | 2021-07-12 | 2021-07-12 | High-resolution remote sensing image impervious surface extraction method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110783897.XA CN113591608A (en) | 2021-07-12 | 2021-07-12 | High-resolution remote sensing image impervious surface extraction method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113591608A true CN113591608A (en) | 2021-11-02 |
Family
ID=78246868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110783897.XA Pending CN113591608A (en) | 2021-07-12 | 2021-07-12 | High-resolution remote sensing image impervious surface extraction method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113591608A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985238A (en) * | 2018-07-23 | 2018-12-11 | 武汉大学 | High-resolution remote sensing image impervious surface extraction method and system combining deep learning and semantic probability |
CN109063710A (en) * | 2018-08-09 | 2018-12-21 | 成都信息工程大学 | 3D CNN nasopharyngeal carcinoma segmentation method based on multi-scale feature pyramid |
CN110728658A (en) * | 2019-09-16 | 2020-01-24 | 武汉大学 | High-resolution remote sensing image weak target detection method based on deep learning |
CN111046921A (en) * | 2019-11-25 | 2020-04-21 | 天津大学 | Brain tumor segmentation method based on U-Net network and multi-view fusion |
CN110992382A (en) * | 2019-12-30 | 2020-04-10 | 四川大学 | Fundus image optic cup optic disc segmentation method and system for assisting glaucoma screening |
CN111145170A (en) * | 2019-12-31 | 2020-05-12 | 电子科技大学 | Medical image segmentation method based on deep learning |
CN112183360A (en) * | 2020-09-29 | 2021-01-05 | 上海交通大学 | Lightweight semantic segmentation method for high-resolution remote sensing image |
CN112434663A (en) * | 2020-12-09 | 2021-03-02 | 国网湖南省电力有限公司 | Power transmission line forest fire detection method, system and medium based on deep learning |
CN112508936A (en) * | 2020-12-22 | 2021-03-16 | 中国科学院空天信息创新研究院 | Remote sensing image change detection method based on deep learning |
Non-Patent Citations (1)
Title |
---|
RAGHAV MEHTA: "M-NET: A CONVOLUTIONAL NEURAL NETWORK FOR DEEP BRAIN STRUCTURE SEGMENTATION", 《IEEE》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115100540A (en) * | 2022-06-30 | 2022-09-23 | 电子科技大学 | Method for automatically extracting high-resolution remote sensing image road |
CN115100540B (en) * | 2022-06-30 | 2024-05-07 | 电子科技大学 | Automatic road extraction method for high-resolution remote sensing image |
CN115797184A (en) * | 2023-02-09 | 2023-03-14 | 天地信息网络研究院(安徽)有限公司 | Water super-resolution extraction model based on remote sensing image |
CN115797184B (en) * | 2023-02-09 | 2023-06-30 | 天地信息网络研究院(安徽)有限公司 | Super-resolution extraction method for surface water body |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11521379B1 (en) | Method for flood disaster monitoring and disaster analysis based on vision transformer | |
Nhat-Duc et al. | Automatic recognition of asphalt pavement cracks using metaheuristic optimized edge detection algorithms and convolution neural network | |
CN111986099B (en) | Tillage monitoring method and system based on convolutional neural network with residual error correction fused | |
CN110533631B (en) | SAR image change detection method based on pyramid pooling twin network | |
CN111723732B (en) | Optical remote sensing image change detection method, storage medium and computing equipment | |
CN114120102A (en) | Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium | |
CN110705457A (en) | Remote sensing image building change detection method | |
CN114444791A (en) | Flood disaster remote sensing monitoring and evaluation method based on machine learning | |
CN113780296A (en) | Remote sensing image semantic segmentation method and system based on multi-scale information fusion | |
CN113343563B (en) | Landslide susceptibility evaluation method based on automatic sample selection and surface deformation rate | |
Cheng et al. | ResLap: Generating high-resolution climate prediction through image super-resolution | |
CN111986193B (en) | Remote sensing image change detection method, electronic equipment and storage medium | |
CN112950780B (en) | Intelligent network map generation method and system based on remote sensing image | |
CN111028255A (en) | Farmland area pre-screening method and device based on prior information and deep learning | |
CN116524369B (en) | Remote sensing image segmentation model construction method and device and remote sensing image interpretation method | |
CN106096655A (en) | A kind of remote sensing image airplane detection method based on convolutional neural networks | |
CN111915058A (en) | Flood prediction method and device based on long-time memory network and transfer learning | |
CN111368843B (en) | Method for extracting lake on ice based on semantic segmentation | |
CN113591608A (en) | High-resolution remote sensing image impervious surface extraction method based on deep learning | |
CN112906662A (en) | Method, device and equipment for detecting change of remote sensing image and storage medium | |
CN116246169A (en) | SAH-Unet-based high-resolution remote sensing image impervious surface extraction method | |
CN114283285A (en) | Cross consistency self-training remote sensing image semantic segmentation network training method and device | |
CN111079807A (en) | Ground object classification method and device | |
CN115629160A (en) | Air pollutant concentration prediction method and system based on space-time diagram | |
CN114494870A (en) | Double-time-phase remote sensing image change detection method, model construction method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211102 |