CN111767801B - Remote sensing image water area automatic extraction method and system based on deep learning - Google Patents


Info

Publication number
CN111767801B
CN111767801B
Authority
CN
China
Prior art keywords
remote sensing
water area
sensing image
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010493489.6A
Other languages
Chinese (zh)
Other versions
CN111767801A (en)
Inventor
李春风 (Li Chunfeng)
余仲阳 (Yu Zhongyang)
王涛 (Wang Tao)
郭明强 (Guo Mingqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202010493489.6A
Publication of CN111767801A
Application granted
Publication of CN111767801B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a deep-learning-based method and system for automatically extracting water areas from remote sensing images. The remote sensing image data are preprocessed; different water indexes are obtained through band operations, providing prior feature information for water extraction; the remote sensing image data are fused with Google Maps tile data and the like to realize multi-source feature fusion; a data set is then constructed through visual interpretation and vectorization. A semantic segmentation model, WE-Net, built from a convolutional neural network is trained, validated and tested. Calling the WE-Net water segmentation model realizes automatic classification of water areas and outputs a binarized gray map, which is the classification and extraction result. The beneficial effects of the invention are as follows: the water areas of a study area can be extracted by calling the remote sensing image water segmentation model, manual visual interpretation can be replaced, manpower and material resources are saved, and auxiliary technical support is provided for updating high-precision image maps, including lake area change detection, water system transition and the like.

Description

Remote sensing image water area automatic extraction method and system based on deep learning
Technical Field
The invention relates to the field of geographic information, in particular to surface waters, and specifically to a deep-learning-based method and system for automatically extracting water areas from remote sensing images.
Background
Rivers and lakes are the most common forms of surface water. They expand or shrink with changes in climate, land use, crustal movement and other factors, so detecting surface water change is of great significance for ecological problems such as wetland ecosystem protection and restoration, aquatic flora and fauna protection, river supervision, and pollution control. With the continuous development of remote sensing technology, remote sensing images have gradually become an effective means of extracting surface water change. The traditional water extraction method for remote sensing images usually relies on manual visual interpretation and manual drawing; its accuracy is high, but it is time-consuming and labor-intensive. In addition, the single-band threshold method and the water index method suffer from manually determined thresholds, confusion between different land covers with the same spectrum, low automation and poor real-time performance, making it difficult to quickly obtain the water extent of a study area. Subsequently, machine learning methods such as support vector machines and the K-means algorithm were widely applied to water extraction, but problems such as low accuracy and weak generalization ability remain.
With the continuous development of smart city construction, the requirements for automatic extraction of ground objects keep rising, and traditional remote sensing water extraction methods obviously cannot meet them; a high-accuracy, easy-to-operate and low-cost way to automatically classify and extract water areas is therefore urgently needed. The rapid development of deep learning, especially the application of convolutional neural networks in computer vision, has brought great success to object detection and semantic segmentation in image processing, which in turn has directly promoted research on applying deep learning in the remote sensing field to solve problems such as classification, detection and extraction of ground features. The invention combines deep learning with the traditional water index method to extract water areas with high accuracy in real time. Finally, the whole procedure is organized into a complete method and system for automatic water extraction, providing technical and data support for scientific research and engineering practice related to water change detection.
Disclosure of Invention
The technical problem the invention aims to solve is that prior-art methods for extracting water areas from remote sensing images are time-consuming and labor-intensive; it therefore provides a deep-learning-based method and system for automatically extracting water areas from remote sensing images.
The technical principle adopted by the invention to solve this problem is as follows: the invention discloses a deep-learning-based model for automatic water extraction from remote sensing images, called WE-Net, and realizes automatic recognition of water areas in remote sensing images by training, testing and calling the WE-Net water segmentation model. The automatic classification method for high-resolution remote sensing images comprises the following steps:
P1: preprocess the remote sensing image data, including radiometric correction, geometric correction and clipping to the study area;
P2: obtain different water indexes through band operations, providing prior feature information for water extraction;
P3: fuse the remote sensing image data with Google Maps tile data and the like to realize multi-source feature fusion, then construct a data set through visual interpretation and vectorization;
P4: train, validate and test the semantic segmentation model WE-Net built from a convolutional neural network;
P5: call the WE-Net water segmentation model to realize automatic classification of water areas and output a binary gray map in png format, which is the classification and extraction result;
P6: fine-tune the classification result with a conditional random field.
According to the invention, only basic image processing and feature fusion of the spectral and radar bands of multiband remote sensing images are required; the water areas of a study area can then be extracted by calling the water segmentation model, whose classification accuracy reaches 92.64% in application. Manual visual interpretation can be replaced, saving manpower and material resources, and auxiliary technical support is provided for updating high-precision image maps, including lake area change detection, water system transition and the like.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a remote sensing image water area automatic extraction method based on deep learning in an embodiment of the invention;
FIG. 2 is a schematic block diagram of the WE-Net semantic segmentation model for remote sensing images constructed in an embodiment of the invention;
FIG. 3 is a block diagram of the residual learning module RLU constructed in an embodiment of the invention;
FIG. 4 is a block diagram of the global attention module GAB constructed in an embodiment of the invention;
FIG. 5 is a block diagram of the boundary learning module BLU constructed in an embodiment of the invention.
Detailed Description
For a clearer understanding of technical features, objects and effects of the present invention, a detailed description of embodiments of the present invention will be made with reference to the accompanying drawings.
The embodiment of the invention provides a remote sensing image water area automatic extraction method and system based on deep learning.
First embodiment: the method for automatically extracting water areas from remote sensing images based on deep learning according to this embodiment is described with reference to FIG. 1 and comprises the following steps:
step (1), downloading sentinel-2 data (S2 AMSIL 1C) of European space agency, opening a CMD control console, performing atmospheric correction through a command L2A_Process in a Sen2cor, and resampling (ras- > geometry- > reconstruction) the corrected data through SNAP software to obtain remote sensing image each wave band data which can be processed by using ENVI5.3 software.
Step (2): compute the normalized difference water index NDWI (NDWI = (Green − NIR)/(Green + NIR)), the improved water index model NDWI3 (NDWI3 = (NIR − SWIR2)/(NIR + SWIR2)), the modified normalized difference water index MNDWI (MNDWI = (Green − SWIR1)/(Green + SWIR1)), the enhanced water index EWI (EWI = (Green − NIR − SWIR1)/(Green + NIR + SWIR1)), the normalized difference vegetation index NDVI (NDVI = (NIR − Red)/(NIR + Red)) and the wetland forest index WFI (WFI = (NIR − Red)/SWIR2), together with the three visible bands red, green and blue (Red, Green, Blue) and the near-infrared NIR and the infrared SWIR1 and SWIR2 bands; each band or index is output as one gray map.
Step (3): in ArcGIS, create a personal geodatabase > new feature dataset > new polygon vector file water.shp; load the red, green and blue bands as a true-color image together with the 12 gray maps and, combined with the tile data of Google Maps, vectorize the water distribution area by remote sensing visual interpretation, storing it in water.shp. Convert the water.shp file containing the real water extent to a raster file with the ToRaster tool in ArcToolbox, and finally output a gray map water.png in png format. Binarize water.png so that water pixels have value 1 and non-water pixels value 0; the binarized water.png file is the label file of the remote sensing image water areas.
Step (4): call the imread function of the opencv-python library in Python to read the 12 gray maps and the 1 label file, and cut them in one-to-one correspondence with a stride of 128 and a window size of 256, so that every cut image has size 256 × 256 × 1, storing them in 13 folders respectively. Call the imgaug library to transform and augment the cut images in one-to-one correspondence with data enhancement methods such as cropping, rotation, mirroring and Gaussian noise, so as to expand the data set. Finally, compute the pixel mean and standard deviation of all data and normalize the data; divide the normalized images into a training set, a validation set and a test set.
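The overlapping cut of step (4) — a 256-pixel window moved with a 128-pixel stride — can be sketched as below. This is a hedged illustration with numpy arrays standing in for the gray maps; the function name `tile` is an assumption, not from the patent.

```python
import numpy as np

def tile(image, size=256, stride=128):
    """Cut an H x W array into size x size patches with the given stride."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(image[top:top + size, left:left + size])
    return patches

# A 512 x 512 gray map yields a 3 x 3 grid of overlapping 256 x 256 patches,
# because the window start positions are 0, 128 and 256 along each axis.
img = np.zeros((512, 512), dtype=np.uint8)
patches = tile(img)
print(len(patches))  # 9
```

Applying the same cut with the same stride to all 12 gray maps and the label keeps the one-to-one correspondence the training inputs require.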
Step (5): call the convolution, pooling and upsampling layers, loss functions and activation functions of the deep learning frameworks TensorFlow and Keras to build the deep-learning-based water segmentation model WE-Net. During training the model has 13 inputs: 12 gray maps, each of size 256 × 256 × 1, and the binarized label file corresponding one-to-one with them. The segmentation model WE-Net is implemented through an encoding step, a decoding step, a residual learning module step, a global attention module step and a boundary learning unit step, which are described in detail in the second embodiment.
Step (6): according to the computing capability of two NVIDIA GTX 1080 Ti graphics cards and the model's parameter count, set the training batch size to 16 and the learning rate to 0.001; call the train function to iteratively train the WE-Net water segmentation model on the training set for multiple epochs, and validate the model with the validation set after each epoch. The training process is visualized with the epoch number on the horizontal axis and the IoU value on the vertical axis. After tens of epochs the IoU first rises and then asymptotically approaches a certain value; if during the following tens of epochs the IoU no longer changes as training continues, the model is regarded as converged, the model parameters are saved and training is stopped to prevent overfitting. Otherwise, if the IoU on the training and validation sets keeps changing, return to step (4), modify the batch size and learning rate, and reload the training set for retraining. Finally, call the saved WE-Net model through a test function, compute the IoU on the test set, and evaluate the model's accuracy. In this embodiment, the accuracy index IoU of WE-Net reaches 0.9401 on the training set and 0.9326 on the validation set; the model is saved once the validation IoU stops improving, and the test function finally measures an IoU of 0.9264 on the test set.
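The IoU (intersection over union) metric monitored during training is a standard quantity; a minimal numpy version for binary masks, written here as an assumption-free illustration of the metric rather than the patent's evaluation code:

```python
import numpy as np

def iou(pred, label):
    """Intersection over union for binary masks (1 = water, 0 = non-water)."""
    pred = pred.astype(bool)
    label = label.astype(bool)
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    return inter / union if union else 1.0  # empty masks agree perfectly

# One pixel correctly predicted as water, one false positive:
pred  = np.array([[1, 1], [0, 0]])
label = np.array([[1, 0], [0, 0]])
print(iou(pred, label))  # 0.5
```

An IoU of 0.9264 on the test set therefore means that the predicted and true water masks overlap on about 93% of their combined area.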
Step (7): after the water segmentation model outputs the automatic water extraction result, post-process it with guided filtering (GF) and a conditional random field (CRF) model. Guided filtering treats the label file as the guide image and the original image as the input image, optimizing the boundary of the water extraction result and removing salt-and-pepper noise. The pairwise potential of the conditional random field constrains the colors and positions of any two pixels, so that pixels with similar colors and adjacent positions are more likely to receive the same class; by accounting for the smoothness between adjacent pixels it smooths the edges and fine-tunes the semantic segmentation result.
Step (8): after WE-Net has been trained and tested with satisfactory results, save the weights and the network model as WE-Net.h5, the weight file saved by WE-Net after training. The local machine acts as a server publishing a REST service through the Flask framework. The client encodes the remote sensing image into a base64 string with base64.b64encode() (a string that can be embedded as <img src="data:image/png;base64,...">) and sends it to the local server through a POST request. The server responds to the POST request, obtains the data from the request, decodes the remote sensing image with base64.b64decode(), calls the WE-Net water segmentation model together with the post-processing algorithms guided filtering GF and conditional random field CRF to realize automatic water extraction, and returns the extraction result base64-encoded.
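The base64 round trip between client and server in step (8) uses only the Python standard library. A minimal sketch, with a placeholder byte string standing in for real PNG file contents (the Flask routing itself is omitted):

```python
import base64

# Client side: read the image bytes and encode them as a base64 string.
png_bytes = b"\x89PNG\r\n\x1a\n..."  # placeholder for actual PNG file contents
payload = base64.b64encode(png_bytes).decode("ascii")
data_uri = "data:image/png;base64," + payload  # embeddable in an <img src=...>

# Server side: recover the raw image bytes from the POSTed string.
decoded = base64.b64decode(payload)
assert decoded == png_bytes  # lossless round trip
```

Base64 is used because HTTP request bodies and HTML attributes carry text, not raw bytes; encoding and decoding are exact inverses, so no image information is lost in transit.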
Second embodiment: the WE-Net water segmentation model for remote sensing images according to this embodiment is described with reference to FIGS. 2, 3, 4 and 5 and comprises the following steps:
Encoding: in the encoding stage, water feature information is extracted through convolution and pooling. The encoding stage takes the 12 gray maps as input and obtains a feature map by convolutional fusion of the input; each pooling layer produces a new scale, giving 5 scales in total: 256 × 256 × 32, 128 × 128 × 64, 64 × 64 × 128, 32 × 32 × 256 and 16 × 16 × 512. After each pooling layer the size of the feature map is halved and the number of channels is doubled, and the water feature information of the image is then extracted through two convolutional layers.
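The "size halved" part of the scale progression comes from 2 × 2 max pooling; a numpy sketch (the channel doubling is done by the following convolutions, which are not modeled here — this is an illustration of the pooling shape arithmetic, not the patent's Keras layers):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling over an (H, W, C) feature map: halves H and W."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

# First encoder scale 256 x 256 x 32 -> spatial size halves after pooling;
# the next convolution then doubles channels to reach 128 x 128 x 64.
x = np.random.rand(256, 256, 32)
print(max_pool_2x2(x).shape)  # (128, 128, 32)
```

Repeating pool-then-convolve four times reproduces the five scales listed above, down to 16 × 16 × 512.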
Decoding: in the decoding stage the image size is restored through convolution and 4 upsampling operations, yielding the water extraction result. After each upsampling in the decoding stage, the feature map is fused with the same-size feature map from the corresponding encoding stage through the global attention module, and the boundary texture information of the water part of the feature map is then integrated and extracted through the boundary learning unit. Finally a binary gray map of size 256 × 256 × 1 is output, in which value 1 denotes the water part and value 0 the non-water part; the loss function is set to the binary cross-entropy loss function.
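The upsampling that restores the spatial size can be illustrated with nearest-neighbour repetition, one common choice for an upsample layer (the patent does not state the interpolation mode, so this is an assumption for the sketch):

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Deepest encoder scale 16 x 16 x 512: one upsampling doubles H and W,
# matching it spatially with the 32 x 32 encoder feature map for fusion.
x = np.random.rand(16, 16, 512)
print(upsample_2x(x).shape)  # (32, 32, 512)
```

Four such doublings take 16 × 16 back to 256 × 256, which is why the decoder needs exactly 4 upsampling operations.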
Residual learning module: in the residual learning module stage, a shortcut connection is added to the network, improving information flow and training efficiency, while two convolution layers are added to improve the model's ability to extract feature information. The residual learning module takes a feature map of encoding-stage scale 2w × 2h × c as input; the result of convolving the feature map twice with c convolution kernels of size 3 × 3 × c is added pixel-wise, in one-to-one correspondence, to the original feature map, and the sum is then transformed and activated by softmax. This path is called the shortcut connection, and the resulting feature map has the same scale as the original, namely 2w × 2h × c, where w, h and c denote in order the width, height and channel number of the feature map.
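The essence of the residual module — transform twice, then add the input back through the shortcut — can be shown without real convolutions. In this sketch a simple scaling lambda stands in for the two 3 × 3 × c convolutions (an assumption made so the example stays self-contained; any shape-preserving transform plays the same role):

```python
import numpy as np

def residual_unit(x, transform):
    """Apply a transform twice, then add the input back (shortcut connection)."""
    out = transform(transform(x))
    return out + x  # pixel-wise sum: output keeps the 2w x 2h x c shape

# Stand-in for the two convolutions: a shape-preserving scaling.
halve = lambda x: 0.5 * x
x = np.ones((4, 4, 3))
y = residual_unit(x, halve)
print(y.shape)  # (4, 4, 3)
```

Because the shortcut adds the input directly, gradients can flow around the convolutions, which is the "information circulation" benefit the text describes.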
Global attention module: the global attention module stage fuses the semantic segmentation information of the decoding stage with the position information of the encoding stage, compressing and enhancing the extracted features through global-average-pooling weighting. The module takes a feature map of encoding-stage scale 2w × 2h × c and a feature map of decoding-stage scale w × h × 2c as input. The decoding-stage feature map is first globally average-pooled to obtain channel-wise feature information, which is used as weights to re-weight the encoding-stage feature map, yielding a new feature map that is added pixel-wise to the encoding-stage feature map. The decoding-stage feature map is upsampled to scale 2w × 2h × 2c, and finally the upsampled feature map and the weighted-and-summed feature map are concatenated by channel to obtain the fused feature map of scale 2w × 2h × 3c.
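The pool-weight-add-concatenate flow can be sketched in numpy. To keep the example short the encoder and decoder channel counts are assumed equal (the full model uses c and 2c, giving a 3c-channel result), and nearest-neighbour repetition stands in for the upsample layer:

```python
import numpy as np

def global_attention(enc, dec):
    """Fuse an encoder map (2h, 2w, c) with a decoder map (h, w, c).

    Simplified sketch: channel counts are taken as equal, unlike the
    c / 2c split of the full WE-Net module.
    """
    weights = dec.mean(axis=(0, 1))               # global average pooling -> (c,)
    weighted = enc * weights                      # channel-wise re-weighting
    fused = weighted + enc                        # pixel-wise residual addition
    up = dec.repeat(2, axis=0).repeat(2, axis=1)  # upsample the decoder map
    return np.concatenate([up, fused], axis=2)    # channel concatenation

enc = np.ones((8, 8, 4))
dec = np.ones((4, 4, 4))
print(global_attention(enc, dec).shape)  # (8, 8, 8)
```

The pooled decoder statistics act as a per-channel gate on the encoder features, which is the "compress and enhance" effect described above.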
Boundary learning module: in the boundary learning unit stage, a residual learning structure is formed by convolution layers of different scales and a shortcut connection, and a convolution kernel is added on a branch to eliminate the aliasing effect produced when feature maps of different scales are upsampled and fused, while learning remote sensing feature information at different scales. The boundary learning unit takes the feature map output by the global attention module stage as input and passes it through three branches: the first branch is a shortcut connection that performs no data transformation; the second branch applies a convolution of size 3 × 3 × 3c, changing the feature map scale from 2w × 2h × 3c to 2w × 2h × c; the third branch likewise applies a convolution of size 3 × 3 × 3c, changing the scale from 2w × 2h × 3c to 2w × 2h × c. The feature maps of the three branches are then passed through a convolution of size 3 × 3 × c, finally yielding the de-aliased feature map.
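One plausible reading of the three-branch fusion — two convolution branches reducing 3c to c channels, the shortcut concatenated back, and a final convolution producing the de-aliased map — is sketched below. This is explicitly a hedged interpretation: the patent does not fully specify how the branches are merged, and random channel projections stand in for the convolutions:

```python
import numpy as np

def conv_proj(x, out_c, seed):
    """Stand-in for a 3x3 convolution: a fixed random channel projection."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[2], out_c))
    return x @ w  # (2h, 2w, in_c) @ (in_c, out_c) -> (2h, 2w, out_c)

def boundary_unit(x, c):
    """Three branches: shortcut, plus two 3c -> c convolution branches,
    merged by concatenation and a final fusing convolution."""
    b2 = conv_proj(x, c, seed=1)                  # branch 2: 3c -> c channels
    b3 = conv_proj(x, c, seed=2)                  # branch 3: 3c -> c channels
    merged = np.concatenate([x, b2, b3], axis=2)  # shortcut + branches: 5c
    return conv_proj(merged, c, seed=3)           # final fusion -> c channels

x = np.zeros((8, 8, 12))  # a 2w x 2h x 3c input with c = 4
print(boundary_unit(x, 4).shape)  # (8, 8, 4)
```

Whatever the exact merge order, the key property is that the unit consumes the 3c-channel attention output and emits a c-channel map with the upsampling aliasing smoothed out.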
According to another aspect, the invention further provides a deep-learning-based system for automatically extracting water areas from remote sensing images, comprising the following modules:
a data preprocessing module for downloading Sentinel-2 data from the European Space Agency, performing atmospheric correction with the Sen2cor command, and resampling the corrected data in SNAP software to obtain per-band remote sensing data;
an information extraction module for computing, through the band math tool of the remote sensing image processing software ENVI 5.3, the normalized difference water index NDWI, the improved water index model NDWI3, the modified normalized difference water index MNDWI, the enhanced water index EWI, the new water index NWI, the normalized difference vegetation index NDVI and the wetland forest index WFI, together with the three visible bands red, green and blue (Red, Green, Blue) and the near-infrared NIR and the infrared SWIR1 and SWIR2 bands, each band or index being output as a gray map;
a label file production module for creating a water vector file water.shp in ArcGIS, loading the 12 gray maps, vectorizing the water distribution area by remote sensing visual interpretation, finally outputting a gray map water.png and binarizing it; the binarized water.png file is the label file of the remote sensing image water distribution area;
a data set generation module for calling the opencv-python library in Python to read the 12 gray maps and the label file and cut them in one-to-one correspondence with a stride of 128 and a window size of 256, so that every cut image has size 256 × 256 × 1, stored in 13 folders respectively; calling the imgaug library to transform and augment the cut images in one-to-one correspondence by data enhancement so as to expand the data set; finally computing the pixel mean and standard deviation of all data, normalizing the data, and dividing the normalized images into a training set, a validation set and a test set;
a classification model construction module for calling the convolution, pooling and upsampling layers, loss functions and activation functions of the deep learning frameworks TensorFlow and Keras to build the deep-learning-based water segmentation model WE-Net, which has 13 inputs during training, namely the 12 gray maps and the corresponding label file; the segmentation model WE-Net is realized through the following encoding, decoding, global attention module and boundary learning unit steps;
an encoding module for extracting water feature information through convolution and pooling in the encoding stage; the encoding stage takes the 12 gray maps as input and obtains a feature map by convolutional fusion of the input; each pooling layer produces a new scale, giving 5 scales in total: 256 × 256 × 32, 128 × 128 × 64, 64 × 64 × 128, 32 × 32 × 256 and 16 × 16 × 512; after each pooling layer the feature map size is halved and the channel number doubled, and the water feature information of the image is then extracted through two convolutional layers;
a decoding module for restoring the image size through convolution and 4 upsampling operations in the decoding stage, obtaining the water extraction result; after each upsampling in the decoding stage, the feature map is fused with the same-size feature map from the corresponding encoding stage through the global attention module, and the boundary texture information of the water part of the feature map is then integrated and extracted through the boundary learning unit; finally a binary gray map of size 256 × 256 × 1 is output, in which value 1 denotes the water part and value 0 the non-water part;
a residual learning module for adding a shortcut connection to the network in the residual learning module stage, improving information flow and training efficiency, while two convolution layers are added to improve the model's ability to extract feature information; the residual learning module takes a feature map of encoding-stage scale 2w × 2h × c as input; the result of convolving the feature map twice with c convolution kernels of size 3 × 3 × c is added pixel-wise, in one-to-one correspondence, to the original feature map, and the sum is then transformed and activated by softmax; this path is called the shortcut connection, and the resulting feature map has the same scale as the original, namely 2w × 2h × c.
a global attention module for fusing the semantic segmentation information of the decoding stage with the position information of the encoding stage in the global attention module stage, compressing and enhancing the extracted features through global-average-pooling weighting; the module takes a feature map of encoding-stage scale 2w × 2h × c and a feature map of decoding-stage scale w × h × 2c as input, globally average-pools the decoding-stage feature map to obtain channel-wise feature information, uses this information as weights to re-weight the encoding-stage feature map and obtain a new feature map, upsamples the decoding-stage feature map to scale 2w × 2h × 2c, and finally concatenates the upsampled feature map and the weighted feature map by channel to obtain the fused feature map of scale 2w × 2h × 3c.
a boundary learning module for forming, in the boundary learning unit stage, a residual learning structure from convolution layers of different scales and a shortcut connection, adding a convolution kernel on a branch to eliminate the aliasing effect produced when feature maps of different scales are upsampled and fused, and learning remote sensing feature information at different scales; the boundary learning unit takes the feature map output by the global attention module stage as input and passes it through three branches: the first branch is a shortcut connection that performs no data transformation; the second branch applies a convolution of size 3 × 3 × 3c, changing the feature map scale from 2w × 2h × 3c to 2w × 2h × c; the third branch likewise applies a convolution of size 3 × 3 × 3c, changing the scale from 2w × 2h × 3c to 2w × 2h × c; the feature maps of the three branches are then passed through a convolution of size 3 × 3 × c, finally yielding the de-aliased feature map.
The model training module is used for setting the training batch size and learning rate according to the computing performance of the graphics card and the number of model parameters, calling a train function to iteratively train the remote sensing image water area segmentation model WE-Net on the training set, and validating the model with the validation set after each training round. The training process is visualized with the number of training epochs on the horizontal axis and the IOU value on the vertical axis; after tens of epochs the IOU first rises and then asymptotically approaches a certain value, and when the IOU no longer changes over the following tens of epochs, the model is regarded as converged, the model parameters are saved, and training is stopped to prevent overfitting. Otherwise, if the IOU on the training and validation sets keeps changing, the procedure returns to step S4 to modify the batch size and learning rate and reload the training set for retraining. Finally, the saved remote sensing image water area segmentation model WE-Net is called through a test function, the IOU on the test set is calculated, and the accuracy of the model is evaluated.
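The convergence criterion described (IOU rises, plateaus, then training stops) amounts to early stopping on the validation IOU. A framework-agnostic sketch follows, where `train_epoch` and `validate_iou` are placeholder callables for the real training and validation calls.

```python
def train_with_iou_early_stop(train_epoch, validate_iou, max_epochs=200,
                              patience=20, tol=1e-3):
    """Iterate training until the validation IOU stops improving.
    train_epoch(): runs one epoch; validate_iou(): returns the IOU on
    the validation set. Returns the best IOU and the per-epoch curve
    (plotted as epoch on the x axis, IOU on the y axis)."""
    best_iou, stale = 0.0, 0
    history = []
    for _ in range(max_epochs):
        train_epoch()
        iou = validate_iou()
        history.append(iou)
        if iou > best_iou + tol:
            best_iou, stale = iou, 0  # still improving: keep training
        else:
            stale += 1                # IOU flat for another epoch
        if stale >= patience:         # converged: stop to avoid overfitting
            break
    return best_iou, history
```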
The model fine-tuning module is used for post-processing the automatic water area extraction result output by the trained remote sensing image water area segmentation model, namely the segmentation result, with guided filtering (GF) and a conditional random field (CRF) model. The guided filtering regards the label file as the guide image and the original image as the input image, and optimizes the boundary of the water area extraction result to eliminate salt-and-pepper noise. The binary potential function of the conditional random field constrains the colors and positions of any two pixels, so that pixels with similar colors and adjacent positions are more likely to receive the same class; by accounting for the smoothness between adjacent pixels, the edges are smoothed and the semantic segmentation result is fine-tuned.
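The guided-filtering step can be illustrated with a self-contained single-channel implementation of the guided filter (He et al.), standing in for library routines such as cv2.ximgproc.guidedFilter; window radius `r` and regularizer `eps` are illustrative, and the CRF stage (typically implemented with pydensecrf) is omitted here.

```python
import numpy as np

def _box(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via summed-area table."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.pad(xp.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    """Guided filter: smooths `src` while following the edges of
    `guide`, which is how it removes salt-and-pepper speckle from a
    water mask without blurring the water boundary."""
    m_i, m_p = _box(guide, r), _box(src, r)
    var_i = _box(guide * guide, r) - m_i * m_i
    cov_ip = _box(guide * src, r) - m_i * m_p
    a = cov_ip / (var_i + eps)        # local linear coefficient
    b = m_p - a * m_i
    return _box(a, r) * guide + _box(b, r)
```

Filtering a mask with itself as guide roughly reproduces it (edges preserved), while an isolated speck in a flat region of the guide is averaged away.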
The model application module is used for saving the weight parameters and the network model WE-Net.h5 after the remote sensing image water area segmentation model WE-Net has been trained and tested to a satisfactory result, and for publishing a REST service with the local machine as the server through the Flask framework. The client converts a remote sensing image into a base64-encoded string with a base64 encoding tool and transmits it to the local server through a POST request; the server responds to the POST request, decodes the remote sensing image, calls the remote sensing image water area segmentation model WE-Net and the post-processing algorithms guided filtering (GF) and conditional random field (CRF) to realize automatic water area extraction, and returns the extraction result to the client in base64 encoding.
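A minimal sketch of the base64-over-POST service described, assuming Flask is installed; the route name `/extract`, the JSON field names, and `extract_water` are illustrative placeholders for the WE-Net + GF + CRF pipeline.

```python
import base64

def encode_image(data: bytes) -> str:
    """Encode raw image bytes as a base64 string for the POST body."""
    return base64.b64encode(data).decode("ascii")

def decode_image(text: str) -> bytes:
    """Decode the base64 string back into raw image bytes."""
    return base64.b64decode(text)

def create_app(extract_water):
    """Build the REST service; `extract_water` stands in for the
    segmentation + post-processing pipeline (bytes in, bytes out)."""
    from flask import Flask, jsonify, request  # assumes Flask >= 2.0

    app = Flask(__name__)

    @app.post("/extract")
    def extract():
        # Client sends {"image": "<base64>"}; decode, run the model,
        # and return the mask re-encoded as base64 (step S8).
        image = decode_image(request.get_json()["image"])
        mask = extract_water(image)
        return jsonify({"mask": encode_image(mask)})

    return app
```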
The beneficial effects of the invention are as follows: the water areas in a study region can be extracted simply by performing basic image processing and feature-information fusion on the spectral and radar data of multi-band remote sensing images and calling the remote sensing image water area segmentation model. With an expanded data set, the model reaches a classification accuracy of 92.64% in application, can replace manual visual interpretation, saves manpower and material resources, and provides auxiliary technical support for updating high-accuracy image maps, including lake area change detection and water system transition.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (5)

1. A remote sensing image water area automatic extraction method based on deep learning, characterized by comprising the following steps:
s1: carrying out atmospheric correction on spectrum data of a certain remote sensing image, and resampling the corrected data to obtain data of each wave band of the remote sensing image;
S2: normalizing each band of the remote sensing image with the band calculation tool of remote sensing image processing software; respectively calculating the normalized difference water index NDWI, the improved water index model NDWI3, the modified normalized difference water index MNDWI, the enhanced water index EWI, the new water index NWI, the normalized difference vegetation index NDVI and a wetland forest index; marking the three visible-light bands red, green and blue together with the near-infrared NIR and short-wave infrared SWIR-1 and SWIR-2 bands; and outputting each band or index as a grayscale map, thereby obtaining 12 grayscale maps;
S3: creating a new water area vector file in ArcGIS software, loading the 12 grayscale maps, vectorizing the water area distribution according to the remote sensing visual interpretation method, finally outputting the 12 processed grayscale maps, and binarizing the processed grayscale maps to obtain the label file of the water area distribution of the remote sensing image;
S4: calling opencv-python library functions in python to read the 12 processed grayscale maps and the corresponding label file, and cropping them in one-to-one correspondence with step length m and image size n, so that each cropped image has size n×n×1, where m and n are positive integers greater than 0, and storing the cropped images respectively under a plurality of folders;
calling imgaug library functions to transform and augment all cropped images one by one according to a data augmentation method to obtain an expanded data set; then counting the pixel mean and standard deviation of each image in the data set and standardizing the data; then normalizing all images in the data set; and dividing the normalized images to obtain a training set, a validation set and a test set;
S5: calling the convolution layers, pooling layers, up-sampling layers, loss function and activation function of the deep learning frameworks TensorFlow and Keras to construct a deep-learning-based remote sensing image water area segmentation model, which has 13 inputs per training sample, namely the 12 grayscale maps and the corresponding label file;
the remote sensing image water area segmentation model is realized through the following coding stage, decoding stage, residual error learning module stage, global attention module stage and boundary learning unit stage:
in the encoding stage, extracting the characteristic information of the water area through convolution and pooling;
in the decoding stage, the image size is restored through convolution and 4 times of up-sampling, and a water area extraction result is obtained;
in the residual learning module stage, a shortcut connection is added to the convolutional neural network to improve the information flow and the training efficiency of the network, and two convolution layers are added to improve the model's ability to extract feature information; the residual learning module takes a feature map of scale 2w×2h×c from the encoding stage as input; the result of passing the feature map through two 3×3 convolutions with c output channels is added pixel-wise to the original feature map (the direct path is called the shortcut connection), and the sum is then transformed and activated by softmax; the resulting feature map keeps the scale of the original, namely 2w×2h×c;
in the global attention module stage, the semantic segmentation information of the decoding stage and the position information of the encoding stage are fused, and the extracted feature information is compressed and enhanced by global-average-pooling weighting; the global attention module takes as input a feature map of scale 2w×2h×c from the encoding stage and a feature map of scale w×h×2c from the decoding stage; the decoding-stage feature map is globally average-pooled to obtain per-channel feature information, which is applied as weights to the encoding-stage feature map to obtain a new weighted feature map; the decoding-stage feature map is up-sampled so that its scale becomes 2w×2h×2c; finally, the up-sampled feature map and the weighted feature map are concatenated along the channel dimension to obtain a fused feature map of scale 2w×2h×3c;
in the boundary learning unit stage, a residual learning structure is formed from convolution layers of different scales and a shortcut connection, and convolution kernels are added on the branches to eliminate the aliasing effect produced when feature maps of different scales are up-sampled and fused and to learn the feature information of remote sensing images at different scales; the boundary learning unit takes the feature map output by the global attention module stage as input and passes it through three branches: the first branch is a shortcut connection that performs no data transformation; the second branch applies a 3×3 convolution that changes the feature-map scale from 2w×2h×3c to 2w×2h×c; the third branch likewise applies a 3×3 convolution that changes the scale from 2w×2h×3c to 2w×2h×c; the feature maps of the three branches are then each passed through a 3×3 convolution with c output channels and added pixel-wise, finally yielding the feature map with the aliasing eliminated;
wherein w, h and c represent the width, height and channel number of the feature map respectively;
S6: setting the training batch size and learning rate according to the computing performance of the graphics card and the number of model parameters, calling a train function to iteratively train the remote sensing image water area segmentation model on the training set, and validating and testing the model after each training round with the validation set and the test set; when the remote sensing image water area segmentation model converges, obtaining and saving the trained remote sensing image water area segmentation model;
S7: after the trained remote sensing image water area segmentation model outputs a segmentation result, fine-tuning the segmentation result with guided filtering GF and a conditional random field model CRF; the guided filtering GF regards the label file as the guide image and the original image as the input image, optimizes the boundary of the water area extraction result, and eliminates salt-and-pepper noise; the binary potential function of the conditional random field model CRF constrains the colors and positions of any two pixels, so that pixels with similar colors and adjacent positions are more likely to receive the same class, and smooths the edges according to the smoothness between adjacent pixels;
S8: publishing a REST service with the local machine as the server through the Flask framework; the client converts a remote sensing image into a base64-encoded string with a base64 encoding tool and transmits it to the local server through a POST request; the server responds to the POST request, decodes the actual remote sensing image, calls the trained remote sensing image water area segmentation model and the post-processing algorithms guided filtering GF and conditional random field model CRF on the actual remote sensing image, realizes automatic water area extraction, and returns the extraction result to the client in base64 encoding.
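Three of the twelve index inputs in step S2 have well-established normalized band-ratio definitions, sketched below; the other indices (NDWI3, EWI, NWI, the wetland forest index) follow the same pattern. The helper names (`_ratio`, `water_indices`, `to_gray`) are illustrative, not from the patent.

```python
import numpy as np

def _ratio(a, b, eps=1e-6):
    """Normalized difference (a - b) / (a + b), guarded against zero sums."""
    return (a - b) / (a + b + eps)

def water_indices(green, red, nir, swir1):
    """Bands as float arrays scaled to reflectance."""
    return {
        "NDWI": _ratio(green, nir),    # McFeeters: water bright in green, dark in NIR
        "MNDWI": _ratio(green, swir1), # Xu: SWIR suppresses built-up noise
        "NDVI": _ratio(nir, red),      # vegetation index, used as negative evidence
    }

def to_gray(index, lo=-1.0, hi=1.0):
    """Stretch an index in [-1, 1] to an 8-bit grayscale map, as the
    claim's per-index grayscale outputs would require."""
    x = np.clip((index - lo) / (hi - lo), 0.0, 1.0)
    return (x * 255).astype(np.uint8)
```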
2. The remote sensing image water area automatic extraction method based on deep learning as claimed in claim 1, characterized in that: in step S4, the sizes of the 12 grayscale maps and of the corresponding label file are all n×n×1.
3. The remote sensing image water area automatic extraction method based on deep learning as claimed in claim 1, characterized in that: the encoding stage takes the 12 grayscale maps as input data and obtains a feature map by convolutional fusion of the input data; each pass through a pooling layer produces a new scale, giving 5 scales in total, namely 256×256×32, 128×128×64, 64×64×128, 32×32×256 and 16×16×512; after each pooling layer the size of the feature map is halved and the number of channels is doubled, and the water area feature information of the image is then extracted by two convolution layers.
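The scale progression in this claim (pooling halves the spatial size, the following convolutions double the channels) can be reproduced in a few lines; the function name and defaults are illustrative.

```python
def encoder_scales(size=256, channels=32, depth=5):
    """Reproduce the encoder scale sequence: each 2x2 pooling halves
    the spatial size and the subsequent convolutions double the
    channel count."""
    scales = []
    for _ in range(depth):
        scales.append((size, size, channels))
        size //= 2        # 2x2 max pooling
        channels *= 2     # channel doubling after pooling
    return scales
```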
4. The remote sensing image water area automatic extraction method based on deep learning as claimed in claim 2, characterized in that: when n=256, after each up-sampling in the decoding stage the feature map is fused with the encoding-stage feature map of the same size through the global attention module, and the boundary texture information of the water area part of the feature map is then integrated and extracted through the boundary learning unit; finally a binary grayscale map of size 256×256×1 is output, in which a value of 1 represents a water area part and a value of 0 represents a non-water area part.
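A sketch of the final binarization described here, together with the IOU metric the claims use to monitor training; the names and threshold are illustrative.

```python
import numpy as np

def binarize(prob_map, thresh=0.5):
    """Turn a 256x256x1 model output into the binary water mask
    described in the claim (1 = water, 0 = non-water)."""
    return (prob_map.squeeze(-1) > thresh).astype(np.uint8)

def iou(pred, label):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    return inter / union if union else 1.0
```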
5. A remote sensing image water area automatic extraction system based on deep learning, characterized by comprising the following modules:
the data preprocessing module is used for carrying out atmospheric correction on the spectrum data of a certain remote sensing image and resampling the corrected data to obtain the data of each wave band of the remote sensing image;
the information extraction module is used for carrying out normalization processing on each wave band data of the remote sensing image through a wave band operation tool of remote sensing image processing software, respectively calculating a normalized difference water body index NDWI, an improved water body index model NDWI3, an improved normalized difference water body index MNDWI, an enhanced water body index EWI, a novel water body index NWI, a normalized vegetation coverage index NDVI and a wetland forest index, marking three visible light wave bands of red, green and blue, and near infrared NIR, mid infrared SWIR-1 and SWIR-2 wave bands, and outputting each wave band data or the index data into a gray scale map, thereby obtaining 12 gray scale maps;
the label file manufacturing module is used for creating a water area vector file water.shp through ArcGIS software, loading the 12 gray maps, vectorizing a water area distribution area according to a remote sensing visual interpretation method, finally outputting a gray map water.png, binarizing the gray map water.png, and obtaining a binary water.png file which is a label file of the manufactured remote sensing image water area distribution area;
the data set generating module is used for calling a library function of opencv-python in python to read the 12 gray images and the tag file, and cutting the 12 gray images according to the steps of 128 and the image sizes of 256 in a one-to-one correspondence manner, so that the cut images have the sizes of 256 x1 and are respectively stored under 13 folders; invoking an imgauge library function to perform one-to-one correspondence conversion and augmentation on the cut image according to a data enhancement method so as to expand a data set; finally, counting the pixel mean value and standard deviation of all data, and normalizing the data; dividing the normalized pictures to obtain a training set, a verification set and a test set;
the classification model building module is used for calling a convolution layer, a pooling layer, an up-sampling layer, a loss function and an activation function in a deep learning framework TensorFlow and Keras so as to build a remote sensing image water area segmentation model WE-Net based on deep learning, wherein the segmentation model has 13 inputs when training samples, namely 12 gray level images and a corresponding label file; the partition model WE-Net is realized through the following steps of encoding, decoding, residual learning, global attention module and boundary learning;
and a coding module: the method is used for extracting the characteristic information of the water area through convolution and pooling in the encoding stage; the coding stage takes 12 gray images as input data, and obtains a feature image through convolution fusion of the input data, wherein each time the feature image passes through a pooling layer, the feature image is a scale, and the feature image comprises 5 scales in total, namely 256 x 32, 128 x 64, 64 x 128, 32 x 256 and 16 x 512; after the pooling layer, the size of the feature map is halved and the number of channels is doubled, and then the water area feature information of the image is extracted through two convolutional neural networks;
and a decoding module: the method is used for restoring the image size through convolution and 4 times of up-sampling in the decoding stage to obtain a water area extraction result; the feature images with the same size as the feature images corresponding to the encoding stage are fused through a global attention module after being sampled once in each decoding stage, and then boundary texture information of a water domain part in the feature images is integrated and extracted through a boundary learning unit; finally, outputting a binary gray scale map with the size of 256-1, wherein if the gray scale map has a value of 1, the binary gray scale map represents a water area part, and if the binary gray scale map has a value of 0, the binary gray scale map represents a non-water area part;
residual error learning module: the method is used for setting a shortcut connection for the convolutional neural network in a residual error learning module stage, improving the information circulation speed and the network training efficiency, and simultaneously increasing two convolutional layers to improve the capability of the model for extracting the characteristic information; the residual learning module takes a characteristic diagram with a coding stage scale of 2w x 2h x c as input, the characteristic diagram is directly added with an original characteristic diagram according to pixel one-to-one correspondence through a convolution kernel of c 3 x c, then the characteristic diagram is transformed and activated through softmax, the path is called shortcut connection, and the scales of the finally obtained characteristic diagram and the original characteristic diagram are consistent, and are all 2w x 2h x c;
a global attention module: the method comprises the steps of fusing semantic segmentation information of a decoding stage and position information of an encoding stage in a global attention module stage, and compressing and enhancing feature extraction information in a global average pooling weighting mode; the global attention module takes a feature map with a coding stage scale of 2w×2h×2c and a decoding stage scale of w×2c as input, obtains feature information after global average pooling of the feature map in a decoding stage, weights the feature map with the feature information as a weight value in the coding stage to obtain a new feature map, enables the up-sampled scale of the feature map in the decoding stage to be changed into 2w×2h×2c, and finally splices the feature map obtained after up-sampling and the weighted feature map according to channels to obtain a fused feature map, wherein the scale of the feature map is 2w×2h×3c;
boundary learning module: the method is used for forming a residual error learning module through convolution layers with different scales and shortcut connection shortcut in a boundary learning unit stage, adding a convolution kernel on a branch, eliminating an aliasing effect generated in the process of upsampling and fusing the feature images with different scales, and learning the feature information of remote sensing images with different scales; the boundary learning unit takes a feature map output by a global attention module stage as input, the feature map is subjected to information circulation through three different branches, the first branch is in quick connection and does not perform data transformation, the second branch is subjected to convolution with the size of 3 x 3c, the scale size of the feature map is changed from 2w x 2h x 3c to 2w x 2h x c, the third branch is subjected to convolution with the size of 3 x 3c, the scale size of the feature map is changed from 2w x 2h x 3c to 2w x 2h c, then the feature map of the three branches is subjected to convolution with the size of 3 x c, and finally the feature map with the aliasing eliminated is obtained by adding the feature maps according to pixels;
the model training module is used for setting training batch size and learning rate parameters according to the calculation performance and model parameter quantity of the display card, calling a train function to iteratively train the remote sensing image water area segmentation model WE-Net by using the training set, and verifying the remote sensing image water area segmentation model WE-Net after each round of training by using the verification set; the training process is visualized by taking the number of training wheels as a horizontal axis and the IOU value as a vertical axis, after tens of rounds of training, the IOU rises first and then approaches a certain IOU value infinitely, then the remote sensing image water area segmentation model WE-Net is converged, the remote sensing image water area segmentation model WE-Net parameters are saved, and the training is stopped; finally, the stored remote sensing image water area segmentation model WE-Net is called through a test function, and the accuracy of the remote sensing image water area segmentation model WE-Net is evaluated according to the IOU value calculated on the test set;
the model fine adjustment module is used for outputting a water area automatic extraction result by the trained remote sensing image water area segmentation model and then performing post-processing on the result by using the guide filtering GF and the conditional random field model CRF; the guide filtering is used for regarding the tag file as a guide image, taking an original image as an input image, and optimizing the boundary of a water area extraction result so as to eliminate salt and pepper noise; the binary potential function in the conditional random field constrains the colors and positions between any two pixel points, so that the pixel points with similar colors and adjacent positions are easier to have the same classification, and the smoothness between the adjacent pixel points is considered, so that the edge is smoothed, and the semantic segmentation result is fine-tuned;
the model application module is used for storing weight parameters after training and testing a remote sensing image water area segmentation model WE-Net to meet preset precision to obtain and store a network model WE-Net.h5, and a local machine is used as a server to issue REST service through a flash framework; the client side converts the remote sensing image into a base64 format character through a base64 coding tool, and transmits the character to the local server through a post request; the server responds to the post request, decodes the remote sensing image, calls a remote sensing image water area segmentation model WE-Net and a post-processing algorithm guide filtering GF and a conditional random field CRF, realizes automatic water area extraction, and returns an extraction result to the client through base64 coding.
CN202010493489.6A 2020-06-03 2020-06-03 Remote sensing image water area automatic extraction method and system based on deep learning Active CN111767801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010493489.6A CN111767801B (en) 2020-06-03 2020-06-03 Remote sensing image water area automatic extraction method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN111767801A CN111767801A (en) 2020-10-13
CN111767801B true CN111767801B (en) 2023-06-16

Family

ID=72719334

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10185891B1 (en) * 2016-07-08 2019-01-22 Gopro, Inc. Systems and methods for compact convolutional neural networks
CN109325395A (en) * 2018-04-28 2019-02-12 Twentieth Century Space Technology Application Co., Ltd. Image recognition method, convolutional neural network model training method and device
CN110781775A (en) * 2019-10-10 2020-02-11 Wuhan University Remote sensing image water body information accurate segmentation method supported by multi-scale features
CN110852225A (en) * 2019-10-31 2020-02-28 China University of Geosciences (Wuhan) Remote sensing image mangrove forest extraction method and system based on deep convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7400770B2 (en) * 2002-11-06 2008-07-15 Hrl Laboratories Method and apparatus for automatically extracting geospatial features from multispectral imagery suitable for fast and robust extraction of landmarks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Water Body Extraction From Very High-Resolution Remote Sensing Imagery Using Deep U-Net and a Superpixel-Based Conditional Random Field Model; Wenqing Feng et al.; IEEE Geoscience and Remote Sensing Letters; Full text *
Simulation of remote sensing image preprocessing based on an optimized guided filtering algorithm; Yao Hongyuan et al.; Computer Simulation; Vol. 36, No. 9; pp. 301-302 *
Research on a water body extraction model for high-resolution remote sensing images based on deep learning; Chen Qian et al.; Geography and Geo-Information Science; Vol. 35, No. 4; Full text *

Also Published As

Publication number Publication date
CN111767801A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN111767801B (en) Remote sensing image water area automatic extraction method and system based on deep learning
CN110852225B (en) Remote sensing image mangrove forest extraction method and system based on deep convolutional neural network
CN109919206B (en) Remote sensing image earth surface coverage classification method based on fully dilated (atrous) convolutional neural network
CN109934153B (en) Building extraction method based on gated deep residual optimization network
CN113780296B (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN113239830B (en) Remote sensing image cloud detection method based on full-scale feature fusion
CN113609889B (en) High-resolution remote sensing image vegetation extraction method based on sensitive characteristic focusing perception
CN110728197B (en) Single-tree-level tree species identification method based on deep learning
CN110991430B (en) Ground feature identification and coverage rate calculation method and system based on remote sensing image
Du et al. Segmentation and sampling method for complex polyline generalization based on a generative adversarial network
CN113239736B (en) Land coverage classification annotation drawing acquisition method based on multi-source remote sensing data
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN113256649B (en) Remote sensing image station selection and line selection semantic segmentation method based on deep learning
CN114694038A (en) High-resolution remote sensing image classification method and system based on deep learning
CN116645592B (en) Crack detection method based on image processing and storage medium
CN113486975A (en) Ground object classification method, device, equipment and storage medium for remote sensing image
CN113887472A (en) Remote sensing image cloud detection method based on cascade color and texture feature attention
CN116091937A (en) High-resolution remote sensing image ground object recognition model calculation method based on deep learning
CN117496347A (en) Remote sensing image building extraction method, device and medium
CN115527027A (en) Remote sensing image ground object segmentation method based on multi-feature fusion mechanism
CN115019163A (en) City factor identification method based on multi-source big data
CN114092803A (en) Cloud detection method and device based on remote sensing image, electronic device and medium
CN116630610A (en) ROI region extraction method based on semantic segmentation model and conditional random field
CN113516059B (en) Solid waste identification method and device, electronic device and storage medium
CN115019044A (en) Individual plant segmentation method and device, terminal device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant