CN112800851A - Water body contour automatic extraction method and system based on full convolution neural network - Google Patents

Water body contour automatic extraction method and system based on full convolution neural network

Info

Publication number
CN112800851A
Authority
CN
China
Prior art keywords: convolution, water body, layer, body contour, multiplied
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011633088.2A
Other languages
Chinese (zh)
Other versions
CN112800851B (en)
Inventor
余华芬
季顺平
顾春墚
聂晨晖
张志力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Institute Of Surveying And Mapping Science And Technology
Original Assignee
Zhejiang Institute Of Surveying And Mapping Science And Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Institute Of Surveying And Mapping Science And Technology filed Critical Zhejiang Institute Of Surveying And Mapping Science And Technology
Priority to CN202011633088.2A priority Critical patent/CN112800851B/en
Publication of CN112800851A publication Critical patent/CN112800851A/en
Application granted granted Critical
Publication of CN112800851B publication Critical patent/CN112800851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a water body contour automatic extraction method and system based on a fully convolutional neural network. The method comprises the following steps: respectively constructing a training sample library and an image prediction library; iteratively training a multi-receptive-field feature joint fully convolutional network on the training sample library to obtain a network model; and extracting water body contours from the image prediction library with the network model to obtain the surface water body contour extraction result. The training sample library is constructed from surface images and water body annotation data, and the image prediction library is constructed from surface images. The multi-receptive-field feature joint fully convolutional network is robust to scale, is suitable for extracting water bodies from high-resolution remote sensing images in different complex situations and at different scales, and can be continuously and iteratively optimized.

Description

Water body contour automatic extraction method and system based on full convolution neural network
Technical Field
The invention relates to remote sensing image processing, and in particular to a water body contour automatic extraction method and system based on a fully convolutional neural network.
Background
Water body extraction is of great significance for applications such as water resource monitoring, natural disaster assessment and environmental protection, and remote sensing is the most common technical means of acquiring surface information. Traditional methods mainly include band thresholding, supervised classification, water index and inter-spectral relationship methods. Their main approach is to design, empirically, a suitable feature from the spectral characteristics of each band of the remote sensing image for water identification; they pay little attention to features of the water body such as shape, size, texture, edges, semantics and shadow, which seriously limits extraction accuracy. In addition, when processing massive volumes of remote sensing data, traditional methods generally suffer from a low degree of automation, poor efficiency and low accuracy.
It is therefore important to improve the accuracy and automation of surface water extraction. Convolutional neural networks in deep learning perform extremely well in image classification, image retrieval, object detection and semantic segmentation, which is largely attributable to their strong feature representation capability. A convolutional neural network can abstract image features layer by layer using local operations and automatically learn a multi-level feature representation; this ability to learn features automatically surpasses traditional methods that design features empirically.
Extracting water bodies from remote sensing images requires particular attention to the water body contour; the interior of the water body is a secondary and simpler problem. The contour carries a variety of semantic information, such as the boundaries between ridges and paddy fields, banks and river water, channels and the ground, and shadows and weeds, which are the main difficulties in water body extraction. When producing various topographic and thematic maps, the plotter has to manually trace the contours of water bodies on satellite or aerial images, which is clearly a heavy and inefficient task. Efficient, automatic extraction of water bodies from remote sensing images is therefore very important.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method and a system for automatically extracting a water body contour based on a full convolution neural network.
In order to achieve this purpose, the invention adopts the following technical scheme: a water body contour automatic extraction method based on a fully convolutional neural network, comprising the following steps:
respectively constructing a training sample library and an image prediction library;
iteratively training a multi-receptive-field feature joint fully convolutional network on the training sample library to obtain a network model;
and extracting the water body contour of the image prediction library by using the network model to obtain an extraction result of the surface water body contour.
The training sample library is constructed by surface images and water body labeling data, and the image prediction library is constructed by surface images.
Preferably, constructing the training sample library includes:
acquiring surface images and water body annotation data;
preprocessing the water body annotation data to obtain surface image and label raster pairs;
and slicing the surface image and label raster pairs to obtain the training samples.
Preferably, the preprocessing comprises:
rasterizing the water body annotation data to obtain a label raster;
and resampling and cropping the surface image and the water body label raster to obtain the surface image and label raster pairs.
Preferably, the multi-receptive-field feature joint fully convolutional network comprises an encoding part, a decoding part and an output part, wherein
the encoding part consists of 5 multi-receptive-field feature combination modules and 4 max pooling layers;
the decoding part consists of 4 different semantic feature fusion modules and 4 upsampling layers;
the output part consists of 5 output layers and 1 characteristic multi-scale prediction fusion module.
Preferably, the 1st and 2nd multi-receptive-field feature combination modules of the encoding part are each preceded by a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a rectified linear unit;
the 3rd, 4th and 5th multi-receptive-field feature combination modules of the encoding part are each preceded and followed by a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a rectified linear unit; the max pooling layers of the encoding part have a stride of 2 × 2, and after each pooling layer the height and width of the output feature map are half those of the input feature map.
Preferably, the multi-receptive-field feature combination module consists of three feature extraction branches with different receptive fields, namely a short-distance feature extraction module, a middle-distance feature extraction module and a long-distance feature extraction module, wherein
the short-distance feature extraction module consists of a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a Sigmoid function;
the middle-distance feature extraction module consists of a convolution layer with a 7 × 7 kernel and a stride of 4, a batch normalization layer, a rectified linear unit, a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer, a rectified linear unit and a 4× bilinear upsampling layer;
the long-distance feature extraction module consists of a global pooling (GP) layer and two fully connected layers;
and the multi-receptive-field feature combination module further comprises a convolution layer with a 1 × 1 kernel and a stride of 1, a batch normalization layer and a rectified linear unit.
Preferably, the different-semantic feature fusion module of the decoding part includes:
applying a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit to the input to obtain a feature map F1;
applying global pooling to the feature map F1 to obtain a feature GP1, and then applying a convolution with a 1 × 1 kernel and a stride of 1, batch normalization, a rectified linear unit, a convolution with a 1 × 1 kernel and a stride of 1 and a Sigmoid function to obtain global information GP2;
performing the matrix operation GP2 * F1 + F1 to obtain a feature map F2;
applying a convolution with a 3 × 3 kernel and a stride of 1, batch normalization and a rectified linear unit to the feature map F2;
and the input of each convolution layer of the decoding part is the concatenation of the feature map obtained from the upsampling layer and the corresponding-size feature map of the encoding part.
Preferably, each output layer of the output part consists of a convolution layer with a 1 × 1 kernel and a stride of 1 and a Sigmoid function, and the feature multi-scale prediction fusion module specifically operates as follows:
the feature multi-scale prediction fusion module first applies a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit to the input to obtain a feature map F1;
F1 is then processed by convolutions with kernel sizes of 3 × 3, 5 × 5 and 7 × 7, batch normalization and Sigmoid functions to obtain W1, W2 and W3 respectively;
the concatenation of F1, F1 * W1, F1 * W2 and F1 * W3 is passed through a convolution with a 3 × 3 kernel and a stride of 1, batch normalization, a rectified linear unit and a convolution layer with a 1 × 1 kernel and a stride of 1 to obtain a feature map F2;
a Sigmoid function is applied to F2;
and the input of the feature multi-scale prediction fusion module of the output part is the concatenation of the upsampled results of the 3rd and 4th output layers with the result of the 5th output layer.
Preferably, after the surface water body contour extraction result is obtained, edge vectorization is carried out using the Douglas-Peucker algorithm.
The invention also provides a water body contour automatic extraction system based on a fully convolutional neural network, comprising:
a training sample library construction unit, configured to build training samples from surface images and water body annotation data;
an image prediction library construction unit, configured to build an image prediction library from surface images;
a network model training unit, configured to iteratively train on the training samples obtained by the training sample library construction unit to obtain a network model;
and a water body contour extraction unit, configured to extract water body contours from the image prediction library using the network model to obtain the surface water body contour extraction result.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, a training sample library can be constructed from existing high-resolution aerial or satellite imagery and water body annotation data; a multi-receptive-field feature joint fully convolutional network (MFU-FCN) is then trained to learn the characteristics of water bodies in high-resolution remote sensing images; after training, the trained parameters and network are used to predict on high-resolution remote sensing images, yielding a high-precision extraction result of the surface water coverage. The multi-receptive-field feature joint fully convolutional network is robust to scale, is suitable for extracting water bodies from high-resolution remote sensing images in different complex situations and at different scales, and can be continuously and iteratively optimized.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of the construction of the training sample library and the image prediction sample library in this embodiment.
Fig. 2 is a schematic diagram of the multi-receptive-field feature combination module in this embodiment.
Fig. 3 is a schematic diagram of the middle-distance feature extraction module in this embodiment.
fig. 4 is a schematic diagram of the long distance feature extraction module in this embodiment.
Fig. 5 is a schematic diagram of the feature multi-scale prediction fusion module in this embodiment.
Fig. 6 is a schematic structural diagram of the multi-receptive-field feature joint fully convolutional network in this embodiment.
Fig. 7 is a schematic diagram of a framework of an automatic water body contour extraction system based on a full convolution neural network in this embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to Fig. 1, Fig. 2, Fig. 3, Fig. 4, Fig. 5 and Fig. 6, this embodiment uses a multi-receptive-field feature joint fully convolutional network (MFU-FCN) to learn the characteristics of water bodies in high-resolution satellite or aerial remote sensing images and then performs pixel-level prediction of water coverage on the remote sensing images. The method specifically comprises the following steps:
s1, respectively constructing a training sample library and an image prediction library;
Training the network model requires training samples. The training sample library is constructed from surface images and water body annotation data, and the image prediction library is constructed from surface images. The construction of the training sample library is shown in Fig. 1 and proceeds as follows:
first, high-resolution satellite or aerial imagery and water body annotation vector data are prepared;
then, the data are preprocessed: the water body annotation vectors are rasterized, and the imagery and label raster are resampled and cropped to obtain image and label raster pairs with a suitable resolution and a consistent size;
finally, taking into account factors such as available GPU memory and the characteristics of the ground features, a training sample library with a suitable slice size (for example, 512 × 512 or 256 × 256) is produced. In addition, the same preprocessing is applied to the imagery to be predicted to build an image prediction library for direct prediction by the subsequent model. Note that the image prediction library does not contain water body annotation data.
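The slicing step can be illustrated with a minimal NumPy sketch. The function name, array shapes and the choice to discard incomplete border tiles are assumptions for illustration rather than the patent's implementation; the image prediction library would be tiled the same way, only without a label raster.

```python
import numpy as np

def slice_into_tiles(image, label, tile_size=512):
    """Slice an aligned image / label-raster pair into square training tiles.

    image: (H, W, C) array, e.g. an RGB or multispectral surface image.
    label: (H, W) array, the rasterized water body annotation (1 = water, 0 = background).
    Border regions smaller than tile_size are discarded here for simplicity.
    """
    assert image.shape[:2] == label.shape, "image and label raster must be co-registered"
    tiles = []
    h, w = label.shape
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            img_tile = image[top:top + tile_size, left:left + tile_size]
            lbl_tile = label[top:top + tile_size, left:left + tile_size]
            tiles.append((img_tile, lbl_tile))
    return tiles

# Example: a hypothetical 2048 x 2048 3-band image and its label raster
# yield 16 tiles of 512 x 512 for the training sample library.
image = np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)
label = np.random.randint(0, 2, (2048, 2048), dtype=np.uint8)
samples = slice_into_tiles(image, label, tile_size=512)
print(len(samples))  # 16
```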
S2, iteratively training the multi-receptive-field feature joint fully convolutional network on the training sample library to obtain a network model;
the multi-field feature joint full convolution network in this embodiment includes 3 parts, namely encoding (encoding stage), decoding (decoding stage), and outputting (output). Wherein the content of the first and second substances,
the coding part consists of 5 Multi-field features units (MFU) and 4 Max Pooling layers (Max Pooling Layer);
the decoding part consists of 4 Different semantic feature fusion modules (DSFF) and 4 Upsampling layers (Upsampling Layer);
the output section consists of 5 output layers and 1 feature Multi-scale prediction fusion Module (MPF).
The 1st and 2nd multi-receptive-field feature combination modules of the encoding part are each preceded by a convolution layer (Convolution) with a 3 × 3 kernel and a stride of 1, a batch normalization layer (BN) and a rectified linear unit (ReLU). The 3rd, 4th and 5th multi-receptive-field feature combination modules of the encoding part are each preceded and followed by a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a rectified linear unit.
In addition, the multi-receptive-field feature combination module in this embodiment consists of three feature extraction branches with different receptive fields: a short-distance feature extraction module, a middle-distance feature extraction module and a long-distance feature extraction module.
The short-distance feature extraction module consists of a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a Sigmoid function; the middle-distance feature extraction module consists of a convolution layer with a 7 × 7 kernel and a stride of 4, a batch normalization layer, a rectified linear unit, a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer, a rectified linear unit and a 4× bilinear upsampling layer; the long-distance feature extraction module consists of a global pooling (GP) layer and two fully connected layers; and the multi-receptive-field feature combination module further comprises a convolution layer with a 1 × 1 kernel and a stride of 1, a batch normalization layer and a rectified linear unit.
The max pooling layers of the encoding part have a stride of 2 × 2; after each pooling layer, the height and width of the output feature map are half those of the input.
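As a concrete illustration, the following PyTorch sketch assembles one multi-receptive-field feature combination module from the three branches described above. The text does not spell out how the branch outputs are merged, so this sketch assumes the long-distance channel vector is broadcast spatially and the three branch outputs are concatenated before the final 1 × 1 fusion convolution; the channel counts and the ReLU between the two fully connected layers are likewise assumptions.

```python
import torch
import torch.nn as nn

class MultiReceptiveFieldUnit(nn.Module):
    """Sketch of one multi-receptive-field feature combination module (MFU)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Short-distance branch: 3x3 conv (stride 1) + BN + Sigmoid.
        self.short = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.Sigmoid(),
        )
        # Middle-distance branch: 7x7 conv (stride 4) + BN + ReLU,
        # 3x3 conv (stride 1) + BN + ReLU, then 4x bilinear upsampling.
        self.middle = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=7, stride=4, padding=3),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        # Long-distance branch: global pooling + two fully connected layers.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.long = nn.Sequential(
            nn.Linear(in_ch, out_ch),
            nn.ReLU(inplace=True),  # assumed activation between the two FC layers
            nn.Linear(out_ch, out_ch),
        )
        # Fusion: 1x1 conv (stride 1) + BN + ReLU over the concatenated branches.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * out_ch, out_ch, kernel_size=1, stride=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Assumes the input height/width are divisible by 4 so the middle
        # branch upsamples back to the input size.
        n, _, h, w = x.shape
        s = self.short(x)
        m = self.middle(x)
        g = self.long(self.global_pool(x).flatten(1))   # (N, out_ch)
        g = g.view(n, -1, 1, 1).expand(-1, -1, h, w)    # broadcast spatially
        return self.fuse(torch.cat([s, m, g], dim=1))

# Example: a 512 x 512 tile with 64 input channels keeps its spatial size.
y = MultiReceptiveFieldUnit(64, 64)(torch.randn(1, 64, 512, 512))
print(y.shape)  # torch.Size([1, 64, 512, 512])
```

In the full encoder, five such modules would alternate with the 2 × 2 max pooling layers and the surrounding 3 × 3 convolution blocks described above.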
In this embodiment, the different-semantic feature fusion module of the decoding part first applies a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit to the input to obtain a feature map F1; global pooling is then applied to F1 to obtain a feature GP1, after which a convolution with a 1 × 1 kernel and a stride of 1, batch normalization, a rectified linear unit, a convolution with a 1 × 1 kernel and a stride of 1 and a Sigmoid function are applied to obtain global information GP2; the matrix operation GP2 * F1 + F1 is then performed to obtain a feature map F2; finally, a convolution with a 3 × 3 kernel and a stride of 1, batch normalization and a rectified linear unit are applied to F2.
The input of each convolution layer of the decoding part is the concatenation of the feature map obtained from the upsampling layer and the corresponding-size feature map of the encoding part.
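A minimal PyTorch sketch of one different-semantic feature fusion module (DSFF) follows. The channel counts and class/argument names are illustrative assumptions; the layer sequence and the GP2 * F1 + F1 fusion follow the description above.

```python
import torch
import torch.nn as nn

class DifferentSemanticFeatureFusion(nn.Module):
    """Sketch of one different-semantic feature fusion module (DSFF) of the decoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Input projection: 1x1 conv + BN + ReLU -> F1.
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Global branch: global pooling, then 1x1 conv + BN + ReLU,
        # 1x1 conv + Sigmoid -> per-channel global information GP2.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.global_info = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=1, stride=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=1, stride=1),
            nn.Sigmoid(),
        )
        # Output refinement: 3x3 conv + BN + ReLU on F2 = GP2 * F1 + F1.
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, upsampled, skip):
        # The decoder input is the concatenation of the upsampled feature map
        # and the same-size feature map from the encoder.
        x = torch.cat([upsampled, skip], dim=1)
        f1 = self.proj(x)
        gp2 = self.global_info(self.global_pool(f1))  # (N, C, 1, 1), broadcast below
        f2 = gp2 * f1 + f1
        return self.refine(f2)

# Example: fuse a 64-channel upsampled map with a 64-channel encoder skip map.
# Batch size 2 so the BatchNorm in the pooled global branch sees more than one
# value per channel in training mode.
dsff = DifferentSemanticFeatureFusion(in_ch=128, out_ch=64)
out = dsff(torch.randn(2, 64, 128, 128), torch.randn(2, 64, 128, 128))
print(out.shape)  # torch.Size([2, 64, 128, 128])
```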
In this embodiment, each output layer of the output part consists of a convolution layer with a 1 × 1 kernel and a stride of 1 and a Sigmoid function. The feature multi-scale prediction fusion module of the output part operates as follows:
first, a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit are applied to the input to obtain a feature map F1;
F1 is then processed by convolutions with kernel sizes of 3 × 3, 5 × 5 and 7 × 7, batch normalization and Sigmoid functions to obtain W1, W2 and W3 respectively; the concatenation of F1, F1 * W1, F1 * W2 and F1 * W3 is then passed through a convolution with a 3 × 3 kernel and a stride of 1, batch normalization, a rectified linear unit and a convolution layer with a 1 × 1 kernel and a stride of 1 to obtain a feature map F2;
finally, a Sigmoid function is applied to F2.
The input of the feature multi-scale prediction fusion module of the output part is the concatenation of the upsampled results of the 3rd and 4th output layers with the result of the 5th output layer.
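The following PyTorch sketch shows one way to realize the feature multi-scale prediction fusion module. The intermediate channel count (mid_ch), the single-channel output and the "same" padding choices are assumptions for illustration; the branch structure follows the description above.

```python
import torch
import torch.nn as nn

def conv_bn_act(in_ch, out_ch, k, act):
    """k x k conv (stride 1, 'same' padding) + BN + the given activation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=1, padding=k // 2),
        nn.BatchNorm2d(out_ch),
        act,
    )

class MultiScalePredictionFusion(nn.Module):
    """Sketch of the feature multi-scale prediction fusion module (MPF)."""
    def __init__(self, in_ch, mid_ch=16):
        super().__init__()
        # 1x1 conv + BN + ReLU -> F1.
        self.proj = conv_bn_act(in_ch, mid_ch, 1, nn.ReLU(inplace=True))
        # 3x3 / 5x5 / 7x7 conv + BN + Sigmoid -> weight maps W1, W2, W3.
        self.w1 = conv_bn_act(mid_ch, mid_ch, 3, nn.Sigmoid())
        self.w2 = conv_bn_act(mid_ch, mid_ch, 5, nn.Sigmoid())
        self.w3 = conv_bn_act(mid_ch, mid_ch, 7, nn.Sigmoid())
        # Concatenate [F1, F1*W1, F1*W2, F1*W3], then 3x3 conv + BN + ReLU
        # and a 1x1 conv down to one channel -> F2, followed by Sigmoid.
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * mid_ch, mid_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, kernel_size=1, stride=1),
        )

    def forward(self, x):
        f1 = self.proj(x)
        stacked = torch.cat(
            [f1, f1 * self.w1(f1), f1 * self.w2(f1), f1 * self.w3(f1)], dim=1)
        f2 = self.fuse(stacked)
        return torch.sigmoid(f2)  # per-pixel water probability

# Example: the MPF input would be the concatenation of the upsampled 3rd and
# 4th output-layer results with the 5th output-layer result (3 channels here).
mpf = MultiScalePredictionFusion(in_ch=3)
prob = mpf(torch.randn(2, 3, 512, 512))
print(prob.shape)  # torch.Size([2, 1, 512, 512])
```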
S3, extracting water body contours from the image prediction library using the network model to obtain the surface water body contour extraction result.
In this embodiment, after the training sample library has been built, the network model is trained iteratively until it reaches its best performance. Once training is complete, the trained model is used to extract water bodies from the image prediction library, yielding the remote sensing image water body extraction result. After the water body extraction result is obtained, the water body edges are vectorized from the raster using the Douglas-Peucker algorithm.
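The raster-to-vector step can be sketched with OpenCV, whose cv2.approxPolyDP implements Douglas-Peucker simplification. The function name, tolerance values, the outer-contours-only choice and the OpenCV 4 two-value return of cv2.findContours are assumptions for illustration; a production pipeline would additionally apply the image's geotransform to convert the pixel-coordinate polygons into map coordinates.

```python
import cv2
import numpy as np

def vectorize_water_mask(mask, epsilon_px=2.0, min_area_px=64):
    """Turn a binary water prediction raster into simplified polygon outlines.

    mask: (H, W) uint8 array with 1 for water pixels, 0 otherwise.
    epsilon_px: Douglas-Peucker simplification tolerance, in pixels.
    min_area_px: drop speckle polygons smaller than this area.
    Returns a list of (N, 2) arrays of polygon vertices in pixel coordinates.
    """
    # Outer contours only, for brevity; islands inside water bodies are ignored here.
    contours, _ = cv2.findContours((mask > 0).astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area_px:
            continue
        # cv2.approxPolyDP implements the Douglas-Peucker algorithm.
        simplified = cv2.approxPolyDP(contour, epsilon_px, True)
        polygons.append(simplified.reshape(-1, 2))
    return polygons

# Example: a hypothetical 512 x 512 prediction with one rectangular water body.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:300, 150:400] = 1
outlines = vectorize_water_mask(mask)
print(len(outlines), outlines[0].shape)  # 1 polygon, roughly 4 vertices
```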
Referring to Fig. 7, this embodiment further provides a system for automatic water body contour extraction based on a fully convolutional neural network, comprising:
a training sample library construction unit 1, configured to build training samples from surface images and water body annotation data;
an image prediction library construction unit 2, configured to build an image prediction library from surface images;
a network model training unit 3, configured to iteratively train on the training samples obtained by the training sample library construction unit to obtain a network model;
and a water body contour extraction unit 4, configured to extract water body contours from the image prediction library using the network model to obtain the surface water body contour extraction result.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
constructing a training sample library and an image prediction library;
iteratively training a multi-receptive-field feature joint fully convolutional network on the training sample library to obtain a network model;
and extracting the water body contour of the image prediction library by using the network model to obtain an extraction result of the surface water body contour.
The training sample library is constructed by surface images and water body labeling data, and the image prediction library is constructed by surface images.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium capable of storing program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An automatic water body contour extraction method based on a fully convolutional neural network, characterized by comprising the following steps:
respectively constructing a training sample library and an image prediction library;
iteratively training a multi-receptive-field feature joint fully convolutional network on the training sample library to obtain a network model;
extracting the water body contour of the image prediction library by using the network model to obtain an extraction result of the surface water body contour;
the training sample library is constructed by surface images and water body labeling data, and the image prediction library is constructed by surface images.
2. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 1, wherein constructing the training sample library comprises:
acquiring surface images and water body annotation data;
preprocessing the water body annotation data to obtain surface image and label raster pairs;
and slicing the surface image and label raster pairs to obtain the training samples.
3. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 2, wherein the preprocessing comprises:
rasterizing the water body annotation data to obtain a label raster;
and resampling and cropping the surface image and the water body label raster to obtain the surface image and label raster pairs.
4. The method for automatically extracting the water body contour based on the fully convolutional neural network according to any one of claims 1 to 3, wherein the multi-receptive-field feature joint fully convolutional network comprises an encoding part, a decoding part and an output part, wherein
the encoding part consists of 5 multi-receptive-field feature combination modules and 4 max pooling layers;
the decoding part consists of 4 different-semantic feature fusion modules and 4 upsampling layers;
the output part consists of 5 output layers and 1 feature multi-scale prediction fusion module.
5. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 4, wherein the 1st and 2nd multi-receptive-field feature combination modules of the encoding part are each preceded by a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a rectified linear unit;
the 3rd, 4th and 5th multi-receptive-field feature combination modules of the encoding part are each preceded and followed by a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a rectified linear unit; the max pooling layers of the encoding part have a stride of 2 × 2, and after each pooling layer the height and width of the output feature map are half those of the input feature map.
6. The method according to claim 4, wherein the multi-receptive-field feature combination module consists of three feature extraction branches with different receptive fields, namely a short-distance feature extraction module, a middle-distance feature extraction module and a long-distance feature extraction module, wherein
the short-distance feature extraction module consists of a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer and a Sigmoid function;
the middle-distance feature extraction module consists of a convolution layer with a 7 × 7 kernel and a stride of 4, a batch normalization layer, a rectified linear unit, a convolution layer with a 3 × 3 kernel and a stride of 1, a batch normalization layer, a rectified linear unit and a 4× bilinear upsampling layer;
the long-distance feature extraction module consists of a global pooling (GP) layer and two fully connected layers;
and the multi-receptive-field feature combination module further comprises a convolution layer with a 1 × 1 kernel and a stride of 1, a batch normalization layer and a rectified linear unit.
7. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 4, wherein the different-semantic feature fusion module of the decoding part includes:
applying a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit to the input to obtain a feature map F1;
applying global pooling to the feature map F1 to obtain a feature GP1, and then applying a convolution with a 1 × 1 kernel and a stride of 1, batch normalization, a rectified linear unit, a convolution with a 1 × 1 kernel and a stride of 1 and a Sigmoid function to obtain global information GP2;
performing the matrix operation GP2 * F1 + F1 to obtain a feature map F2;
applying a convolution with a 3 × 3 kernel and a stride of 1, batch normalization and a rectified linear unit to the feature map F2;
and the input of each convolution layer of the decoding part is the concatenation of the feature map obtained from the upsampling layer and the corresponding-size feature map of the encoding part.
8. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 4, wherein each output layer of the output part consists of a convolution layer with a 1 × 1 kernel and a stride of 1 and a Sigmoid function, and the feature multi-scale prediction fusion module specifically operates as follows:
the feature multi-scale prediction fusion module first applies a convolution with a 1 × 1 kernel and a stride of 1, batch normalization and a rectified linear unit to the input to obtain a feature map F1;
F1 is then processed by convolutions with kernel sizes of 3 × 3, 5 × 5 and 7 × 7, batch normalization and Sigmoid functions to obtain W1, W2 and W3 respectively;
the concatenation of F1, F1 * W1, F1 * W2 and F1 * W3 is passed through a convolution with a 3 × 3 kernel and a stride of 1, batch normalization, a rectified linear unit and a convolution layer with a 1 × 1 kernel and a stride of 1 to obtain a feature map F2;
a Sigmoid function is applied to F2;
and the input of the feature multi-scale prediction fusion module of the output part is the concatenation of the upsampled results of the 3rd and 4th output layers with the result of the 5th output layer.
9. The method for automatically extracting the water body contour based on the fully convolutional neural network according to claim 1, wherein after a surface water body contour extraction result is obtained, edge vectorization is performed using the Douglas-Peucker algorithm.
10. An automatic water body contour extraction system based on a fully convolutional neural network, characterized by comprising:
a training sample library construction unit, configured to build training samples from surface images and water body annotation data;
an image prediction library construction unit, configured to build an image prediction library from surface images;
a network model training unit, configured to iteratively train on the training samples obtained by the training sample library construction unit to obtain a network model;
and a water body contour extraction unit, configured to extract water body contours from the image prediction library using the network model to obtain the surface water body contour extraction result.
CN202011633088.2A 2020-12-31 2020-12-31 Water body contour automatic extraction method and system based on full convolution neural network Active CN112800851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011633088.2A CN112800851B (en) 2020-12-31 2020-12-31 Water body contour automatic extraction method and system based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011633088.2A CN112800851B (en) 2020-12-31 2020-12-31 Water body contour automatic extraction method and system based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN112800851A true CN112800851A (en) 2021-05-14
CN112800851B CN112800851B (en) 2022-09-23

Family

ID=75808446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011633088.2A Active CN112800851B (en) Water body contour automatic extraction method and system based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN112800851B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3614308A1 (en) * 2018-08-24 2020-02-26 Ordnance Survey Limited Joint deep learning for land cover and land use classification
CN110110692A (en) * 2019-05-17 2019-08-09 南京大学 A kind of realtime graphic semantic segmentation method based on the full convolutional neural networks of lightweight
CN110334656A (en) * 2019-07-08 2019-10-15 中国人民解放军战略支援部队信息工程大学 Multi-source Remote Sensing Images Clean water withdraw method and device based on information source probability weight
CN111860351A (en) * 2020-07-23 2020-10-30 中国石油大学(华东) Remote sensing image fishpond extraction method based on line-row self-attention full convolution neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZIMING MIAO, ET AL.: "Automatic Water-Body Segmentation From High-Resolution Satellite Images via Deep Networks", IEEE GEOSCIENCE AND REMOTE SENSING LETTERS *
WANG XUE, ET AL.: "Fully convolutional neural networks for water body extraction from remote sensing imagery" (全卷积神经网络用于遥感影像水体提取), BULLETIN OF SURVEYING AND MAPPING (测绘通报) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378731A (en) * 2021-06-17 2021-09-10 武汉大学 Green space water system vector extraction method based on convolutional neural network and edge energy constraint optimization
CN115423829A (en) * 2022-07-29 2022-12-02 江苏省水利科学研究院 Method and system for rapidly extracting water body from single-band remote sensing image
CN115423829B (en) * 2022-07-29 2024-03-01 江苏省水利科学研究院 Method and system for rapidly extracting water body of single-band remote sensing image

Also Published As

Publication number Publication date
CN112800851B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN108154192B (en) High-resolution SAR terrain classification method based on multi-scale convolution and feature fusion
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
CN111767801A (en) Remote sensing image water area automatic extraction method and system based on deep learning
Abdollahi et al. Improving road semantic segmentation using generative adversarial network
CN111860233B (en) SAR image complex building extraction method and system based on attention network selection
CN112800851B (en) Water body contour automatic extraction method and system based on full convolution neuron network
CN110570440A (en) Image automatic segmentation method and device based on deep learning edge detection
CN111680755B (en) Medical image recognition model construction and medical image recognition method, device, medium and terminal
CN112561876A (en) Image-based pond and reservoir water quality detection method and system
CN116994140A (en) Cultivated land extraction method, device, equipment and medium based on remote sensing image
CN111524117A (en) Tunnel surface defect detection method based on characteristic pyramid network
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
CN114783034A (en) Facial expression recognition method based on fusion of local sensitive features and global features
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
Li et al. A CNN-GCN framework for multi-label aerial image scene classification
CN115375548A (en) Super-resolution remote sensing image generation method, system, equipment and medium
Lu et al. Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery
CN116309612B (en) Semiconductor silicon wafer detection method, device and medium based on frequency decoupling supervision
CN111274936B (en) Multispectral image ground object classification method, system, medium and terminal
CN114998756A (en) Yolov 5-based remote sensing image detection method and device and storage medium
CN113743487A (en) Enhanced remote sensing image target detection method and system
CN111967292A (en) Lightweight SAR image ship detection method
CN117788296B (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
CN114418001B (en) Character recognition method and system based on parameter reconstruction network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant