CN114119630A - Coastline deep learning remote sensing extraction method based on coupling map features - Google Patents

Coastline deep learning remote sensing extraction method based on coupling map features

Info

Publication number
CN114119630A
Authority
CN
China
Prior art keywords
image
coastline
sea
remote sensing
land
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111332318.6A
Other languages
Chinese (zh)
Inventor
田森
叶秋果
陈军
蔺楠
孙记红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
92859 TROOPS PLA
Shaanxi Jiuzhou Remote Sensing Information Technology Co ltd
Xian Jiaotong University
Qingdao Institute of Marine Geology
Original Assignee
92859 TROOPS PLA
Shaanxi Jiuzhou Remote Sensing Information Technology Co ltd
Xian Jiaotong University
Qingdao Institute of Marine Geology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 92859 TROOPS PLA, Shaanxi Jiuzhou Remote Sensing Information Technology Co ltd, Xian Jiaotong University, Qingdao Institute of Marine Geology
Priority to CN202111332318.6A
Publication of CN114119630A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a coastline deep learning remote sensing extraction method based on coupling map features. A deep learning multiband shoreline extraction model automatically compares and analyzes the sea-land binary image obtained from the training set samples with the labeled sea-land binary image, then performs back-propagation to optimize the network and self-learns, yielding a sea-land binary image network model. The remote sensing image is then input into the sea-land binary image network model, which performs quality control on the input image; vectorization and coastline generation are carried out on the resulting sea-land segmentation region binary image, and the coastline of the coastal-region remote sensing image is finally obtained. By means of map feature coupling, the method solves the problem of low coastline extraction precision, improves the coastline extraction speed, and provides support for automatically and efficiently extracting coastlines from high-resolution images.

Description

Coastline deep learning remote sensing extraction method based on coupling map features
Technical Field
The invention relates to the technical field of remote sensing detection, in particular to a coastline deep learning remote sensing extraction method based on coupling map features.
Background
With the rapid economic development of coastal areas, urbanization has accelerated, but a series of ecological problems has followed: natural shorelines shrink, artificial shorelines expand, shoreline shapes change, and in some cases natural bays have even disappeared in the name of cost saving. These changes degrade the ecological environment and water quality of the coastal zone and aggravate seawater pollution, leaving the coastal-zone ecosystem increasingly fragile.
Remote sensing is a technology that emerged in the late 1950s for space-based observation of the Earth and exploration of the universe. As a high technology, it offers the unique advantages of large-area synchronous data acquisition and real-time dynamic monitoring, giving it unmatched strengths in coastal-zone resource development, environmental monitoring, management, planning and evaluation, marine environmental protection, marine navigation and production safety, and providing powerful scientific and technological support for China's sustainable economic development. Using remote sensing to monitor coastline reclamation in real time, analyze its changes and implement protective measures promptly according to the specific situation is fast and efficient, with advantages that other monitoring technologies cannot match.
With the rapid development of aviation and aerospace technology in recent years, sensors of ever higher resolution have been carried on various flight platforms, enabling short-term repeated observation of the same region and accumulating massive remote sensing data; such remote sensing big data is characterized by many sensor types, high transmission speed and high resolution. Artificial intelligence techniques abstract the features of the raw input data through nonlinear mathematical models. Unlike traditional information extraction methods, they do not rely on hand-crafted features but learn object features automatically, ensuring accurate and efficient information extraction. At present, artificial intelligence can process massive basic geographic information data automatically and intelligently, with advantages traditional methods cannot match. Combining remote sensing big data with artificial intelligence therefore greatly promotes accurate coastline extraction and coastal-zone disaster prevention and control.
At present, scholars at home and abroad have proposed a plurality of methods for extracting coastlines of remote sensing images and research results, and the methods for automatically extracting coastlines are mainly classified into four types: threshold segmentation, edge detection, object-oriented classification, and region-growing extraction.
The threshold segmentation method analyzes the pixel values of the remote sensing image, selects a series of segmentation thresholds, and uses them to divide the image into different regions. The edge detection method exploits the large change in gray value across a boundary: points with obvious brightness changes in the digital image are identified to decide whether a pixel lies on the boundary. The object-oriented classification method was proposed for high-resolution imagery: the image is first segmented so that pixels with the same characteristics form homogeneous objects, the relevant feature attributes of the target land class are analyzed, corresponding fuzzy discrimination rules are established, and the segmented homogeneous objects are then classified and information is extracted. The region-growing extraction method merges pixels with similar properties into the same region and keeps growing outward until no further pixels satisfying the condition can be included. A simple threshold-segmentation sketch is given below.
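As a simple illustration of the first of these methods, a single global threshold can separate water from land on one band; the choice of the near-infrared band and of Otsu's method for selecting the threshold are assumptions for illustration only, not part of the described prior art.

```python
# Simple illustration of the threshold-segmentation idea: one global threshold,
# chosen here with Otsu's method on the near-infrared band (band choice and
# library are assumptions for illustration only).
import numpy as np
from skimage.filters import threshold_otsu

def threshold_sea_land(nir_band):
    """Return a rough sea-land binary map: 1 = water (low NIR), 0 = land."""
    t = threshold_otsu(nir_band.astype(float))   # data-driven segmentation threshold
    return (nir_band < t).astype(np.uint8)       # water reflects little near-infrared
```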
Each of the above coastline extraction methods has its merits, but when the remote sensing image quality is poor or coastline noise is strong, large errors arise. Moreover, most models use only the red, green and blue bands of the image, ignoring the great advantage of the many bands available in remote sensing big data: the near-infrared, mid-infrared and other bands carry information invisible to the human eye. How to monitor and extract the coastline rapidly, accurately and comprehensively is therefore a key problem to be solved, and solving it would greatly promote precise coastline extraction and coastal-zone disaster prevention and control.
Disclosure of Invention
The invention provides a coastline deep learning remote sensing extraction method based on coupling map features to solve the defects of large error, low precision and the like of the existing remote sensing image coastline extraction method so as to realize rapid, accurate and comprehensive monitoring and extraction of coastlines.
The invention is realized by adopting the following technical scheme: a coastline deep learning remote sensing extraction method based on coupling map features comprises the following steps:
step S1, remote sensing image preprocessing and information extraction:
step S11, marking the preprocessing result of the original remote sensing satellite image, acquiring a water part and a land part of the remote sensing satellite image, and obtaining a marked sea-land binary image;
s12, cutting the labeled sea-land binary image obtained in S11 to obtain a training set and a test set, wherein the remote sensing image in the data set comprises four wave bands of RGB and near infrared wave bands to realize the coupling of the image and the multi-wave band spectrum;
step S2, constructing a deep learning network model and training:
s21, building a deep learning multiband shoreline extraction model, inputting the training set sample obtained in the step S1 into the deep learning multiband shoreline extraction model for training, automatically comparing and analyzing a sea-land binary image obtained by training the training set sample with a labeled sea-land binary image by the deep learning multiband shoreline extraction model, and then performing back propagation optimization network and self-learning to obtain a sea-land binary image network model;
step S22, inputting the test set obtained in the step S1 into the sea-land binary diagram network model obtained in the step S21 for accuracy detection; inputting the remote sensing satellite image into a sea-land binary image network model, and performing quality control on the input remote sensing satellite image by the sea-land binary image network model;
step S23, when the sea-land binary image network model does not meet the quality control condition, repeating the operation of S22 until the sea-land binary image network model meets the quality control condition; when the sea-land binary map network model satisfies the condition, executing step S3;
step S3, coastline extraction and vectorization: and carrying out vectorization and coastline generation operation on the sea-land segmentation region binary image extracted by the S2 model to obtain a coastline of the remote sensing image of the coastline region.
Further, in step S2, the image reading size of the deep learning network model and the size of the convolution kernel in the convolution layer are increased to four bands, namely RGB and near-infrared; in step S21, model training is carried out based on a semantic segmentation network, and different semantic categories are respectively allocated to the water body and the non-water body regions of the input image aiming at water body identification, so that the water body and the non-water body regions are identified; the deep learning network model comprises a convolutional layer, a pooling layer, an activation function, a deconvolution layer and an anti-pooling layer.
Further, in step S22, in the accuracy detection, the average intersection ratio evaluation is adopted, and the intersection ratio between the truly classified image element and the image element predicted by the model is calculated, that is:
MIoU = (1/k) · Σ_{i=1..k} [ P_ii / ( Σ_{j=1..k} P_ij + Σ_{j=1..k} P_ji − P_ii ) ]   (2)
where k is the total number of classes considered (water body and land), i and j denote the true and predicted classes respectively, and P_ij denotes the number of pixels of class i predicted as class j.
Further, when performing the quality control in step S22, the following method is specifically adopted:
(1) strip repair: when the image has a damaged strip, calculating pixel points without values in the image according to surrounding values, and adopting a cubic convolution interpolation algorithm to realize repair, namely:
F(i, j) = Σ_{m=i−1..i+2} Σ_{n=j−1..j+2} f(m, n) · S(row) · S(col)   (3)
where row and col denote the row and column offsets between each of the pixels in the cubic convolution interpolation window of the original image and the point to be calculated, the summation indices range over [i−1, i+2] and [j−1, j+2], f denotes the original pixel value, F denotes the interpolated value, and S(x) is the sampling formula;
(2) cloud and snow removal correction: for the correction of interference of cloud weather and snow season influences on the remote sensing image, the influences of cloud layers and snow are eliminated in an original image by combining the normalized vegetation index NDVI and the normalized snow index NDSI, wherein the calculation formulas of the NDVI and the NDSI are shown in a formula (4) and a formula (5):
NDVI=(NIR-R)/(NIR+R) (4)
NDSI=(G-SWIR)/(G+SWIR) (5)
When the cloud coverage recorded in the Landsat image header file is below a certain percentage, the above operations are applied; otherwise the image data quality does not meet the requirement and the shoreline cannot be extracted.
Further, in step S3, the vectorization and the shore-line tracing operations are performed based on a third-party open-source GDAL library in Python, and specifically include the following contents:
firstly, reading an extraction result of a deep learning network model, reading an image, converting the image into vector surface elements, deleting land areas in the vector surface elements by using an element selection function, calculating the area of each surface element by using an area calculation function, and deleting elements with undersized areas;
and (3) performing statistical reading on the coordinate points in the coastline area by using a coordinate point reading function, creating line vector elements by using a vector creating function, and writing all coordinate point information to obtain final coastline vector line element data.
Compared with the prior art, the invention has the advantages and positive effects that:
according to the scheme, the remote sensing image selects four wave bands of RGB and near infrared wave bands, and the image and the spectrum multi-wave band are combined to play the great advantages of the multi-wave band multi-information of the remote sensing image; building a deep learning network model and modifying a model structure so as to fully extract water body characteristics from a near infrared band of a remote sensing image and improve the extraction accuracy; the model automatically compares and analyzes the sea-land binary image obtained by training the training set sample with the labeled sea-land binary image, and then performs back propagation optimization network and self learning to obtain a sea-land binary image network model; and then inputting the remote sensing image into a sea-land binary image network model, carrying out quality control on the input remote sensing image by the sea-land binary image network model, carrying out vectorization and coastline generation operation on the obtained sea-land segmentation region binary image, and finally obtaining the coastline of the coastline region remote sensing image, so that the coastline can be rapidly, accurately and comprehensively monitored and extracted, and the coastline accurate extraction and the coastline disaster prevention and control can be greatly promoted.
Drawings
FIG. 1 is a schematic flow chart of an automatic extraction method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the bands of a remote sensing image used in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a sample after manual labeling of a water body according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a deep learning model according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating results of a deep learning network model after operation according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a shore line obtained by vectorizing a result after a model is run in the embodiment of the present invention;
fig. 7 is a schematic diagram illustrating the final shoreline and the original image in an embodiment of the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention more clearly understood, the present invention will be further described with reference to the accompanying drawings and examples. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and thus, the present invention is not limited to the specific embodiments disclosed below.
The embodiment provides a coastline deep learning remote sensing extraction method based on coupling map features, which aims to solve the technical problems of low coastline automatic extraction speed, low extraction precision and difficulty in monitoring, and comprises the following steps as shown in fig. 1:
step S1, remote sensing image preprocessing and information extraction:
step S11, marking the preprocessing result of the original remote sensing satellite image, acquiring a water part and a land part of the remote sensing satellite image, and obtaining a marked sea-land binary image;
s12, cutting the sea-land binary image obtained in the S11 to obtain a training set and a test set, wherein the remote sensing image in the data set comprises four wave bands of RGB and near infrared wave bands;
step S2, constructing a deep learning network model and training:
s21, building a deep learning multiband shoreline extraction model, inputting the training set sample obtained in the step S1 into the deep learning multiband shoreline extraction model for training, automatically comparing and analyzing a sea-land binary image obtained by training the training set sample with a labeled sea-land binary image by the deep learning multiband shoreline extraction model, and then performing back propagation optimization network and self-learning to obtain a sea-land binary image network model;
step S22, sending the test set sample obtained in the step S1 into the sea-land binary diagram network model obtained in the step S21 for accuracy detection; inputting the remote sensing satellite image into the sea-land binary image network model obtained in S21, and performing quality control on the input remote sensing satellite image by using the sea-land binary image network model;
step S23, when the sea-land binary image network model does not meet the quality control condition, repeating the operation of S22 until the sea-land binary image network model meets the quality control condition; when the sea-land binary map network model satisfies the condition, executing step S3;
step S3, coastline extraction and vectorization: and carrying out vectorization and coastline generation operation on the sea-land segmentation region binary image extracted by the S2 model to obtain a coastline of the remote sensing image of the coastline region.
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the following steps of the embodiments of the present invention are further described in detail with reference to the accompanying drawings:
In step S1, an original remote sensing satellite image of the coastal zone is obtained and preprocessed. The original coastal-zone image contains four bands: red, green and blue (the RGB bands) and near infrared. As can be seen from fig. 2, the reflectivities of sea and land are relatively close in the RGB bands but differ markedly in the near-infrared band, so sea and land can be distinguished better there.
Then, manually marking a water body area based on a marking tool, wherein the marked area is a water body part in the remote sensing image, and the rest part is a land part, so that a sea-land binary image is obtained, as shown in fig. 3;
and selecting a proper size to cut the marked sea-land binary image and the coastal zone area image, correspondingly naming the binary image corresponding to the same area and the coastal zone original image, storing the binary image and the coastal zone original image in different folders, and dividing all sample sets into a training set and a test set according to a certain proportion.
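By way of illustration, a minimal sketch of this tiling and splitting step is given below, assuming the four-band image and its labeled sea-land binary mask are available as aligned GeoTIFFs; the tile size, file names and 80/20 split ratio are assumptions rather than values given in the patent.

```python
# Minimal sketch of tiling the four-band image and its labeled sea-land binary
# mask into patches and splitting them into training and test sets; tile size,
# file names and the 80/20 split are assumptions.
import os
import random
from osgeo import gdal

TILE = 512        # assumed tile size in pixels
TRAIN_FRAC = 0.8  # assumed training fraction

def tile_pair(image_path, label_path, img_dir, lbl_dir):
    os.makedirs(img_dir, exist_ok=True)
    os.makedirs(lbl_dir, exist_ok=True)
    ds = gdal.Open(image_path)
    names = []
    for y in range(0, ds.RasterYSize - TILE + 1, TILE):
        for x in range(0, ds.RasterXSize - TILE + 1, TILE):
            name = f"tile_{x}_{y}.tif"
            # cut the same window from the 4-band image and from the binary label
            gdal.Translate(os.path.join(img_dir, name), image_path, srcWin=[x, y, TILE, TILE])
            gdal.Translate(os.path.join(lbl_dir, name), label_path, srcWin=[x, y, TILE, TILE])
            names.append(name)
    return names

names = tile_pair("coast_rgbnir.tif", "sea_land_mask.tif", "tiles/img", "tiles/lbl")
random.shuffle(names)
cut = int(len(names) * TRAIN_FRAC)
train_names, test_names = names[:cut], names[cut:]   # correspondingly named pairs
```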
In step S2, for the constructed deep learning network model, the image reading size of the model and the size of the convolution kernel in the convolution layer are increased to four bands, namely RGB and near-infrared, so that the model can fully extract the water body features from the near-infrared band of the remote sensing image, the extraction accuracy is improved, the accuracy of the finally trained model, especially in the regions such as muddy coast and sandy coast, is higher than that of the ordinary deep learning model, and the coupling of the image and the multiband spectrum is realized.
In step S21, the training set samples obtained in step S1 are fed into the model for training. The model input is a coastal-zone remote sensing image from the sample set, and the model output is the sea-land binary image computed by the model. At each stage the deep learning network splices the encoder feature map onto the corresponding up-sampled decoder feature map, forming a ladder structure; through this skip-connection architecture the decoder can recover, at each stage, the relevant features lost during pooling in the encoder. The computed sea-land binary image is automatically compared with the manually extracted one: the deep learning model establishes a functional relationship between the two, the key being the determination of the weight parameters. During comparison, the loss between the two is computed with a loss function and a specific regularization penalty is applied to the weight parameters so that the loss keeps decreasing and the extraction accuracy improves; this process is also called back-propagation optimization of the network. Finally a network model capable of accurately extracting the sea-land binary image is obtained.
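By way of illustration, the compare-and-back-propagate step described above can be sketched as follows; this is a minimal sketch assuming a PyTorch implementation, and the optimizer, learning rate and weight-decay value are illustrative assumptions rather than values given in the patent.

```python
# Minimal sketch (assumed PyTorch implementation) of the compare-and-back-propagate
# step: the predicted sea-land map is compared with the labeled binary map by a
# pixel-wise loss, a regularization penalty on the weights (L2 via weight_decay)
# is applied, and back-propagation updates the network.
import torch
import torch.nn as nn

def train_one_epoch(model, loader, device="cpu"):
    criterion = nn.CrossEntropyLoss()          # loss between predicted and labeled maps
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=1e-4, weight_decay=1e-5)   # assumed hyperparameters
    model.train()
    for images, labels in loader:              # images: N x 4 x H x W, labels: N x H x W
        images = images.to(device)
        labels = labels.to(device).long()
        logits = model(images)                 # N x 2 x H x W sea-land scores
        loss = criterion(logits, labels)       # automatic comparison with the label
        optimizer.zero_grad()
        loss.backward()                        # back-propagation
        optimizer.step()                       # optimize the network weights
```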
In this embodiment, a semantic segmentation network from the deep learning family is selected for model training, which meets the requirements of coastline extraction. Semantic segmentation assigns a semantic class to each pixel of the input image, yielding a dense pixel-wise classification. For water body identification, different semantic classes are assigned to the water and non-water regions of the input image, so that the two are distinguished.
The deep learning network model comprises five parts of convolution, pooling, activation function, deconvolution and anti-pooling, and the specific structure of the network is shown in FIG. 4;
(1) Convolutional layer: the convolutional layer is formed by convolution operations with a convolution kernel, i.e. a fixed filter matrix is applied to the image by multiplying it element by element with the underlying pixels and summing (an inner product). A convolution kernel of size k×k is usually convolved over an M×N image; the purpose of the convolution operation is to realize feature extraction and weight sharing;
(2) Pooling layer: the pooling layer usually follows the convolutional layer and compresses the features it produces; on the one hand the feature map shrinks and the computation is simplified, and on the other hand pooling extracts the main features of the image;
(3) Activation function: for an image, the convolution operation is essentially linear, i.e. every pixel is weighted; but images of real scenes are not linearly separable, so a nonlinear factor, the activation function, must be added.
The activation function maps linear features into nonlinear ones; the ReLU activation function used in this embodiment is shown in formula (1):
Relu(x) = max(0, x), i.e. f(x) = x for x ≥ 0 and f(x) = 0 for x < 0   (1)
the Relu function is the most used function in the deep learning network, convergence is easier, and the output can be infinite when x is greater than or equal to 0, so that the problem of gradient disappearance is effectively relieved.
Step S22, after model training is completed, the test set samples are fed into the model for accuracy detection, and a suitable evaluation method is selected to assess the accuracy of the model results. The evaluation method adopted in this embodiment is the mean intersection-over-union, which measures the overlap between the intersection of two sets and their union; here the intersection-over-union is computed between the truly classified pixels and the pixels predicted by the model. The calculation formula is shown as formula (2):
MIoU = (1/k) · Σ_{i=1..k} [ P_ii / ( Σ_{j=1..k} P_ij + Σ_{j=1..k} P_ji − P_ii ) ]   (2)
where k is the total number of classes considered (water body and land), i and j denote the true and predicted classes respectively, and P_ij denotes the number of pixels of class i predicted as class j. The model accuracy is judged with this evaluation method.
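A minimal sketch of this mean intersection-over-union evaluation for the two classes (water body and land), computed from the counts P_ij of pixels of true class i predicted as class j, might look as follows (array shapes and class encoding are assumptions):

```python
# Minimal sketch of the mean intersection-over-union of formula (2) for two
# classes (0 = land, 1 = water is an assumed encoding).
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    p = np.zeros((n_classes, n_classes), dtype=np.int64)   # p[i, j]: true i predicted j
    for i in range(n_classes):
        for j in range(n_classes):
            p[i, j] = np.sum((y_true == i) & (y_pred == j))
    ious = []
    for i in range(n_classes):
        union = p[i, :].sum() + p[:, i].sum() - p[i, i]    # union of true and predicted i
        if union > 0:
            ious.append(p[i, i] / union)                   # intersection over union
    return float(np.mean(ious))
```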
If the required accuracy is achieved, model integration is performed and the next step follows; if the required accuracy is not achieved, the number of model samples, the model training learning rate and the training batch parameters are adjusted continuously and the model is trained again until the required accuracy is reached;
In addition, when the coastline is extracted from the remote sensing image, the image quality can be controlled automatically, including correction of the stripe interference produced when the scan line corrector on board the Landsat 7 satellite fails, and of the interference that cloudy weather and the snow season cause in the remote sensing image.
(1) Strip repair: and repairing the stripe interference generated when the Landsat 7 satellite airborne scanning line corrector fails, and when the image has a damaged stripe, calculating pixel points without values in the image according to surrounding values:
In this embodiment, the repair is implemented with a cubic convolution interpolation algorithm, whose calculation formula is shown in formula (3):
F(i, j) = Σ_{m=i−1..i+2} Σ_{n=j−1..j+2} f(m, n) · S(row) · S(col)   (3)
where row and col denote the row and column offsets between each of the 16 pixels in the cubic convolution interpolation window of the original image and the point to be calculated, the summation indices m and n range over [i−1, i+2] and [j−1, j+2] respectively, f denotes the original pixel value, F denotes the interpolated value, and S(x) is the sampling formula, whose basis function can be chosen as required;
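As an illustration of this gap-filling idea, the sketch below estimates the missing stripe pixels of one band from the surrounding valid pixels; SciPy's cubic griddata interpolation is used here as a stand-in for the cubic convolution formula (3), so it is an assumption rather than the patent's exact implementation.

```python
# Illustrative gap filling for broken stripes: missing pixels are estimated from
# the surrounding valid pixels. SciPy's cubic griddata interpolation is used as a
# stand-in for the cubic convolution formula (3), i.e. this is an assumption, not
# the patent's exact implementation.
import numpy as np
from scipy.interpolate import griddata

def repair_stripes(band, missing_mask):
    """band: 2-D array of one image band; missing_mask: True where the stripe is broken."""
    rows, cols = np.indices(band.shape)
    valid = ~missing_mask
    filled = band.astype(float).copy()
    filled[missing_mask] = griddata(
        points=np.column_stack([rows[valid], cols[valid]]),   # coordinates of valid pixels
        values=band[valid].astype(float),                     # their original values f
        xi=np.column_stack([rows[missing_mask], cols[missing_mask]]),  # points to repair
        method="cubic",                                       # cubic interpolation
    )
    return filled
```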
(2) Cloud and snow removal correction: because cloud cover and snow have high reflectivity in the infrared band, the interference that cloudy weather and the snow season cause in the remote sensing image must be corrected:
In this embodiment, the normalized difference vegetation index (NDVI) and the normalized difference snow index (NDSI) are combined, so that the influence of clouds and snow can be removed from the original image while the impact on land and water at the land-sea boundary is reduced. The formulas for NDVI and NDSI are shown in formula (4) and formula (5):
NDVI=(NIR-R)/(NIR+R) (4)
NDSI=(G-SWIR)/(G+SWIR) (5)
When the cloud coverage recorded in the Landsat image header file is below a certain percentage, the above operations are applied; otherwise the image data quality does not meet the requirement and the shoreline cannot be extracted. Images that meet the quality standard are fed into the model for shoreline extraction; a sea-land binary image obtained by running the model is shown in fig. 5.
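A minimal sketch of this NDVI/NDSI-based cloud and snow screening is given below; the index thresholds and the cloud-cover limit are illustrative assumptions, since the patent only specifies "a certain percentage".

```python
# Minimal sketch of NDVI/NDSI-based cloud and snow screening (formulas (4), (5));
# the thresholds and the 10% cover limit are assumptions. Inputs are reflectance
# arrays of equal shape.
import numpy as np

def cloud_snow_check(nir, red, green, swir, ndsi_thr=0.4, ndvi_thr=0.1, max_cover=0.10):
    ndvi = (nir - red) / (nir + red + 1e-6)          # formula (4)
    ndsi = (green - swir) / (green + swir + 1e-6)    # formula (5)
    suspect = (ndsi > ndsi_thr) & (ndvi < ndvi_thr)  # bright, non-vegetated: cloud or snow
    cover = float(suspect.mean())
    usable = cover < max_cover                       # quality acceptable for extraction?
    return usable, suspect
```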
In step S3, vectorization and shoreline generation are carried out on the sea-land segmentation region binary image obtained by the model, and the shoreline of the coastal-region remote sensing image is obtained. Vectorization and shoreline tracing are performed with the third-party open-source GDAL library in Python, and specifically include the following:
First, the extraction result of the deep learning network model is read: the image is read and converted into vector polygon elements; land regions are deleted with an element-selection function; the area of each polygon element is computed with an area-calculation function; and elements whose area is too small are deleted, so as to remove the influence of blocky regions such as islands produced during extraction. Then the coordinate points in the shoreline region are read statistically with a coordinate-point reading function, line vector elements are created with a vector-creation function, and the information of all coordinate points is written into them to obtain the final shoreline vector line element data, as shown in fig. 6. The finally obtained coastline is superimposed on the remote sensing image, and the display result is shown in fig. 7.
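A minimal sketch of this vectorization and shoreline-tracing step with the GDAL/OGR Python bindings is given below; the minimum-area threshold, the class encoding (1 = water) and the file names are illustrative assumptions.

```python
# Minimal sketch of the vectorization and shoreline-tracing step with GDAL/OGR;
# MIN_AREA, the class encoding (1 = water) and the file names are assumptions.
from osgeo import gdal, ogr, osr

MIN_AREA = 1e5   # assumed minimum polygon area used to drop small patches such as islets

def raster_to_coastline(binary_tif, out_shp):
    src = gdal.Open(binary_tif)
    band = src.GetRasterBand(1)
    srs = osr.SpatialReference(wkt=src.GetProjection())
    drv = ogr.GetDriverByName("ESRI Shapefile")

    # 1) convert the sea-land binary raster into vector polygon elements
    poly_ds = drv.CreateDataSource("/vsimem/polys.shp")
    poly_lyr = poly_ds.CreateLayer("polys", srs, ogr.wkbPolygon)
    poly_lyr.CreateField(ogr.FieldDefn("class", ogr.OFTInteger))
    gdal.Polygonize(band, None, poly_lyr, 0)

    # 2) keep large water polygons and write their boundaries as coastline lines
    line_ds = drv.CreateDataSource(out_shp)
    line_lyr = line_ds.CreateLayer("coastline", srs, ogr.wkbMultiLineString)
    for feat in poly_lyr:
        if feat.GetField("class") != 1:            # assumed encoding: 1 = water
            continue
        geom = feat.GetGeometryRef()
        if geom.GetArea() < MIN_AREA:              # delete elements with undersized areas
            continue
        boundary = ogr.ForceToMultiLineString(geom.GetBoundary())  # outline = shoreline
        out = ogr.Feature(line_lyr.GetLayerDefn())
        out.SetGeometry(boundary)
        line_lyr.CreateFeature(out)
    line_ds = None    # flush and close the output shapefile
    poly_ds = None
```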
The scheme of the invention solves the problem of low coastline extraction precision on the one hand and improves the coastline extraction speed on the other, providing support for automatic and efficient extraction of coastlines from high-resolution imagery. The extracted coastline is stored as an SHP file, the format commonly used in geographic information system analysis, which facilitates analyses such as coastline change and sea-reclamation investigation.
The above description is only a preferred embodiment of the present invention, and not intended to limit the present invention in other forms, and any person skilled in the art may apply the above modifications or changes to the equivalent embodiments with equivalent changes, without departing from the technical spirit of the present invention, and any simple modification, equivalent change and change made to the above embodiments according to the technical spirit of the present invention still belong to the protection scope of the technical spirit of the present invention.

Claims (6)

1. The coastline deep learning remote sensing extraction method based on the coupling map features is characterized by comprising the following steps of:
step S1, remote sensing image preprocessing and information extraction:
step S11, marking the preprocessing result of the original remote sensing satellite image, acquiring a water part and a land part of the remote sensing satellite image, and obtaining a marked sea-land binary image;
s12, cutting the labeled sea-land binary image obtained in S11 to obtain a training set and a test set, wherein the remote sensing image in the data set comprises four wave bands of RGB and near infrared wave bands;
step S2, constructing a deep learning network model and training:
step S21, a deep learning multiband atlas coupling shoreline extraction model is built, the training set sample in the step S1 is input into the deep learning multiband shoreline extraction model for training, the deep learning multiband shoreline extraction model automatically compares and analyzes a sea-land binary image obtained by training the training set sample with a labeled sea-land binary image, and then a back propagation optimization network and self-learning are carried out to obtain a sea-land binary image network model;
step S22, inputting the test set obtained in the step S1 into the sea-land binary diagram network model obtained in the step S21 for accuracy detection; inputting the remote sensing satellite image into a sea-land binary image network model, and performing quality control on the input remote sensing satellite image by the sea-land binary image network model;
step S23, when the sea-land binary image network model does not meet the quality control condition, repeating the operation of S22 until the sea-land binary image network model meets the quality control condition; when the sea-land binary map network model satisfies the condition, executing step S3;
step S3, coastline extraction and vectorization: and performing vectorization and coastline generation operation on the sea and land segmentation region binary image extracted in the step S2 to obtain a coastline of the remote sensing image of the coastline region.
2. The coastline deep learning remote sensing extraction method based on coupling map features as claimed in claim 1, wherein: in the step S2, the image reading size of the deep learning network model and the size of the convolution kernel in the convolution layer are increased to four bands of RGB and near infrared, and the image is combined with the characteristics of the spectrum multiband; in step S21, model training is carried out based on a semantic segmentation network, and different semantic categories are respectively allocated to the water body and the non-water body regions of the input image aiming at water body identification, so that the water body and the non-water body regions are identified; the deep learning network model comprises a convolutional layer, a pooling layer, an activation function, a deconvolution layer and an anti-pooling layer.
3. The coastline deep learning remote sensing extraction method based on coupling map features as claimed in claim 1, wherein: in step S22, when performing accuracy detection, an average cross-over ratio evaluation is adopted, and the cross-over ratio between the truly classified pixels and the pixels predicted by the model is calculated, that is:
MIoU = (1/k) · Σ_{i=1..k} [ P_ii / ( Σ_{j=1..k} P_ij + Σ_{j=1..k} P_ji − P_ii ) ]   (2)
where k is the total number of classes considered (water body and land), i and j denote the true and predicted classes respectively, and P_ij denotes the number of pixels of class i predicted as class j.
4. The coastline deep learning remote sensing extraction method based on coupling map features as claimed in claim 1, wherein: in step S21, the deep learning network model splices the feature map of the encoder to the upsampled feature map of the decoder at each stage to form a ladder structure, and allows the decoder to learn the relevant features lost in the encoder pooling at each stage through a framework of jump connection, and automatically compares the calculated sea-land binary image with the manually extracted sea-land binary image, and when comparing, firstly, calculates the loss between the two by using a loss function, and uses regularization penalty for the weight parameter, so that the loss is continuously reduced.
5. The coastline deep learning remote sensing extraction method based on coupling map features as claimed in claim 1, wherein: when the quality control is performed in step S22, the following method is specifically adopted:
(1) Stripe repair: when the image contains broken stripes, the pixel points without values in the image are calculated from the surrounding values, and a cubic convolution interpolation algorithm is adopted to realize the repair, namely:
F(i, j) = Σ_{m=i−1..i+2} Σ_{n=j−1..j+2} f(m, n) · S(row) · S(col)   (3)
where row and col denote the row and column offsets between each of the pixels in the cubic convolution interpolation window of the original image and the point to be calculated, the summation indices range over [i−1, i+2] and [j−1, j+2], f denotes the original pixel value, F denotes the interpolated value, and S(x) is the sampling formula;
(2) cloud and snow removal correction: for the correction of interference of cloud weather and snow season influences on the remote sensing image, the influences of cloud layers and snow are eliminated in an original image by combining the normalized vegetation index NDVI and the normalized snow index NDSI, wherein the calculation formulas of the NDVI and the NDSI are shown in a formula (4) and a formula (5):
NDVI=(NIR-R)/(NIR+R) (4)
NDSI=(G-SWIR)/(G+SWIR) (5)
When the cloud coverage recorded in the Landsat image header file is below a certain percentage, the above operations are applied; otherwise the image data quality does not meet the requirement and the shoreline cannot be extracted.
6. The coastline deep learning remote sensing extraction method based on coupling map features as claimed in claim 1, wherein: in step S3, vectorization and shore line tracing operations are performed based on the third-party open source GDAL library in Python, and specifically include the following contents:
firstly, reading an extraction result of a deep learning network model, reading an image, converting the image into vector surface elements, deleting land areas in the vector surface elements by using an element selection function, calculating the area of each surface element by using an area calculation function, and deleting elements with undersized areas;
and (3) performing statistical reading on the coordinate points in the coastline area by using a coordinate point reading function, creating line vector elements by using a vector creating function, and writing all coordinate point information to obtain final coastline vector line element data.
CN202111332318.6A 2021-11-11 2021-11-11 Coastline deep learning remote sensing extraction method based on coupling map features Pending CN114119630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111332318.6A CN114119630A (en) 2021-11-11 2021-11-11 Coastline deep learning remote sensing extraction method based on coupling map features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111332318.6A CN114119630A (en) 2021-11-11 2021-11-11 Coastline deep learning remote sensing extraction method based on coupling map features

Publications (1)

Publication Number Publication Date
CN114119630A true CN114119630A (en) 2022-03-01

Family

ID=80378208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111332318.6A Pending CN114119630A (en) 2021-11-11 2021-11-11 Coastline deep learning remote sensing extraction method based on coupling map features

Country Status (1)

Country Link
CN (1) CN114119630A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721346A (en) * 2023-06-14 2023-09-08 山东省煤田地质规划勘察研究院 Shore line intelligent recognition method based on deep learning algorithm
CN116721346B (en) * 2023-06-14 2024-05-07 山东省煤田地质规划勘察研究院 Shore line intelligent recognition method based on deep learning algorithm

Similar Documents

Publication Publication Date Title
US11521379B1 (en) Method for flood disaster monitoring and disaster analysis based on vision transformer
CN108573276A (en) A kind of change detecting method based on high-resolution remote sensing image
CN106780091B (en) Agricultural disaster information remote sensing extraction method based on vegetation index time-space statistical characteristics
CN112287807B (en) Remote sensing image road extraction method based on multi-branch pyramid neural network
CN111024618A (en) Water quality health monitoring method and device based on remote sensing image and storage medium
CN111767801A (en) Remote sensing image water area automatic extraction method and system based on deep learning
CN111914611B (en) Urban green space high-resolution remote sensing monitoring method and system
Van de Voorde et al. Improving pixel-based VHR land-cover classifications of urban areas with post-classification techniques
CN107247927B (en) Method and system for extracting coastline information of remote sensing image based on tassel cap transformation
CN111178149B (en) Remote sensing image water body automatic extraction method based on residual pyramid network
CN110414509B (en) Port docking ship detection method based on sea-land segmentation and characteristic pyramid network
CN110443195B (en) Remote sensing image burned area analysis method combining superpixels and deep learning
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN111008664B (en) Hyperspectral sea ice detection method based on space-spectrum combined characteristics
CN112037244B (en) Landsat-8 image culture pond extraction method combining index and contour indicator SLIC
CN112418049B (en) Water body change detection method based on high-resolution remote sensing image
CN113327255A (en) Power transmission line inspection image processing method based on YOLOv3 detection, positioning and cutting and fine-tune
CN115452759A (en) River and lake health index evaluation method and system based on satellite remote sensing data
CN111104850A (en) Remote sensing image building automatic extraction method and system based on residual error network
CN114119630A (en) Coastline deep learning remote sensing extraction method based on coupling map features
CN113705538A (en) High-resolution remote sensing image road change detection device and method based on deep learning
CN116682026A (en) Intelligent deep learning environment remote sensing system
CN115497006B (en) Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy
CN116452872A (en) Forest scene tree classification method based on improved deep pavv3+
CN111007474A (en) Weather radar echo classification method based on multiple features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination